Revoi.in

Roving Periscope: Amid the Gaza, and Ukraine wars, the world tries to chain AI, the latest ‘technology monster’


Virendra Pandit

New Delhi: Although Artificial Intelligence (AI) had been in the works for decades, its sudden ‘explosion’ a year ago took the world by surprise, so much so that its major stakeholders, including the US, the UK, and the European Union, are trying to chain the ‘monster’ lest it damage the world amid the raging Hamas-Israel and Russia-Ukraine wars.

Since AI tools entered the public domain in December 2022, they have spread like wildfire across the world and emerged as the technology’s version of “Dr. Jekyll and Mr. Hyde.”

The urgency of containing AI could be judged by the fact that US President Joe Biden issued a sweeping executive order on Monday to regulate the development of AI amid growing concern about its potential impact on everything from national security to public health, the media reported on Tuesday.

Not just the US: the EU has edged closer to passing its own AI Act, the G-7 has agreed on a code of conduct for companies using the technology, and the UK is holding a two-day global event on the dangers of AI from tomorrow.

President Biden’s executive order enables wide-ranging regulation of AI and comes days before British Prime Minister Rishi Sunak leads an AI Safety Summit on November 1 and 2, as countries race to keep up with rapidly evolving AI technologies, despite apprehensions that hackers and terrorists could weaponize this monster.

The President’s order invokes the Defense Production Act, last used to give the US federal government powers to direct production during the COVID-19 pandemic. Using this law, companies developing AI systems will be required to notify the US federal government of technologies that have implications for America’s national security, national economic security, or public health and share the results of certain safety tests.

“To realize the promise of AI and avoid the risk, we need to govern this technology,” Biden said. “In the wrong hands, AI can make it easier for hackers to exploit vulnerabilities in the software that makes our society run.”

The executive order includes a provision that developers of the most powerful AI models must notify the government of their work and share safety test results.

The order directed the National Institute of Standards and Technology (NIST) to establish “rigorous standards” for testing AI prior to its release, the Department of Commerce to develop guidelines for identifying AI-generated content, and agencies funding “life science projects” to establish “strong new standards of biological synthesis screening” to ensure AI cannot engineer biohazards.

President Biden also called on the US Congress to pass data privacy legislation, and for the Department of Justice to address “algorithmic discrimination” by landlords and in federal benefits programs.

White House Deputy Chief of Staff Bruce Reed hailed the measures as “the strongest set of actions any government in the world has ever taken on AI safety, security, and trust” and the “next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks”.

Fears about the risks of AI have grown exponentially since the release last year of OpenAI’s ChatGPT, whose capabilities caught regulators and government officials around the world off guard, reports said.

Reflecting the growing concerns about the potential misuse of AI and its implications for the planet, Britain is hosting the world’s first global AI Safety Summit from tomorrow to examine the risks of the fast-growing technology and kickstart an international dialogue on its regulation.

The meet will take place on November 1 and 2 at Bletchley Park in southern England, where Britain’s World War Two code-breakers worked.

Nearly 100 guests, including world leaders, technology savants, academics, and nonprofits, are attending the crucial event.

They include Tesla chief Elon Musk, US Vice President Kamala Harris, European Commission President Ursula von der Leyen, China’s Technology Vice Minister Wu Zhaohui, and United Nations Secretary-General Antonio Guterres.

Executives from the world’s top AI companies, including Google DeepMind CEO Demis Hassabis and Sam Altman, who co-founded the Microsoft-backed ChatGPT creator OpenAI, will also attend, alongside representatives from Alibaba and Tencent.

Academics and nonprofits that have warned of the risks posed by the rise of AI will also take a leading role, represented by AI “godfathers” such as Stuart Russell and Geoffrey Hinton, alongside the Alan Turing Institute and the Future of Life Institute.

The crucial Summit aims to start a global conversation on the future regulation of AI. Currently, there are no broad-based global regulations focusing on AI safety, although some governments have started drawing up their own rules. For instance, the European Union has written the first set of legislation governing its use for the bloc.

According to the Summit agenda, there will be a series of roundtable discussions on threats posed by future developments in AI.

Topics include how AI systems might be weaponized by hackers, or used by terrorists to build bioweapons, as well as the technology’s potential to gain sentience and wreak havoc on the world.

Experts and regulators appear split on how to prioritize these threats, with the EU’s long-awaited AI Act prioritizing potential infringements of human rights – such as data privacy and protection from surveillance – versus the so-called existential risks that dominate much of the Summit’s agenda.

PM Sunak wants Britain to be a global leader in AI safety, carving out a role for a post-Brexit London between the competing economic blocs of the United States, China, and the European Union.

The event comes almost a year after OpenAI released the AI-powered ChatGPT to the public, sparking international debates over the rapidly developing technology’s potential, with some experts comparing it to climate change or nuclear weapons.

PM Sunak plans to launch a global advisory board for AI regulation, modeled on the Intergovernmental Panel on Climate Change (IPCC), the media reported.

Last week, the UN announced it had formed its own AI advisory board, made up of a few experts from industry, research, and different governments.