New York: The Biden administration announced plans on Wednesday to hold a Global AI Safety Summit, as Congress continues to grapple with regulating the technology.
The event will take place on November 20-21 in San Francisco, hosted by Commerce Secretary Gina Raimondo and Secretary of State Antony Blinken.
This will be the first gathering of the International Network of AI Safety Institutes, billed as the Global AI Safety Summit, aimed at promoting global collaboration on the safe, secure, and trustworthy development of AI.
Members of the network include Australia, Canada, the European Union, France, Japan, Kenya, South Korea, Singapore, the United Kingdom, and the United States.
Generative AI, which can create content like text, images, and videos from prompts, has raised both excitement and concerns.
There are fears that the technology could eliminate jobs, disrupt elections, and potentially even escape human control, with catastrophic consequences.
In May, Raimondo introduced the International Network of AI Safety Institutes at the AI Seoul Summit, where global leaders agreed to focus on AI safety, innovation, and inclusivity.
The San Francisco meeting is intended to kickstart technical collaboration ahead of the AI Action Summit scheduled for February in Paris.
Raimondo emphasized the importance of close cooperation with allies and like-minded partners, stressing that AI development should be governed by principles of safety, security, and trust.
The San Francisco summit will bring together technical experts from each country’s AI safety institute or an equivalent scientific body to discuss priority areas and enhance global collaboration on AI safety measures.
In a related development, the Commerce Department recently proposed rules requiring developers of advanced AI systems and cloud computing providers to report detailed information about their technologies to the government, to help ensure they are secure and resilient against cyberattacks.
This regulatory effort comes as Congress struggles to pass AI-related legislation. In October 2023, President Joe Biden signed an executive order mandating that AI developers whose systems pose risks to national security, the economy, or public health must share safety test results with the government before the technologies are released to the public.