The UK and South Korean governments have secured voluntary commitments from 16 global artificial intelligence (AI) companies – including firms from the US, China and the UAE – to develop the technology safely.
Signed on the first day of the AI Seoul Summit, the Frontier AI Safety Commitments state that the companies will not develop or deploy AI systems if the risks cannot be sufficiently mitigated, and outline a range of measures they must take to ensure their approaches are transparent and accountable.
This includes assessing the risks posed by their models at every stage of the AI lifecycle; setting unacceptable risk thresholds to deal with the most severe threats; articulating how mitigations will be identified and implemented to ensure the thresholds are not breached; and continually investing in their safety evaluation capabilities.
The signatories – which include the likes of Google, Meta, Amazon, Microsoft, Anthropic, OpenAI, Mistral AI, IBM, Samsung, xAI, Naver, Cohere and Zhipu.ai – have also voluntarily committed to explaining how external actors from government, civil society and the public are involved in the risk assessment process, and to providing public transparency over the whole process.
However, the commitment on public transparency is limited: companies will not have to provide any information if “doing so would increase risk or divulge sensitive commercial information to a degree disproportionate to the societal benefit”, although in these instances they will still be expected to share more detailed information with “trusted actors” such as governments or appointed bodies.
The companies also affirmed their commitment to implement current industry best practice on AI safety, including internal and external red-teaming of frontier AI models; investing in cyber security and insider threat safeguards to protect proprietary and unreleased model weights; incentivising third-party discovery and reporting of issues and vulnerabilities; prioritising research on societal risks posed by frontier AI models and systems; and developing and deploying frontier AI models and systems to help address the world’s greatest challenges.
All 16 have said they will publish safety frameworks setting out how they will manage these issues ahead of the next AI summit in France.
“It’s a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety,” said UK prime minister Rishi Sunak.
“These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI. It sets a precedent for global standards on AI safety that will unlock the benefits of this transformative technology.”
Digital secretary Michelle Donelan added that the true potential of AI will only be realised if the risks are properly grasped: “It is on all of us to make sure AI is developed safely and today’s agreement means we now have bolstered commitments from AI companies and better representation across the globe.
“With more powerful AI models coming online, and more safety testing set to happen around the world, we are leading the charge to manage AI risks so we can seize its transformative potential for economic growth.”
The voluntary commitments made in Seoul build on previous commitments made by countries and companies at the UK government’s first AI Safety Summit, held at Bletchley Park six months earlier.
This included all 28 governments in attendance signing the Bletchley Declaration – a non-binding communiqué that committed them to deepening their cooperation around the risks associated with AI – and a number of AI firms agreeing to open up their models to the UK’s AI Safety Institute (AISI) for pre-deployment testing.
Countries also agreed to support Yoshua Bengio – a Turing Award-winning AI academic and member of the UN’s Scientific Advisory Board – in leading the first-ever frontier AI ‘State of the Science’ report assessing existing research on the risks and capabilities of frontier AI, an interim version of which was published in May 2024.
Commenting on the new safety commitments, Bengio said that while he is pleased to see so many leading AI companies sign up – and particularly welcomes their commitments to halt models where they present extreme risks – the commitments will need to be backed up by more formal regulatory measures down the line.
“This voluntary commitment will obviously have to be accompanied by other regulatory measures, but it nonetheless marks an important step forward in establishing an international governance regime to promote AI safety,” he said.
Commenting on the companies’ commitment to set risk thresholds, Beth Barnes, founder and head of research at METR, a non-profit focused on AI model safety, added: “It’s vital to get international agreement on the ‘red lines’ where AI development would become unacceptably dangerous to public safety.”
While four major AI foundation model developers agreed during the Bletchley summit to open up their systems for pre-deployment testing, Politico reported in late April 2024 that three of them had yet to provide the agreed pre-release access to the AISI.
Computer Weekly contacted the Department for Science, Innovation and Technology (DSIT) about when it will push for mandatory commitments from AI companies, and about whether it believes the voluntary commitments are enough given the issues around pre-deployment access, but did not receive a response by the time of publication.