Moving ahead with its declared plans for responsible AI development, US President Joe Biden is joining forces with seven leading AI companies at the White House on Friday.
According to the Biden-Harris Administration, the companies will extend their voluntary commitments towards advancing safe, secure, and responsible AI development.
This strategic partnership involves major tech firms, including Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.
AI regulation has featured prominently in cybersecurity news, particularly after instances of malicious use of generative AI started coming to light.
The latest announcement comes hot on the heels of an executive decision to issue a cybersecurity label for smart devices.
Responsible AI development: Ensuring safeguards in products
“The companies commit to internal and external security testing of their AI systems before their release,” the announcement said.
“This testing, which will be carried out in part by independent experts, guards against some of the most significant sources of AI risks, such as biosecurity and cybersecurity, as well as its broader societal effects.”
Additionally, the companies will promote information sharing across the industry and collaborate with governments, civil society, and academia to manage AI risks.
This involves sharing best practices for safety, sharing information on attempts to circumvent safeguards, and fostering technical collaboration.
Inbuilt systems that put security first
Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights is a top priority for the AI companies.
“These model weights are the most essential part of an AI system, and the companies agree that it is vital that the model weights be released only when intended and when security risks are considered,” said the announcement.
Furthermore, the companies will encourage third-party discovery and reporting of vulnerabilities in their AI systems, a proactive measure for responsible AI development.
According to the government announcement, this ensures quick identification and resolution of potential issues, even after the AI systems are deployed.
Responsible AI development: Earning the public’s trust
“There are legitimate concerns about the power of the technology and the potential for it to be used to cause harm rather than benefits,” said Microsoft’s statement on its AI customer commitments, issued in June.
“It’s not surprising, in this context, that governments around the world are looking at how existing laws and regulations can be applied to AI and are considering what new legal frameworks may be needed.”
Starting with building public trust in AI-generated content, the companies have committed to developing robust technical mechanisms, like watermarking systems, to identify AI-generated content clearly.
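The announcement does not specify how such watermarking would work, and each company's scheme will differ. As a purely illustrative sketch, the general idea of marking machine-generated text with a hidden, machine-detectable signal can be shown with a toy zero-width-character watermark (real systems, such as statistical token-level watermarks, are far more robust; the `embed_watermark` and `extract_watermark` helpers below are hypothetical, not any company's actual mechanism):

```python
# Toy illustration of text watermarking: append an invisible zero-width
# signature to generated text, then detect it later. This is only a
# sketch of the concept, not a production watermarking scheme.

ZERO = "\u200b"  # zero-width space encodes bit 0
ONE = "\u200c"   # zero-width non-joiner encodes bit 1

def embed_watermark(text: str, tag: str = "AI") -> str:
    """Append the tag as invisible zero-width bits after the text."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    payload = "".join(ZERO if b == "0" else ONE for b in bits)
    return text + payload

def extract_watermark(text: str) -> str:
    """Recover the hidden tag, or return '' if no watermark is present."""
    bits = "".join("0" if ch == ZERO else "1"
                   for ch in text if ch in (ZERO, ONE))
    usable = len(bits) - len(bits) % 8  # drop any incomplete trailing byte
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, usable, 8))

marked = embed_watermark("This paragraph was machine-generated.")
print(extract_watermark(marked))               # prints "AI"
print(extract_watermark("Plain human text"))   # prints "" (no watermark)
```

A scheme like this is trivially stripped by re-typing the text, which is why research focuses on watermarks embedded in the model's token choices rather than in invisible characters.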
The companies will also commit to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use.
“This report will cover both security risks and societal risks, such as the effects on fairness and bias,” said the announcement.
Moreover, the companies will prioritize research on the societal risks posed by AI systems, focusing on avoiding harmful bias and discrimination and on safeguarding privacy.
By mitigating these dangers, AI can be harnessed to address significant global challenges, from cancer prevention to climate change mitigation, said the announcement.
Broader commitment and international cooperation
The Biden-Harris Administration’s latest announcement also covers developing an executive order and bipartisan legislation to foster responsible innovation.
According to the announcement, the administration is actively working with allies and partners worldwide to establish a robust international framework governing responsible AI development and usage.
The US government has consulted with several countries, including Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.
These voluntary commitments align with other global initiatives like Japan’s G-7 Hiroshima Process, the United Kingdom’s AI Safety Summit, and India’s leadership in the Global Partnership on AI.
Previous AI initiatives by the Biden-Harris administration
Earlier this year, Vice President Kamala Harris convened discussions with consumer protection, labor, and civil rights leaders to address AI-related risks and protect the public from harm and discrimination.
In May, US President Biden met with leading AI experts in San Francisco, demonstrating the administration's commitment to seizing AI opportunities while managing the associated risks.
The President also engaged in discussions with CEOs from Google, Anthropic, Microsoft, and OpenAI, emphasizing their responsibility in driving responsible and ethical innovation with safeguards against potential harm.
The administration has already published a landmark “Blueprint for an AI Bill of Rights” and directed federal agencies to root out bias in the design and use of new technologies, including AI.
Furthermore, the National Science Foundation has invested $140 million to establish seven new National AI Research Institutes, bolstering responsible AI development across the country.
The Office of Management and Budget is also set to release draft policy guidance for federal agencies, emphasizing the importance of safeguarding the rights and safety of the American people in the development, procurement, and use of AI systems.