US government regulation of AI businesses went a step further, with the Biden-Harris administration announcing new actions to promote responsible American innovation in artificial intelligence and protect people’s rights and safety.
The announcement came hours after the UK Competition and Markets Authority (CMA) announced an initial review into the competition and consumer protection considerations surrounding the development and use of artificial intelligence foundation models.
Hidden among the administrative jargon was one crucial step in the US government regulation of AI businesses: the White House will support a mass hacking exercise of popular and upcoming AI systems at the Defcon security conference this summer.
The goal is to put generative AI systems from various companies, including Google, to the test and identify potential vulnerabilities.
In addition to this, the White House Office of Science and Technology Policy has announced that $140 million will be invested in launching seven new National AI Research Institutes that will focus on creating ethical and transformative AI for public use.
This brings the total number of institutes to 25 nationwide.
US government regulation of AI businesses and Big Tech
According to the White House statement, Vice President Harris and other senior officials from the US government are scheduled to meet with the CEOs of four American companies – Alphabet, Anthropic, Microsoft, and OpenAI – that are at the forefront of AI innovation.
The declared objective of the meeting is to emphasize the responsibility of these companies in driving responsible, trustworthy, and ethical innovation that mitigates the risks and potential harms to individuals and society.
“The meeting is part of a broader, ongoing effort to engage with advocates, companies, researchers, civil rights organizations, not-for-profit organizations, communities, international partners, and others on critical AI issues,” said the statement.
The administration has called on all tech majors involved in generative AI projects to submit their products to open scrutiny.
“Testing of AI models independent of government or the companies that have developed them is an important component in their effective evaluation,” said the White House statement.
“This independent exercise will provide critical information to researchers and the public about the impacts of these models and will enable AI companies and developers to take steps to fix issues found in those models.”
The funding will come through the National Science Foundation, and the new Institutes will advance AI R&D to drive breakthroughs in critical areas, including climate, agriculture, energy, public health, education, and cybersecurity.
Government regulation of AI businesses: Tightening norms
Additionally, the Office of Management and Budget (OMB) will release draft policy guidance on the use of AI systems by the U.S. government for public comment.
“This guidance will empower agencies to responsibly leverage AI to advance their missions and strengthen their ability to equitably serve Americans—and serve as a model for state and local governments, businesses and others to follow in their own procurement and use of AI,” the announcement said.
OMB will release this draft guidance for public comment this summer to benefit from input from advocates, civil society, industry, and other stakeholders before finalization.
The Biden-Harris administration’s push on government regulation of AI businesses seemed to have a ripple effect across the Atlantic.
The UK government announced in March that it intends to divide responsibility for regulating artificial intelligence among existing bodies responsible for human rights, health and safety, and competition, rather than creating a new entity dedicated solely to the technology.
Today, the CMA sought views and evidence from stakeholders, with a deadline for submissions set for June 2, 2023.
Following evidence gathering and analysis, the CMA plans to publish a report in September 2023 that will set out its findings. The review is in line with the UK government’s AI white paper, which seeks a pro-innovation and proportionate approach to regulating the use of AI.