MPs say UK at real risk of falling behind on AI regulation


The government should introduce artificial intelligence (AI)-specific legislation in the next session of Parliament, or risk the UK being “left behind” by legislation being developed elsewhere, MPs have warned.

In an interim report published on 31 August 2023 by the House of Commons Science, Innovation and Technology Committee (SITC) as part of its ongoing inquiry into the UK’s governance proposals for AI, MPs urged the government to accelerate its implementation of a regulatory regime for AI, noting that the new session of Parliament will be the last opportunity to introduce AI legislation before the next general election.

The government published an AI whitepaper in March 2023 outlining its “adaptable” and “pro-innovation” approach to regulating AI, which it claimed would drive responsible development of the technology while maintaining public trust.

In the whitepaper, the government also noted the need for any legislation to include “a statutory duty on our regulators requiring them to have due regard to the [five AI governance] principles”, namely safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.

“That commitment [for due regard] alone – in addition to any further requirements that may emerge – suggests that there should be a tightly focused AI Bill in the new session of Parliament,” said the SITC in its report. “We see a danger that if the UK does not bring in any new statutory regulation for three years it risks the government’s good intentions being left behind by other legislation – like the [European Union’s] EU AI Act – that could become the de facto standard and be hard to displace.”

It added that the government should therefore confirm whether AI legislation will be included in the upcoming King’s Speech in November 2023.

Noting the trilogue negotiations currently taking place in the EU over its forthcoming AI Act, and the Biden administration’s voluntary agreements with major tech firms over AI safety, SITC chair Greg Clark told reporters at a briefing that time is running out for the government to establish its own AI-related powers and oversight mechanisms.

“If there isn’t even quite a targeted and minimal enabling legislation in this session, in other words in the next few months, then the reality [for the introduction of UK AI legislation] is probably going to be 2025,” he said, adding it would be “galling” if the chance to enact new legislation was not taken simply because “we are timed out”.

“If the government’s ambitions are to be realised and its approach is to go beyond talks, it may well need to move with greater urgency in enacting the legislative powers it says will be needed.”

He added that any legislation would need to be attuned to the 12 AI governance challenges laid out in the committee’s report, which relate to various competition, accountability and social issues associated with AI’s operation.

The specific challenge areas include bias, privacy, misrepresentation, access to data, access to computing power, black-box AI models, open source, intellectual property and copyright, liability, employment, international coordination and existential risks.

“We’re now saying that we need to go beyond the five principles that the government set out in March… [which] are at a higher level of abstraction,” said Clark, adding that the 12 challenges are “more concrete and more specific” suggestions based on evidence collected by the committee so far.

In its recommendations, the committee said the government should address each of the challenge areas outlined, both through domestic policy and international engagements.

The whitepaper sets out the government’s intention to rely on existing regulators – including the Information Commissioner’s Office (ICO), the Health and Safety Executive, the Equality and Human Rights Commission (EHRC) and the Competition and Markets Authority (CMA) – to create tailored, context-specific rules that suit the ways AI is being used in the sectors they scrutinise. Given this, the committee further recommended conducting a “gap analysis” of the regulators’ capacities and powers before any legislative attempts are made.

“We have heard that many regulators are already actively engaged with the implications of AI for their respective remits, both individually and through initiatives such as the Digital Regulation Cooperation Forum,” it said. “However, it is already clear that the resolution of all the challenges set out in this report may require a more well-developed central coordinating function.”

Clark confirmed that the committee is due to take evidence from UK regulators on 25 October 2023.

Relating the gap analysis to the example of copyright laws already on the books, Clark said there are “particular” challenges presented by AI that may require existing powers to be updated, such as whether it is possible to trace the use of copyrighted material in AI models, or what degree of dilution from the original copyrighted material is acceptable.

“It’s one thing if you take a piece of music or a piece of writing… and pass it off as your own or someone else’s; the case law is well established,” he said. “But there isn’t much case law at the moment, as I understand it, against the use of music in a new composition that draws on hundreds of thousands of contributors. That is quite a new challenge.”

The UK government has already committed to creating a code of practice for generative AI companies to facilitate their access to copyrighted material, and to following up with specific legislation if a satisfactory agreement cannot be reached between AI firms and those in the creative sectors.

Clark also warned that it will be important to avoid party-political positioning around AI governance in the next general election, because any suggestion that the UK’s AI regulation could be a time-limited approach might negatively affect investment decisions.

“We will study the government’s response to our interim report, and the AI whitepaper consultation, with interest, and will publish a final set of policy recommendations in due course,” he said.

A gap analysis already conducted by the Ada Lovelace Institute in July 2023 found that, because “large swathes” of the UK economy are either unregulated or only partially regulated, it is not clear who would be responsible for scrutinising AI deployments in a range of different contexts.

This includes recruitment and employment practices, which are not comprehensively monitored; education and policing, which are monitored and enforced by an uneven network of regulators; and activities carried out by central government departments that are not directly regulated.

“In these contexts, there will be no existing, domain-specific regulator with clear overall oversight to ensure that the new AI principles are embedded in the practice of organisations deploying or using AI systems,” it said.

The institute added that independent legal analysis conducted on its behalf by data rights agency AWO found that, in these contexts, the protections currently offered by cross-cutting legislation such as the UK’s implementation of the General Data Protection Regulation (GDPR) and the Equality Act often fail to protect people from harm or give them an effective route to redress.

“This enforcement gap frequently leaves individuals dependent on court action to enforce their rights, which is costly and time consuming and often not an option for the most vulnerable.”


