UK regulators confident they are ready for AI safety governance


The Science, Innovation and Technology Committee recently took evidence from Ofcom and other regulators as part of its inquiry into the governance of artificial intelligence (AI).

Regulator readiness

In its role as the regulator for online safety, Ofcom supports the government’s proposed non-statutory approach to AI regulation. In written evidence submitted to the committee, Ofcom said this approach provides flexibility and should help avoid the risk of overlap, duplication and conflict with existing statutory regulatory regimes.

When asked how prepared Ofcom was to take on responsibilities as a regulator for AI, Ofcom CEO Melanie Dawes told members of Parliament at the committee hearing that there was a work programme across the organisation, coordinated by Ofcom’s strategy team.

Five years ago, she said, Ofcom started building a specialised AI team, which includes 15 experts on large language models (LLMs). Of the 1,350 or so staff now at Ofcom, Dawes said the team of AI experts numbered about 50, comprising specialists in data science and machine learning, as well as those with expertise in some of the newer forms of AI. Dawes added there were “quite a lot of different streams of expertise” at the regulator – for instance, a team of 350 people focused on online safety.

She said: “We do need new skills. We’ve always needed to keep building new technology expertise.”

Asked whether she felt Ofcom was equipped, Dawes said: “Yes, we feel equipped, but there is a huge amount of uncertainty about how this tech will disrupt the markets. We are able to change and adapt because Ofcom’s underlying statute is tech-neutral and not dictated by the type of tech. We can adapt our approach accordingly.”

One MP at the committee meeting raised concerns over whether Ofcom had enough people with the right experience and capability to regulate.

Dawes said: “We have had a flat cash budget cap from the Treasury for many years and, at some point, this will start to create real constraints for us. We’ve become very good at driving efficiency, but if the government were to ask us to do more in the field of AI, we would need new resources. As far as our existing remit is concerned, our current resourcing is broadly adequate right now.”

The other regulators present at the committee meeting were also questioned on their readiness to regulate AI. Information commissioner John Edwards said: “The ICO has to ensure that we are communicating to all parts of the supply chain in AI – whether they are developing models, training models or deploying applications – to the extent that personal data is involved.”

He said the existing regulatory framework already applied to AI, and required remediation of any risks identified. “There are accountability principles. There are transparency principles. There are explainability principles. So it’s very important I reassure the committee that there is in no sense a regulatory lacuna in respect to the developments that we have seen in recent times on AI,” added Edwards.

He added that the ICO had issued guidance on generative AI and explainability as part of a collaboration with the Alan Turing Institute. “I do believe we’re well placed to address the regulatory challenges that are presented by the new technologies,” said Edwards.

Jessica Rusu, chief data, information and intelligence officer at the Financial Conduct Authority (FCA), added: “There’s a lot of collaboration both domestically and internationally, and I’ve spent quite a bit of time with my European counterparts.”

She said the FCA’s interim report recommends that regulators conduct a gap analysis to identify any additional powers they would need to implement the principles outlined in the government’s paper.

She said the FCA had looked at assurance of cyber security and algorithmic trading in the financial sector. “We’re quite confident that we have the tools and the regulatory toolkit at the FCA to step into this new area, in particular the consumer duty.”

“I believe, from an FCA perspective, we are content that we have the ability to regulate both market oversight as well as the conduct of firms. We have done quite a lot of work over time looking at algorithms, for example,” she added.

Consumer safety

The main challenges regulators are likely to face on AI safety are covered in a government paper published this week, ahead of November’s Bletchley Park AI Summit.

The Capabilities and risks from frontier AI paper from the Department for Science, Innovation and Technology points out that AI development is a global effort, and that safe AI development may be hindered by market failures among AI developers and collective action problems among countries. Because many of the harms are incurred by society as a whole, individual companies may not be sufficiently incentivised to address all the potential harms of their systems.

The report’s authors warn that intense competition between AI developers to bring products to market quickly could lead to a “race to the bottom” scenario, in which firms compete to develop AI systems as fast as possible while under-investing in safety measures.

“In such scenarios, it could be challenging even for AI developers to commit unilaterally to stringent safety standards, lest their commitments put them at a competitive disadvantage,” the report stated.

The government’s ambition is to take a pro-innovation approach to AI safety. In his speech about AI safety and the report, prime minister Rishi Sunak said: “Doing the right thing, not the easy thing, means being honest with people about the risks from these technologies.”

During the committee’s hearing on the governance of artificial intelligence, Will Hayter, senior director of the digital markets unit at the Competition and Markets Authority (CMA), was asked whether the government’s proposals provided adequate consumer protection.

He responded by saying: “We’re still trying to understand this market as it develops. We feel very confident the bill does give the right flexibility to be able to handle the market power that emerges in digital markets, and that could include an AI-driven market.”

As the proposed digital markets legislation makes its way through Parliament, Hayter said the CMA would be working with the government on what he described as an “important improvement on the consumer protection side”.

The AI Safety Summit is due to take place at Bletchley Park on 1-2 November 2023.


