ICO responds to UK government AI regulation plans

The Information Commissioner’s Office (ICO) has welcomed the UK government’s proposals to regulate artificial intelligence (AI), but called for greater clarity on how regulators should collaborate and how the suggested AI principles will align with existing data protection rules.

Published on 29 March, the government’s AI whitepaper outlined its “adaptable” approach to regulating AI, which it claimed will drive responsible innovation while maintaining public trust in the technology.

As part of its proposed “pro-innovation” framework, the government said it would empower existing regulators – including the ICO, the Health and Safety Executive, the Equality and Human Rights Commission and the Competition and Markets Authority – to create tailored, context-specific rules that suit the ways AI is being used in the sectors they scrutinise.

These regulators will also be expected to collaboratively produce guidance for businesses using AI, as well as run regulatory sandboxes to trial AI in real-life situations under their close supervision. The government suggested this could be done through existing cross-regulatory initiatives such as the Digital Regulation Cooperation Forum (DRCF).

The whitepaper further outlined five principles that these regulators should consider when discharging their oversight duties: safety and security; transparency and explainability; fairness; accountability and governance; and contestability and redress.

Responding to the whitepaper, the ICO said that while it welcomes the government’s intention to implement a joined-up regulatory approach, the government should prioritise research into the kinds of guidance that different AI developers would find helpful.

“For example, it is likely that sector- or use case-specific guidance will be of greater usefulness to AI developers than joined-up guidance on each non-statutory principle,” it said.  

“The latter may be too high level, and therefore require a large degree of interpretation by AI developers, to provide practical guidance on a specific issue that a business faces. Research could surface the most helpful focus for future guidance.”

It added that clarification is also needed on the respective roles of government and regulators in the issuing of guidance, as “businesses will require confidence that implementing any guidance or advice will minimise the risk of legal or enforcement action by regulators.”

The ICO therefore encouraged the government to work through regulators – particularly the DRCF, which it said already plays a horizon-scanning role in identifying and examining the implications of new AI applications – to clarify and deliver its AI ambitions.

On the suggested AI principles, the ICO said that while they already map closely to those found in the UK’s data protection framework, it is important they are interpreted in a way that is compatible with existing data protection rules, so as to avoid additional burdens or complexity for businesses.

As an example, the ICO said the concept of fairness should be extended to cover every stage of an AI system’s development, and that clarification is needed on whether it is regulators or organisations themselves that are expected to clarify routes to redress under the contestability principle.

“Typically, it is organisations using AI and that have oversight over their own systems that are expected to clarify routes to, and implement, contestability,” it said. “We would welcome clarity around this sentence, and would like to understand whether the scope for regulators such as the ICO may be better described as making people more aware of their rights in the context of AI.”

It added that further clarity is also needed around how the AI regulations will interact with Article 22 of the UK General Data Protection Regulation (GDPR), which gives people the right not to be subject to solely automated decisions that have legal or similarly significant effects.

“The paper notes that regulators are expected, where a decision involving the use of an AI system has a legal or similarly significant effect on an individual, to consider the suitability of requiring AI system operators to provide an appropriate justification for that decision to affected parties,” it said.

“We would like to highlight that where an AI system uses personal data, if UK GDPR Article 22 is engaged, it will be a requirement for AI system operators to be able to provide a justification, not a consideration. We suggest clarifying this to ensure this does not create confusion for industry.”

Over the next six months, the government said it will consult with a range of actors on its whitepaper proposals, work with regulators to help them develop guidance, design and publish an AI regulation roadmap, and analyse findings from commissioned research projects to better inform its understanding of the regulatory challenges around AI.

In October 2022, the House of Commons Science and Technology Committee launched an inquiry into the UK’s AI governance, to scrutinise whether the government’s proposed approach – now formalised in the whitepaper – is the right one.

The inquiry will be particularly focused on bias in algorithms and the lack of transparency around both public and private sector AI deployments, and is set to explore how automated decisions can be effectively challenged by ordinary people.

A House of Lords AI in Weapon Systems Committee was also established in January 2023 to explore the ethics of developing and deploying autonomous weapons, including how they can be used safely and reliably, their potential for conflict escalation, and their compliance with international laws. The first session was held at the end of March 2023.

At the start of March, the government introduced a revised version of its post-Brexit data protection reforms to Parliament.

The government said the bill, known as the Data Protection and Digital Information Bill, will support increased international trade without creating extra costs for businesses already compliant with existing data protection rules, as well as boost public confidence in the use of AI technologies by clarifying the circumstances in which safeguards apply to automated decision-making.

For instance, if an automated decision has been taken without “meaningful human involvement”, an individual will be able to challenge that decision and request that another person review the outcome instead. However, the government has not specified what meaningful human involvement would look like.
