In this Help Net Security interview, Sarah Pearce, Partner at Hunton Andrews Kurth, offers insights into the evolving landscape of AI legislation and its global impact.
Pearce explores key principles, public participation, the future of AI laws in a world of rapid technological advancements, and how to balance fostering innovation and ensuring effective regulation.
We’re observing a global shift towards AI-specific legislation. Can you provide an overview of the major developments?
Indeed, various governments worldwide are looking to legislate in this field in some form.
Recently, G7 leaders have agreed on the International Guiding Principles on Artificial Intelligence (AI) and a voluntary Code of Conduct for AI developers under the Hiroshima AI Process. The Hiroshima AI Process was established at the G7 Summit in May 2023 to promote guardrails for advanced AI systems on a global level. This is evidence of a shift – and on a global scale.
In terms of legislation, the EU AI Act is by far the most advanced. Importantly, the AI Act will be an EU regulation (like the GDPR), meaning it will be directly applicable in all EU Member States, and Member States do not have to enact local law to make the AI Act effective. The AI Act aims to establish a legal framework for developing and deploying AI systems in the EU.
EU lawmakers have yet to fully agree on the text of the legislation, and it is unlikely the Act will be agreed before December 2023. The fourth meeting was held recently, and certain areas remain unresolved, including the existence and form of exemptions from the “high risk” classification of an AI system; the right of authorities or employers to use AI-powered emotion recognition technology, whereby facial expressions of anger, sadness, happiness, and boredom, as well as other biometric data, are monitored to spot tired drivers or workers; and whether the use of real-time facial recognition cameras on streets and in public spaces should be a right for Member States.
In the UK, while there is currently no legislation directly regulating the use of AI, the UK government has, this year, issued a white paper, “A pro-innovation approach to AI regulation”. The pro-innovation approach means the UK government does not propose directly regulating the use of AI at this stage. Instead, it proposes a principles-based approach built on five core principles, which regulators will have the flexibility to interpret, implement, and enforce in ways that best suit the use of AI in their sectors.
Regulators, such as Ofcom, the Competition and Markets Authority, the Information Commissioner’s Office, the Financial Conduct Authority, and the Medicines and Healthcare products Regulatory Agency, will be asked to interpret and implement the principles.
Much like the G7 International Guiding Principles agreed today, the UK government’s principles bear a resemblance to those set out by the OECD some time ago: safety and security, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress. The timeline for implementing these principles is unclear, but, according to the government consultation, progress will be made within the next 6 to 12 months. The upcoming AI Summit (another indication of a shift in global attention on AI) may help speed up that process.
Finally, we have also just seen the White House unveil its own plan for AI in the form of an Executive Order.
Based on your expertise, what key principles should legislators consider when framing AI regulations to ensure they are effective, fair, and foster innovation?
From a privacy perspective, legislators need to consider the core data protection principles contained in the GDPR. However, this is not without challenge: significant tensions exist, for example, between the protection of personal data and the mass processing of personal data that is inevitable in any AI technology.
Generally speaking, the principles outlined by the OECD, reflected in the G7’s code of conduct and the UK government’s proposal, encompass a good selection of considerations on which to base any legislation/regulation in this field.
How important is it for the public to be involved in discussions and decisions around AI legislation? And how can we ensure their voice is heard and considered in the regulatory process?
It is vital that the public, and indeed all stakeholders, be involved in discussions around AI. The technology companies developing AI, for example, are likely the best placed to understand the technology fully and can help guide any such discussion. Organizations deploying the technology must also be closely involved, as they have a particular viewpoint to offer.
Governments also need to be part of the discussion. The positions of various nations can offer value and help steer decision-making among all the governments represented in this context. Finally, let’s not forget the general public: the individuals whose data will likely be processed by the technology. All play valuable yet different roles and will come with different viewpoints that should be aired and considered.
Many companies view regulation as hindering innovation, especially in tech. What factors contribute to this perception, and is there any merit to these concerns?
Legislation, or any form of regulation, is often seen as restrictive: by its very nature, it comprises a set of rules that govern. In this context, that is often interpreted as hindering development, innovation, and technological advancement. That is a generalist, simplistic, and somewhat dismissive view.
While such concerns may hold true for certain legislation, this need not be the case for all. Much depends on the form that legislation takes. It will be interesting to see, for example, how the principles-based framework proposed by the UK government plays out in practice. In theory, such an approach does not appear restrictive and is intended to allow for flexibility and promote innovation.
How do you see AI legislation evolving in the next 5-10 years, especially considering the rapid advancements in AI capabilities?
I think we will see more legislators worldwide looking to develop some form of legislation or regulatory framework. Ideally, there would be alignment amongst the big global players, and the G7 announcement today is a step in the right direction. It will also be interesting to see how the upcoming summit in the UK helps nudge that supra-national development.
I hope the geopolitical environment doesn’t push governments into taking action too quickly and independently, without coordination. Let’s also hope that whatever form legislation takes, whether local or global, it will be sufficiently agile to withstand rapid technological advancements.