AI governance: mapping the road ahead  


Cast your eye over the tech landscape and in all likelihood you will readily agree that the (data) clouds on the horizon look vastly different today compared to just a few short years ago.

If the focus was on data protection yesterday, today that feels too limiting. We have reached a point where personal data, data protection rights and data in the broadest sense of the word collectively have to grapple with a myriad of laws and regulations. Some, inevitably, touch on data specifically, particularly in the digital regulation space, but increasingly – and understandably – this attention is spilling over into artificial intelligence (AI).

Now, with the UK government’s new Data Use and Access Bill in its embryonic stages, you might be forgiven for thinking that the tech sector is on the cusp of something approaching a cohesive strategy. Yet the evolving global landscape in the regulation of both data and AI more resembles Spaghetti Junction than an ordered approach.

The fact is that, contrary to popular belief, regulation of personal data didn’t actually begin with the GDPR. Twenty years prior to its enactment, we in the UK were regulated by the Data Protection Act 1998, which itself implemented the European Data Protection Directive. And that wasn’t even the first attempt to govern our use of data. After all, to borrow lyrics from the rock band Talking Heads, we know where we’re going.

On that note I’d go as far as to argue that the advent of the GDPR in 2018 did not really represent a fundamental shift in the data protection rights and obligations that we had been managing for years – but it did achieve two very significant things.

On one hand it massively increased the sanctions available to regulators to penalise companies for non-compliance.

On the other hand – and perhaps more importantly – it put data protection on the global agenda in a way that we had not seen before. Data protection concepts and data protection rights became common parlance for individuals and legislators around the world. The GDPR put data on the map.

EU has led the way in data protection but will it in AI?

Since 2018, we have seen a steady influx of data protection laws being enacted far and wide which appear, at least, to be influenced by the European model of personal data legislation. These laws may not be exactly the same but they are often based on similar concepts, rights and obligations. More recently, we have seen a proliferation of EU-style standard contractual clauses being required by jurisdictions around the world to legitimise the transfer of personal data outside of their borders. In this way, the EU has led the way in data protection regulation.

This begs the question: are we seeing, or are we going to see, the same evolution in the regulation of AI?

In simple terms the answer is ‘no’. AI has been around for a long time without specific regulation. But the introduction of generative AI into the mainstream relatively recently definitely seems to have accelerated discussions around both the desire and the need to regulate AI in some way, shape or form. Certainly, in the last few years, a growing number of countries around the world have been grappling with the question of whether to legislate – and how to legislate – for AI.

Turning our focus once again to Europe, as we did with data protection regulation, this could on the face of it look like a case of history repeating itself. Europe has recently adopted its AI Act and, once again, appears to be trying to leverage a first-mover advantage in the evolving AI regulatory landscape. It even managed to push through the legislation at a much faster pace than was achieved with the GDPR despite, or perhaps because of, the incredible amount of hype surrounding AI technology at the moment.

The problem is that hype is often followed by hyperbole.

It seems clear that Europe would like its AI Act to have the same kind of global influence as the GDPR before it. But do we think it will? Some commentators believe that will be the case but the truth is that it’s impossible to know … yet. The EU AI Act is still nascent but the early signs are that the global mood is not yet even aligned on the question of whether or not specific regulation is required, let alone what form that regulation should take.

Three approaches to AI regulation

As Columbia University’s Professor Anu Bradford argues, the legal frameworks currently being adopted by different regions fall into three broad categories: a company-driven approach, predominant in the US; a state-led approach, as seen in China; and a rights-driven approach, as seen in Europe.

Each, of course, has advantages and disadvantages but the disparity is emblematic of a wider issue, yet to be resolved. Will success, for example, be judged on whether regulation is risk based or focused on outcomes? Much, I suspect, depends on whether legislators and regulators are more concerned about the risks associated with AI applications or allowing flexibility so that AI can be used to its full potential and innovation not stifled.

There’s also the question of how prescriptive legislators wish to be. Clear legal frameworks allow for better enforcement but don’t allow the freedom that guidelines or codes of conduct offer – something which is important for such a rapidly evolving technology.

The good old fashioned approach

Somewhere in the middle is what Professor Lilian Edwards refers to as ‘the good old fashioned law approach’: we already have laws in place that deal with issues such as data protection, IP, consumer protection and anti-discrimination. Her point is fair – if these exist already, why do we need new laws that are specific to this new(ish) technology?

Of course, the challenge doesn’t end there. Within each approach there are myriad nuances and differences around the world. The global landscape of AI regulation currently appears to be heading towards something like a giant and complex jigsaw puzzle – and, even more frustratingly, a jigsaw puzzle whose pieces don’t actually fit together to create a global regulatory image.

It may have been challenging to navigate the complexities of multiple data protection laws that are similar but not the same. But the global AI regulatory landscape appears to be evolving in an entirely different and even more challenging direction, leaving global organisations struggling to identify where and when they may be subject to AI regulatory obligations.

All of which brings me back to Talking Heads’ famous lyrics. When it comes to AI it would be a mistake to think we’re on a road to nowhere, but we do need time to work it out.

Miriam Everett is partner and global head of data and privacy at Herbert Smith Freehills
