Inclusive approaches to AI governance needed to engage public

People’s inclusive participation in both the public and private management of artificial intelligence (AI) systems is key to making the technology work for the benefit of all, but there are currently no avenues for meaningful public engagement.

During the fourth annual AIUK event run by the Alan Turing Institute (ATI), delegates speaking throughout the day stressed the importance of people’s “structural participation” in public AI projects, from ideation all the way through to completion and final delivery.

However, according to the government chief scientific adviser, Angela McLean, there are no viable channels available to the public that would allow them to have their voices heard on matters of science and technology, such as AI, that clearly affect them. “Successful adoption [of AI] is going to need public participation and the earning of public trust, and that means it’s up to us in the civil service to be trustworthy,” she said in her keynote address.

Responding to questions about current opportunities for meaningful citizen engagement, McLean said she was not aware of any consultation processes that would achieve this, but that it would be “very interesting” if there were one: “What we have here is the chance to review and reset some of the ways that the government interacts with people; I think this is a very big deal.”

Citing a popular phrase of the disability rights movement, “Nothing about us without us”, ATI research associate Georgina Aitkenhead said during a panel on inclusivity that it means “we shouldn’t be doing things that will impact people in society without those people being involved in setting priorities and contributing to decisions.”

She added that this emphasis on inclusivity needs to be brought into the development, design and deployment of technology from the outset.

James Scott, a senior adviser to the government’s Responsible Technology Adoption Unit, agreed, noting that “often we see surface-level engagement, it’s often quite late, and therefore it has really limited capacity to change direction because the key decisions have already been taken”.

Participatory approaches

Scott – who has also co-led the development of the ATI’s AutSpaces, a completely open source “citizen science” platform built around supporting the needs of autistic people – added that participatory research projects have real potential to allow communities to set the direction of travel in decisions that affect them.

“Essentially, that platform will help autistic people who know their sensory experiences to create a better data set to be used for research and to improve public services,” he said, adding that the ultimate goal is to improve the process of public services design by including the needs of non-neurotypical people from the outset, which then benefits everyone.

Similar sentiments about the importance of meaningful engagement with different communities at an early stage were echoed by Helena Hollis, a social researcher at advocacy group Connected by Data, which organised an experimental People’s Panel on AI in November 2023 to engage representative members of the public in a deliberative process on what to do with the technology.

The panel was organised partly in response to the narrow sectoral focus of the government’s AI Safety Summit at Bletchley Park, which took place at the same time. “It turned out to be a really effective, meaningful and useful thing to have done,” said Hollis.

She added that inclusivity has to be considered from the very outset, and baked into the questions researchers and policymakers ask. “If we’re going to ask, ‘What can AI do, and how can we develop this technology?’ you are inherently only going to be inviting certain people to answer your question,” said Hollis. “If you ask, ‘What sorts of life do we want to live, and how does AI support that, or how could it support that?’ Well, then anybody who lives a life can answer that question through their lived experience.”

Hollis also noted that members of the panel continue to engage with Connected by Data, and have expressed a strong desire to see similar participatory practices scaled up.

Commenting on the experience, People’s Panel participant Margaret Colling said that, following an intensive four-day learning process, there was a sense of excitement and empowerment among members.

“I would never have expected to have any cohesive opinions on such technically advanced subject matter,” she said. “But there was so much I wanted to say – suddenly we were all powerful representatives of our various communities, and our voices mattered.

“We the general public are perfectly capable of forming opinions, on any matter pertaining to our daily lives, if given the relevant information. We can offer a different perspective, often voicing concerns that otherwise might not be mentioned.”

Power discrepancies

During a panel on the potential development of artificial general intelligence (AGI), speakers expressed concerns that AI technology is largely owned by a small number of foreign companies, mainly from the US and China.

Michael Wooldridge, director of foundational AI research at the ATI, for example, said it was “potentially the most consequential technology of the 21st century, and we’re fundamentally not part of that in the sense of owning it”.

John McDermid, a computer scientist at the University of York, also said the concentration of AI technology in relatively few hands is a “real problem” for a number of reasons.

First, it means that, from a safety perspective, engineers independent of the AI-owning firms cannot assess the potential risks and harms, because the models are largely locked in corporate silos. Given how quickly these models change, McDermid added, small modifications to the systems can have a litany of knock-on effects that cannot be understood without that access.

He also noted that only giant corporations currently have the financial and computational resources to research and develop the most cutting-edge models, giving them enormous sway over what gets deployed in the world.

“The concern is that the power ends up with a small number of organisations, which behave as organisations do – they try to maximise their profit rather than necessarily do social good,” said McDermid.

Input from workers

McLean said AI-related decision-making needs broader input from a more diverse range of expert voices beyond computer science, given the socio-technical nature of the technology. Other speakers throughout the day added that workers also need a say, owing to their expertise in the details of day-to-day business operations.

Commenting on the lack of protections workers throughout the gig economy receive from both their employers and the state – including in the labour-intensive data labelling work necessary for AI to function – sociologist Karen Gregory said they have had no choice but to turn to one another for support on the job. “There’s a tremendous amount of mutual aid here,” she said. “Workers in these fields simply want work to work, and it doesn’t.”

“They are the experts of these systems,” said Gregory. “They know precisely where they’re failing. If you talk to a delivery rider in Edinburgh, he or she can tell you where every pothole is in the city, but there’s no ‘voice’ mechanism there [from workers’ informal back channels] to regulation.”

Linking this back to more “professionalised”, white-collar parts of the tech sector, ethical AI researcher and United Tech and Allied Workers (UTAW) union member Matt Buckley added that he finds it “bizarre” that in discussions around AI, it is executives and salespeople who are consulted first, rather than the workers on the ground who actually build, maintain and deeply understand the technology.

For Gregory, the answer lies in creating effective communication channels that capture workers’ voices and ideas for improving AI. For Buckley, this extends to two-way communication between workers and bosses, giving ordinary people more of a say over how AI is deployed in their workplaces, too.
