AI Safety Summit review


After all the hype and debate about the AI Safety Summit, there’s one question that really matters: what will change as a result? While there was general consensus among attendees that the summit was a good first step – particularly because of the involvement of China amid rising tensions with western governments – many others were concerned about what comes next and whether more diverse perspectives will be included in future.

On the first day of the summit, for example, Matt Clifford – a former prominent investor in the technology and the prime minister’s representative on artificial intelligence (AI) – said it was “remarkable” to have pulled off so much in the 10 weeks since the summit was announced, from the inclusion of the Chinese government and the confirmation of two future summits (in South Korea and France), to getting 28 countries and the European Union (EU) to sign the Bletchley Declaration affirming the need for an inclusive, human-centric approach to ensure AI’s trustworthiness and safety.

Speaking with press on the second day of the summit, Tino Cuéllar, president of the Carnegie Endowment for International Peace, was similarly positive about “so many different countries” being brought to the table, adding that the presence of China was particularly important because it indicated the conversation around AI going forward would be “truly global”.

Cuéllar added there was “good progress” in thinking about the medium- to long-term risks of more advanced models, and praised the commitments made to deepening international cooperation on AI safety.

In particular, Cuéllar noted a number of “concrete outcomes” from the summit, including the US’s announcement of its own AI Safety Institute (which the US secretary of commerce confirmed would establish a “formal partnership” with the UK’s version, announced by prime minister Rishi Sunak a week earlier); the “real enthusiasm” he saw for establishing a panel of scientists to provide international consensus on the science behind AI; and recognition that supranational bodies such as the United Nations and G7 need to play a central role in global AI governance efforts.

In the wake of the summit, the UK government announced it had commissioned an independent “State of the Science” report on the capabilities and risks of frontier AI, which will be advised by an “expert advisory panel” comprising representatives from countries attending the summit.

“Rather than producing new material, it will summarise the best of existing research and identify areas of research priority, providing a synthesis of the existing knowledge of frontier AI risks. It will not make policy or regulatory recommendations, but will instead help to inform both international and domestic policy making,” said the UK government in a policy note, adding the report would be published before the next summit in South Korea.

Similar sentiments were echoed by Tech London Advocates founder Russ Shaw, who – while not in attendance at the summit – said it was “a positive step” to get governments with such differing views in the room, adding that the UK’s leadership role in convening them “has a nice halo effect on the UK tech sector” and puts it in a good position to arbitrate between the US, the EU and China going forward.

Commenting on the signing of the Bletchley Declaration, Shaw said we shouldn’t “underestimate the degree of difficulty of just getting those people in the room to do that” and that the common ground found could act as a foundation for future action.

Julian David, CEO of trade association TechUK, also praised the UK government’s ability to bring China and the US to the same table, as well as the breadth of delegates from around the world.

“If you’ve got China and the US in the same discussion, you’re going to have a difference about what they think exactly AI should be doing for good… many people have said this, and I’ve wondered about it myself – would we do better to just have people like us together, and I think the answer is no,” he said.

However, Shaw said he would like to see the governments involved “put more flesh on the bones” in terms of what form international cooperation will take on AI safety going forward, while David similarly emphasised the need for “continued dialogue” to deliver on those commitments to deepening inter-state cooperation.

“If it just becomes a showpiece every six months… that’s not going to do it,” said David, adding there needs to be particular focus on how any testing regimes will evolve so they are not dominated by the more powerful countries involved.

“The willingness of each of the regulatory regimes to talk to each other about what they’re doing and why they’re doing it, and to try to bring some sort of consensus out of that, I think, is what the Bletchley Park process, the Safety Summit concept, can achieve,” he added. “But you’ve got to get back to practical implementation and outcomes. That was a big thing at all the roundtables I was in.”

Despite the positivity of government officials and tech industry representatives, there was also concern from some about the inclusivity and utility of the event.

More than 100 trade unions and civil society organisations, for example, panned the summit in an open letter as a “missed opportunity”, citing its focus on speculative future risks over real-world harms already happening, the exclusion of workers from the event, and its domination by big tech firms.

While signatories were mostly organisations, some prominent individuals also signed, including Tabitha Goldstaub, former chair of the UK’s AI Council, and Neil Lawrence, a professor of machine learning at the University of Cambridge, who was previously interim chair of the Centre for Data Ethics and Innovation’s (CDEI) advisory board before it was quietly disbanded by the government in early September 2023.

The balance of risks

On the first day of the summit, Ian Hogarth, entrepreneur and chair of the UK government’s £100m Frontier AI Taskforce, said a number of experts were concerned about uncontrolled advances in AI leading to “catastrophic consequences”, and that he was personally worried about a situation where progress in the technology outstrips our ability to safeguard society.

“There’s a wide range of beliefs in this room as to the certainty and severity of these risks – no one in this room knows, for sure, how or if these next jumps in computational power will translate to new model capabilities or harms,” he said, adding that the Taskforce he heads up has been trying to ground an understanding of these risks in “empiricism and rigour”.

Speaking about the focus on future risks, Shaw said it was positive because many governments get “caught up in the here and now”, meaning they are not devoting as much time to horizon scanning and forward planning.

“Hopefully, inside meetings and discussions afterwards, they can bring it back to the here and now, but at least they’ve got a framework about where to go,” he said.

Speaking at the AI and Society Forum a day before the government summit, in a session about the risks of large-scale datasets and models scaling hate and dehumanisation, Abeba Birhane, a senior advisor on AI accountability at the Mozilla Foundation, argued that extinction-risk narratives around AI are scientifically ungrounded and act as a smokescreen that lets those developing and deploying the technology evade responsibility.

Commenting on the mainstreaming of AI since the release of tools such as the image generator Dall-E and the large language model (LLM)-based chatbot ChatGPT – both of which were picked up by millions of users within a matter of weeks – Birhane said that while lots of concerns have surfaced, much of the discussion has been dominated by the spectre of a “superintelligence” and the potential of human extinction.

“Framing all the ethical or societal concerns around non-existent issues such as existential risk is driven by the people who are developing and deploying the technology,” she said, adding they tend to take these “ungrounded concerns and run with them”.

She said while the conversation is largely dominated by such abstract concerns, there is much less appetite to discuss the “real, actual, concrete harms” taking place right now.

Testing and evaluating for safety

While the AI Safety Summit’s roundtable discussions were off-limits to the press, meeting chairs – including Cuéllar and UK digital secretary Michelle Donelan – outlined in their feedback a consensus on the need for proper testing and evaluation of AI models going forward to ensure their safety and reliability.

“We need more comprehensive and better quality technical evaluations of AI models, which include societal metrics and recognise the context of their application in the real world. Evaluations need to be continuous and handle workflows, not just static datasets,” said Marietje Schaake, international policy director at the Stanford Cyber Policy Center, in her feedback from a roundtable on the risks of integrating frontier models into society.

“We should invest in basic research, including in governments’ own systems. Public procurement is an opportunity to put into practice how we will evaluate and use technology,” she added.

Outlining the idea that LLMs are simply “stochastic parrots” – producing outputs that appear correct but are in fact probabilistically stitched together from training data by models with no ability to parse meaning – Birhane said the technology was highly unreliable, with textual outputs requiring extensive vetting and image outputs tending to produce the most stereotypical, even caricatured, depictions of different races and ethnicities.

Giving an example, she noted how one model produced a scientific paper on the health benefits of eating crushed glass, complete with a nutritional breakdown.

Part of the reason for their unreliability, Birhane said, is the intense competition between firms developing the models: “Because the race is to produce the biggest model with the most parameters trained on the biggest dataset, there tends to be very little attention, or very little time, to do proper auditing, proper examining… before these models are released.”

In her own research, Birhane has examined whether the likelihood of LLMs producing toxic or problematic outputs scales with the size of the dataset in use, finding through audits that the presence of hateful or aggressive content did indeed increase as datasets got bigger.

She added this undermines the “common assumption” that using larger datasets will somehow balance out the good and bad outputs so they cancel each other out: “Unfortunately, a lot of the [AI] field is driven by assumption, very little of it is tested.”
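To make the shape of such an audit concrete, the sketch below is a minimal illustration of the general idea rather than a description of Birhane’s actual methodology: it estimates the rate of flagged content in samples drawn from dataset snapshots of different sizes. The term list, snapshot names and helper functions are all hypothetical, and real audits use trained classifiers and human review rather than keyword matching.

```python
# Minimal, illustrative sketch only – NOT Birhane's actual methodology.
# The placeholder term list, snapshot names and keyword heuristic are invented
# for this example; real audits rely on trained classifiers and human review.
import random

FLAGGED_TERMS = {"hateful_term_a", "hateful_term_b"}  # hypothetical placeholders

def is_flagged(text: str) -> bool:
    """Crude proxy: flag an item if it contains any placeholder term."""
    return bool(set(text.lower().split()) & FLAGGED_TERMS)

def audit_rate(dataset: list[str], sample_size: int = 1000) -> float:
    """Estimate the fraction of flagged items in a random sample of the dataset."""
    sample = random.sample(dataset, min(sample_size, len(dataset)))
    return sum(is_flagged(item) for item in sample) / len(sample)

# Comparing the estimated rate across snapshots of increasing size would show
# whether flagged content becomes more, not less, prevalent as the data scales:
#
#   for name, snapshot in [("smaller snapshot", small_data), ("larger snapshot", big_data)]:
#       print(name, audit_rate(snapshot))
```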

A union perspective

Mary Towers, a policy officer at the Trades Union Congress (TUC), also speaking at the AI and Society Forum, provided insights into how AI was currently being used in workplaces.

“Firstly, [AI is being used] to carry out all the functions at work that a human manager would have done in the past – so to recruit, line manage, direct, determine and assess how people carry out their work,” she said, adding it was further transforming work by either fully automating some jobs, or requiring people in other jobs to work alongside it in new ways.

“For example, generative AI is being used to carry out clerical tasks, write code, respond to queries, even to make music and art… [while] teachers are working with AI-assisted learning for their pupils, healthcare professionals are using AI for diagnosis and treatment, and warehouse operatives are working alongside AI-enabled robots.”

On the impacts these changes are having, Towers said AI-powered algorithmic management could result in a range of negative outcomes, including unfairness where context is not taken into account, privacy and data rights violations, difficulties for people in challenging AI-assisted decision-making, and the intensification of work to unsustainable levels.

In terms of automation and the transformation of roles, she added the negative impacts include “potential job loss, significant training and reskilling requirements, and many ethical, professional dilemmas”.

However, Towers said that while AI also provides clear opportunities for workers and unions, it needs effective regulation both to protect against harms and to “provide the certainty that we need for fair and socially useful innovation to flourish”.

She added any new approach should place a strong emphasis on collective bargaining rights: “Collective agreements with employers, we say, are the ideal vehicle for the co-governance of AI at work – they’re flexible, they can be adapted for sector and workplace requirements, but can also be easily adapted and quickly adapted to technological developments.”

In terms of what these agreements should include, Towers said they should place an emphasis on power dynamics in the workplace, especially regarding the collection and use of data; transparency; explainability; human review and consent; and key principles such as the tech not being used to dehumanise workers by turning them into data points on a screen.

Further stressing the importance of collectivism, Towers noted that people organising together play an integral role in providing a counterbalance to the corporate interests currently dominating AI.

“It’s in the spirit of collaboration and collectivism that we’re so proud of the open letter to the prime minister… objecting to the marginalisation of civil society from the AI Summit. In our letter, we outlined why it’s so crucial for a range of voices to be heard at the summit and beyond, because if we’re going to make sure AI works, then we all need to have a say.”

A range of other people Computer Weekly discussed the summit with – including Shaw and Dutch digital minister Alexandra van Huffelen – said they would like to see more involvement from trade unions and other diverse communities in future summits and collaboration efforts.


