Key European politicians gathered at the AI Action Summit have committed to cutting “red tape” to ensure artificial intelligence (AI) is able to flourish throughout the continent, signalling closer alignment with the US’s light-touch approach to regulation.
The Paris Summit follows the inaugural AI Safety Summit hosted by the UK government at Bletchley Park in November 2023, and the second AI Seoul Summit in South Korea in May 2024, both of which largely focused on risks associated with the technology and placed an emphasis on improving its safety through international scientific cooperation and research.
However, some civil society groups and AI practitioners are concerned that there has been a shift away from this focus on safety at the latest summit, as politicians and industry figures now seem to prioritise speed and innovation over safety and regulation.
US vice-president JD Vance, for example, told the summit on 11 February: “Excessive regulation of the AI sector could kill a transformative industry … we need international regulatory regimes that foster the creation of AI technology rather than strangle it, and we need our European friends in particular to look to this new frontier with optimism rather than trepidation.”
Vance’s comments follow US president Donald Trump’s revocation, on 20 January 2025, of an Executive Order signed by his predecessor Joe Biden that required AI developers to share safety test results with the US government for systems posing risks to national security, the economy or public health. The move prompted concerns at the time about regulatory divergence between the US, Europe and China.
Vance added that while a light-touch approach does not mean throwing all safety concerns out the window, “focus matters, and we must focus now on the opportunity to catch lightning in a bottle”.
Adopting a more aligned, light-touch regulatory approach was also encouraged by industry figures, on the basis it would boost productivity and innovation.
During a speech delivered on the first day of the AI Summit, Google CEO Sundar Pichai said it was important for different regulatory regimes to be aligned: “AI can’t flourish if there is a fragmented regulatory environment, with different rules across different countries and regions.”
He added that while history will look back on today as “the beginning of a golden age of innovation”, positive outcomes cannot be guaranteed: “European competitiveness depends on productivity, so driving adoption is key … The biggest risk could be missing out.”
Pichai also called for governments to invest more in AI innovation ecosystems, highlighting rapid adoption of the technology throughout France: “How do we create more of these pockets in more places?”
Similar sentiments were shared by OpenAI CEO Sam Altman, who, in an op-ed for Le Monde published ahead of the summit, encouraged European politicians to focus on innovation over regulation: “If we want growth, jobs and progress, we must allow innovators to innovate, builders to build and developers to develop.”
He added: “In Europe, much of the conversation has focused on what former European Central Bank president Mario Draghi has called a European ‘innovation gap’ with the United States and China that poses an ‘existential challenge’ to the EU’s future.”
Both French president Emmanuel Macron and European Union (EU) digital chief Henna Virkkunen strongly indicated that the bloc would simplify its rules and implement them in a business-friendly way to help AI on the continent scale.
“It’s very clear we have to resynchronise with the rest of the world,” said Macron, adding that the French government will adopt a “Notre-Dame” strategy, referring to how the cathedral was rebuilt within five years of the 2019 fire: “We showed the rest of the world that when we commit to a clear timeline, we can deliver … The Notre-Dame approach will be adopted for datacentres, for authorisation to go to the market, for AI and attractiveness.”
Virkkunen added: “I agree with industries on the fact that now, we also have to look at our rules, that we have too much overlapping regulation … We will cut red tape and the administrative burden from our industries.”
Following the announcement that France is set to invest around €109bn in datacentres and AI-related projects over the next few years, Macron declared that France is “back in the AI race”.
European Commission president Ursula von der Leyen, however, dismissed the idea that Europe had been left behind in any way: “The AI race is far from over. Truth is, we are only at the beginning. The frontier is constantly moving, and global leadership is still up for grabs,” adding that Europe’s own “distinctive approach” should focus on collaborative, open-source solutions.
She ended her speech by announcing an additional €200bn for EU AI investment, €20bn of which she indicated would be used on gigafactories to help train very large models: “We provide the infrastructure for large computational power. Researchers, entrepreneurs and investors will be able to join forces.”
However, she added: “At the same time, I know that we have to make it easier, and we have to cut red tape – and we will.”
While it is hoped that world leaders attending the summit will sign a joint, non-binding declaration – a draft of which highlights the importance of inclusive, sustainable approaches to AI, as well as the risks of market concentration around the technology – it has been reported that the US and UK are unlikely to sign.
A ‘worrying shift’
Some are concerned that the rhetoric coming out of the summit indicates a worrying shift in the global AI landscape.
Kasia Borowska, managing director and co-founder of Brainpool AI – a global network of 500 AI and machine learning (ML) experts that build custom AI tools for businesses – said that Vance’s speech in particular means governments are “prioritising innovation over regulation”, adding there are serious questions around the safety of AI’s further development.
“If we rush to win the ‘AI arms race’ without establishing robust control mechanisms for existing AI technologies, we will be ill-prepared to manage AGI [artificial general intelligence],” she said. “Regardless of who achieves AGI first, a race-to-the-top approach that prioritises speed over safety could lead to disastrous consequences for everyone. We must implement proper safeguards now, before we reach AGI, when it may be too late.”
Chris Williams, a partner at global law firm Clyde & Co, added that while there remains enormous hype around what AI can actually achieve, the focus has clearly shifted away from balancing AI safety and innovation.
“The ‘safety first’ narrative around AI, which was once prevalent among those now in government, has clearly given way to a focus on doing what is necessary to foster innovation, and a good example of this is the UK, which aims to become an ‘AI superpower’. No matter the jurisdiction, whether it be the UK or US, the need to create legislative safeguards is being viewed as a ‘nice to have’ rather than an essential cornerstone of developing AI in a way that is safe, responsible and ethical,” he said.
“At this stage, the regulatory response might need to be more fluid and less prescriptive to avoid stifling innovation, but it would likely need to include a long-term view of gradually stepping up checks and balances as AI becomes more advanced.”
Commenting on the draft declaration, Gaia Marcus, director of the Ada Lovelace Institute, said that governments must refocus on the technology’s safety, which dominated the previous two international AI summits.
“Based on the initial draft, we are concerned that the scaffolding provided by the official summit declaration is not strong enough,” she said, adding that while it highlights “widespread consensus” on key structural risks such as AI market concentration and sustainability challenges, “it fails to build on the mission of making AI safe and trustworthy, and the safety commitments of previous summits. There are no tools to ensure tech companies are held accountable for harms. And there is a growing gap between public expectations of safety and government action to regulate.
“There will be no greater barrier to the transformative potential of AI than a failure in public confidence … like-minded countries that recognise the costs of unaddressed risks must find other forums to continue building the safety agenda.”