Open source approaches to artificial intelligence (AI) development are gaining momentum in the wake of the India AI Impact Summit, which positioned the technology as a vehicle for inclusive development across the Global South.
The event’s tagline, “Welfare for all, happiness for all”, signalled a deliberate pivot away from the existential and safety risks of AI, towards economic development and infrastructure expansion.
The summit ended with the New Delhi Declaration on AI Impact, a non-binding agreement backed by 88 countries and international organisations built around principles of inclusive, human-centric AI development. China, the US and the UK were among the signatories.
The other tangible output was the New Delhi Frontier AI Impact Commitments, a set of voluntary agreements announced by the Indian government and endorsed by leading frontier AI companies.
Signatories agreed to two commitments: transparency around real-world AI usage, and strengthened testing of AI systems across underrepresented languages and cultural contexts. The latter aims to ensure frontier AI models are reliable and accessible beyond English-speaking markets, particularly in the Global South.
Within these developments, open source AI emerged as one of the most politically significant themes of the week. At earlier global summits, open source had been portrayed as a security risk, but in Delhi, it moved to the centre of sovereignty and development debates.
The summit’s programme was organised around seven interconnected focus areas (or Chakras), aiming to foster greater multilateral cooperation around AI development, while translating three broader principles (or Sutras) of people, planet and progress into concrete areas of action.
While these commitments made no overt mention of previous summits’ attempts to coordinate government action on AI risks, resistance to open source was notably absent compared with previous years. At the Paris summit, the US and UK had refused to sign a declaration on inclusive and sustainable AI.
Open source takes centre stage
UK AI minister Kanishka Narayan formally endorsed OpenUK and the ambition of making the UK the “home of open source AI” in a video documentary shared at a British High Commission event in India during the summit. The US ambassador to India, Sergio Gor, France’s president, Emmanuel Macron, and India’s prime minister, Narendra Modi, all showed some level of support for open source.
Amanda Brock, CEO of OpenUK, said: “At the very highest level, there is understanding that open source is valuable for sovereignty, for access for all and for innovators, for collaboration, but we saw a lack of understanding of what exactly it is and how it works if you are to be successful.”
Raffi Krikorian, chief technology officer at Mozilla, noted that the global AI industry is currently dominated by a handful of corporations offering vertically integrated proprietary models.
He argued that closed systems cannot adequately reflect the contextual nuances, languages and customisations different societies require.
“A state concerned with AI sovereignty in 2026 cannot credibly justify financing a foreign, vertically integrated AI stack while neglecting investment in domestic and open source alternatives,” said Krikorian.
The scale of power wielded by large tech companies has been likened to the East India Company in the 19th century, when it controlled half of global trade and maintained its own army. The abilities of tech giants to set rules, adjudicate disputes, police speech, and shape labour markets and elections are functions previously associated with sovereign states.
Linda Griffin, vice-president of global policy at Mozilla, noted that attendees were less starry-eyed over big tech than at previous summits: “The consensus agreed that it met no one’s definition of sovereignty for a select few companies to own and control AI.”
She added that summit discussions made it clear that dependency-oriented partnerships are not true partnerships and do not work in the long term. While many countries have expressed a desire for autonomy over their data and choice in their suppliers to lessen harmful impacts on citizens, “that’s not today’s reality”.
Griffin told Computer Weekly that discussion of open source was unavoidable, in part because the summit was held in the Global South, where it has become increasingly clear that only open source models will give countries a fighting chance of capitalising on the developmental and economic opportunities afforded by AI.
She stressed the progress that has been made in taking the importance of open source seriously, noting that at the first AI Safety Summit in 2023, open source was vilified as a security risk.
“At the France AI Action Summit, the consensus began to shift meaningfully. At the India AI Impact Summit, we saw undeniable recognition of the vital role that open source plays in our collective AI future,” said Griffin.
Comparing open with closed approaches, she argued that with proprietary systems, winning means owning. With open source, by contrast, winning means not renting AI from a few companies and countries, but “enabling countries to build, share, secure and inspect systems on their own terms”.
She warned that market concentration remained the elephant in the room.
Anti-trust and competition law
Mozilla advocated for stronger competition enforcement and user-centric regulation at the summit, noting that traditional antitrust mechanisms have struggled to keep pace with fast-moving digital markets.
Griffin noted that Mozilla was one of the few organisations that ran a competition panel at the summit, stressing that these frameworks are essential to prevent policy capture and ensure AI ecosystems remain open, resilient and accountable.
She added that competition law will give open models a fighting chance, preventing a few AI giants from monopolising model hosting, cloud compute or inference pipelines. In theory, they would level the playing field, ensuring smaller players, startups and governments have access to AI capabilities without being dependent on a few corporations.
Griffin added that web browsers, for example, represent a critical chokepoint in the closed AI stack, highlighting how control of popular browser infrastructure by just a few firms threatens to entrench their positions in AI by virtue of the access to compute and data it gives them.
Regulation vs competitiveness
Globally, governments remain wary of imposing AI regulation that they believe could undermine economic competitiveness or military advantage.
India’s push to widen access and introduce a framework for global AI governance was largely dismissed by Washington and leading US tech companies, with White House official Michael Kratsios saying, “we totally reject global governance of AI”, on the last day of the summit.
Griffin said the narrative against regulation became a blanket mantra, applied to anything from AI governance to competition action.
She added: “What’s more likely to kill a startup: the cost of compliance, or the concentration of market power in the hands of a few dominant players? It’s true that regulation can absolutely create challenges. However, it is also worth looking at whether the greater obstacle is the control a small number of tech companies hold.”
The European Union (EU) was a magnet for criticism at the summit, given its recent attempts to regulate the technology through its AI Act. This aims to provide developers and deployers with “clear requirements and obligations regarding specific use of AI” through a regulatory framework that defines four levels of risk for AI systems: unacceptable risk, high risk, limited risk and minimal risk.
Griffin argued that much of the public commentary on EU AI regulation has been factually incorrect. “It’s hard not to see invalid criticisms as a strategic PR effort by those who philosophically (and financially) oppose governance,” she said.
In practice, the EU AI Act does not introduce rules for AI that are deemed minimal or no risk – the vast majority of AI systems currently used in the EU fall into this category. This includes applications such as AI-enabled video games or spam filters.
For all the criticism levied against EU regulation, the strict compliance regime for high-risk AI systems – including for biometrics, education, law enforcement, immigration and critical infrastructure – is still being phased in.
Bans on unacceptable-risk systems, including social scoring and biometric categorisation to deduce certain protected characteristics, have been in place since February 2025. By contrast, the US has struggled to regulate AI at the federal level, where efforts have instead sought to preempt more ambitious state-level legislation.
Meanwhile, China has pursued one of the most assertive regulatory approaches, breaking up major firms between 2020 and 2023 – including dividing Alibaba Group into six new entities.
China’s model is actor-based, requiring security assessments and algorithm filings with the Cyberspace Administration of China, embedding content control and political compliance into the regulatory framework.
Griffin said regulation is unavoidable and that a risk-based regulatory framework has been years in the making. She is optimistic it will be harder for critics to dismiss the EU AI Act outright once it has been fully implemented and its effects can be observed in practice.
Legal academic Simon Chesterman has previously compared AI regulation to nuclear governance.
“In the 1950s, nuclear governance emerged against the backdrop of unmistakable devastation and a clear existential threat. AI presents no such singular moment of reckoning. Its harms are diffuse: disinformation, labour displacement, surveillance and market concentration. Without a catalytic crisis, coordination remains elusive,” he said.
Chesterman has warned that the first AI emergency may not be an “existential catastrophe” but “the steady hollowing out of public authority”.
Scale over substance
Despite the summit’s rhetoric of inclusivity, civil society representatives questioned the depth and balance of participation.
OpenUK’s Brock criticised what she described as a focus on spectacle over substance. Many discussions, she said, prioritised scale and high-profile speakers over technical expertise and meaningful engagement.
“A pre-summit paper with an ontology would have been helpful,” she said. “As an overview of topics of interest, the Sutras and Chakras were meaningful, but the conversations that followed were often not clearly defined, and there was a lack of clarity on the meanings of topics.”
Brock argued that this dynamic risked distorting policy conversations: “This inevitably leads to a form of ‘policy capture’ where not only is the policy conversation captured by money (wealthy companies that want to drive the agenda buy their way into the room), but it’s captured by those who have the ability to be in the room because of their policy roles.”
She said the imbalance was particularly visible in conversations about open source. “We were talked at, rather than asked to participate, and that includes those of us listed on the site as key attendees.”
One example cited was Sarvam AI, a government-funded initiative that launched what were described as “smaller, efficient, open source AI models” during the summit. According to Brock, closer inspection of the licensing revealed that the models were neither open source nor open weights, but covered by a proprietary licence – a pattern she characterised as “open washing”.
Concerns going forward
While open source undeniably gained political legitimacy, participants stressed that recognition alone is insufficient.
“The declaration, including open source AI, is a great start, but we have to see this go from overall policy statements into real-world impact,” said Brock.
She said OpenUK would begin engaging with Switzerland, next year’s host, in the coming days, noting the country’s law making open source the default for publicly funded code.
Griffin stressed how important these summits are for bringing often disparate international stakeholders together. She said she is conscious that voluntary agreements are always operating within larger geopolitical climates, warning that they are meaningless unless commitments are held to a benchmark and progress is tracked over time.