Google drops pledge not to develop AI weapons


Google parent company Alphabet has dropped its pledge to not use artificial intelligence (AI) in weapons systems or surveillance tools, citing a need to support the national security of “democracies”.

In a blogpost published in June 2018, Google CEO Sundar Pichai outlined how the company would “not pursue” AI applications that “cause or are likely to cause overall harm”, and specifically committed to not developing AI for use in “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”.

He added that Google would also not pursue “technologies that gather or use information for surveillance violating internationally accepted norms”.

Google – whose company motto ‘Don’t be Evil’ was replaced in 2015 with ‘Do the right thing’ – defended the decision to remove these goals from its AI principles webpage in a blogpost co-authored by Demis Hassabis, CEO of Google DeepMind; and James Manyika, the company’s senior vice-president for technology and society.

“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights,” they wrote on 4 February.

“And we believe that companies, governments and organisations sharing these values should work together to create AI that protects people, promotes global growth and supports national security.”

They added that Google’s AI principles will now focus on three core tenets: “bold innovation”, which aims to “assist, empower, and inspire people in almost every field of human endeavour” and address humanity’s biggest challenges; “responsible development and deployment”, which means pursuing AI responsibly throughout a system’s entire lifecycle; and “collaborative progress, together”, which is focused on empowering “others to harness AI positively”.

Commenting on Google’s policy change, Elke Schwarz – a professor of political theory at Queen Mary University of London and author of Death Machines: The Ethics of Violent Technologies – said that while the move is “not at all surprising”, given the company has already been supplying the US military (and reportedly the IDF) with cloud services, she is still concerned about the shifting mood among big tech firms towards military AI, many of which now argue it is “unethical not to get stuck in” developing AI applications for this context.

“That Google now feels comfortable enough to make such a substantial public change without having to face a significant backlash or repercussions gives you a sense of where we are with ethical concerns about profiteering from violence (to put it somewhat crudely). It indicates a worrying acceptance of building out a war economy,” she told Computer Weekly, adding that Google’s policy change highlights a clear shift: the global tech industry is now also a global military industry.

“It suggests an encroaching militarisation of everything. It also signals that there is a significant market position in making AI for military purposes and that there is a significant share of financial gains up for grabs for which the current top companies compete. How useful this drive is toward AI for military purposes is still very much speculative.”

Experts on military AI have previously raised concerns about the ethical implications of algorithmically enabled killing, including the potential for dehumanisation when people on the receiving end of lethal force are reduced to data points and numbers on a screen; the risk of discrimination during target selection due to biases in the programming or criteria used; as well as the emotional and psychological detachment of operators from the human consequences of their actions.

There are also concerns over whether there can ever be meaningful human control over autonomous weapons systems (AWS), due to the combination of automation bias and how such weapons increase the velocity of warfare beyond human cognition.

Throughout 2024, a range of other AI developers – including OpenAI, Anthropic and Meta – walked back their own AI usage policies to allow US intelligence and defence agencies to use their AI systems, while still maintaining that they do not allow their AI to harm humans.

Computer Weekly contacted Google about the change – including how it intends to approach AI development responsibly in the context of national security, and if it intends to place any limits on the kinds of applications its AI systems can be used in – but received no response.

‘Don’t be Evil’

The move by Google has attracted strong criticism, including from human rights organisations concerned about the use of AI for autonomous weapons or mass surveillance. Amnesty International, for example, has called the decision “shameful” and said it would set a “dangerous” precedent.

“AI-powered technologies could fuel surveillance and lethal killing systems at a vast scale, potentially leading to mass violations and infringing on the fundamental right to privacy,” said Matt Mahmoudi, a researcher and adviser on AI and human rights at Amnesty.

“Google’s decision to reverse its ban on AI weapons enables the company to sell products that power technologies including mass surveillance, drones developed for semi-automated signature strikes, and target generation software that is designed to speed up the decision to kill.

“Google must urgently reverse recent changes in AI principles and recommit to refraining from developing or selling systems that could enable serious human rights violations. It is also essential that state actors establish binding regulations governing the deployment of these technologies grounded in human rights principles. The facade of self-regulation perpetuated by tech companies must not distract us from the urgent need to create robust legislation that protects human rights.”

Human Rights Watch similarly highlighted the problematic nature of self-regulation through voluntary principles.

“That a global industry leader like Google can suddenly abandon self-proclaimed forbidden practices underscores why voluntary guidelines are not a substitute for regulation and enforceable law. Existing international human rights law and standards do apply in the use of AI, and regulation can be crucial in translating norms into practice,” it said, noting that while it is unclear to what extent Google was adhering to its previous principles, Google workers have at least been able to cite them when pushing back on irresponsible AI practices.

For example, in September 2022, Google workers and Palestinian activists called on the tech giant to end its involvement in the secretive Project Nimbus cloud computing contract, which involves the provision of AI and machine learning (ML) tools to the Israeli government.

They specifically accused the tech giant of “complicity in Israeli apartheid”, and said they feared how the technology would be used against Palestinians, citing Google’s own AI principles. A Google spokesperson told Computer Weekly at the time: “The project includes making Google Cloud Platform available to government agencies for everyday workloads such as finance, healthcare, transportation and education, but it is not directed to highly sensitive or classified workloads.”

Human Rights Watch added: “Google’s pivot from refusing to build AI for weapons to stating an intent to create AI that supports national security ventures is stark. Militaries are increasingly using AI in war, where their reliance on incomplete or faulty data and flawed calculations increases the risk of civilian harm. Such digital tools complicate accountability for battlefield decisions that may have life-or-death consequences.”

While the vast majority of countries are in favour of multilateral controls on AI-powered weapons systems, European foreign ministers and civil society representatives noted during an April 2024 conference in Vienna that a small number of powerful players – including the UK, US and Israel – are holding back progress by being part of the select few countries to oppose binding measures.

Timothy Musa Kabba, the minister of foreign affairs and international cooperation in Sierra Leone, said at the time that for multilateralism to work in the modern world, there is a pressing need to reform the UN Security Council, which is dominated by the interests of its five permanent members (China, France, Russia, the UK, and the US).

“I think with the emergence of new realities, from climate change to autonomous weapons systems, we need to look at multilateralism once again,” he said, noting that any new or reformed institutions will need to be inclusive, democratic and adaptable.


