The ethical justifications for developing and deploying artificial intelligence (AI) in the military do not hold up to scrutiny, especially those regarding the use of autonomous weapon systems, says AI ethics expert Elke Schwarz.
An associate professor of political theory at Queen Mary University of London and author of Death machines: The ethics of violent technologies, Schwarz says that voices urging caution and restraint when it comes to the deployment of AI in the military are “increasingly drowned out” by a mixture of businesses selling products and policymakers “enthralled and enamoured with the potential” of AI.
Governments around the world have long expressed clear interest in developing and deploying a range of AI systems in their military operations, from logistics and resource management to precision-guided munitions and lethal autonomous weapon systems (LAWS) that can detect, select and engage targets with little or no human intervention.
Although the justifications for military AI are varied, proponents often argue that its development and deployment are a “moral imperative” because it will reduce casualties, protect civilians and generally prevent protracted wars.
Military AI is also framed as a geopolitical necessity, needed to maintain a technological advantage over current and potential adversaries.
“You have to think about what war is and what the activity of war is,” Schwarz tells Computer Weekly. “It’s not an engineering problem. It’s not a technological problem either. It’s a socio-political problem, which you can’t solve with technology, or even more technology – you do quite the contrary.”
She adds that when she attended the Responsible Artificial Intelligence in the Military Domain (REAIM) conference in mid-February 2023 – a global summit to raise awareness and discuss issues around AI in armed conflicts – government delegates from around the world were very excited, if slightly trepidatious, about the prospect of using AI in the military. Only one person – a delegate from the Philippines – spoke about what AI can do for peace.
“There was one voice that actually thought about how we can achieve a peaceful context,” she says.
Ethical killing machines
Schwarz says the notion of “ethical weapons” only really took off after the Obama administration began making heavy use of drones to conduct remote strikes in Iraq and Afghanistan – a practice its defenders claimed would reduce civilian casualties.
“Over a decade’s worth of drone warfare has given us a clear indication that civilian casualties are not necessarily lessened,” she says, adding that the convenience enabled by the technology actually lowers the threshold for resorting to force. “Perhaps you can order a slightly more precise strike, but if you are more inclined to use violence than before, then of course civilians will suffer.”
She adds that the massive expansion of drone warfare under Obama also led many to argue that the use of advanced technologies in the military is a “moral imperative” because it safeguards the lives of a military’s own soldiers, and that similar arguments are now being made in favour of LAWS.
“We have these weapons that allow us great distance, and with distance comes risk-lessness for one party, but it doesn’t necessarily translate into less risk for others – only if you use them in a way that is very pinpointed, which never happens in warfare,” she says, adding that the effects of this are clear: “Some lives have been spared and others not.”
For Schwarz, these developments are worrying because they have created a situation in which people are having a “quasi-moral discourse about a weapon – an instrument of killing – as something ethical”.
She adds: “It’s a strange turn to take, but that’s where we are…the issue is really how we use them and what for. They’re ultimately instruments for killing, so if it becomes easy to use them, it is very likely that they’re being used more, and that’s not an indication of restraint but quite the opposite – that can’t be framed as ethical in any kind of way.”
On the claim that new military technologies such as autonomous weapons are ethical because they help end wars more quickly, Schwarz says “we have seen quite the contrary” over the past few decades in the protracted wars of Western powers, which have invariably pitted highly advanced weaponry against combatants at a clear technological disadvantage.
She adds that use of AI in the military to monitor human activity and take “preventative measures” is also a worrying development, because it reduces human beings to data points and completely flattens out any nuance or complexity while massively increasing risk for those on the receiving end.
“That urgency of having to identify where something might happen [before it happens] in a really weird Minority Report way will become paramount because that’s the logic with which one works, ultimately,” she says.
“I see the greater focus on artificial intelligence as the ultimate substrate for military operations as making everything a lot more unstable.”
A game of thrones
Another weakness of the current discourse around military AI is how little attention it pays to the power differentials between states in geopolitical terms.
In a report on “emerging military technologies” published in November 2022 by the Congressional Research Service, analysts noted that roughly 30 countries and 165 non-governmental organisations (NGOs) have called for a pre-emptive ban on the use of LAWS due to the ethical concerns surrounding their use, including the potential lack of accountability and inability to comply with international laws around conflict.
In contrast, a small number of powerful governments – primarily the US, which according to a 2019 study is “the outright leader in autonomous hardware development and investment capacity”, but also China, Russia, South Korea and the European Union – have been key players in pushing military AI.
“It’s a really, really crucial point that the balance of power is entirely off,” says Schwarz. “The narrative is [that] great power conflict will happen [between] China, Russia, America, so we need military AI because if China has military AI, they’d be much faster and everything else will perish.”
Noting that none of these great powers have been on the receiving end of the last half century’s expeditionary wars, Schwarz says it should be the countries most affected by war that have a bigger say over AI in the military.
“It’s those countries that are more likely to be the target that clearly need to have a large stake and a say,” she says, adding that most of these states are in relatively uniform agreement that we should not have LAWS.
“[They argue] there should be a robust international legal framework to ban or at least heavily regulate such systems, and of course it’s the usual suspects that say ‘No, no, no, that stifles innovation’, so there is a huge power differential.”
Schwarz adds that power differentials could also emerge between allied states implementing military AI, as smaller players will likely have to conform their approach to that of the most powerful actor to achieve the desired level of connectedness and interoperability.
“Already, the US is doing some exercises for Project Convergence [with the UK], which is part of this overall networking of various domains and various types of technologies. I would venture to say that the US will have more of a say in what happens, how the technology should be rolled out and what the limits to the technology are than the UK, ultimately,” she says.
“Even within allied networks, I would suggest that there will always be power differentials that, at the moment, when everybody is so enthralled with the possibility of AI, are not really taken into account sufficiently.”
Shaping the military in the image of Silicon Valley
A major problem with the development and deployment of military AI is that it is happening with little debate or oversight, and is being shaped by a narrow corporate and political agenda.
Highlighting the efforts of former Google CEO Eric Schmidt – who co-authored The age of AI: And our human future with former US secretary of state Henry Kissinger in December 2021, and who has been instrumental in promoting AI to the US military – Schwarz says that while these issues cannot be reduced to Schmidt alone, he is an instructive example given his prominence.
“They position themselves as the ‘knowers’ and the experts about these systems,” she says. “With Schmidt in particular, I’ve been tracing his journey and advocacy for military artificial intelligence for the past five to seven years, and he has been a driving force behind pushing the idea that all militaries, but specifically the US military and its allies, need to be AI-ready…in order to be competitive and stay competitive, always vis-à-vis Russia and China.”
However, she adds, questions of how this could work in practice, and the pitfalls of AI-powered militaries, are sometimes addressed, but always “pushed to the margins” of the conversation.
“Ultimately, it’s about making everything AI interconnected and making military processes, from acquisition to operations, super-fast and agile – basically shaping the military in the image of Silicon Valley,” she says. “What happens when you accelerate warfare like this?”
Part of the problem is that, generally, private companies and actors have a massively disproportionate say over what digital technologies militaries are deploying, especially when compared to ordinary people.
In June 2022, for example, the UK Ministry of Defence (MoD) unveiled its Defence artificial intelligence strategy, outlining how the government will work closely with the private sector to prioritise research, development and experimentation in AI to “revolutionise our Armed Forces capabilities”.
“We don’t have a direct democratic say in how military technologies are built or constituted or constructed, and that’s not necessarily the big problem,” says Schwarz. “I think a frank conversation needs to be had about the role of private actors, and what kind of responsibility they have to meet, because at the moment…it’s very unregulated.”
She adds that public debate around military AI is especially important given the seismic, humanity-altering effect proponents of the technology say it will have.
“The way it can process this data and find patterns, there’s something magnificent about that, but it’s just a computational machine,” she says. “I think elevating that to a natural law, a kind of next iteration of humanity, elevating it to a necessity and inevitability, is useful for some who will make money off of it, but I have yet to really understand how we as humans, and our social context, our political context and our ethical shared life, can benefit so tremendously from it.”
Schwarz adds that while these narratives may be useful for some, most ordinary people will simply have to submit to the use of AI technologies “that usually have proprietary underpinnings that we know nothing of, and that ultimately won’t benefit us deeply”.
Instead, the “sense of urgency” with which proponents approach military AI, and which Schwarz says “disallows a frank and nuanced and detailed debate”, should be replaced with a slower, more deliberative approach that allows people to collectively decide on the future they want.
She concludes: “What affects everyone should be decided by everyone, ultimately, and that should apply to any democracy.”