Big news: Lock and Code is nominated for a Webby Award! You can help us win the People’s Voice Award by voting here.
This week on the Lock and Code podcast…
We have to talk about killer robots. No, not the Terminator, and not some Boston Dynamics robot run amok. We have to talk instead about a technological reality that is very much already here.
In late February, the artificial intelligence developer Anthropic made a statement that may surprise anyone who knows it only for its helpful chatbot, Claude: The company would not allow the government to use its technology to kill people without proper safety controls.
Hold on… what?
Despite Anthropic’s reputation among everyday people as the creator of a collaborative AI-powered assistant for coding, writing, and searching, the company had already deployed Claude across the US government for strategic military needs. According to Anthropic, Claude was used by the US Department of Defense and other national security agencies for “mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.”
But behind the scenes, the US government was asking for even more, and it wrapped all of its requests under a broad, vague term: “Any lawful use.” Anthropic bristled at the request, defining two use cases that were simply off limits: mass surveillance of Americans and fully autonomous weapons—or, put another way, the powering of independent killer robots.
As Anthropic said in its statement:
“Frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don’t exist today.”
Sure, the guardrails may not exist today, but do they—can they—exist at all?
Today, on the Lock and Code podcast with host David Ruiz, we speak with Peter Asaro, chair of the Campaign to Stop Killer Robots, about what a killer robot actually is, how close we are to seeing them deployed, and the hidden consequences of rolling out impossibly quick decision-making technology into a landscape where deescalation requires time, space, and human judgment.
“This mass proliferation of targets, it just accelerates the speed of destruction and the intensity of destruction of warfare, and it doesn’t necessarily give you any kind of military or political advantage.”
Tune in today to listen to the full conversation.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.
Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.