Anthropic Files Lawsuit Against U.S. Government Over Claude Risk Designation


Anthropic has launched an unprecedented lawsuit against the U.S. government after being designated a “supply chain risk”.

The legal action, filed in a California federal court, targets the executive office of President Donald Trump, Defense Secretary Pete Hegseth, and 16 government agencies.

The dispute centers on Anthropic CEO Dario Amodei’s refusal to allow the military unrestricted access to the company’s Claude AI models.

The Core Conflict and Retaliation

The friction escalated when Defense Secretary Hegseth demanded Anthropic remove usage restrictions from its defense contracts.

Anthropic insisted on maintaining its established guardrails against “lethal autonomous warfare” and “surveillance of Americans en masse.”

The company noted that these safety limitations were always a standard part of its government agreements.

In response to Anthropic’s refusal to compromise on these terms, the Department of Defense abruptly labeled the company a “supply chain risk”.

This aggressive designation prohibited all government agencies and federal contractors from using Anthropic’s tools for official work.

The White House, through spokeswoman Liz Huston, characterized Anthropic as a “radical left, woke company” attempting to control military operations.

Huston asserted that the military will obey the Constitution over any corporate terms of service, as reported by the BBC.

Conversely, Anthropic argues the government’s retaliation directly violates its First Amendment rights.

The AI firm claims the administration is unlawfully wielding its enormous federal power to punish protected speech without statutory authorization.

The government’s sweeping ban has triggered significant economic fallout for the AI developer.

Anthropic states that current and future contracts with private parties are in jeopardy, threatening hundreds of millions of dollars in near-term revenue.

Despite the federal ban, major technology partners, including Google, Meta, Amazon, and Microsoft, confirmed they will continue utilizing Claude Code outside of defense-related projects.

Across the broader AI industry, observers have warned of the “chilling effect” the administration’s actions could have on free speech.

Following the fallout, rival OpenAI rapidly finalized a new contract with the Department of Defense. OpenAI CEO Sam Altman admitted to rushing the agreement to fill the void left by Anthropic.

Meanwhile, nearly 40 employees from Google and OpenAI submitted a supporting brief for Anthropic.

The group emphasized the critical need for technical safeguards on advanced AI systems, warning against the risks of deploying AI for domestic mass surveillance or autonomous lethal weapons without human oversight.

Anthropic is not seeking monetary damages. Instead, the company is asking the court to declare the presidential directive unconstitutional immediately and to revoke the “supply chain risk” designation.

Carl Tobias, a chaired professor at the University of Richmond School of Law, noted that the Trump Administration will likely take a “scorched earth” approach to the litigation, predicting the high-stakes case will eventually reach the Supreme Court.



