Artificial intelligence leader Anthropic has filed an unprecedented lawsuit against the United States government after being designated a “supply chain risk”.
The legal action, filed in a California federal court on Monday, targets the executive office of President Donald Trump, Defense Secretary Pete Hegseth, and 16 federal agencies.
The core of the dispute revolves around Anthropic’s refusal to grant the military unrestricted access to its AI tools, specifically fighting to keep its safeguards against lethal autonomous warfare and domestic mass surveillance.
Anthropic claims that the government’s retaliatory actions violate the Constitution and unlawfully punish the company for its protected speech.
While the Department of Defense declined to comment due to active litigation, the White House characterized Anthropic as a company attempting to control military operations, stating the military will obey the Constitution rather than an AI company’s terms of service.
Why Anthropic Sued the U.S. Government
Anthropic’s AI assistant, Claude, has been deployed in classified government environments since 2024.
The conflict escalated when Defense Secretary Hegseth demanded the removal of all usage restrictions from Anthropic’s defense contracts.
The company was actively negotiating a compromise that would meet military needs while preserving its core guardrails against mass surveillance and weaponization, but the talks collapsed.
Following the breakdown in negotiations, President Trump publicly criticized the company and ordered all government agencies to stop using Anthropic’s tools. Subsequently, Hegseth officially labelled the company a “supply chain risk.”
Under this designation, tools like Claude are now deemed insecure for federal use, and contractors are barred from using them for government work.
Anthropic argues this public castigation lacks statutory authority and has caused immediate, irreparable economic and reputational harm, jeopardizing hundreds of millions of dollars in near-term private contracts.
The government’s hardline approach has sent shockwaves through the tech industry, highlighting the tension between AI safety and national defense.
Despite the federal ban, major technology partners, including Microsoft, Google, and Amazon, have confirmed they will continue integrating Claude into their non-defense operations.
In a show of cross-industry solidarity, nearly 40 employees from rival AI firms Google and OpenAI filed a legal brief in support of Anthropic.
The group emphasized that frontier AI systems require strict technical safeguards and usage restrictions to prevent uncontrolled deployment in lethal operations.
Meanwhile, competitors are capitalizing on the fallout. OpenAI CEO Sam Altman recently expedited a new Department of Defense contract following Anthropic’s removal from federal work.
Anthropic is not seeking damages but asks the federal court to remove the “supply chain risk” label, arguing the directive exceeds presidential authority and violates First Amendment rights, according to a BBC News report.
Legal experts anticipate a protracted battle. Carl Tobias, a chaired professor at the University of Richmond School of Law, expects the Trump administration to adopt a “scorched earth” legal strategy.
Even if Anthropic secures an initial victory in federal court, the administration is highly likely to appeal, potentially escalating the landmark case to the Supreme Court.