Trump Bans Anthropic AI in Federal Agencies Amid Growing Security Concerns


The United States government has taken an unprecedented step by banning federal agencies from using Anthropic, a domestic AI company best known for its Claude model.

For the first time, a U.S. firm has been classified as a supply chain risk to national security, a label usually given to foreign companies like Huawei.

President Donald Trump announced the decision on Truth Social on February 28, 2026, ordering all federal agencies to “IMMEDIATELY CEASE” using Anthropic’s technology.

Announcement (Source: CSN)

Departments that rely heavily on Claude, such as the Department of War (DoW), have been given six months to phase out the AI.

Defense Secretary Pete Hegseth quickly followed up on X, declaring Anthropic a national security risk.

He stated that no contractor or supplier doing business with the U.S. military is allowed to engage in commercial activity with Anthropic.

The Conflict Over AI Use

The ban stems from Anthropic’s refusal to grant the Pentagon unrestricted access to Claude. Anthropic sought two restrictions on the government’s use of its AI: no mass surveillance of Americans and no fully autonomous weapons.

According to Cybersecuritynews, the Pentagon demanded full access for “all lawful purposes,” but Anthropic CEO Dario Amodei refused.

Amodei argued that current AI models are not reliable enough for fully autonomous weapons, noting that safeguards protect both troops and civilians. He also stated that mass surveillance violates Americans’ civil rights.

Anthropic previously deployed its AI on the U.S. government’s classified networks under a $200 million DoW contract starting in June 2024.

After private negotiations failed, the Pentagon issued an ultimatum: comply or face a blacklist.

Anthropic claims the Pentagon’s final offer included legal language that could override the agreed-upon safeguards.

Anthropic plans to challenge the supply chain risk designation in court. The company argues the move is legally unsound because the statute invoked, 10 U.S.C. § 3252, applies only to Department of War contracts, not to general business operations.

As a result, individual users and non-DoW contractors can continue using Claude.

However, the ban could still harm the broader tech industry. Anthropic relies on cloud services from Amazon, Microsoft, and Google, all of which have military contracts.

If the ban forces these cloud providers to cut ties with Anthropic, the company’s operations could be severely impacted.

Legal experts warn that the decision sets a dangerous precedent by applying a security mechanism designed for foreign suppliers to a U.S. business.

President Trump has warned Anthropic of civil and criminal consequences if it does not cooperate during the transition period.

Despite the pressure, Anthropic remains firm on its refusal to allow its AI to be used for autonomous weapons or domestic surveillance.

