- Department of Defense and Anthropic feud intensifies.
- Industry leaders sign an accord on online scams and fraud.
- The Department of Defense says Anthropic is a national security risk.
- Companies sign a new accord on global scams.
- This Week's Caveat Podcast: Section 702: The debate continues.
- OTHER NOTEWORTHY STORIES
- UK watchdog looks to tighten cyber incident reporting rules.
- China increases scrutiny of Meta AI deal.

8-minute read | 1,600 words
Department of Defense and Anthropic feud intensifies.
In a filing, the Department of Defense (DoD) labeled the artificial intelligence (AI) company as a supply chain risk.
Industry leaders sign an accord on online scams and fraud.
Major technology and retail companies come together to better combat global scam operations.
The Department of Defense says Anthropic is a national security risk.
THE NEWS
On Tuesday, the DoD filed a response to Anthropic’s legal challenge, defending the government’s decision to list the company as a supply chain risk. In the filing, the government stated that it began to question whether Anthropic was a “trusted partner,” given concerns that AI systems were vulnerable to manipulation.
Additionally, the government wrote that utilizing and integrating Anthropic’s systems would “introduce unacceptable risk into [DoD] supply chains.”
The government also stated that the dispute originated during contract negotiations, rejecting Anthropic’s claim that the fallout stemmed from the company’s concerns about mass domestic surveillance and autonomous weapon systems.
This filing comes after Anthropic filed two separate lawsuits, one in California and one in Washington, D.C., challenging this label. In both suits, Anthropic alleged that the Pentagon was using this label to punish the company for its ideological beliefs and that the company’s First Amendment rights were being violated. Additionally, the company requested that the courts block the government’s designation, as it could cause the company to lose over 100 business customers.
In support of Anthropic, Microsoft filed a friend-of-the-court brief urging the court to block the Pentagon’s designation. The American Civil Liberties Union and the Center for Democracy and Technology also filed briefs supporting the company.
THE KNOWLEDGE
Anthropic’s lawsuits were filed on March 9, 2026, after relations between the company and the federal government grew tense at the end of February. For context, the sides began working together in July 2025 after Anthropic was awarded a $200 million contract. In the contract, Anthropic agreed to:
- Work directly with the DoD to find areas where frontier AI can deliver “impact.”
- Collaborate with defense experts to anticipate and mitigate adversarial AI usage.
- Exchange data and feedback to improve AI’s adoption in the defense ecosystem.
Notably, the announcement of that contract included a section outlining Anthropic’s commitment to responsible AI deployment. Anthropic wrote:
“We believe democracies must work together to ensure AI development strengthens democratic values globally by maintaining technological leadership to protect against authoritarian misuse.”
However, the relationship eventually became embroiled in a dispute over ethical AI usage. The dispute culminated in the DoD imposing a deadline of 5:01 pm on February 27, 2026, for Anthropic to comply with its requests. Anthropic did not comply, and the DoD ended the relationship, labeling the company a supply chain risk.
The incident drew significant attention, as Anthropic alleged that the fallout stemmed from its concerns over the lack of safeguards preventing the use of its model for mass domestic surveillance or its implementation in fully autonomous weapons. After the dispute, the government partnered with OpenAI to meet its AI needs.
THE IMPACT
This dispute raises two immediate legal and policy questions, and its long-term implications extend beyond Anthropic itself.
First, the outcome of these lawsuits will determine how far the federal government can go in labeling domestic technology companies as national security risks. If the DoD’s designation is upheld, it would mark a significant expansion of federal authority over government contracting, setting a precedent for imposing such labels not only for security vulnerabilities but also for disputes over cooperation or alignment. If the courts overturn the designation, however, it would place notable limits on the government’s use of national security justifications in contracting and reinforce protections for private companies, especially around sensitive use cases like surveillance.
Second, the dispute raises broader questions about how the US military utilizes AI systems. While the pivot from Anthropic to OpenAI may secure short-term access to AI, it risks narrowing the pool of potential partners and deterring other providers from working with an administration willing to drop contractors with which it does not ideologically align.
Companies sign a new accord on global scams.
THE NEWS
On Monday, eleven major technology and retail companies signed a new accord to better address online scamming operations. With this pledge, the companies have agreed to share threat intelligence and work collectively to address how scammers are abusing their services. The accord comes after consumers were estimated to have lost $442 billion in 2025 to scam operations.
The accord states that the companies aim to “set expectations for how signatories will work across online services to counter scammers” and “drive a united industry response alongside governments, law enforcement, NGOs, and others working to combat fraud and scams.”
In the accord, the companies agreed to voluntary actions to better prevent scams, improve cooperation and collective learning, increase resiliency, and promote stronger public awareness. Additionally, the companies suggested the following measures to better combat scams:
- Elevating scam prevention as a national priority with dedicated resources.
- Modernizing government data capabilities and streamlining reporting processes.
- Fostering cross-border and cross-sector information sharing.
- Deconflicting laws and providing safe harbors.
The accord’s signatories include Google, Microsoft, LinkedIn, Meta, Amazon, OpenAI, Adobe, Pinterest, Target, Levi Strauss & Co., and Match Group.
THE KNOWLEDGE
In the accord, the companies identified four key goals. For prevention, they outlined deploying stronger technical and in-product solutions to improve security, enforcing anti-scam usage policies, and using stronger verification systems.
On cooperation, the companies outlined the need to share best practices through international forums and to work with government partners to spur greater law enforcement action against scam activity.
To improve resiliency, the companies emphasized the need for faster response efforts to adapt to shifting adversarial tactics alongside deploying more secure technologies across all sectors.
Lastly, the companies emphasized the need to engage with the public to improve educational efforts to better identify scams and provide better channels to report scams within platforms.
Alongside these industry-led efforts, the Trump administration has been working to crack down on scam operations, which have continued to grow year over year. On March 6, 2026, President Trump signed an Executive Order establishing a new task force aimed at combating these cyberscams. Specifically, the order directs administration officials to conduct a comprehensive review to “determine what operational, technical, diplomatic, and regulatory tools could be improved to combat transnational criminal organizations (TCOs) engaged in cyber-enabled crime and predatory schemes.”
Additionally, the order requires agencies to submit a plan to identify which TCOs are responsible for operating scam centers and propose solutions to prevent, disrupt, and dismantle operations.
THE IMPACT
Given the continued rise of global scam operations and the sheer scale of consumer losses, this coordinated push from both the public and private sectors is not surprising. However, the real significance of this accord lies in what it signals about how scam prevention is evolving from a fragmented approach to a coordinated, cross-sector security challenge.
Through the agreement, companies are now recognizing scams as a systemic ecosystem problem, rather than a series of isolated platform abuses. If implemented effectively, the accord’s provisions could significantly improve intelligence sharing and faster response efforts to better target and dismantle scam networks.
Additionally, alignment with the administration’s new task force gives this effort significant legal support in developing public-private operational coordination. While it is unclear how effective the collaboration will be, it could meaningfully improve information sharing and create stronger accountability mechanisms.
This Week’s Caveat Podcast: Section 702: The debate continues.
Dave Bittner and Ben Yelin break down the ongoing battle to reauthorize Section 702, as some lawmakers look to end the federal government’s warrantless wiretapping capabilities. Additionally, our hosts discuss a new New York state bill that would restrict how AI chatbots are used in matters that traditionally require licensed professionals.
Listen to the episode now.
OTHER NOTEWORTHY STORIES
UK watchdog looks to tighten cyber incident reporting rules.
What: Britain’s finance regulator announced new incident and third-party reporting rules.
Why: On Wednesday, the regulator confirmed the rules, which give firms one year to prepare for the new requirements. The rules strengthen the United Kingdom’s regulations to improve resilience against cyberattacks and third-party disruptions.
The new regulations come after the Financial Conduct Authority found that in 2025, forty percent of cyber incidents involved a third party.
These new rules are set to take effect on March 18, 2027.
MAR 18, 2026 | Source: Reuters
China increases scrutiny of Meta AI deal.
What: China is looking to crack down on people involved in Meta’s acquisition of Manus.
Why: On Tuesday, the Chinese government began taking action to penalize people linked to Meta’s recent $2 billion acquisition of Manus, a Singapore-based AI start-up. The crackdown comes as the Chinese government looks to discourage Chinese AI executives from moving their businesses offshore.
While the full scope of the Chinese government’s actions is unclear, some reports suggest that the government is looking to restrict Manus executives from leaving China.
Andy Stone, a Meta spokesman, stated:
“The transaction complied fully with applicable law. The outstanding team at Manus is now deeply integrated into Meta.”
MAR 17, 2026 | Source: New York Times
