Anthropic today accused DeepSeek, Moonshot AI, and MiniMax, three prominent Chinese artificial intelligence companies, of running coordinated “distillation” campaigns to steal advanced capabilities from its Claude models.
The San Francisco-based lab said the operations involved roughly 24,000 fraudulent accounts and generated more than 16 million exchanges with Claude, violating its terms of service and bypassing regional access restrictions.
The company said the labs used proxy services and networks of fake accounts dubbed “hydra clusters” to mask their activity and evade detection.
What Is Distillation and Why It Matters Here
Distillation is a standard AI training technique in which a smaller “student” model learns from the outputs of a larger “teacher” model. Frontier labs routinely use it to create cheaper, faster versions of their own systems.
But when applied illicitly to a competitor’s model, it allows rapid capability transfer at a fraction of the original development cost and time.
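For readers unfamiliar with the mechanics, below is a minimal sketch of the classic formulation (the softened-logit loss from Hinton et al., 2015) in PyTorch. The model placeholders, temperature, and loop structure are illustrative assumptions, not details from Anthropic's report.

```python
# Minimal sketch of classic knowledge distillation (softened-logit loss).
# Model placeholders, temperature, and loop structure are illustrative
# assumptions, not details from Anthropic's report.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened distributions.

    The T**2 factor keeps gradient magnitudes comparable as the
    temperature changes.
    """
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2

def train_step(student, teacher, batch, optimizer):
    # The teacher only ever runs in inference mode: its *outputs* are all
    # the student needs, which is why query access alone can fuel a
    # distillation pipeline.
    with torch.no_grad():
        teacher_logits = teacher(batch)
    student_logits = student(batch)
    loss = distillation_loss(student_logits, teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Against a black-box API like Claude's, the teacher's logits are not exposed; in that setting, distillation is typically done at the sequence level, by collecting prompt-response pairs and fine-tuning the student on them with an ordinary next-token loss. That is why raw exchange volume, millions of requests in the campaigns described here, is the key resource.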
Anthropic emphasized that distilled copies of Claude are unlikely to retain the robust safety safeguards built into U.S. frontier models — safeguards designed to prevent misuse in areas such as bioweapons development or malicious cyber operations.
The company warned that these unprotected capabilities could be fed into military, intelligence, or surveillance systems by authoritarian governments, or released as open source, spreading dangerous AI tools beyond any single nation’s control.
The Three Campaigns
DeepSeek
- Scale: Over 150,000 exchanges
- Targets: Advanced reasoning, rubric-based grading (to train reward models), and censorship-safe alternatives to politically sensitive queries
- Tactics: Synchronized traffic across accounts, shared payment methods, and prompts designed to extract step-by-step chain-of-thought reasoning
Moonshot AI (Kimi models)
- Scale: Over 3.4 million exchanges
- Targets: Agentic reasoning, tool use, coding, data analysis, computer-use agents, and computer vision
- Tactics: Hundreds of fraudulent accounts across multiple access paths; later phases focused on reconstructing Claude’s reasoning traces
MiniMax
- Scale: Over 13 million exchanges (the largest campaign)
- Targets: Agentic coding and tool-use orchestration
- Tactics: Still active when detected; after Anthropic released a new model, MiniMax pivoted within 24 hours, redirecting nearly half its traffic to the updated system
Anthropic attributed the campaigns with high confidence using IP correlations, request metadata, infrastructure fingerprints, and corroboration from industry partners.
In one case, request metadata directly matched public profiles of senior researchers at the labs.
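Anthropic has not published its attribution internals beyond those categories, but the general shape of such correlation work is well understood. The toy sketch below, with hypothetical record fields, a coarse /24 IP grouping, and a shared payment fingerprint as linking signals, merges accounts connected by any common infrastructure signal into clusters.

```python
# Toy sketch of cross-account infrastructure correlation. Record fields
# and linking signals are hypothetical, not Anthropic's actual schema.
from collections import defaultdict

def cluster_accounts(records):
    """records: iterable of dicts like
    {"account": "acct_123", "ip": "203.0.113.7", "payment_hash": "ab12"}.
    Returns lists of account IDs linked by any shared signal."""
    signal_to_accounts = defaultdict(set)
    for r in records:
        subnet = ".".join(r["ip"].split(".")[:3])  # coarse /24 grouping
        signal_to_accounts[("subnet", subnet)].add(r["account"])
        signal_to_accounts[("payment", r["payment_hash"])].add(r["account"])

    # Union-find: accounts that co-occur on any signal end up in one set.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)

    for accounts in signal_to_accounts.values():
        accounts = list(accounts)
        for other in accounts[1:]:
            union(accounts[0], other)

    clusters = defaultdict(set)
    for r in records:
        clusters[find(r["account"])].add(r["account"])
    return [sorted(members) for members in clusters.values()]
```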
How the Attacks Bypassed Restrictions
Anthropic does not offer commercial access to Claude in China. The labs circumvented this by purchasing access through third-party commercial proxy services that resell API calls at scale.
These services operate sprawling networks of fraudulent accounts that mix distillation traffic with legitimate customer requests, making detection significantly harder.
The company said it is investing heavily in new detection systems, including classifiers for chain-of-thought elicitation and behavioral fingerprinting to spot coordinated activity.
It is also sharing technical indicators with other AI labs, cloud providers, and authorities, while tightening verification for educational and research accounts often exploited in these schemes.
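Anthropic has not disclosed how its behavioral fingerprinting works, but the “synchronized traffic across accounts” alleged in the DeepSeek campaign suggests timing correlation as one plausible signal. The sketch below is a guess at that shape: it buckets each account's request timestamps into fixed windows and flags pairs whose activity overlaps suspiciously. The window size and threshold are illustrative, not Anthropic's parameters.

```python
# Hedged sketch of one behavioral-fingerprinting heuristic: flag account
# pairs whose request timing is unusually synchronized. Window size and
# threshold are illustrative guesses.
from itertools import combinations

def timing_signature(timestamps, window_s=60):
    """Reduce an account's request timestamps (epoch seconds) to the set
    of fixed-width windows in which it was active."""
    return {int(t // window_s) for t in timestamps}

def flag_synchronized(accounts, threshold=0.8):
    """accounts: {account_id: [timestamps]}. Returns (a, b, score) for
    pairs whose active windows overlap beyond `threshold` (Jaccard)."""
    sigs = {a: timing_signature(ts) for a, ts in accounts.items()}
    flagged = []
    for a, b in combinations(sigs, 2):
        inter = len(sigs[a] & sigs[b])
        union = len(sigs[a] | sigs[b]) or 1
        if inter / union >= threshold:
            flagged.append((a, b, inter / union))
    return flagged
```

In practice a single signal like this would only feed a broader classifier, which is consistent with Anthropic's description of combining behavioral fingerprints with chain-of-thought-elicitation classifiers rather than relying on any one heuristic.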
Anthropic stressed that no single company can solve the problem alone and called for coordinated action across the AI industry, cloud providers, and policymakers.
It reiterated its longstanding support for U.S. export controls on advanced chips, arguing that distillation attacks actually reinforce the need for such controls: restricted chip access limits both direct training and the scale of illicit data extraction.
The disclosure comes weeks after OpenAI warned U.S. lawmakers about similar distillation efforts by DeepSeek targeting ChatGPT and other American models.