Anthropic has identified and exposed industrial-scale data extraction campaigns orchestrated by three major Chinese AI laboratories: DeepSeek, Moonshot, and MiniMax.
These organizations used roughly 24,000 fraudulent accounts to generate more than 16 million exchanges with Anthropic's Claude models.
The primary objective of these campaigns was "distillation," a technique in which a less capable AI model is trained on the high-quality outputs of a stronger one.
While distillation is a legitimate method for building smaller, more efficient models, these unauthorized campaigns violated Anthropic's terms of service and bypassed regional access restrictions, acquiring advanced capabilities at a fraction of the standard development cost and time.
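For readers unfamiliar with the mechanics, the loop below is a minimal sketch of output-based distillation; `query_teacher`, the prompt set, and the record format are illustrative assumptions, not Anthropic's API or any lab's actual pipeline.

```python
# Minimal sketch of output-based distillation (illustrative only).
# A "teacher" (the stronger model) answers prompts; the resulting pairs
# become supervised fine-tuning data for a smaller "student" model.

from dataclasses import dataclass


@dataclass
class Example:
    prompt: str
    teacher_response: str


def query_teacher(prompt: str) -> str:
    # Hypothetical stand-in for a call to a strong model's API.
    return f"[teacher's step-by-step answer to: {prompt}]"


def build_distillation_set(prompts: list[str]) -> list[dict]:
    # Harvest teacher outputs, then shape them into training records
    # that the student model can be fine-tuned on.
    examples = [Example(p, query_teacher(p)) for p in prompts]
    return [{"input": ex.prompt, "target": ex.teacher_response} for ex in examples]


if __name__ == "__main__":
    records = build_distillation_set(["Explain binary search step by step."])
    print(records[0]["target"])
```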
Industrial-Scale Distillation Campaigns
Anthropic's investigation revealed that these campaigns were not random scraping events but highly coordinated operations designed to steal specific capabilities from Claude, such as coding, complex reasoning, and tool use.
By analyzing IP correlations, request metadata, and infrastructure indicators, researchers were able to attribute the attacks to specific labs with high confidence.
For instance, DeepSeek focused on extracting reasoning patterns and generating “chain-of-thought” data, effectively asking Claude to write out its internal logic step-by-step.
Moonshot targeted agentic reasoning and computer vision capabilities, while MiniMax, responsible for the largest volume of traffic, focused heavily on coding and tool orchestration.
The scale of these operations was massive, with MiniMax alone accounting for over 13 million exchanges.
In one instance, when Anthropic released a new model, MiniMax pivoted nearly half of its traffic within 24 hours to target the updated system.
This rapid adaptation highlights the sophistication of the threat actors, who sought to integrate American AI capabilities into their own products almost as soon as those capabilities became available.
| Lab Name | Exchange Volume | Targeted Capabilities | Attribution Method |
|---|---|---|---|
| DeepSeek | 150,000+ | Reasoning tasks, censorship-safe query generation, rubric grading | IP correlation, shared payment patterns |
| Moonshot AI | 3.4 Million+ | Agentic reasoning, coding, data analysis, computer vision | Request metadata matching senior staff profiles |
| MiniMax | 13 Million+ | Agentic coding, tool use, orchestration | Infrastructure indicators, product roadmap timing |
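The "IP correlation" signal listed in the table can be illustrated with a short sketch: given (account, source IP) pairs from request logs, flag IPs shared across many accounts. The log format and threshold here are assumptions for illustration, not Anthropic's actual tooling.

```python
# Illustrative sketch of IP correlation: fraudulent account clusters
# often reuse infrastructure, so a single IP serving many accounts is
# a useful (though not conclusive) coordination signal.

from collections import defaultdict


def shared_ip_clusters(events: list[tuple[str, str]],
                       min_accounts: int = 2) -> dict[str, set[str]]:
    """events: (account_id, source_ip) pairs from request logs."""
    by_ip: dict[str, set[str]] = defaultdict(set)
    for account_id, ip in events:
        by_ip[ip].add(account_id)
    return {ip: accts for ip, accts in by_ip.items() if len(accts) >= min_accounts}


logs = [
    ("acct_001", "203.0.113.7"),
    ("acct_002", "203.0.113.7"),
    ("acct_003", "198.51.100.9"),
]
print(shared_ip_clusters(logs))  # {'203.0.113.7': {'acct_001', 'acct_002'}}
```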
Evasion Tactics and Security Implications
To conduct these attacks, the labs used commercial proxy services known as "hydra clusters."
These sprawling networks of fraudulent accounts spread traffic across multiple cloud platforms so that there is no single point of failure: if one account is banned, another immediately takes its place.
This infrastructure let the labs sidestep regional restrictions, since Claude is not commercially available in China.
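To see why per-account bans are ineffective against this pattern, here is a toy simulation of the failover property just described; the pool size echoes the report's account figure, while the ban rate is an arbitrary assumption.

```python
# Toy model of hydra-cluster failover: requests draw from a pool of
# fraudulent accounts, and each ban simply promotes a spare account,
# so aggregate traffic continues uninterrupted.

from collections import deque


def simulate(requests: int, pool: deque[str], ban_every: int = 100) -> int:
    served = 0
    for i in range(requests):
        if not pool:
            break  # the whole pool would have to be exhausted to stop traffic
        if i and i % ban_every == 0:
            pool.popleft()  # one account is detected and banned...
        served += 1          # ...but a spare keeps serving requests
    return served


accounts = deque(f"acct_{n}" for n in range(24_000))  # scale from the report
print(simulate(1_000_000, accounts))  # traffic continues despite thousands of bans
```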
The illicitly distilled models resulting from these attacks pose significant national security risks because they often strip away the safety guardrails built into the original Western models.
This lack of safety measures means that foreign actors could deploy these powerful AI systems for offensive cyber operations, disinformation campaigns, or mass surveillance without the ethical restrictions inherent in the original models.
In response, Anthropic is deploying new behavioral fingerprinting systems to detect distillation patterns and is tightening verification processes for educational and startup accounts.
The company emphasizes that countering this threat requires coordinated action from the global AI community and policymakers to preserve the integrity of export controls and prevent the proliferation of unguarded frontier AI capabilities.
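As a rough illustration of what behavioral fingerprinting can mean in this context, the sketch below scores an account on two distillation-like signals, raw volume and prompt templating; the features and thresholds are assumptions, not a description of Anthropic's production detectors.

```python
# Hedged sketch of behavioral fingerprinting for distillation traffic.
# Two toy signals: request volume and how templated the prompts are.

from collections import Counter


def template_ratio(prompts: list[str]) -> float:
    """Fraction of prompts sharing the most common 8-word prefix;
    highly templated, machine-generated traffic scores near 1.0."""
    if not prompts:
        return 0.0
    prefixes = Counter(" ".join(p.split()[:8]) for p in prompts)
    return max(prefixes.values()) / len(prompts)


def looks_like_distillation(prompts: list[str],
                            volume_threshold: int = 10_000,
                            template_threshold: float = 0.5) -> bool:
    # Flag accounts that are both very high volume and highly templated.
    return (len(prompts) >= volume_threshold
            and template_ratio(prompts) >= template_threshold)
```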