Anthropic has accused three Chinese AI companies of setting up more than 24,000 fake accounts to access its Claude AI model and improve their own models.
DeepSeek, Moonshot AI, and MiniMax are said to have generated more than 16 million interactions with Claude through these accounts using a technique called “distillation.” Anthropic said the labs “targeted Claude’s most differentiated competencies: agentic reasoning, tool use, and coding.”
The accusations came amid a debate over how strictly to enforce export controls on advanced AI chips, a policy aimed at curbing China’s AI development.
Distillation is a common training method that AI labs use on their own models to create smaller, cheaper versions, but competitors can use it to essentially copy other labs’ homework. OpenAI sent a memo to members of Congress earlier this month accusing DeepSeek of using distillation to mimic its products.
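At its core, distillation trains a smaller “student” model to imitate a larger “teacher” model’s output distributions rather than raw labels. The sketch below, a minimal illustration using only NumPy, shows the standard temperature-softened KL-divergence loss that drives this imitation; the function and variable names are illustrative, not any lab’s actual code.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, softened by temperature."""
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) between temperature-softened distributions.

    A student trained to minimize this loss learns to reproduce the
    teacher's full output distribution, not just its top answer.
    """
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Toy example: a student that mimics the teacher closely incurs a much
# smaller loss than one whose distribution diverges from it.
teacher       = np.array([4.0, 1.0, 0.5])
close_student = np.array([3.9, 1.1, 0.4])
far_student   = np.array([0.2, 3.0, 1.0])

print(distillation_loss(teacher, close_student) <
      distillation_loss(teacher, far_student))  # → True
```

In practice, the “teacher outputs” would come from millions of API queries like those Anthropic describes, which is why the scale of the interactions matters: more teacher samples give the student more of the teacher’s behavior to absorb.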
DeepSeek first made headlines a year ago when it released R1, its open-source reasoning model, which offered performance roughly comparable to that of America’s frontier labs at a fraction of the cost. DeepSeek is expected to soon release its latest model, DeepSeek V4, which reportedly outperforms Anthropic’s Claude and OpenAI’s ChatGPT at coding.
The scale of each lab’s activity varied. Anthropic tracked over 150,000 interactions from DeepSeek, which appeared aimed at improving the model’s underlying reasoning and consistency, particularly at producing censorship-compliant responses to politically sensitive queries.
Moonshot AI had more than 3.4 million exchanges covering agentic reasoning and tool use, coding and data analysis, computer-use agent development, and vision. Last month, the company released a new open-source model, Kimi K2.5, along with a coding agent.
MiniMax’s 13 million exchanges targeted agentic coding, tool use, and orchestration. Anthropic says that around the time of MiniMax’s model launch, nearly half of the traffic it observed was aimed at siphoning capabilities from the latest Claude model.
Anthropic says it will continue to invest in defenses that make distillation attacks harder to carry out and easier to identify, but calls for a “coordinated response across the AI industry, cloud providers, and policymakers.”
The distillation attack comes at a time when U.S. chip exports to China remain hotly debated. Last month, the Trump administration officially allowed U.S. companies such as Nvidia to export advanced AI chips (such as the H200) to China. Critics argue that the loosening of export restrictions will improve China’s AI computing capabilities at a critical time in the global race for AI supremacy.
Anthropic said the scale of extraction performed by DeepSeek, MiniMax and Moonshot “requires access to advanced chips.”
According to Anthropic’s blog, “Distillation attacks therefore strengthen the rationale for export controls. Restricted chip access limits both direct model training and the scale of illegal distillation.”
Dmitri Alperovitch, chairman of think tank Silverado Policy Accelerator and co-founder of CrowdStrike, told TechCrunch that he is not surprised to see such attacks.
“It has been clear for some time that part of the reason for the rapid progress of China’s AI models is distilled appropriation of US frontier models. Now we know this for a fact,” Alperovitch said. “This should be an even more compelling reason to refuse to sell AI chips to these companies. It only becomes more advantageous for them.”
Anthropic also said that distillation not only threatens to undermine America’s AI advantage but could also pose a national security risk.
“Anthropic and other U.S. companies are building systems that prevent state and non-state actors from using AI to develop biological weapons and carry out malicious cyber activities,” Anthropic’s blog post reads. “Models built through illegal distillation are unlikely to retain these safeguards, and when protections are stripped away, dangerous capabilities can flourish.”
Anthropic noted that authoritarian governments are deploying frontier AI for things like “offensive cyber operations, disinformation campaigns, and mass surveillance,” but the risks multiply when these models are open sourced.
TechCrunch has reached out to DeepSeek, MiniMax, and Moonshot for comment.
