Anthropic, the San Francisco-based AI research company behind the popular Claude language model, has publicly accused three Chinese AI labs, DeepSeek, Moonshot, and MiniMax, of illicitly harvesting data from Claude to train their own AI systems.
Systematic Data Harvesting Allegations
The company claims these Chinese firms systematically submitted approximately 16 million queries to Claude, using the responses to extract and replicate Claude's capabilities. According to Anthropic, this was not a case of general public use, but rather a coordinated effort to reverse-engineer Claude's performance and knowledge base.
"We believe these labs have engaged in a sustained and systematic attempt to extract and replicate Claude's capabilities," said a spokesperson for Anthropic. The company points to query patterns that it says indicate deliberate, repeated exploitation of Claude's API for training purposes.
Implications for AI Development and Intellectual Property
This incident highlights the growing tensions in the global AI landscape, particularly around intellectual property rights and data ethics. As AI models become more powerful and valuable, the question of how their training data is used—and who owns it—has become increasingly critical. Anthropic's accusations come at a time when AI companies are scrambling to protect their proprietary knowledge amid rapid global expansion.
Industry experts warn that such practices could set a dangerous precedent, potentially stifling innovation and undermining trust in AI development. If confirmed, the allegations could prompt regulatory scrutiny and force companies to reassess how they manage and protect their AI models.
What’s Next?
Anthropic has not yet taken formal legal action but has indicated it will pursue all available legal remedies. Meanwhile, DeepSeek, Moonshot, and MiniMax have not publicly responded to the claims. The situation underscores the urgent need for clearer international frameworks governing AI data usage and model training practices.
As the AI race intensifies, this case may become a defining moment in how the industry handles data integrity and competitive ethics.