Anthropic has reported that three AI companies—DeepSeek, Moonshot, and MiniMax—extracted its Claude chatbot's capabilities without authorization. The firm claims these companies conducted more than 16 million exchanges with Claude through roughly 24,000 fraudulent accounts in order to improve their own AI models.
In a statement on its website, Anthropic described the activity as "distillation attacks," in which less capable models are trained on the outputs of more capable ones. While distillation has legitimate uses, Anthropic warns it can be abused. The company says it identified the attacks with high confidence by analyzing IP addresses and account metadata.
Anthropic says it intends to strengthen its defenses against such attacks, even as it faces legal challenges of its own from music publishers, who allege that Claude was trained on pirated song lyrics. The episode echoes allegations OpenAI made the previous year against rival firms, which led it to ban accounts suspected of similar misconduct.