The US intensifies its crackdown on DeepSeek over model distillation

The United States has decided to turn AI model distillation into a political, industrial, and strategic front. Washington’s messaging is no longer limited to commercial competition between labs: the White House and several U.S. companies claim that Chinese firms, with DeepSeek at the center, are extracting capabilities from frontier models developed in the U.S. to accelerate their own systems at lower costs.

The offensive rests mainly on two moves. On one side, a White House memorandum signed by Michael Kratsios on April 23, 2026, accuses “foreign entities, primarily Chinese,” of industrial-scale campaigns to distill U.S.-developed models. On the other, public pressure exerted by OpenAI and Anthropic, which in February brought their own allegations against DeepSeek and other Chinese labs into the political and media arena.

This issue matters because it’s not just about intellectual property. In the U.S. narrative, illicit distillation is now framed as a problem of national security, technological advantage, and geopolitical control over the next generation of models.

Before delving into the details, it’s important to recall a key nuance: distillation is a legitimate and widely used AI technique. It trains smaller, cheaper models on the outputs of more powerful ones. The conflict isn’t with the technique itself but with its allegedly unauthorized use against third-party closed models. Anthropic is explicit on this point: distillation is normal when a lab applies it to its own models, but it becomes problematic when it’s used to acquire capabilities from others’ models at scale and without permission.
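To make the technique concrete, here is a minimal NumPy sketch of black-box distillation, the setting at issue in the article: the "student" never sees the "teacher's" weights, only its softened output probabilities, which is what a querying campaign would collect. Both models are toy linear classifiers on synthetic data; all names and parameters are illustrative, not taken from any lab's actual pipeline.

```python
import numpy as np

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T gives softer distributions."""
    z = z / T
    z = z - z.max(axis=1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)

# "Teacher": a fixed linear classifier standing in for a large closed model.
W_teacher = rng.normal(size=(4, 3))
X = rng.normal(size=(256, 4))               # queries sent to the teacher
teacher_probs = softmax(X @ W_teacher, T=2.0)  # softened outputs collected

# "Student": trained only on the teacher's outputs, by gradient descent
# on the soft cross-entropy between its predictions and the teacher's.
W_student = np.zeros((4, 3))
lr = 0.5
for _ in range(500):
    p = softmax(X @ W_student, T=2.0)
    grad = X.T @ (p - teacher_probs) / len(X)  # gradient of soft cross-entropy
    W_student -= lr * grad

# The student now largely reproduces the teacher's decisions.
agreement = (softmax(X @ W_student).argmax(1) == teacher_probs.argmax(1)).mean()
```

The point of the toy: capability transfer requires only query access, which is why the dispute centers on terms of use and account restrictions rather than on stolen weights.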

OpenAI and Anthropic place DeepSeek at the center of their allegations

OpenAI formalized its position in a document sent on February 12, 2026 to the House Select Committee on strategic competition with China. In that report, the company states that DeepSeek must be understood within the context of “ongoing efforts” to exploit capabilities developed by OpenAI and other U.S. labs. It also claims to have detected activity consistent with persistent attempts to distill its models, even through obfuscated methods.

The document goes further, asserting that accounts associated with DeepSeek employees have developed techniques to bypass OpenAI’s access restrictions, using third-party routers and other methods designed to hide the origin of requests. OpenAI additionally claims that DeepSeek developed code to access U.S. models and obtain outputs useful for programmatic distillation. All of this, according to the company, is part of an increasingly sophisticated ecosystem aimed at capability extraction.

Anthropic intensified its narrative a few days later. In a note published on February 23, 2026, the company reported having identified “industrial-scale” campaigns by three Chinese labs: DeepSeek, Moonshot, and MiniMax. According to their account, these campaigns generated over 16 million exchanges with Claude via about 24,000 fake accounts, violating their terms of use and regional restrictions.

Specifically regarding DeepSeek, Anthropic mentions more than 150,000 exchanges aimed at extracting reasoning capabilities, generating evaluation tasks for reinforcement learning, and producing “censorship-safe” alternatives for politically sensitive queries. The company claims to have attributed these campaigns with “high confidence” to specific labs through IP correlation, request metadata, and infrastructure indicators.

The White House elevates it to a strategic doctrine

The truly new element is that these allegations are no longer confined to corporate terrain. The White House memorandum from April 23 states that foreign actors, especially Chinese, are employing industrial distillation campaigns against U.S. frontier systems, directly threatening Washington’s effort to preserve its competitive edge through infrastructure, chips, and export controls.

This shift aligns with Anthropic’s stance, which links illicit distillation to the erosion of export controls. Its argument is that if foreign labs can extract capabilities from U.S. models, the intended effect of chip restrictions is blunted, because these actors can close gaps without bearing the full costs of training and security. Anthropic even suggests that illicitly distilled models might lose essential safeguards and be used in military, surveillance, or offensive cyber applications.

Meanwhile, OpenAI approaches the issue similarly. Their congressional submission connects protection against distillation directly to countering what they call “autocratic AI” and maintaining U.S. leadership in infrastructure, chips, energy, and data centers.

It’s not just about DeepSeek

DeepSeek’s case is significant, but probably not the only concern for Washington. The core debate is whether closed models can continue to compete in a market where part of their capabilities could be absorbed by external actors through massive querying, distributed accounts, and systematic pattern extraction for training competitors.

For the tech sector, this raises several uncomfortable questions. One is technical: to what extent can systems be prevented from leaking replicable behaviors when heavily queried? Another is legal: where exactly does infringement begin when the technique employed is well known and the result is acquired functional capability rather than copied weights? And a third is political: will the U.S. try to turn such practices into a new ground for sanctions, access restrictions, and diplomatic pressure?

Currently, the public domain contains only official memos and direct accusations from affected companies. That’s already enough to grasp the scale of the clash. What was once a normal AI technique—distillation—is increasingly being framed as a battlefield in the U.S.-China technological rivalry. In this context, DeepSeek has become the most visible symbol of a much larger conflict.

Frequently Asked Questions

What is model distillation in Artificial Intelligence?
It’s a technique used to train smaller or cheaper models by using the outputs of more powerful models. The industry deems it legitimate when applied to one’s own models, but controversial when used without authorization on third-party systems.

What exactly does OpenAI accuse DeepSeek of?
OpenAI claims that DeepSeek maintains ongoing efforts to leverage capabilities developed by OpenAI and other U.S. labs, and reports having detected activity consistent with persistent attempts to distill these models via obfuscated methods.

What figures has Anthropic provided regarding these campaigns?
Anthropic states that DeepSeek, Moonshot, and MiniMax generated over 16 million exchanges with Claude through approximately 24,000 fake accounts. Specifically regarding DeepSeek, they mention over 150,000 exchanges.

Why does the White House treat this as a national security issue?
Because Washington considers that unauthorized distillation could weaken the U.S. technological advantage, undermine export controls, and enable strategic rivals to acquire advanced capabilities without bearing the full development and security costs.
