DeepSeek, the Chinese artificial intelligence lab that shook the industry with low-cost models, has broken one of the sector's most established routines: it has not given Nvidia or AMD early access to its upcoming flagship model, V4, for performance optimization. According to sources cited by Reuters, instead of sharing early versions with the major American manufacturers, as was common practice before significant launches, DeepSeek reportedly gave domestic suppliers, including Huawei, a head start of several weeks to fine-tune software and performance on their processors.
The move comes amid escalating tensions around the chip supply chain and export controls. Beyond the symbolic gesture, the message is technical and economic: in modern AI, who optimizes first, profits first. And in a market where actual performance depends as much on silicon as on kernels, compilers, libraries, and drivers, the “time advantage” becomes a competitive edge.
Why “Early Access” Is a Strategic Weapon
In the typical lifecycle of a large model, developers share pre-release versions with chip manufacturers so their technical teams can refine compatibility and improve hardware performance across widely deployed systems. It’s not just about benchmarks: it’s the work that ensures the model reaches the market “production-ready,” with a stable user experience and no surprises in latency or power consumption.
Reuters notes that DeepSeek had previously collaborated with Nvidia’s technical staff, which underscores the change of course. On this occasion, the lab reportedly did not grant Nvidia or AMD that access for “performance optimization,” but did allow Chinese suppliers to prepare in advance.
Huawei Gains Time in the Software Race
The immediate effect is that Huawei and other local actors can pre-adjust their stack—drivers, runtimes, integration, and tuning—to make V4 perform better on their silicon. In a context where China seeks to reduce its reliance on U.S. technology, this extra time fits into a broader strategic picture: if models are increasingly trained and deployed within national borders, control over the entire platform (chips + software + operation) becomes crucial.
Although no official explanation has been issued by DeepSeek or Huawei, the sector interprets this episode as a decision that goes beyond the technical. Reuters quotes Ben Bajarin (Creative Strategies), who believes the immediate impact on Nvidia and AMD will be limited, but frames the decision as part of a broader strategy to keep American hardware and models at a disadvantage within the Chinese market.
A Turbulent Background: Export Controls and Blackwell Suspicion
The news about early access to V4 comes just a day after another Reuters report: a senior Trump administration official claimed that DeepSeek’s latest model was trained using Blackwell, Nvidia’s most advanced chip, despite U.S. export restrictions that block the shipment of Blackwell to China. According to that account, the U.S. believes DeepSeek might try to eliminate “technical indicators” that reveal the use of U.S. chips, and that the Blackwell units are clustered in a data center in Inner Mongolia.
This context helps explain why a Chinese lab might want to shift its public narrative and industrial alignment: if there are suspicions about the use of restricted hardware, political and reputational pressure becomes greater. At the same time, the incentive to strengthen the domestic ecosystem also increases.
What’s at Stake: Inference, Not Just Training
The current contest is no longer limited to training giant models. In businesses and service markets, the big battle is inference: running trained models efficiently, cheaply, and at scale. Reuters recalls that in 2025, the U.S. authorized Nvidia to resume shipments of its H20 chip to China, and AMD to supply its MI308—both designed for inference—while more advanced processors remain under licensing restrictions. Reuters adds that it’s unclear whether DeepSeek has obtained clearance to purchase these American chips.
This nuance is critical: if the Chinese market can still access “permitted” inference chips, Nvidia and AMD continue competing. But if Chinese models are optimized earlier and better on local silicon, reliance on imported hardware could decrease through software and operational improvements.
For AMD, Reuters cites an illustrative figure: the company reported $390 million in sales of the MI308 in its latest quarter—highlighting that the inference market is real, not experimental.
DeepSeek: The Open Source Phenomenon That Discomfits Washington
The international reaction is not only about a pending product launch. DeepSeek has become a symbol of a broader wave of Chinese models that have gained strong traction in open-source communities. Reuters notes that DeepSeek models have been downloaded more than 75 million times on Hugging Face since January 2025, and that among models released in the past year, Chinese models have outstripped those from any other country on the platform.
At that level of adoption, any technical decision by the lab—who gets early access, which hardware is optimized, what is shown, and what is hidden—ceases to be an internal matter and becomes a geopolitical and market indicator. For Washington, the rise of Chinese models reignites a recurrent debate: whether chip restrictions genuinely slow China’s competitive capabilities… or if, on the contrary, they accelerate the development of domestic alternatives.
A Disconcerting Industry Conclusion: Ecosystem “Neutrality” Is No Longer Assumed
Until recently, the norm was straightforward: big models, major manufacturers, cross-optimizations, and a relatively interoperable global ecosystem. The DeepSeek-V4 case suggests that this normality is breaking down. Not because Nvidia or AMD will be sidelined from the AI market—claiming that would be premature—but because early access and software tuning are becoming tools of industrial policy.
Practically, DeepSeek’s message is clear: performance is not just bought with GPUs; it’s also bought through time, priority of access, and alliances. And in AI, a few weeks can feel like an eternity.
Frequently Asked Questions (FAQ)
What does it mean that DeepSeek does not give “early access” to Nvidia and AMD for its V4 model?
It means those manufacturers cannot tune their hardware and software for the model ahead of launch, while Chinese suppliers reportedly had weeks to adjust software and drivers.
Why is pre-optimization important for large AI models?
Because much of the real performance depends on the stack (libraries, kernels, drivers, runtimes). Being late to this optimization can result in lower efficiency, higher latency, or less stable production performance.
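The scale of that software effect can be illustrated with a minimal, self-contained sketch: the same matrix multiplication on the same hardware, computed once in pure Python and once through NumPy's BLAS-backed routine. NumPy here merely stands in for any vendor-tuned stack (drivers, kernels, optimized libraries); the point is that the arithmetic and the silicon are identical, and the gap comes entirely from the software layer.

```python
# Same hardware, same math: only the software stack differs.
# NumPy's BLAS-backed multiply stands in for a vendor-optimized stack;
# the triple loop stands in for an untuned fallback path.
import time
import numpy as np

def naive_matmul(a, b):
    """Textbook triple-loop multiply: no vectorization, no cache tiling."""
    n, k, m = len(a), len(b), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += a[i][p] * b[p][j]
            out[i][j] = s
    return out

n = 120  # small enough that the slow path finishes quickly
rng = np.random.default_rng(0)
A = rng.random((n, n))
B = rng.random((n, n))

t0 = time.perf_counter()
slow = naive_matmul(A.tolist(), B.tolist())
t_naive = time.perf_counter() - t0

t0 = time.perf_counter()
fast = A @ B  # dispatches to the optimized BLAS library
t_blas = time.perf_counter() - t0

assert np.allclose(slow, fast)  # identical results, wildly different cost
print(f"naive: {t_naive:.4f}s  optimized: {t_blas:.6f}s")
```

On typical machines the optimized path wins by orders of magnitude; early access lets a chip vendor make sure a new model lands on its equivalent of the fast path, not the fallback.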
Which chips are still sold to China for inference and why does it matter?
According to Reuters, the U.S. permits shipments of Nvidia’s H20 and AMD’s MI308 for inference workloads. If the Chinese market shifts its optimization effort to local silicon, these chips may become relatively less significant, even if they remain legally exportable.
Why does Washington see DeepSeek as a strategic case?
Because of the rapid adoption of its models and the debate over whether export controls actually hinder Chinese AI development or, indirectly, promote the growth of a competitive domestic ecosystem.