The “AI fever” has turned GPUs into a geopolitical resource. On this gameboard, China is driving a transition that seemed unthinkable just two years ago: rapidly reducing its dependence on Nvidia, even at the cost of energy efficiency or software maturity.
A Bernstein analysis cited by international media points to an especially tough scenario for the company: Nvidia’s share of China’s AI accelerator market could fall to around 8%, down from roughly two-thirds of the market not long ago. The combination of export restrictions and domestic progress would be the trigger for this shift.
Two Forces Disrupting the Balance
1) Export Controls: Less Product, More “Friction” in Trade
The first factor isn’t technological but regulatory. U.S. restrictions on advanced chips and their ecosystem (components, interconnects, computing capabilities, etc.) have limited which models can be sold and under what conditions. The practical result is that, in the high-performance AI segment, Western supply becomes irregular, more expensive, and bureaucratic.
Within this context, part of the Chinese market—especially large-scale deployments—cannot base its planning on uncertain supplies. Purchases are now prioritizing what can be signed, manufactured, and deployed domestically, even if it’s not “the best” in the traditional sense.
2) Domestic Substitution: “Good Enough” and Increasingly Scalable
The second factor is the rapid maturation of Chinese suppliers. The same Bernstein analysis suggests that local manufacturers could meet about 80% of domestic demand in the coming years, pushing Nvidia into a marginal position.
Here, the key isn’t just the chip itself but the entire system: clusters, internal networks, software, support, and availability. When the goal is to build capacity at scale (data centers, regional clouds, superclusters), the “time-to-deploy” variable is almost as important as TFLOPS.
Huawei, Moore Threads, and the “System Model”: Competing for Racks, Not Just Cards
In practice, China is replicating the path already taken by the U.S. and hyperscalers: moving from “a GPU” to rack-level architectures.
An example is Huawei’s approach with CloudMatrix 384, a scaled training platform that competes on the full cluster level. According to the Financial Times, Huawei has promoted comparisons showing CloudMatrix 384 outperforming Nvidia configurations in certain scenarios, though with significantly higher energy consumption. This nuance impacts data centers’ CAPEX (power infrastructure) and OPEX (electricity and cooling).
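To see why higher consumption matters beyond the spec sheet, the OPEX side of that trade-off can be sketched with a back-of-envelope calculation. All figures below are hypothetical illustrations (the power draws, electricity price, and PUE are assumptions, not measured specs of CloudMatrix 384 or any Nvidia configuration):

```python
# Back-of-envelope electricity OPEX for two clusters assumed to deliver
# the same training throughput. Every number here is a hypothetical
# illustration, not a vendor spec.

HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.08   # USD, assumed industrial electricity rate
PUE = 1.3              # assumed power usage effectiveness (cooling/overhead)

def annual_energy_cost(it_power_kw: float) -> float:
    """Yearly electricity cost for a cluster drawing `it_power_kw` of IT power."""
    return it_power_kw * PUE * HOURS_PER_YEAR * PRICE_PER_KWH

cluster_a = annual_energy_cost(500)    # hypothetical efficient cluster: 500 kW
cluster_b = annual_energy_cost(1000)   # hypothetical less efficient cluster: 1 MW

print(f"Cluster A: ${cluster_a:,.0f}/yr")
print(f"Cluster B: ${cluster_b:,.0f}/yr")
print(f"Extra OPEX from higher draw: ${cluster_b - cluster_a:,.0f}/yr")
```

Under these assumptions, doubling the power draw adds hundreds of thousands of dollars per year in electricity alone, before counting the extra CAPEX for power delivery and cooling infrastructure.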
Meanwhile, new players like Moore Threads are aiming to close the gap at the “AI GPU” level with dedicated products, relying on the arguments of technological sovereignty and local availability. Bernstein’s analysis notes this competitive pressure as part of a broader trend shift.
Comparison Table: What’s Changing and Why It Matters
| Key Variable | Before (Nvidia’s dominance) | Now (accelerated transition) | Real Market Impact in China |
|---|---|---|---|
| Access to cutting-edge GPUs | Fairly smooth | Restricted and uncertain | More challenging capacity planning with external suppliers |
| Units of competition | GPU / card | System / rack / cluster | Success comes from delivering “ready infrastructure,” not just silicon |
| Alternative suppliers | Complementary | Strategic | Buyers accept “good enough” if deployable locally |
| Price vs. Availability | High price, but manageable | Priority on availability | Supply chain factors weigh as much as performance |
| Energy & efficiency | Clear Western advantage | Trade-offs (higher consumption) | Increased pressure on power and cooling capacity |
Implications for Europe: A Warning on Dependence and Supply Chains
This case also sends a clear message outside China: when a component becomes a strategic weapon, dependence turns into operational risk. A “sovereign AI” isn’t just a slogan in this context—it’s a way to ensure continuity, costs, and medium-term planning.
Frequently Asked Questions
Why is Nvidia’s share in China expected to fall to 8%?
Because Bernstein analysts, cited by international media, project a sharp decline driven by export restrictions and the rise of local suppliers capable of meeting much of the internal demand.
What does it mean that China might cover 80% of its demand with local chips?
It means that, for many deployments, buyers can prioritize availability, support, and local delivery capacity, even if energy efficiency isn’t the best on the market. This self-sufficiency reduces Nvidia’s space.
Does Huawei really compete with Nvidia in AI training?
Huawei is adopting a “rack-scale” approach with systems like CloudMatrix 384. Some reports indicate competitive performance in specific scenarios, though with higher energy consumption.
What are the global implications for the AI hardware market?
It accelerates a bifurcation: on one side, Western supply chains (Nvidia/AMD and their ecosystem), and on the other, a more integrated Chinese stack. This fragmentation could raise costs, lead to duplicated software development, and strain advanced manufacturing capacity.
Sources: Tom’s Hardware and Nikkei

