NVIDIA’s presence in the Chinese market for Artificial Intelligence (AI) semiconductors is facing a structural shift. A recent analysis attributed to Bernstein predicts that the company’s share of the Chinese AI processor/accelerator market could drop from 66% in 2024 to around 8% in 2026, as local manufacturers ramp up their design, production, and data-center integration capabilities. The underlying message is clear: U.S. export controls on advanced chips are acting as a catalyst for a technological replacement that China has been pursuing for years, but which is now becoming more urgent and systematic.
Bernstein’s forecast rests on two parallel dynamics. On one hand, the availability of NVIDIA GPUs and accelerators in China has been shrinking due to tightening regulation and licensing requirements. On the other, Chinese companies such as Huawei, Cambricon, Moore Threads, and MetaX are gaining traction with “good enough” alternatives for certain use cases, especially large-scale inference and training, where deployments compensate with more nodes and software optimization. Indeed, the analysis suggests that domestic providers could meet approximately 80% of local demand for AI accelerators.
The decisive factor: export restrictions and forced catalog redesign
This shift cannot be understood without considering the U.S. export control framework. Since 2022, and especially after the 2023 updates, these rules aim to limit China’s access to advanced computing chips and associated supercomputing technologies. The practical outcome has been a market where selling “the latest” technology is either unfeasible or heavily conditioned, pushing companies to design cut-down variants that stay under the technical thresholds or to seek licenses on a case-by-case basis.
In this context, NVIDIA attempted to maintain a presence with products adapted to the restrictions. Even these routes, however, have proven fragile. In April 2025, the company announced it would need a license to export its H20 chip (an AI-focused accelerator) to China, and forecast an accounting impact of $5.5 billion tied to inventory and purchase commitments, a reflection of how costly it is to operate under regulatory uncertainty in such capital-intensive supply chains.
In early 2026, the U.S. Department of Commerce revisited its licensing evaluation policy for certain semiconductors destined for China, reinforcing the idea that the “market window” increasingly depends on additional conditions and safeguards.
Huawei and the “cluster scale” strategy to close the gap
From a technological standpoint, Chinese progress is not simply a matter of launching a chip and expecting it to compete on equal terms with the latest Western products. The approach most often repeated among local contenders is system-level engineering: combining proprietary hardware with networking, interconnects, software, and large-scale deployments.
In this narrative, Huawei stands out as the player with the most momentum, thanks in part to its Ascend line and its focus on large-scale clusters. Recent technical analysis indicates Huawei has outlined a roadmap that includes the Ascend 950 (expected in 2026), with ambitious goals in the low-precision formats common in AI (such as FP8), and with performance boosted through “rack-scale supercomputing,” i.e., thousands of chips working as a single system.
The same analysis emphasizes that while performance per chip may still lag behind leading GPUs, China is trying to compensate through cluster engineering and an indigenous software ecosystem. Huawei promotes its own frameworks and programming layers (domestic alternatives to CUDA and training environments) to reduce dependence on NVIDIA’s stack, traditionally one of the company’s hardest competitive advantages to replicate.
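A rough back-of-the-envelope calculation helps illustrate this compensation logic. The Python sketch below uses purely hypothetical figures (not vendor specifications) to show how a larger cluster of slower chips can approach the aggregate throughput of a smaller cluster of faster GPUs, provided multi-node scaling losses stay under control.

```python
# Illustrative sketch with hypothetical numbers, not vendor specs:
# how adding nodes can offset a per-chip performance deficit.

def effective_throughput(per_chip_tflops: float, num_chips: int, scaling_efficiency: float) -> float:
    """Aggregate usable throughput after applying multi-node scaling losses."""
    return per_chip_tflops * num_chips * scaling_efficiency

# Hypothetical figures chosen only to illustrate the trade-off described above.
leading_gpu_cluster = effective_throughput(per_chip_tflops=1000, num_chips=1000, scaling_efficiency=0.90)
domestic_chip_cluster = effective_throughput(per_chip_tflops=400, num_chips=3000, scaling_efficiency=0.80)

# 1e6 TFLOPS = 1 EFLOPS
print(f"Leading GPU cluster  : {leading_gpu_cluster / 1e6:.2f} EFLOPS")
print(f"Domestic chip cluster: {domestic_chip_cluster / 1e6:.2f} EFLOPS")
```

The trade-off, of course, is that the larger cluster pays for its throughput in extra power, floor space, and networking, which is why interconnect and scheduling software matter as much as the silicon itself.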
The “Four Small Dragons” and the ecosystem battle
Beyond Huawei, the Chinese market is filling up with players seeking to occupy specific acceleration niches. Bernstein, according to reports published by tech outlets, highlights firms such as Moore Threads, MetaX, Biren Technology, and Suiyuan Technology, sometimes called the “Four Small Dragons” of Chinese GPUs.
However, the key is not solely silicon. The real bottleneck lies in software compatibility: toolchains, compilers, libraries, optimized kernels, and support in popular frameworks. This is where a manufacturer can “win” even if its chip is not the most efficient, as long as it convinces thousands of developers to migrate without rewriting half the platform. Therefore, the debate is no longer just “which chip is faster,” but “which platform reduces the total cost of adoption” across universities, local cloud providers, and large enterprises.
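As a minimal illustration of that adoption cost, the sketch below uses plain PyTorch (with hypothetical function names) to contrast code hard-wired to the CUDA backend with device-agnostic code. The more a codebase resembles the first pattern, and the more it relies on hand-written CUDA kernels or NCCL-tuned communication, the more expensive a platform switch becomes.

```python
# Minimal PyTorch sketch of why migration cost is mostly a software problem.
import torch

# Tightly coupled: hard-codes the CUDA backend and fails outright on non-NVIDIA hardware.
def train_step_cuda(model, batch):
    model = model.cuda()   # CUDA-only call
    batch = batch.cuda()
    return model(batch)

# Device-agnostic: the backend becomes a configuration detail. A domestic
# accelerator only needs to expose a PyTorch device/backend for this to keep working.
def train_step_portable(model, batch, device_name: str = "cuda"):
    device = torch.device(device_name)  # e.g. "cpu", "cuda", or a vendor-provided device type
    model = model.to(device)
    batch = batch.to(device)
    return model(batch)
```

In practice, the heavier costs sit below this level of abstraction: custom kernels, operator coverage in the target framework, compiler maturity, and collective-communication tuning, which is exactly the stack that domestic vendors are racing to fill in.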
Implications for NVIDIA and the global AI market
If the forecast of an 8% market share materializes, the impact on NVIDIA would be strategic as well as commercial. China represents enormous demand for AI computing, and losing prominence there accelerates two knock-on effects:
- Alternative standardization: the more a domestic stack is adopted, the more its tools, libraries, and know-how are reinforced, reducing the historical “lock-in” of CUDA within parts of the ecosystem.
- Market fragmentation: multinational companies may be forced to develop different products and workflows for different regions, increasing costs and slowing down global deployments.
From the Chinese side, the scenario also has limitations. Part of the local industry still depends on production capacity, access to certain manufacturing processes, and the availability of advanced memory and large-scale packaging. Still, the trend is consistent: external restrictions are accelerating internal substitution, and the Chinese AI market appears to be moving toward a far more domestically driven landscape than just a few years ago.
Frequently Asked Questions
What does it mean that NVIDIA’s share of AI chips in China drops to 8%?
It implies a loss of dominance in China’s AI accelerator market and a corresponding gain for local providers, according to Bernstein’s forecasts. It does not necessarily reflect technical performance; rather, it reflects the distribution of procurement and deployments within China.
Which Chinese companies are gaining ground in AI accelerators?
According to analyses, Huawei, Cambricon, Moore Threads, and MetaX are leading, along with other manufacturers seeking to establish domestic GPU/accelerator offerings and software stacks.
Why do export controls have such an impact on the AI GPU market?
Because they limit which chips can be sold (and under what conditions) in China. This reduces the supply of cutting-edge products and pushes customers and suppliers to prioritize local alternatives and invest in compatible software and deployment solutions.
Why is the ecosystem (CUDA, frameworks, libraries) as important as the chip?
Because migration costs are largely determined by software: if training, serving models, and operating clusters require rewriting tools, the switch slows down. That is why China is trying to replicate not only the hardware but also the software stack that lets engineering teams work productively.

