Shenzhen has launched a new AI computing cluster that, according to the local government and specialized media, already reaches a total capacity of 14,000P (petaflops) after a new 11,000P phase was added to a previous 3,000P deployment. The significance lies not just in the figure but in the approach: it is presented as the first Chinese “10,000 cards level” cluster built on an entirely domestically developed technology stack, from accelerators to networking, storage, orchestration software, and system operation.
The official activation of this second phase occurred on March 26, 2026, according to the Shenzhen Municipal Government, which frames it within its strategy to reinforce an “autonomous and controllable” computing foundation at a time when China aims to reduce reliance on foreign hardware and software for training large models. In this sense, the project has a value beyond scale: it serves as an industrial proof of whether the country can sustain a complete AI environment with indigenous components and tools, not just isolated national chips within hybrid infrastructures.
The system relies on Huawei Ascend 910C accelerators and the Ascend + CANN ecosystem, according to official Shenzhen information. The city defines it as the first Chinese cluster of Ascend 910C supernodes at this scale, while Digitimes adds that the deployment involves around 14,000 Ascend 910C units within a supernode architecture designed to minimize latency and reduce traffic between nodes. This nuance is important because, in large AI clusters, stacking more cards alone is no longer enough: network, job scheduling, and fault management weigh as heavily as raw processing power.
More than Power: A System Engineering Challenge
The most interesting takeaway from this project is precisely that. China has already deployed its own chips in various settings, but the bottleneck remained in intermediate layers: interconnection, software, resource scheduling, and large-scale operation. The Shenzhen cluster aims to address this with a more compact and coordinated architecture where resources are grouped into supernodes interconnected via high-speed links and a layer of distributed programming. According to Digitimes, this approach seeks to tackle three classic limits of massive clusters: communication bottlenecks, operational complexity, and failure accumulation.
The city also promotes the project as a piece of infrastructure with tangible utility, not just a political showcase. Official information mentions support for the national integrated computing network, development of large Chinese models, and strengthening of the domestic chip ecosystem. Supporting this vision, several sources report that the first phase's capacity was fully allocated, that nearly 50 companies, universities, and research institutes have signed framework agreements for the new phase, and that overall utilization sits around 92%.
This suggests that the challenge is no longer just attracting demand but proving that this demand can be served stably. In other words, Shenzhen is competing not solely on nominal power but on its ability to deliver useful and sustained computing for model training and inference. That is the real test for any infrastructure of this size.
The Numbers Shenzhen Wants to Present
Part of the project’s appeal lies in the publicly shared metrics, though caution should be exercised here. Shenzhen’s government does not detail all technical indicators in its report, but industry sources referencing project materials indicate that the first phase of 3,000P recorded an average daily failure rate of 0.3‰, a training linearity of 93.12% with the Pangu-718B model, and a PUE of 1.08. These figures are ambitious and reinforce claims of efficiency and reliability, but they have yet to be validated by an independent third party.
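Two of those reported metrics have conventional definitions worth making concrete. The sketch below shows how training linearity (measured throughput versus ideal linear scaling) and PUE (total facility power over IT-only power) are typically computed; the throughput and power inputs are hypothetical placeholders, not figures from the Shenzhen cluster.

```python
def training_linearity(cluster_throughput: float, n_accelerators: int,
                       single_throughput: float) -> float:
    """Scaling efficiency: measured cluster throughput vs. ideal linear scaling."""
    ideal = n_accelerators * single_throughput
    return cluster_throughput / ideal

def pue(total_facility_power_kw: float, it_equipment_power_kw: float) -> float:
    """Power Usage Effectiveness: total site power divided by IT-only power."""
    return total_facility_power_kw / it_equipment_power_kw

# Hypothetical: 3,000 accelerators each delivering 1.0 unit alone,
# but 2,790 units measured together -> 93% linearity.
print(f"linearity: {training_linearity(2790.0, 3000, 1.0):.2%}")  # 93.00%

# Hypothetical: the site draws 10.8 MW while IT gear draws 10.0 MW -> PUE 1.08.
print(f"PUE: {pue(10_800, 10_000):.2f}")  # 1.08
```

A PUE of 1.08 would mean only 8% overhead (cooling, power conversion) on top of the compute itself, which is why the figure, if independently confirmed, would be notable.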
Nevertheless, these indicators align with the sector’s current challenges. Large AI clusters tend to fail as they grow if node communication degrades, energy consumption spikes, or long jobs encounter interruptions. That is why Shenzhen emphasizes concepts like supernodes, domestic high-speed interconnection, liquid cooling, and centralized planning. The goal is clear: turn AI computing into a manageable industrial capability rather than a chaotic assembly of servers and cards.
A Piece in China’s Tech Sovereignty Race
The geopolitical context of this announcement is also evident. After US restrictions on advanced chips, China has accelerated efforts to build a less-dependent AI supply chain, and Shenzhen seeks to position itself as a key node in that strategy. The municipal plan aimed to surpass 80 EFLOPS (exaflops) of available intelligent computing capacity by 2026, in addition to creating several large-scale clusters and a low-latency metropolitan network connecting other regional hubs.
Therefore, this project matters beyond the city itself. If it manages to sustain high utilization levels, software compatibility, and operational stability, it could serve as a benchmark for other Chinese deployments based on national technology. Conversely, if interoperability or real reliability fall short, the cluster may end up more as a symbol than a production platform. For now, Shenzhen is trying to demonstrate the former.
Frequently Asked Questions
What does it mean that the cluster is 14,000P?
It means its total announced capacity reaches 14,000 petaflops by combining the new phase of 11,000P and a previous phase of 3,000P. This figure is used to describe the aggregate computing power of the system.
What chips does the Shenzhen cluster use?
According to official and industry sources, the system is based on Huawei Ascend 910C accelerators and the Ascend + CANN software ecosystem.
Is it confirmed that the entire system is fully domestic?
This is the official stance of the project and Shenzhen’s authorities: they describe it as China’s first “10,000 cards” level cluster built with a fully domestic tech stack. Some technical details come from project materials collected by specialized media but have not been verified by an independent public audit.
Why is this project important for China?
Because it tests not just chips but China’s ability to deploy a complete AI infrastructure—computing, networking, storage, software, and operation—with indigenous technology on an industrial scale.
via: digitimes

