China is accelerating its technological self-sufficiency strategy amid the global race for artificial intelligence. Memory manufacturer Yangtze Memory Technologies Co. (YMTC), known for its NAND production, has taken a decisive step: entering the DRAM business with the aim of manufacturing HBM (High Bandwidth Memory) locally. The goal is clear: break the bottleneck of HBM—which currently limits AI chip deployment in the country—and secure supply against tightening export controls.
YMTC’s move is neither isolated nor improvised. It follows months of massive demand for AI memory and is set against a geopolitical backdrop that has elevated memory, not just computing, to a strategic asset. The shortage of HBM in China has been documented by analysts and specialized media: the country has been drawing down its reserves and, according to various sources, those stockpiles are now at critical levels. In this scenario, the lack of domestic options for HBM worries local industry more than capacity limits for general-purpose chips. The message from Beijing is clear: without sufficient HBM, large-scale AI is impossible.
From NAND to DRAM: a technical leap with HBM as the target
Backed by public capital, YMTC is building DRAM production lines with a dual focus: mastering conventional DRAM manufacturing and accelerating HBM development with 3D stacking technologies. Along the way, the company is prioritizing advanced packaging with TSV (through-silicon vias), a technique that enables vertical connections between multiple memory layers (dies) to create HBM stacks. Without mature, reliable TSV, competitive HBM does not exist.
The industrial decision includes reserving fab capacity in Wuhan for the new business line. For now, no public production figures are available, which suggests the project is still in a ramp-up phase: equipment installation, process stabilization, team training, and reaching acceptable yield are marathons rather than sprints. Nonetheless, the message to the market is unequivocal: China is prioritizing high-performance memory on par with AI accelerators.
The YMTC–CXMT alliance: specialization to buy time
To shorten the learning curve, YMTC plans to collaborate with ChangXin Memory Technologies (CXMT), China’s leading DRAM manufacturer. The industrial logic is familiar: CXMT contributes expertise in DRAM technology and foundational processes; YMTC brings manufacturing muscle in packaging and the supply chain. Together, the two companies can shorten the path to first samples and, later, to meaningful volumes.
This tandem could be decisive in a market where every quarter counts. If 3D stacking with TSV achieves stable performance in prototypes and advances toward pilot production, China will have a viable route to equip its domestic accelerators and GPUs with adequate memory bandwidth for modern AI workloads.
HBM, the missing piece in the AI puzzle
HBM has established itself as critical memory for AI and high-performance computing. Its promise: massive bandwidth and low latency, achieved by stacking multiple DRAM layers vertically and placing them very close to the processor (2.5D/3D packaging). Where traditional memory falls short, HBM can feed accelerators with the data flow that increasingly large and complex models require.
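To see why that wide, stacked interface matters, here is a rough back-of-the-envelope comparison. The figures are illustrative assumptions (a 1024-bit HBM3-class stack at roughly 6.4 Gb/s per pin versus a 64-bit DDR5-6400 channel), not vendor specifications.

```python
# Back-of-the-envelope peak-bandwidth comparison. The parameters are
# illustrative assumptions; real products vary by generation and vendor.

def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Theoretical peak bandwidth in GB/s: bus width x per-pin data rate / 8."""
    return bus_width_bits * data_rate_gbps / 8

hbm_stack = peak_bandwidth_gbs(1024, 6.4)   # one HBM3-class stack: ~819 GB/s
ddr5_channel = peak_bandwidth_gbs(64, 6.4)  # one DDR5-6400 channel: ~51 GB/s

print(f"HBM3-class stack:      ~{hbm_stack:.0f} GB/s")
print(f"DDR5-6400 channel:     ~{ddr5_channel:.0f} GB/s")
print(f"Accelerator, 6 stacks: ~{6 * hbm_stack / 1000:.1f} TB/s")
```

The width of the interface, rather than exotic clock speeds, is what multiplies throughput; that is also why the stacking and packaging steps are the hard part.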
This leap in performance comes at the cost of increased manufacturing complexity. Micro-scale alignment, layer bonding, thermal management, and large-scale testing are areas where global leaders have invested for years. China is late to this race but compensates with volume demand, public funding, and a clear political priority: semiconductor sovereignty.
Regulation and urgency: the impact of export controls
The tightening of controls by the U.S. — including a December extension covering HBM and parts of advanced packaging equipment — has triggered chain reactions. For major Chinese chip designers, the window to import third-party HBM narrows, and risks of disruptions or cost increases are no longer theoretical. In this context, domestic manufacturing shifts from a long-term aspiration to an operational necessity.
The effects are twofold. In the short term, shortages worsen: demand far exceeds global supply, and HBM leaders outside China prioritize large international buyers with high-value contracts. In the medium term, if YMTC, CXMT, and other domestic firms bring their HBM to maturity, internal price pressure could ease and delivery timelines may become more predictable for strategic projects.
Huawei: another front in the same battle
Meanwhile, Huawei has sent a message to the sector by announcing that it will integrate HBM processes into the next generation of its AI chips. The significance lies less in the technical details, which have not been disclosed, than in the strategic implication: one of the country’s tech giants is building its own high-performance memory chain. If this roadmap materializes into products, it will push the entire ecosystem to accelerate investments and co-develop hardware and software end to end to capitalize on domestically produced HBM.
Rising prices and doubled demand: a perfect storm
The global HBM market is experiencing its own inflationary tension. Sector analyses project price increases of up to 10% in 2025 and expect demand to double in the next cycle, driven by capacity appetite from hyperscalers and companies modernizing data centers for AI. The collateral impact is felt in conventional DRAM: as top manufacturers shift more wafers and packaging toward servers and HBM, contract prices for PCs and mobile rise, and delivery times lengthen.
For China, this global dynamic overlaps with regulatory restrictions, amplifying the impact. It’s not just about paying more: it’s about ensuring availability, as each week of delay sets back training, inference deployment, and business goals linked to AI.
What China gains if HBM is manufactured domestically
1) Supply security. The most obvious benefit: less exposure to external controls and logistical disruptions. A domestically produced memory supply chain reduces volatility and makes large AI projects more predictable.
2) Cost control. Even if local HBM is not initially cheaper than imports, avoiding bottlenecks and last-minute premiums can lower overall project costs.
3) Vertical integration. If YMTC, CXMT, and local chip designers co-design interfaces, power delivery, and thermal profiles, system-level efficiency, not just datasheet figures, can improve.
4) Spillover effect. Creating a domestic HBM supply chain attracts suppliers of substrates, chemicals, metrology, and testing, along with talent. That ecosystem spills over into the broader semiconductor industry.
Barriers: technical challenges, talent, and materials
Nothing is guaranteed. The technical barriers are difficult to overcome:
- TSV and 3D stacking. Stacking 8, 12, or more layers requires precise alignment, reliable bonding, and thermal management to prevent hotspots and premature degradation.
- Yield and testing. Achieving high yields at volume is the economic linchpin of HBM; testing protocols and binning must catch subtle defects without driving up costs (see the sketch after this list).
- Substrates and materials. Advanced substrates (such as ABF), certain chemicals, and inspection equipment remain in tight supply or under restrictions.
- Specialized talent. The global war for talent in advanced packaging and memory is real. Building and retaining teams is as crucial as acquiring machinery.
- System-chip compatibility. HBM that looks good on paper is not the same as HBM that performs at the system level. Co-integration with local accelerators and GPUs will be key.
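To see why yield is the economic linchpin, consider a simplified model of a stacked part: the whole stack works only if every die and every bond works, so yields multiply. The sketch below is a hypothetical back-of-the-envelope calculation that assumes independent defects and ignores known-good-die screening and repair, both of which real production lines depend on.

```python
# Simplified stack-yield model (illustrative assumption: independent defects,
# no known-good-die screening or post-bond repair).

def stack_yield(die_yield: float, bond_yield: float, layers: int) -> float:
    """Probability that an entire stack is good: yields multiply per die and per bond."""
    bonds = layers - 1                        # one bond between each pair of layers
    return (die_yield ** layers) * (bond_yield ** bonds)

for layers in (4, 8, 12):
    y = stack_yield(die_yield=0.95, bond_yield=0.99, layers=layers)
    print(f"{layers}-high stack: ~{y:.0%} of stacks are good")
# Roughly 79%, 62%, and 48% for 4-, 8-, and 12-high stacks.
```

Even with optimistic per-die and per-bond numbers, taller stacks amplify every weakness in the process, which is why testing and binning before stacking matter as much as the stacking itself.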
Scenarios: what could happen in the next 12–24 months
- Prototypes and pilots. It is reasonable to expect engineering samples and internal pilots that let chip designers validate interfaces and thermal profiles with domestic HBM.
- Initial limited volume batches. If yield progresses, small commercial batches for local customers with controlled integration criteria (e.g., in-house clusters) might appear.
- Accelerated learning. Feedback from these pilots should drive process iterations, testing improvements, and substrate designs tailored to the Chinese ecosystem.
- Roadmap competition. The schedules of Huawei and others will serve as a metronome for the rest: if they incorporate domestic HBM into their accelerators, the pressure to scale production will intensify.
Global implications
Although YMTC’s push initially targets domestic needs, the international market could see indirect relief if part of Chinese demand is met locally. Less pressure on HBM leaders outside China could ease tensions for other buyers, especially during data center refresh windows. Conversely, if domestic HBM takes longer to mature, pressure on prices and delivery timelines will persist globally.
Final takeaway: sovereignty, costs, and time
YMTC’s entry into DRAM with a focus on HBM is, above all, a policy decision. China does not want its AI ambitions to depend on the volatility of a market dominated by a few players. Success will be measured not only by cleanroom performance but also by the ability to deliver memory on time, at volume, and reliably for the country’s large AI deployments. Meanwhile, the clock keeps ticking: global demand continues to grow, prices rise, and the window to close the gap narrows.
FAQs
1) What is HBM and why is it crucial for modern AI?
High Bandwidth Memory (HBM) is a 3D-stacked memory with vertical connections (TSV), placed close to the processor. It offers significantly higher bandwidth than traditional DRAM. In AI, where the bottleneck often lies in data movement rather than computation, HBM powers accelerators with the data throughput needed to train and serve large models.
2) How can YMTC produce HBM if its expertise is in NAND?
Shifting from NAND to DRAM involves new processes and skills, but YMTC leverages its memory manufacturing expertise and accelerates via TSV packaging and 3D stacking. Collaboration with CXMT provides DRAM know-how, while YMTC offers manufacturing capacity and supply chain. The challenge is to improve yields and stabilize processes for volume production.
3) When might China produce volume HBM domestically?
No official figures are available. Based on current information, the plausible timeline involves sampling and pilots in the short to medium term, with initial batches for domestic customers as TSV, packaging, and testing mature. Reaching parity with global leaders will require multiple process iterations.
4) Will HBM prices rise in 2025?
Sector analyses forecast increases of up to 10% in 2025, amid demand expected to double as AI drives capacity needs. In addition, major manufacturers are reallocating capacity toward servers and HBM, tightening conventional DRAM supply and lengthening delivery timelines.
Sources consulted:
- Reuters: information about YMTC’s entry into DRAM with a focus on HBM and the impact of the export control expansion on high-bandwidth memory.
- Previous industry coverage: YMTC–CXMT collaboration plans for HBM and TSV use in 3D stacking.
- Market reports and forecasts: price and demand outlooks for HBM in 2025 (up to 10% increases, demand doubling).
- Corporate announcements: Huawei and the integration of HBM processes in upcoming AI chips.
Note: This article is based on publicly available information and verified reports, without speculative data beyond what the cited industry sources support.
via: wccftech