SK hynix has made a significant technical breakthrough in HBM development by verifying a 12-layer stack bonded with hybrid bonding, an advanced packaging technology that could prove decisive for future high-bandwidth memory generations. The South Korean company hasn't disclosed concrete yield figures but has acknowledged that it is working to raise the yield to a level suitable for mass production.
The announcement, made by Kim Jong-hoon, SK hynix’s Technical Lead, during the Beyond HBM conference held in Seoul, comes amidst the race for HBM4 and HBM5. The demand for AI accelerators has made this memory one of the most scarce and strategic components in the market. NVIDIA, AMD, Google, Amazon, and other chip designers increasingly require more capacity, higher bandwidth, and lower power consumption to power GPUs, TPUs, and large-scale AI architectures.
What Hybrid Bonding Brings to HBM Memory
HBM memory is built by stacking multiple DRAM chips vertically and connecting them via through-silicon vias (TSVs) and internal interconnections. Until now, the most common techniques have used bumps or microbumps to connect layers. This approach is mature but has physical limitations: it occupies space, adds resistance, generates heat, and complicates increasing the number of layers without making the package significantly larger.
Hybrid bonding aims to reduce these limitations by directly connecting the surfaces of the chips, typically through copper-to-copper contacts and dielectric-to-dielectric interfaces. By eliminating or reducing the reliance on bumps, it enables denser interconnections, lower stack height, improved electrical efficiency, and, in theory, less heat generation. For HBM, where every millimeter and every watt matter, this improvement can make a real difference.
SK hynix explained that it has already tested a 12-layer HBM structure using hybrid bonding. The company hasn’t released yield data, which is a notable omission because the major challenge of this technology is not just demonstrating it works in the lab but manufacturing it with sufficient performance, cost-effectiveness, and repeatability. Kim Jong-hoon stated that preparations are “much more advanced than in the past,” but avoided specifying exact percentages.
This cautious approach makes sense. In semiconductors, a process can be technically feasible yet not profitable if too many units are defective. In HBM, the problem is compounded because a stack combines many layers. If any one layer fails, the entire assembly could lose value or become unusable. That makes yield in packaging extremely sensitive.
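The compounding effect can be shown with simple arithmetic. The sketch below assumes, purely for illustration, that each layer fails independently with the same probability, which is a simplification of real packaging yield:

```python
# Illustrative only: assumes each layer in the stack fails independently,
# a simplification of real HBM packaging yield behavior.

def stack_yield(per_layer_yield: float, layers: int) -> float:
    """Probability that every layer in the stack is good."""
    return per_layer_yield ** layers

# Even a 99% per-layer yield compounds noticeably across 12 layers:
print(f"{stack_yield(0.99, 12):.3f}")  # ~0.886 -> roughly 11% of stacks lost
print(f"{stack_yield(0.95, 12):.3f}")  # ~0.540 -> nearly half of stacks lost
```

This is why a bonding step that looks only marginally less reliable per layer can be ruinous at 12 or 16 layers, and why yield is the figure everyone watches.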
MR-MUF Will Continue While the New Stage Arrives
SK hynix will not abandon its current technology immediately. It will continue using MR-MUF, or Mass Reflow Molded Underfill, while refining hybrid bonding. MR-MUF uses copper bumps and fills the gaps between layers with underfill material after heating the assembly. It has been one of the key technologies that helped SK hynix advance in HBM3E and HBM4.
The transition will be gradual. HBM4 is already entering production using advanced packaging techniques, but hybrid bonding seems more associated with later generations or higher-density variants. Market forecasts suggest broader adoption of hybrid bonding with HBM5, around 2029 or 2030, when stacking more layers and managing heat will be even more critical.
The technical context helps explain why. HBM4 already represents an improvement over HBM3E: doubling the interface to 2,048 bits, increasing the number of channels, and boosting the bandwidth per stack. The JEDEC specification for HBM4 envisions higher capacities and configurations up to 16 layers, with a clear focus on AI accelerators and HPC. To reach 16, 20, or more layers in future generations, traditional techniques are beginning to reach their limits.
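The headline bandwidth figures follow directly from interface width and per-pin data rate. A sketch of the arithmetic, using the 2,048-bit HBM4 interface from the text and assumed per-pin rates (~9.6 Gb/s for HBM3E, 8 Gb/s as a baseline for HBM4; actual products vary):

```python
def stack_bandwidth_tbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth per stack in TB/s: width in bits times per-pin rate,
    converted from gigabits to terabytes (decimal units)."""
    return bus_width_bits * pin_rate_gbps / 8 / 1000

# HBM3E: 1,024-bit interface at an assumed ~9.6 Gb/s per pin
print(stack_bandwidth_tbps(1024, 9.6))  # ~1.23 TB/s
# HBM4: interface doubled to 2,048 bits, assumed 8 Gb/s per pin
print(stack_bandwidth_tbps(2048, 8.0))  # ~2.05 TB/s
```

Doubling the interface width roughly doubles per-stack bandwidth even without pushing per-pin speed, which is exactly why the 2,048-bit jump matters so much for AI accelerators.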
This is where hybrid bonding becomes attractive. The more layers stacked, the more challenging controlling height, heat, alignment, and mechanical reliability becomes. Reducing the distance between dies and improving electrical connections could enable denser and faster modules, albeit with more demanding manufacturing processes.
The Battle for HBM Packaging
For years, memory technology has been largely defined by manufacturing processes, capacity, and speed. In the AI era, advanced packaging has become central to the product. HBM is not just fast DRAM: it's a complex 3D structure that must coexist with GPUs, interposers, advanced substrates, base logic dies, cooling solutions, and power delivery.
SK hynix, Samsung, and Micron compete to secure clients for HBM4 and to develop HBM4E and HBM5. SK hynix has an advantage due to its role as a key supplier for NVIDIA, but competitors are accelerating. Micron has announced progress on HBM4 platforms for next-generation systems, and Samsung is trying to regain ground after several cycles in which SK hynix led much of the demand driven by AI needs.
The pressure isn't only to sell more memory but to meet the increasingly specific requirements of the major accelerator designers. NVIDIA, for example, has raised the bar on speed, power efficiency, capacity, integration, and timelines. HBM4 incorporates customized base logic, which complicates switching suppliers without redesigns or additional validation. Customization benefits early adopters, but it also raises the risk and cost of errors.
Investments reflect this tension. SK hynix recently announced a capital expenditure of about 19 trillion won (around $12.85 billion USD) for a new plant in South Korea focused on AI memory and advanced packaging. The company also noted that customer demand for HBM over the coming years exceeds its supply capacity, indicating that shortages will persist for some time.
Why It Matters for AI Data Centers
HBM memory is one of the key bottlenecks in modern AI systems. Large models require not only vast computational power but also a constant stream of data fed to the accelerators. If memory bandwidth is insufficient, a GPU or accelerator sits idle despite having spare computational capacity.
Therefore, advances in HBM directly impact the cost and performance of AI data centers. More layers mean greater capacity per stack. Higher bandwidth means better utilization of accelerators. Improved thermal efficiency can lower power consumption and enable denser systems. However, if packaging technology doesn’t scale well with high yield, memory will become more expensive and less available.
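A rough way to see when memory becomes the limit is to compare the time an accelerator needs to move data against the time it needs to compute, in the spirit of the roofline model. All figures below are hypothetical round numbers, not the specs of any particular product:

```python
def bound_by(flops: float, bytes_moved: float,
             peak_flops: float, peak_bw: float) -> str:
    """Simple roofline-style check: whichever takes longer, data movement
    or arithmetic, is the resource that limits the kernel."""
    t_compute = flops / peak_flops
    t_memory = bytes_moved / peak_bw
    return "memory-bound" if t_memory > t_compute else "compute-bound"

# Hypothetical accelerator: 1000 TFLOP/s peak, 4 TB/s of HBM bandwidth.
PEAK_FLOPS = 1000e12
PEAK_BW = 4e12

# Low arithmetic intensity (2 FLOPs per byte): bandwidth is the wall.
print(bound_by(flops=2e12, bytes_moved=1e12,
               peak_flops=PEAK_FLOPS, peak_bw=PEAK_BW))  # memory-bound
# High arithmetic intensity (1000 FLOPs per byte): compute dominates.
print(bound_by(flops=1e15, bytes_moved=1e12,
               peak_flops=PEAK_FLOPS, peak_bw=PEAK_BW))  # compute-bound
```

Many inference workloads sit on the memory-bound side of this line, which is why each extra terabyte per second of HBM bandwidth translates almost directly into usable accelerator throughput.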
Hybrid bonding offers a potential solution, but it’s not an immediate fix and comes with challenges. It demands extreme precision in alignment, surface cleaning, defect control, thermal processes, and metrology. Any small defect in an HBM stack can compromise the entire assembly. That’s why SK hynix emphasizes progress over mass deployment.
This announcement sends a clear signal to the market: the next frontier in AI won’t only be about new GPUs but also about how memory modules are manufactured, connected, and cooled. Platform performance will increasingly depend on advanced packaging, and SK hynix aims to maintain its lead before Samsung and Micron catch up.
Frequently Asked Questions
What has SK hynix verified?
SK hynix has verified a 12-layer HBM stack using hybrid bonding—a direct die-to-die bonding technology that aims to improve density, performance, and efficiency.
What is hybrid bonding in HBM memory?
It is an advanced packaging technique that directly connects chip layers, reducing or eliminating bumps, potentially improving bandwidth, power consumption, stack height, and thermal management.
Is SK hynix already mass-producing HBM with hybrid bonding?
No. The company states it’s working toward achieving a yield suitable for mass production but has not disclosed specific figures or confirmed commercial production using this technology.
Why does this matter for AI?
Because AI accelerators rely on high-bandwidth, large-capacity HBM. Improving packaging can enable denser, faster, and more efficient modules, benefiting future GPUs and high-performance computing systems.
via: wccftech

