For years, the conversation about AI performance focused on computing power: more GPUs, more cores, more FLOPs. But as models grow and architectures become more parallel, the bottleneck is shifting toward a less glamorous but more decisive factor: memory. Specifically, high-bandwidth, low-latency memory access is becoming the key factor in how much work a cluster can actually deliver… and how much it costs to operate.
In this context comes ZAM (Z-Angle Memory), a stacked memory proposal that Intel is developing in partnership with SAIMEMORY, a SoftBank-linked subsidiary, with a direct goal: to offer an alternative to HBM (High Bandwidth Memory), currently the de facto standard for feeding GPUs and accelerators in large-scale AI workloads.
Why HBM has become the bottleneck in AI
HBM is not just “another RAM.” It is vertically stacked DRAM connected via advanced packaging technologies (2.5D/3D), designed to deliver enormous bandwidth with better energy efficiency than alternative approaches. The consequence is clear: for demanding training and inference, the ability to move data from memory to the compute engines can matter as much as the silicon itself.
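To make “moving data can matter as much as the silicon” concrete, a quick roofline-style check compares a kernel’s arithmetic intensity (FLOPs per byte moved) against a machine’s balance point. A minimal Python sketch, with all hardware figures as illustrative assumptions rather than specs for any real product:

```python
# Roofline-style check: is a kernel compute-bound or memory-bound?
# All figures below are illustrative assumptions, not real product specs.

PEAK_FLOPS = 1.0e15   # assumed peak compute: 1 PFLOP/s
PEAK_BW    = 3.0e12   # assumed HBM bandwidth: 3 TB/s

def bound_by(flops: float, bytes_moved: float) -> str:
    """Compare a kernel's arithmetic intensity (FLOPs/byte) against
    the machine balance point (peak FLOPs / peak bandwidth)."""
    intensity = flops / bytes_moved
    balance = PEAK_FLOPS / PEAK_BW  # ~333 FLOPs/byte with these numbers
    return "compute-bound" if intensity >= balance else "memory-bound"

# A large matrix multiply reuses data heavily -> high intensity.
# Streaming an inference KV-cache reuses almost nothing -> low intensity.
print(bound_by(flops=2e12, bytes_moved=4e9))  # ~500 FLOPs/B -> compute-bound
print(bound_by(flops=2e9,  bytes_moved=2e9))  # ~1 FLOP/B    -> memory-bound
```

With these assumed numbers, any kernel below roughly 333 FLOPs per byte sits against the bandwidth wall, no matter how many FLOPs the accelerator advertises.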
The problem is that this advantage comes at an industrial price: manufacturing HBM at scale requires sophisticated processes, advanced packaging, and concentrated production capacity. On top of that, the HBM market is dominated by a very small number of suppliers (essentially SK hynix, Samsung, and Micron), which shapes prices, availability, and bargaining power.
With AI driving up demand, pressure on the HBM supply chain is no longer a secondary issue: it influences deployment schedules and data center CapEx, and in practice determines who can scale and who cannot.
What is ZAM, and why do Intel and SoftBank believe it can change the game?
The announced alliance positions Intel as the technological and innovation partner, while SAIMEMORY would lead development and commercialization. According to the published framework, work begins in the first quarter of 2026, with prototypes expected in 2027 and deployment at scale targeted for around 2030.
The name “ZAM” is no coincidence: it refers to the Z-axis, that is, a vertical stacking strategy aimed at increasing density and performance upwards rather than by expanding surface area. SoftBank is also reported to be planning an investment of approximately 3 billion yen through the prototype phase (fiscal year 2027).
The technical promise accompanying ZAM is ambitious for such a mature market: memories with 2–3× the capacity of HBM, roughly half the power consumption, and comparable or lower costs. Industry estimates cited in trade media put the goals at a 40–50% reduction in power consumption and manufacturing costs of about 60% of HBM’s.
If these figures hold up in production, the impact would be immediate: more memory capacity per accelerator, fewer watts per gigabyte, and potentially less reliance on a concentrated supply chain.
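To get a feel for what those targets would mean per module, here is a minimal arithmetic sketch in Python. The ZAM multipliers come from the claims above; the HBM baseline (capacity, power, normalized cost) is an illustrative assumption, not a real spec:

```python
# Illustrative arithmetic for the claimed ZAM targets (2-3x capacity,
# ~40-50% lower power, ~60% of HBM's manufacturing cost). The HBM
# baseline below is a placeholder assumption, not a real product spec.

hbm = {"capacity_gb": 144, "power_w": 30.0, "cost": 1.00}  # cost normalized

zam = {
    "capacity_gb": hbm["capacity_gb"] * 2.5,  # midpoint of the 2-3x claim
    "power_w":     hbm["power_w"] * 0.55,     # midpoint of 40-50% reduction
    "cost":        hbm["cost"] * 0.60,        # ~60% of HBM cost claim
}

for name, m in (("HBM", hbm), ("ZAM", zam)):
    print(f"{name}: {m['power_w'] / m['capacity_gb'] * 1000:.1f} mW/GB, "
          f"relative cost/GB = {m['cost'] / m['capacity_gb']:.4f}")
```

Under these assumptions the per-gigabyte figures improve by roughly 4× on both power and cost, because the capacity gain compounds with the power and cost reductions.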
The differentiating factor: not just memory, but packaging too
The leap from HBM is explained not just by what the memory is, but by how it is integrated. Here, Intel aims to leverage its core expertise: advanced packaging. TrendForce reports mention the use of EMIB (Embedded Multi-die Interconnect Bridge) as part of the approach to optimizing interconnections in stacked architectures.
In other words, the discussion isn’t only about DRAM vs. DRAM, but about the complete stack architecture, interconnection methods, and volume manufacturing.
Furthermore, Intel’s recent work on memory technologies and packaging provides relevant background: its collaboration with Sandia National Laboratories and what has been described as Next-Generation DRAM Bonding (NGDB) provide a technological foundation for new stacking methods aimed at overcoming capacity limits and scaling up high-performance memory.
A true HBM substitute… or a complement for specific niches?
The question many will ask—and that some headlines already hint at—is whether ZAM can “replace” HBM. In the short term, the timeline alone suggests caution: prototypes in 2027 and scaled deployment around 2030, at best.
This positions ZAM more as a strategic bet for the second half of the decade rather than an immediate solution for current HBM saturation issues.
Still, even as a complement, ZAM could be relevant in specific scenarios:
- Small and medium AI servers, where total cost (memory + energy + cooling) weighs more heavily and “premium HBM” becomes hard to justify; the sketch after this list illustrates the shape of that calculation.
- Edge computing and distributed deployments, where thermal and power budgets are tightest and a more efficient stacked memory can make a real difference.
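As a rough illustration of that “memory + energy + cooling” calculation, the following sketch totals purchase price plus lifetime facility energy for a hypothetical memory module. Every figure (prices, wattages, PUE, energy price, lifetime) is an assumption chosen only to show the shape of the math:

```python
# Minimal TCO sketch for the "memory + energy + cooling" trade-off in a
# small AI server over a 4-year lifetime. All numbers are hypothetical.

HOURS_PER_YEAR = 8760
ENERGY_PRICE = 0.12   # assumed $/kWh
PUE = 1.4             # assumed cooling/facility overhead multiplier
YEARS = 4

def lifetime_cost(memory_price: float, memory_watts: float) -> float:
    """Purchase price plus facility energy cost (cooling folded in via PUE)."""
    kwh = memory_watts / 1000 * HOURS_PER_YEAR * YEARS * PUE
    return memory_price + kwh * ENERGY_PRICE

# Hypothetical modules: a premium HBM stack vs. a cheaper, lower-power one.
print(f"HBM-like: ${lifetime_cost(memory_price=4000, memory_watts=30):.0f}")
print(f"ZAM-like: ${lifetime_cost(memory_price=2400, memory_watts=16):.0f}")
```

Even in this toy example the purchase price dominates, which is why the cost claim (about 60% of HBM’s) matters at least as much as the power claim for smaller deployments.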
The key will be demonstrating not just bandwidth and density, but reliability at volume, manufacturing yields, supply chain compatibility, and, above all, seamless integration with accelerators and platforms.
Industry perspective: memory as a competitive weapon in the AI ecosystem
Intel and SoftBank’s move is notable beyond ZAM itself: it confirms that the AI race is no longer solely about chip design but about controlling the entire triangle of compute + memory + energy. For hyperscalers and colocation providers, this is more than a technological curiosity; it goes straight to the bottom line.
If HBM supply remains tight, any credible alternative, even one arriving in 2029–2030, can have an impact before it exists: it pressures suppliers, incentivizes investment, and encourages diversification of memory strategies. But credibility in this field is earned with working prototypes, clear manufacturing pathways, and partners capable of industrializing the result.
Frequently Asked Questions
What problem does ZAM aim to solve compared to HBM in AI data centers?
It seeks to increase the capacity and energy efficiency of stacked memory, lower cost per unit of performance, and reduce dependence on a very concentrated HBM supply.
When might Z-Angle Memory (ZAM) be ready for production?
Public references point to prototypes in 2027 and scaled deployment around 2030.
Will ZAM replace HBM in GPUs and AI accelerators?
The current outlook suggests more of a coexistence or partial replacement in specific segments. Adoption will depend on actual performance, reliability, costs, and ease of integration with existing platforms.
What role do technologies like EMIB play in these new stacked memories?
Packaging and interconnection are critical: approaches like EMIB are cited as key strategies to connect stacked chips efficiently, a crucial factor for competing with HBM-type architectures.