HBM4: New Ultra-Fast Memory to be 30% More Expensive than HBM3E, According to TrendForce

The next generation of high bandwidth memory is set to break into the market with higher performance… and significantly higher prices

HBM4 memory is poised to become the new standard for demanding workloads in artificial intelligence and high-performance computing (HPC), promising a significant leap not only in speed but also in cost. According to a report from the analysis firm TrendForce, HBM4 will cost at least 30% more than the current HBM3E, which itself saw a 20% increase over its predecessor.

This price escalation is driven by the technical complexity of the new architecture. HBM4 doubles the number of input/output pins compared to HBM3E, rising from 1,024 to 2,048, which means larger, more complex, and more expensive chip designs. In exchange, it will offer a maximum bandwidth of up to 2,048 GB/s per stack at 8 Gb/s per pin, according to specifications recently ratified by the JEDEC consortium.
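The per-stack figure follows directly from the pin count and the per-pin data rate. A minimal sketch of the arithmetic, using the JEDEC figures quoted above:

```python
# Per-stack bandwidth = (I/O pins × per-pin rate in Gb/s) / 8 bits per byte.
pins = 2048          # HBM4 doubles HBM3E's 1,024 I/O pins
rate_gbps = 8.0      # 8 Gb/s per pin, per the ratified JEDEC spec
stack_bw = pins * rate_gbps / 8
print(f"{stack_bw:.0f} GB/s per stack")  # → 2048 GB/s per stack
```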

Technical Advantages and New Logic-Based Die Designs

Samsung and SK Hynix, in collaboration with external foundries, are developing a new design known as the logic-based base die, which will better integrate memory with the processor. This advancement aims to reduce latency, increase efficiency in data pathways, and ensure more stable transmission at high speeds. By comparison, the base dies of HBM3E were essentially signal channels without logic functions.

The redesign aims to meet the growing demand for memory solutions for AI acceleration chips and supercomputing, where every nanosecond and watt counts. The improvement in logic integration promises faster, more efficient, and more reliable memory for upcoming high-end processors.

Rising Market: Expected to Surpass 3.75 Exabytes in 2026

The demand for HBM memory shows no signs of slowing down. TrendForce estimates that the total sales volume of HBM—across all generations—will exceed 30 billion gigabits in 2026, equivalent to 3.75 million terabytes (3.75 exabytes). According to their forecasts, HBM4 will begin to displace HBM3E as the main solution in the second half of next year.
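TrendForce's volume figure can be sanity-checked with a straightforward unit conversion (decimal units assumed throughout):

```python
# Convert 30 billion gigabits of projected 2026 HBM shipments to bytes.
total_gigabits = 30e9
total_gigabytes = total_gigabits / 8          # 8 bits per byte → 3.75e9 GB
total_terabytes = total_gigabytes / 1_000     # → 3.75 million TB
total_exabytes = total_gigabytes / 1e9        # → 3.75 EB
print(total_terabytes, total_exabytes)  # → 3750000.0 3.75
```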

The South Korean manufacturer SK Hynix is expected to remain a market leader with more than 50% market share, while Micron and Samsung will need to accelerate their production plans to keep up in the race for HBM4.

Preparing for the Next Generation of Accelerators

Major players in the chip market are already preparing their products to adopt this new technology. Nvidia has announced that its future Rubin-generation GPUs, expected in 2026, will incorporate HBM4 memory. AMD will also follow suit with its upcoming series of Instinct MI400 accelerators.

Meanwhile, SK Hynix has already begun shipping samples of its first HBM4 chips, manufactured in 12-layer configurations. Although exact bandwidth and capacity figures have yet to be revealed, expectations are that they will exceed the current limits of HBM3E, which already reaches up to 1,280 GB/s per stack in SK Hynix versions.

Comparison of HBM Generations (Bandwidth per Stack)

Generation           Bandwidth per Pin   Bandwidth per Stack
HBM1                 1.0 Gb/s            128 GB/s
HBM2                 2.0 Gb/s            256 GB/s
HBM2E                3.6 Gb/s            461 GB/s
HBM3                 6.4 Gb/s            819 GB/s
HBM3E (Micron)       9.2 Gb/s            1,178 GB/s
HBM3E (Samsung)      9.8 Gb/s            1,254 GB/s
HBM3E (SK Hynix)     10.0 Gb/s           1,280 GB/s
HBM4                 8.0 Gb/s            2,048 GB/s
HBM4E (est.)         ~10.0 Gb/s          ~2,560 GB/s
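Every per-stack value in the table follows from the same formula: per-pin rate × pin count ÷ 8. A short sketch recomputing a few rows, assuming the 1,024-pin interface for generations up to HBM3E and 2,048 pins for HBM4, as stated earlier in the article:

```python
# Stack GB/s = per-pin rate (Gb/s) × I/O pins / 8 bits per byte.
# Pin counts: 1,024 through HBM3E, 2,048 for HBM4 (per the article).
generations = [
    ("HBM1", 1.0, 1024),
    ("HBM2E", 3.6, 1024),
    ("HBM3", 6.4, 1024),
    ("HBM3E (SK Hynix)", 10.0, 1024),
    ("HBM4", 8.0, 2048),
]
for name, rate, pins in generations:
    print(f"{name}: {rate * pins / 8:.0f} GB/s")
# HBM2E comes out to 460.8, printed as 461 GB/s, matching the table.
```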

With the HBM4 standard already approved by JEDEC and the first samples circulating, the industry is gearing up for a new leap in capacity, speed… and cost. A price that many will be willing to pay if it translates into leadership in the demanding AI market.

via: computerbase
