Micron is preparing a move that could reshape the memory landscape for artificial intelligence. According to information published by ETNews, the American company has begun developing a new generation of stacked GDDR, a technology that aims to combine elements of HBM's design philosophy with the lower relative cost of traditional graphics memory. The goal is to serve increasingly diverse AI memory demands, where not everything runs on the most advanced (and most expensive) HBM: there is also room for intermediate solutions aimed at inference, more affordable accelerators, and new GPU architectures.
The key to this move lies in the concept itself. GDDR has for years been the standard memory for graphics cards, consoles, and visual workloads, while HBM has become the flagship of the AI boom thanks to its enormous bandwidth and its integration close to the processor. The idea now is a hybrid: vertically stacked GDDR, with greater capacity and performance than conventional GDDR, though presumably below HBM in raw throughput. According to ETNews, Micron has already started development, with plans to have equipment ready and begin process testing in the second half of 2026; initial samples could arrive as early as 2027.
A response to an AI market that no longer wants a single type of memory
This move is not coincidental. AI memory has become far more complex than it was just two years ago. The explosion of large training clusters turned HBM into one of the industry's major bottlenecks, and manufacturers such as Micron, Samsung, and SK hynix have invested heavily to expand capacity. In fact, Micron recently reported strong memory demand driven by data centers and AI expansion, in a context where supply remains tight.
At the same time, the market is beginning to fragment. Not every AI workload requires the most extreme, highest-cost solution. For inference, edge AI, specialized accelerators, or certain GPU profiles, the balance between cost, capacity, and bandwidth can favor memory types other than classic HBM. This is where vertically stacked GDDR could make sense: more density and performance than traditional GDDR without the cost spike associated with HBM. That positioning is a reasonable inference from ETNews' report and the current state of the AI memory market, not a confirmed product brief.
What is known so far about Micron’s plan
The available information remains limited and does not come from an official Micron announcement of a commercial product. ETNews indicates that the company is working on a vertically stacked GDDR architecture, with an initial configuration of around four layers, likely in response to demand from AI accelerators and other high-performance applications. For now, detailed specifications, speeds, power consumption, and the final packaging format are all unknown.
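To put the four-layer figure in perspective, here is a rough capacity sketch. The die density is an assumption borrowed from the 24 Gb GDDR7 parts discussed below; nothing about the stacked product's actual density has been confirmed.

```python
# Back-of-the-envelope capacity for a hypothetical 4-high GDDR stack.
# Assumption: each die matches today's 24 Gb GDDR7 density (unconfirmed
# for the stacked product; ETNews reports only "around four layers").

GBIT_PER_DIE = 24   # assumed die density, in gigabits
LAYERS = 4          # stack height reported by ETNews

stack_gbit = GBIT_PER_DIE * LAYERS   # 96 Gb per stack
stack_gbyte = stack_gbit / 8         # 12 GB per stack

print(f"Hypothetical stack capacity: {stack_gbit} Gb = {stack_gbyte:.0f} GB")
# A board with eight such stacks would reach 96 GB, roughly the capacity
# territory where today only HBM-equipped accelerators play.
```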
What we do know from recent official sources is that Micron is ramping up its presence in the GDDR market. At the end of February, it announced 24 Gb GDDR7 devices with speeds of up to 36 Gb/s, a 12.5% increase over early GDDR7 parts running at around 32 Gb/s. That evolution fits the idea that the company wants to strengthen its entire memory family rather than just follow the HBM track.
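The 12.5% figure checks out arithmetically, and once the standard 32-bit GDDR7 device interface is assumed, the per-pin speed translates directly into per-device bandwidth. A minimal sanity check:

```python
# Sanity check on the reported GDDR7 figures.
old_pin, new_pin = 32, 36   # Gb/s per pin, per the article

improvement = (new_pin / old_pin - 1) * 100
print(f"Per-pin speedup: {improvement:.1f}%")   # 12.5%

# Per-device bandwidth, assuming the standard 32-bit GDDR7 interface.
PINS = 32
bandwidth_gbs = new_pin * PINS / 8   # GB/s per device
print(f"Per-device bandwidth at 36 Gb/s: {bandwidth_gbs:.0f} GB/s")   # 144
```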
The gap between HBM and conventional GDDR
If Micron manages to bring this technology to market, it could create a promising new category within the memory industry. HBM will remain the go-to for the most extreme training systems, where bandwidth is paramount. But between that high end and conventional GDDR, there is room for a memory offering better density, higher effective bandwidth, and a more affordable cost. ETNews suggests that stacked GDDR could occupy this intermediate position.
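Ballpark per-package numbers make the size of that gap concrete. The HBM3E and GDDR7 figures below use publicly known interface widths and pin speeds; the stacked-GDDR entry is purely hypothetical, since how Micron would wire such a package (shared bus versus per-die channels) is unknown.

```python
# Ballpark per-package bandwidth from interface width and pin speed.
# The stacked-GDDR row is a pure assumption: it supposes four dies each
# exposing a full 32-bit GDDR7 interface, which Micron has not confirmed.

def bandwidth_gbs(pins: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s for a given interface width and pin speed."""
    return pins * gbps_per_pin / 8

packages = {
    "HBM3E stack (1024-bit @ 9.6 Gb/s)":                      bandwidth_gbs(1024, 9.6),
    "GDDR7 device (32-bit @ 36 Gb/s)":                        bandwidth_gbs(32, 36),
    "4-high stacked GDDR, hypothetical (4x32-bit @ 36 Gb/s)": bandwidth_gbs(128, 36),
}

for name, bw in packages.items():
    print(f"{name}: {bw:.0f} GB/s")
# ~1229, 144, and 576 GB/s respectively: the hypothetical stack lands
# squarely between a single GDDR7 device and an HBM3E stack.
```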
This segment could be attractive not only for AI inference but also for certain high-performance GPUs, specialized accelerators, and eventually premium gaming graphics, if the thermal and economic trade-offs prove favorable. Caution is warranted, though: gaming remains a market possibility rather than a confirmed roadmap item. The initial focus appears to be squarely on AI and professional acceleration.
The major challenge: heat, power, and cost
The hardest part isn't the idea itself but executing it. Stacking GDDR is not simply a matter of making a cheaper HBM: GDDR is designed around different electrical, thermal, and packaging trade-offs. ETNews highlights several challenges: the stacking method between GDDR chips, thermal management, power consumption, and above all the added cost of stacking itself. If the final cost approaches that of HBM, the competitive advantage largely evaporates.
And this is the project's real test. HBM justifies its price because it delivers a huge leap in bandwidth and interconnect efficiency for high-end AI. Vertically stacked GDDR will only make sense if it stays well below HBM in cost while still beating traditional GDDR on performance and capacity. If that balance can't be struck, it may remain a technical curiosity rather than a profitable category.
A message aimed directly at Samsung and SK hynix
There's also a clear competitive reading. ETNews notes that Micron would be the first to attempt this commercially, giving it a chance to get ahead of Samsung and SK hynix in a new memory subsegment. That matters because the game today is no longer just about who dominates HBM4 or secures the best contracts with NVIDIA and other AI giants, but about who identifies the next layers of the market first.
If AI continues expanding into more accelerators, devices, and price points, memory will need to diversify. That is precisely what Micron seems to be anticipating: a future in which there isn't a single reigning memory, but multiple families competing on workload, cost, and architecture.
Frequently Asked Questions
What is the stacked GDDR that Micron is developing?
It would be an evolution of GDDR graphics memory where multiple chips are stacked vertically to increase capacity and performance, partially approaching the concept of HBM.
Has Micron officially announced this product?
No. The information comes from ETNews and indicates an ongoing project, not a formal commercial launch by Micron.
What would this type of memory be used for?
Mainly to fill an intermediate space between HBM and traditional GDDR, especially for AI inference, specialized accelerators, and possibly certain high-performance GPUs. The latter is a market inference, not an official spec.
What technical risks does this technology face?
The main challenges include physical stacking of chips, thermal control, power consumption, and maintaining a favorable cost-to-performance ratio compared to HBM.
via: ETNews

