Micron Raises the Bar with 24 Gbit GDDR7 and 36 Gb/s: More VRAM and Bandwidth for the Next Wave of GPUs

The next generation of graphics cards is no longer determined solely by GPU compute power. Increasingly, the bottleneck appears elsewhere: memory. This isn’t a new debate, but it is becoming more visible: when VRAM capacity or bandwidth runs short, you see stutters, swapping with system RAM, and performance drops in demanding scenarios. As games push toward heavier textures, advanced effects, and higher resolutions, the margin for error shrinks.

In this context, Micron has announced a development that directly impacts memory configurations of future GPUs: its GDDR7 jumps to 24 Gbit per chip (3 GB) and reaches 36 Gb/s per pin. This is a significant leap — both in density and speed — which, on paper, allows for designing cards with more VRAM capacity without widening the bus, while also elevating bandwidth to levels that historically belonged to more specialized solutions.

Micron frames this as a response to “a new bottleneck” where GPU performance isn’t sufficient if memory doesn’t keep up, positioning GDDR7 as the baseline technology for “next-generation gaming” and AI-oriented PCs.

What exactly is changing: 24 Gbit per chip and 36 Gb/s per pin

The key to Micron’s new GDDR7 is that each chip now features 24 Gbit, equivalent to 3 GB. Previously, the common pattern in GDDR7 and GDDR6 was to use densities of 16 Gbit (2 GB) per chip, which forced choices between more chips (higher cost and complexity) or a more limited total VRAM.

With 3 GB chips, the “classic” configurations are immediately altered:

  • 256-bit bus (8 chips): from 16 GB (with 2 GB chips) to 24 GB (with 3 GB chips).
  • 384-bit bus (12 chips): from 24 GB to 36 GB.
  • 512-bit bus (16 chips): 48 GB without needing to double chips per side.

If a clamshell design is used (two chips sharing each 32-bit channel, doubling memory for the same bus width), it opens the door to 96 GB (32 chips over 512 bits), though at the cost of added electrical, thermal, and PCB design complexity. This isn’t likely to appear in consumer cards soon, but it shows that the technological barrier is no longer the chip’s density.
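The capacity math above can be sketched in a few lines of Python. This is only an illustration of the article’s arithmetic (one chip per 32-bit slice of the bus, doubled in clamshell mode), not any vendor’s configuration tool:

```python
def vram_gb(bus_bits: int, chip_gb: int, clamshell: bool = False) -> int:
    """Total VRAM for a GDDR bus: one chip per 32-bit channel slice."""
    chips = bus_bits // 32      # each GDDR chip occupies 32 bits of bus width
    if clamshell:
        chips *= 2              # clamshell: two chips share each 32-bit slice
    return chips * chip_gb

print(vram_gb(256, 2))                   # 16 GB with 16 Gbit (2 GB) chips
print(vram_gb(256, 3))                   # 24 GB with 24 Gbit (3 GB) chips
print(vram_gb(384, 3))                   # 36 GB
print(vram_gb(512, 3))                   # 48 GB
print(vram_gb(512, 3, clamshell=True))   # 96 GB, the clamshell ceiling
```

Swapping the chip density from 2 GB to 3 GB is the entire difference between today’s 16 GB cards and tomorrow’s 24 GB cards on the same 256-bit bus.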

The other leap: bandwidth scaling over 2.3 TB/s theoretical

The announced speed — 36 Gb/s per pin — is equally significant. In GDDR, total bandwidth is calculated directly: speed per pin × bus width / 8. With this formula, the leap is easy to visualize:

  • 256 bits at 36 Gb/s: 36 × 256 / 8 = 1,152 GB/s
  • 384 bits at 36 Gb/s: 36 × 384 / 8 = 1,728 GB/s
  • 512 bits at 36 Gb/s: 36 × 512 / 8 = 2,304 GB/s (over 2.3 TB/s theoretical)
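The same formula (speed per pin × bus width / 8) can be expressed directly, for anyone who wants to check other bus widths or pin speeds. A minimal sketch:

```python
def bandwidth_gbps(pin_gbps: float, bus_bits: int) -> float:
    """Theoretical GDDR bandwidth in GB/s: Gb/s per pin x bus bits / 8."""
    return pin_gbps * bus_bits / 8

for bus in (256, 384, 512):
    print(f"{bus}-bit at 36 Gb/s: {bandwidth_gbps(36, bus):,.0f} GB/s")
# 256-bit at 36 Gb/s: 1,152 GB/s
# 384-bit at 36 Gb/s: 1,728 GB/s
# 512-bit at 36 Gb/s: 2,304 GB/s
```

The same function reproduces the earlier 32 Gb/s figure Micron cited: 32 × 384 / 8 = 1,536 GB/s, i.e., the “over 1.5 TB/s” setup mentioned below.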

To put this into context, prior Micron documentation on GDDR7 mentioned configurations at 32 Gb/s with over 1.5 TB/s of system bandwidth (in a 384-bit, 12-chip setup), compared to GDDR6 at 20 Gb/s. With 36 Gb/s, the theoretical ceiling rises another notch, reducing the likelihood that bandwidth becomes the first bottleneck in extreme graphics loads.

Quick table (theoretical) with GDDR7 at 36 Gb/s

Bus        Chips (typical)   VRAM with 24 Gbit (3 GB)   Theoretical bandwidth
256 bits   8                 24 GB                      1,152 GB/s
384 bits   12                36 GB                      1,728 GB/s
512 bits   16                48 GB                      2,304 GB/s

Comparison: Micron reaches 36 Gb/s, Samsung had already aimed for over 40 Gb/s

In the high-density GDDR7 race, Micron isn’t alone. Samsung announced its 24 Gb GDDR7 chips in October 2024, citing speeds of 40 Gb/s with potential for up to 42.5 Gb/s depending on the application, along with improvements in energy efficiency. SK hynix has likewise announced plans for 24 Gb (3 GB) modules as part of its product evolution.

The industry outlook is clear: the 24 Gb density is becoming the next “step” for GPUs, driven not only by capacity needs but also by supply pressures and manufacturers’ interest in expanding VRAM without redesigning buses and PCBs from scratch.

Comparison table (based on public announcements)

Manufacturer   Announced density   Highlighted speed           Key message
Micron         24 Gb (3 GB)        36 Gb/s                     Increased capacity + next-wave speed
Samsung        24 Gb               40 Gb/s (up to 42.5 Gb/s)   Leading in speed and efficiency
SK hynix       24 Gb (planned)     (varies by roadmap)         Entry into the 3 GB segment to expand offerings

Why does this matter for gaming… and why is it also relevant for local AI?

In gaming, the clearest advantage of larger VRAM is avoiding scenarios where the GPU “runs out of space” and has to swap data with system RAM, which often causes micro-stutters and reduced fluidity, especially in titles with heavy asset loads. The industry has also observed that, within the same product range, models with less VRAM age worse as game requirements grow.

But the announcement also aligns with another trend: using GPUs for local inference and creative AI workloads. Micron links its GDDR7 to “immersive and intelligent” experiences and AI PCs, implying that VRAM is no longer only for textures and graphics buffers but also for models, contexts, and pipelines that scale rapidly.

PAM3, reliability, and efficiency: the less flashy but more decisive part

GDDR7 isn’t just “more gigabits.” The JEDEC standard introduces design changes aimed at enhancing performance without increasing power consumption, including PAM3 signaling, more internal channels, and improvements in training and reliability. Micron emphasizes features such as new reliability functions (e.g., internal ECC) and efficiency gains over previous generations.

This matters because increasing VRAM and bandwidth comes with higher power and heat. GDDR7’s approach is to grow while keeping thermal system limits reasonable, which is crucial for consumer cards and especially compact designs.

What remains to be seen: volume, cost, and product decisions

From a technological standpoint, the benchmark is set: GDDR7 24 Gb with up to 36 Gb/s per pin already exists in Micron’s catalog. Whether and how this makes it into gaming cards or workstation products depends on factors like production capacity, chip price, manufacturer priorities (AMD, Intel, NVIDIA), and market segmentation.

In practice, the industry tends to adopt memory not only based on what it enables but on what it makes feasible at a given cost. If supply aligns and pricing doesn’t sink final products, the jump to 24 GB on 256-bit buses could become the new high-end standard. Otherwise, the progress may remain confined longer to professional segments and products where price sensitivity is lower.


Frequently Asked Questions (FAQ)

What does GDDR7 24 Gbit mean, and how much VRAM does it allow on a 256-bit GPU?
Each 24 Gbit chip equals 3 GB. With a 256-bit bus, typically 8 chips are used, allowing for 24 GB of VRAM without altering the bus width.

What bandwidth does GDDR7 at 36 Gb/s offer, and why is over 2.3 TB/s mentioned?
At 36 Gb/s, a 512-bit bus can reach 2,304 GB/s, i.e., over 2.3 TB/s theoretical maximum, using the standard calculation (Gb/s × bits / 8).

How does Micron’s proposal compare with Samsung’s GDDR7 24 Gb?
Micron announces 24 Gb at 36 Gb/s, whereas Samsung reported 24 Gb at 40 Gb/s with potential to reach 42.5 Gb/s. The final product speeds depend on GPU adoption and supply considerations.

Why is VRAM so critical in modern gaming and also for local AI inference?
Because it prevents swaps with system RAM when handling large textures, buffers, and complex scenes, and increasingly, AI workloads on GPUs consume VRAM for models and intermediate data.
