SK hynix puts its 192 GB SOCAMM2 for NVIDIA Vera Rubin into production

SK hynix has announced the start of mass production of its 192 GB SOCAMM2 module, a low-power LPDDR5X-based memory manufactured on its 1c nm process (the sixth generation of the 10-nanometer class) and aimed at the upcoming NVIDIA Vera Rubin platform. The South Korean company asserts that this move strengthens its position in one of the most vital battles in AI infrastructure: efficient memory for next-generation servers.

The significance lies in the fact that the AI bottleneck is no longer solely in GPUs. It also lies in supplying these systems with enough capacity and bandwidth, at acceptable energy efficiency, without drastically increasing rack power consumption. In this context, SOCAMM2 aims to open a path distinct from traditional RDIMM solutions by bringing low-power LPDDR memory, commonly used in mobile devices, into data center and AI server environments.

What is SOCAMM2 and why is it gaining momentum

SK hynix describes SOCAMM2 as an LPDDR-based module optimized for AI servers, featuring a slim form factor, high scalability, and a compression connector designed to improve signal integrity and make module replacement easier. The company claims that its new 192 GB solution offers more than double the bandwidth and over 75% better energy efficiency compared to conventional RDIMM modules, based on its internal metrics.

This approach aligns with the architecture of NVIDIA's Vera Rubin platform. NVIDIA describes its Vera CPU as designed for data- and memory-intensive workloads, with 88 Arm cores and up to 1.2 TB/s of LPDDR5X bandwidth. In practical terms, this means Rubin requires not only powerful accelerators but also a much more refined memory hierarchy to avoid losing performance to data-access bottlenecks.

The key advantage of SOCAMM2 is precisely that: combining high capacity with lower power consumption in a modular format. Unlike fully soldered or more closed solutions, the form factor retains a level of serviceability and replaceability that is attractive in data centers, where maintenance and scalability are crucial. So this is not simply faster RAM, but a component designed to balance performance, power, and density requirements.

A race no longer limited to SK hynix

The announcement also makes clear that the race for SOCAMM2 is heating up. Micron had already moved in this direction, positioning itself as the first supplier to offer a modular, data-center-grade LPDDR5X format. In March, Micron announced its own 256 GB SOCAMM2, with capacities ranging from 48 GB to 256 GB, and linked it to NVIDIA Vera Rubin NVL72 systems and standalone Vera CPU platforms. According to Micron, this configuration enables up to 2 TB of memory per CPU and up to 1.2 TB/s of bandwidth per processor.
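As a rough sanity check of those per-CPU figures, the arithmetic below relates the module sizes to the totals. It is only a sketch: the assumption that capacity is built from identical modules, with bandwidth split evenly across them, is ours, not something either vendor has detailed.

```python
# Back-of-the-envelope check of the per-CPU figures cited above.
# Assumption (not confirmed by either vendor): the 2 TB per CPU is
# built from identically sized SOCAMM2 modules, with the 1.2 TB/s of
# LPDDR5X bandwidth split evenly across them.

per_cpu_capacity_gb = 2 * 1024      # Micron: up to 2 TB per CPU (binary GB)
per_cpu_bandwidth_gbs = 1.2 * 1000  # NVIDIA: up to 1.2 TB/s (decimal GB/s)

for module_gb in (192, 256):        # SK hynix / Micron module capacities
    modules = per_cpu_capacity_gb / module_gb
    bw_per_module = per_cpu_bandwidth_gbs / modules
    print(f"{module_gb} GB modules: {modules:.1f} per CPU, "
          f"~{bw_per_module:.0f} GB/s each")
```

With 256 GB modules, the 2 TB figure works out to exactly eight modules per CPU; the 192 GB parts would presumably target lower-capacity configurations rather than dividing that total evenly.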

This makes SK hynix's move more than an incremental improvement. The company is now competing in a category where memory stops being a relatively interchangeable commodity and becomes a core part of the system architecture. In other words, it is not just competing to sell DRAM chips; it is positioning itself for a key role in NVIDIA's reference ecosystem for the next wave of AI infrastructure.

Additionally, the use of LPDDR in AI servers introduces a different logic compared to traditional data center memory. Micron has been advocating for months that these modules can reduce power consumption and physical footprint compared to RDIMMs, while SK hynix now emphasizes the increased bandwidth and energy efficiency of its 192 GB solution. Although exact figures vary by manufacturer and testing conditions, the overarching message is clear: the market is beginning to accept that low-power memory is no longer exclusive to laptops or smartphones.

The real context: more AI, higher density, less energy margin

Moreover, SK hynix's announcement comes at a time when every watt counts. The growth of training and inference workloads for large models has increased pressure on system memory, not just on the HBM that accompanies the GPUs. The CPU, the main memory subsystem, and the capacity to move data within the node have become critical to sustaining performance as ever-larger models demand to be fed data at ever-higher rates.

That’s why SK hynix emphasizes that its 192 GB SOCAMM2 aims to alleviate bottlenecks during training and inference of large language models. While it’s a commercial claim, it addresses a real industry issue: the imbalance between accelerating computation and the system’s capacity to support that acceleration efficiently. As AI racks grow more powerful, memory stops being just an accessory and begins to represent a fundamental architectural constraint unless it evolves at the same pace.
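To see why that imbalance matters, consider the standard memory-bound estimate for token-by-token LLM decoding: each generated token has to stream the full set of weights from memory, so throughput is bounded by bandwidth divided by model size. The sketch below applies that rule of thumb; the model sizes are illustrative assumptions, not figures from SK hynix or NVIDIA.

```python
# A minimal sketch of why memory bandwidth caps LLM inference.
# During memory-bound decoding, every generated token streams the full
# model weights from memory, so tokens/s <= bandwidth / model size.

def decode_tokens_per_second(params_billions: float,
                             bytes_per_param: float,
                             bandwidth_gbs: float) -> float:
    """Upper bound on single-stream decode throughput (tokens/s)."""
    model_gb = params_billions * bytes_per_param  # weights read per token
    return bandwidth_gbs / model_gb

BANDWIDTH = 1200.0  # GB/s, the LPDDR5X figure NVIDIA cites for Vera

for params in (8, 70, 180):  # hypothetical model sizes, in billions
    tps = decode_tokens_per_second(params, bytes_per_param=2.0,  # FP16
                                   bandwidth_gbs=BANDWIDTH)
    print(f"{params}B params @ FP16: <= {tps:.0f} tokens/s per stream")
```

The estimate ignores batching, KV-cache traffic, and caching effects, but it captures the core point: past a certain model size, adding compute without adding memory bandwidth buys little.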

The mass production announced by SK hynix ultimately suggests that SOCAMM2 is moving beyond the promise stage into real deployment. It remains to be seen how widespread its adoption will be, how other manufacturers will respond, and to what extent NVIDIA will incorporate this type of memory into its future platforms. One thing already seems clear: the next major battleground in AI won't be confined to GPUs or HBM. It will also unfold in system memory, which until recently seemed secondary but is now emerging as a key player.

FAQs

What exactly is SOCAMM2?
SOCAMM2 is a memory module based on LPDDR designed for AI servers. It aims to deliver high capacity, lower power consumption, and a modular form factor with a compression connector to enhance signal integrity, maintenance, and scalability.

What platform is SK hynix’s new module designed for?
SK hynix has indicated that its 192 GB SOCAMM2 module is designed for the NVIDIA Vera Rubin platform. NVIDIA, for its part, describes Vera as an architecture whose Vera CPU supports up to 1.2 TB/s of LPDDR5X bandwidth.

How does it compare to traditional RDIMM memory?
According to SK hynix, its module offers more than double the bandwidth and over 75% better energy efficiency compared to conventional RDIMMs, figures provided by the company itself.

Is SK hynix the only company investing in SOCAMM2?
No. Micron has also announced SOCAMM2 modules for the Vera Rubin ecosystem and has already introduced 192 GB and 256 GB versions, confirming that this category is consolidating as a new strategic line in AI memory solutions.

via: news.skhynix
