A new LPDDR-based modular memory promises greater energy efficiency, higher bandwidth, and easier upgrades in AI PCs and servers.
NVIDIA continues to push the evolution of artificial intelligence with a strong commitment to low-power modular memory. According to the South Korean outlet ETNews, the company plans to produce between 600,000 and 800,000 units of its new SOCAMM modules (Small Outline Compression Attached Memory Modules) this year to integrate into its next-generation AI-oriented platforms.
### A Revolution in AI Memory
Unveiled during NVIDIA’s GTC event, SOCAMM technology represents a new standard for data center-grade LPDDR memory, initially developed by Micron and aimed at AI systems that demand both energy efficiency and high performance. Unlike HBM, which is co-packaged with the processor, or LPDDR5X, which is typically soldered onto the motherboard, SOCAMM modules are interchangeable, allowing future upgrades via a simple three-screw mounting system.
One of the first products to adopt this technology is the GB300 Blackwell platform, an early signal of NVIDIA’s intent to move its AI products to a new memory form factor.
### Key Advantages Over RDIMM and LPDDR5X
SOCAMM modules combine high energy efficiency, a compact footprint, and high bandwidth, reaching between 150 and 250 GB/s and substantially outperforming traditional RDIMM, LPDDR5X, and even LPCAMM. Thanks to this modular design, SOCAMM is positioned as an ideal choice for low-power AI PCs and servers, enabling memory expansion without replacing the entire motherboard or system.
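To put the 150 to 250 GB/s range in context, peak theoretical bandwidth for a memory interface is simply transfer rate times bus width. The short Python sketch below illustrates the arithmetic; the 9,600 MT/s data rate and the 128- and 192-bit bus widths are illustrative assumptions, not confirmed SOCAMM specifications.

```python
# Peak theoretical memory bandwidth: transfer rate (MT/s) x bus width (bits),
# divided by 8 bits per byte and by 1,000 to express the result in GB/s.

def peak_bandwidth_gbs(transfer_rate_mtps: float, bus_width_bits: int) -> float:
    """Return peak theoretical bandwidth in GB/s."""
    return transfer_rate_mtps * bus_width_bits / 8 / 1000

# Assumed values for illustration only (not confirmed SOCAMM specs):
print(peak_bandwidth_gbs(9600, 128))  # 153.6 GB/s -> near the low end of the quoted range
print(peak_bandwidth_gbs(9600, 192))  # 230.4 GB/s -> near the high end
```

Under these assumptions, widening the interface or raising the LPDDR5X data rate is what moves a module from the bottom to the top of the quoted range.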
While the estimated production of up to 800,000 units is still below the volume of HBM NVIDIA has received from its partners this year, a massive ramp-up is expected in 2026 with the arrival of the second generation, SOCAMM 2.
### Micron Leading, with Samsung and SK Hynix Watching
Currently, Micron is the primary manufacturer of SOCAMM modules for NVIDIA, but discussions have already begun with Samsung and SK Hynix to expand production capacity and strengthen the supply chain.
SOCAMM has been designed specifically to meet the new demands of AI infrastructure, where every watt of power consumption counts and memory density, upgradability, and speed are crucial.
### Implications for the Future of AI Hardware
The adoption of SOCAMM represents a structural shift in how AI computing systems are designed and scaled. By allowing memory to be replaced or upgraded without discarding the entire system, NVIDIA is paving the way for a new era of modularity and sustainability in AI hardware.
This technology is expected not only to become a staple in NVIDIA’s product ecosystem but also to set a precedent for other architectures seeking to combine energy efficiency, extreme performance, and scalability in demanding environments such as data centers, workstations, and edge AI devices.
With the upcoming SOCAMM 2 generation and increasing interest from other memory giants, all indicators suggest that this low-power modular format will play a leading role in the future of intelligent computing.
via: ETNews and Micron