SK hynix obtains Intel certification for its 256 GB DDR5 RDIMM and targets the new wave of inference in data centers

In the midst of the race to optimize artificial intelligence infrastructure, SK hynix has announced the completion of the Intel Data Center Certification process for its 256 GB DDR5 RDIMM server module—based on 32 Gb DRAM chips manufactured on the 1b node (fifth generation of the “10 nm class”)—on the Intel Xeon 6 platform.

Beyond the headline, this move touches a sensitive point in the market: memory is once again a key performance factor in servers, especially as AI models shift from “responding” to reasoning, chaining steps, and maintaining more active context. This transition drives an increase in data volume that must move and reside close to the processor at the lowest possible energy cost.

What exactly has been certified (and why it matters)

The announcement describes a milestone that, practically speaking, amounts to a “green light” for enterprise deployments: the 256 GB DDR5 RDIMM module has passed extensive testing and rigorous validation in Intel’s advanced data center development lab, confirming compatibility, reliability, and quality when combined with Xeon 6 processors.

In real-world terms, such certifications are more than a label. For many infrastructure buyers—data center operators, integrators, and large enterprises—they reduce uncertainty in two key areas:

  • Platform compatibility (boot, stability, firmware, load behavior).
  • Consistent performance in production scenarios, where variability hampers planning.

Additionally, SK hynix notes it achieved a similar validation in early 2025 for another 256 GB RDIMM based on 16 Gb 1a-node dies, but the focus now is on the jump to 32 Gb chips and on certification for the Xeon 6 platform, Intel’s current server reference.

Why AI is driving the “size” of server RAM

For years, talking about memory in servers meant capacity for virtualization, databases, or analytics. With AI, the conversation shifts: the key isn’t just “how much” memory exists, but how much useful work per watt can be done, and how well performance holds up under mixed workloads.

The announcement highlights this directly: as inference models become more complex, the need to process large datasets in real time, reliably, grows. In that context, high-capacity modules like the 256 GB one help to:

  • Keep more data accessible without overly depending on slower storage layers.
  • Reduce bottlenecks in systems where the “working set” expands.
  • Scale services without multiplying nodes solely to add more memory.

SK hynix’s key figures

SK hynix supports the announcement with two clear comparisons focused on the metrics most critical to data centers today: performance and power consumption.

  • Up to 16% higher inference performance in servers equipped with the new module, compared to configurations using 128 GB products based on 32 Gb chips.
  • Approximately 18% lower power consumption compared to previous-generation 256 GB modules based on 16 Gb 1a chips, thanks to the use of 32 Gb DRAM and an efficiency-oriented design.
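As a back-of-the-envelope illustration of those figures, the sketch below works through the die-count arithmetic behind the 16 Gb → 32 Gb jump and keeps the two announced deltas separate (they use different baselines). The die counts are naive capacity division and ignore packaging details such as ECC devices and die stacking; only the 16% and 18% figures come from the announcement.

```python
# Rough capacity and efficiency arithmetic for the figures above.
# Die counts are naive (capacity / die density); real modules also
# carry ECC devices and may stack dies, which this ignores.

GBIT_PER_GBYTE = 8

def dies_needed(module_gbyte: int, die_gbit: int) -> int:
    """Minimum DRAM dies to reach a given module capacity."""
    return module_gbyte * GBIT_PER_GBYTE // die_gbit

old_dies = dies_needed(256, 16)  # 1a-node 16 Gb dies -> 128
new_dies = dies_needed(256, 32)  # 1b-node 32 Gb dies -> 64

# The two deltas cited by SK hynix use different baselines, so they
# are tracked separately rather than combined into one number:
perf_ratio = 1.16        # vs. 128 GB configurations (inference throughput)
power_ratio = 1 - 0.18   # vs. the previous 16 Gb-based 256 GB module

print(f"dies: {old_dies} -> {new_dies}")  # dies: 128 -> 64
print(f"power: {power_ratio:.2f}x the previous 256 GB module")
```

Halving the die count for the same capacity is what opens the door to the efficiency gains: fewer devices to drive and refresh per gigabyte.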

The company frames this within the concept of “performance per watt”, a direct way to speak the language of the teams that pay the electricity and cooling bills.

RDIMM: the detail that makes a difference in servers

The module referenced in the announcement is a RDIMM (Registered Dual In-Line Memory Module), a standard in servers and workstations that incorporates a register/buffer to manage signals between the memory controller and the DRAM chips. In simple terms, it improves stability and scalability when working with high capacities and dense configurations—precisely the terrain where AI is pushing hardware capabilities.

Strategic insight: memory as a “platform product,” not just a component

SK hynix takes the opportunity to reinforce its positioning: expanding partnerships with major operators and responding to increasing customer demand for server memory. It also uses a term that captures its commercial ambition in the AI era: “full-stack AI memory creator”.

The implicit message is clear: memory isn’t just sold by gigabytes, but by fit within the platform, certifications, power efficiency, supply capability, and support in real deployments.

The announcement includes statements from both sides. Sangkwon Lee, SK hynix’s head of DRAM product planning and enablement, states that this milestone allows them to respond more quickly to customer needs and solidify their leadership in DDR5 for servers. On Intel’s side, Dr. Dimitrios Ziakas, VP of platform architecture at Intel Data Center Group, emphasizes the collaborative engineering effort and the module’s fit for capacity-hungry AI applications.


Frequently Asked Questions

What does it mean that a 256 GB DDR5 RDIMM is “Intel Data Center Certified” for Xeon 6?
It indicates that it has undergone Intel’s validation process to ensure compatibility, reliability, and expected performance when used on that server platform.

Why is a 256 GB module important for AI-oriented servers?
Because modern inference requires handling larger amounts of data in memory and reducing accesses to slower layers, thereby improving performance and stability under complex workloads.

What’s the benefit of moving from 16 Gb to 32 Gb DRAM chips on a 256 GB module?
It allows for more efficient designs and, according to SK hynix, helps to lower power consumption and improve performance per watt, besides facilitating high-capacity modules.

Are RDIMM and UDIMM the same?
No. RDIMM includes a register/buffer to enhance stability and scalability in server environments, whereas UDIMM is more typical in consumer systems and isn’t designed for the same densities and loads.

via: news.skhynix
