Samsung Accelerates the Race for HBM3E Memory: Seeks NVIDIA Approval by May

Samsung Electronics is accelerating its push in the competitive high-bandwidth memory (HBM) market. According to South Korean media outlets such as EBN, the company is redesigning its HBM3E 12H 36 GB product, aiming to obtain validation from NVIDIA earlier than expected: by May of this year.

HBM3E 12H: More performance, more capacity

The HBM3E 12H is positioned as the highest-capacity HBM product on the market: a 12-layer (12-high) stack delivering 36 GB of capacity and up to 1,280 GB/s of bandwidth, a jump of more than 50% over the current eight-layer HBM3 8H. These figures mark a turning point in performance for next-generation AI applications, which demand greater speed, capacity, and energy efficiency.
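The headline figures can be sanity-checked from first principles. The sketch below assumes the standard 1024-bit HBM stack interface, 24 Gb (3 GB) DRAM dies, and the 10 Gb/s per-pin rate implied by the quoted 1,280 GB/s; none of these parameters appear in the article itself.

```python
# Sanity-check the HBM3E 12H headline numbers.
# Assumptions (not stated in the article): standard 1024-bit HBM
# interface, 24 Gb dies, and a 10 Gb/s per-pin rate implied by the
# quoted 1,280 GB/s figure.

INTERFACE_WIDTH_BITS = 1024   # pins on a standard HBM stack interface
PIN_RATE_GBPS = 10            # Gb/s per pin (implied, assumption)
DIES_PER_STACK = 12           # "12H" = 12-high stack
DIE_CAPACITY_GBIT = 24        # 24 Gb per die -> 3 GB per layer (assumption)

bandwidth_gb_s = INTERFACE_WIDTH_BITS * PIN_RATE_GBPS / 8  # bits -> bytes
capacity_gb = DIES_PER_STACK * DIE_CAPACITY_GBIT / 8

print(f"Bandwidth per stack: {bandwidth_gb_s:.0f} GB/s")  # 1280 GB/s
print(f"Capacity per stack:  {capacity_gb:.0f} GB")       # 36 GB
```

Both results line up with the advertised 1,280 GB/s and 36 GB, which is why the interface width and per-pin rate are the usual levers for each HBM generation.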

One of the key aspects of this advancement is the incorporation of TC NCF (thermal compression non-conductive film) technology, which lets the new memory keep the same physical height as previous versions despite the additional layers, while also reducing wafer warpage, improving heat dissipation, and optimizing performance.

The challenge: achieving validation from NVIDIA

Samsung has already submitted samples to NVIDIA, but these did not meet the performance levels required by the U.S. company. Therefore, a rapid redesign is underway to meet the stringent standards of the leading AI accelerator manufacturer. If validation occurs in May, as expected, it would pave the way for mass production in the first half of 2025, just in time to compete with solutions from SK hynix and Micron.

It is important to note that NVIDIA uses HBM memory across its line of data center GPUs, where each improvement in capacity and bandwidth translates to superior performance in training large language models (LLMs), AI inference, and HPC workloads.

Towards the era of large-scale artificial intelligence

The use of HBM stacks like the HBM3E 12H significantly scales the efficiency of data centers. According to internal estimates by Samsung, AI training speed could increase by 34%, and the number of concurrent users an inference service can support could grow 11.5-fold compared to systems with HBM3 8H.

All this has direct implications for companies operating at cloud and AI scale. Fewer, higher-capacity chips reduce total cost of ownership (TCO), lower energy consumption, and enable more compact, scalable architectures.
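The part-count argument is simple arithmetic. As an illustration only: the 144 GB capacity target and the 24 GB figure for an HBM3 8H stack below are assumptions chosen for the example, not numbers from the article.

```python
# Illustrative only: higher-capacity stacks cut the number of parts
# needed to reach a given total HBM capacity. The 144 GB target and
# the 24 GB-per-stack HBM3 8H figure are assumptions for this example.
import math

TARGET_GB = 144      # hypothetical per-accelerator HBM capacity target
STACK_8H_GB = 24     # HBM3 8H stack capacity (assumption)
STACK_12H_GB = 36    # HBM3E 12H stack capacity (from the article)

stacks_8h = math.ceil(TARGET_GB / STACK_8H_GB)
stacks_12h = math.ceil(TARGET_GB / STACK_12H_GB)

print(f"8H stacks needed:  {stacks_8h}")   # 6
print(f"12H stacks needed: {stacks_12h}")  # 4
```

Two fewer stacks per package means less interposer area, fewer interfaces to power, and a simpler board, which is where the TCO and energy claims come from.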

In parallel: Samsung is already developing HBM4

In addition to seeking NVIDIA’s approval, Samsung is already working on the next generation, HBM4, built on its new 1c manufacturing process. Development was initially planned for late 2024 but has been delayed and is now expected to be ready by June 2025.

Thus, the second quarter of this year is shaping up to be crucial for Samsung, as it seeks to regain leadership in a market dominated by the high demand for solutions for generative AI and cloud computing.