Seoul / Silicon Valley. After months of rumors, failed prototypes, and second chances, Samsung has achieved what once seemed elusive: official entry into NVIDIA’s HBM supply chain. Multiple reports agree that Jensen Huang’s company has ordered 12-Hi HBM3E stacks from the Korean giant for its Blackwell Ultra rack-scale solutions (GB300). The decision, arriving at the tail end of the HBM3E generation before the move to HBM4, is an industrial and reputational victory for Samsung, and it reshapes a high-bandwidth memory supplier landscape that until now was dominated by SK hynix and Micron.
This move invites two readings. First, technical: Samsung has finally cleared the qualification hurdles that kept it from securing core NVIDIA orders throughout 2024 and much of 2025. Second, strategic: it opens a third supply avenue that diversifies risk for NVIDIA at a moment when high-bandwidth memory, from HBM3E and HBM4 to co-packaged variants, is the most critical bottleneck to scaling AI compute.
What’s been finalized (and why it matters)
- Product: Samsung’s HBM3E 12-Hi.
- Destination: NVIDIA Blackwell Ultra GB300, new “rack-scale” solutions designed for large-scale training and inference.
- Status: Nearly confirmed agreement according to Korean press; volumes, pricing, and timeline are in final adjustments.
- Implication: Samsung finally joins NVIDIA’s “core program” for HBM, breaking a cycle of certification delays that had eroded its position in the AI wave.
For Samsung, this is a breath of fresh air after a period in which SK hynix positioned itself as the dominant HBM3/3E player for AI and Micron accelerated with its 24 GB HBM3E in 8-Hi and 12-Hi configurations. For NVIDIA, it means greater flexibility on capacity and pricing, just as the company prepares the transition to HBM4 in the second half of the Blackwell cycle.
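For a sense of what one of these parts delivers, here is a quick back-of-envelope sketch using public JEDEC and vendor figures. The 9.8 Gb/s pin speed is Samsung’s quoted top grade and an assumption here; the bin NVIDIA actually qualifies for GB300 may be lower, so the bandwidth figure is an upper bound.

```python
# Back-of-envelope math for a single HBM3E 12-Hi stack.
# Interface width and die density are public JEDEC/vendor figures;
# the pin speed is an assumed speed grade, not a confirmed GB300 spec.

PIN_SPEED_GBPS = 9.8     # per-pin data rate in Gb/s (assumed top grade)
INTERFACE_WIDTH = 1024   # HBM3E interface width in bits
DIE_DENSITY_GBIT = 24    # density of each DRAM die in Gb
STACK_HEIGHT = 12        # "12-Hi" means 12 stacked DRAM dies

bandwidth_gbs = PIN_SPEED_GBPS * INTERFACE_WIDTH / 8  # GB/s per stack
capacity_gb = DIE_DENSITY_GBIT * STACK_HEIGHT / 8     # GB per stack

print(f"Per-stack bandwidth (upper bound): ~{bandwidth_gbs:.0f} GB/s")
print(f"Per-stack capacity: {capacity_gb:.0f} GB")

# Blackwell Ultra pairs eight such stacks per GPU, which is how the
# widely reported 288 GB of HBM3E per GB300 GPU comes about.
print(f"Per-GPU capacity (8 stacks): {capacity_gb * 8:.0f} GB")
```

The capacity math is why 12-Hi matters: at the same footprint, going from 8 to 12 dies per stack lifts a GPU’s memory ceiling by 50%.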
From “Jensen Approved” to technical approval: a rollercoaster year and a half
This will be remembered as a notable chapter: at GTC 2024, Huang left his signature “Jensen Approved” on a Samsung HBM3E 12-Hi module. The photo circulated, markets got excited… and then reality set in: heat, power, and reliability issues in testing delayed orders, and NVIDIA’s CEO even hinted that Samsung needed a redesign. That message, repeated in January 2025, touched a nerve. Samsung’s answer was a “more aggressive redesign” and a fresh batch of samples in the first half of the year. Today, the GB300 agreement confirms that the technical barrier has been overcome.
A circular agreement in reverse: 50,000 GPUs
Meanwhile, signals of a reciprocal deal have emerged: Samsung reportedly plans to acquire around 50,000 NVIDIA GPUs to support its internal AI transformation (AX), strengthen services and datacenters, and — according to market speculation — equip facilities linked to strategic collaborations (for example, the upcoming AI datacenter in Pohang). No official details are available, but the message is clear: “I sell you HBM, you sell me GPUs”. This kind of cross-financing and supply arrangement is becoming normalized in the AI economy: it offers visibility to both sides, aligns incentives, and if structured well, accelerates deployments without heavily burdening the end customer’s balance sheet.
Why HBM3E if the focus is already shifting to HBM4?
Good question. HBM3E remains the workhorse: mature, with known performance, and sufficient for the initial Blackwell rack-scale wave. Meanwhile, HBM4, with wider interfaces, taller stacks, and advanced integration (2.5D/3D with organic or hybrid interposers), is progressing through qualification toward 2026. Samsung is betting on a relative advantage: it houses both memory and logic/foundry lines under one umbrella, while competitors SK hynix and Micron will rely on TSMC for advanced integration. In the short term, Samsung monetizes HBM3E now and positions its HBM4 with a cost and vertical-integration story attractive to NVIDIA and AMD.
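To put “wider interfaces” in perspective, a rough per-stack comparison helps. The interface widths below come from the JEDEC specifications (HBM4 doubles the bus to 2048 bits); the pin speeds are assumed illustrative grades, since real parts ship in a range of bins.

```python
# Rough per-stack bandwidth comparison, HBM3E vs HBM4.
# Interface widths are from the JEDEC specs; pin speeds are assumed
# illustrative grades, not confirmed product bins.

def stack_bandwidth_tbs(pin_speed_gbps: float, width_bits: int) -> float:
    """Per-stack bandwidth in TB/s: pin rate x interface width / 8 bits per byte."""
    return pin_speed_gbps * width_bits / 8 / 1000

hbm3e = stack_bandwidth_tbs(9.6, 1024)   # assumed 9.6 Gb/s pin speed
hbm4 = stack_bandwidth_tbs(8.0, 2048)    # assumed 8.0 Gb/s launch grade

print(f"HBM3E: ~{hbm3e:.2f} TB/s per stack")
print(f"HBM4:  ~{hbm4:.2f} TB/s per stack ({hbm4 / hbm3e:.1f}x)")
```

The takeaway: even at a lower per-pin rate, the doubled interface lets HBM4 clear 2 TB/s per stack, which is why the industry’s attention is already shifting.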
Note: The initial batch of HBM3E 12-Hi for GB300 isn’t expected to be huge, but that doesn’t diminish its importance. Breaking the technical and contractual barrier paves the way for easier HBM4 adoption with less friction, and puts Samsung inside the cycle at the moment when Blackwell and future generations will demand it more.
Systemic impact: less concentration, more resilience
For the ecosystem, diversity in HBM is healthy. Triple sourcing (SK hynix, Micron, and Samsung) reduces supply risk, creates room for negotiation, and, most importantly, accelerates capacity ramp-up while AI demand (multimodal training, agents, simulation) remains strong. Meanwhile, a reciprocal GPU purchase by Samsung would add guaranteed volume for NVIDIA and push forward the Korean conglomerate’s enterprise AI agenda.
What to watch from now on
- Supply schedule and volume for HBM3E 12-Hi to GB300: timelines for initial deliveries, mix by node, and performance in production.
- HBM4 roadmap: who qualifies first with NVIDIA and AMD, and under which conditions (channel width, speed, stack height, power).
- Details of the reciprocal deal: if Samsung formalizes the purchase of 50,000 GPUs, which models, for what applications, and where they will be installed.
- Cost signals: how the third source influences pricing and margins for Blackwell Ultra accelerators in 2026.
The bottom line: a more open path toward 2026
Blackwell Ultra GB300 will be remembered as the milestone where rack-scale systems cemented the idea of computing as a block (GPU + interconnect + memory + software), and where HBM memory was established as the second critical silicon after the GPU. With Samsung onboard, NVIDIA gains breadth; Samsung regains traction in high-end DRAM; and the AI market faces the leap to HBM4 with three players in the game. The rest — costs, yields, schedules — will determine who sets the pace in the next phase.