OpenAI “blocks” nearly half of the world’s DRAM for its Stargate project: an agreement with Samsung and SK Hynix shakes up the entire supply chain

What until recently sounded like industrial science fiction is starting to take shape, with staggering numbers: OpenAI is reportedly reaching an agreement with Samsung and SK Hynix to reserve around 40% of global DRAM capacity over the next few years to support its ambitious Stargate AI project. According to published reports, the company plans to consume about 900,000 DRAM wafers per month through 2029. Set against the projected global capacity for late 2025, around 2.25 million wafers per month, that figure reveals the scale of the upheaval: nearly half of global production would be dedicated to a single client and a single data center program.

This move, if the reported terms are confirmed, reorders priorities across the semiconductor value chain: from memory manufacturers to equipment suppliers (lithography, deposition, etching), through GPU designers and the hyperscalers competing with OpenAI, and ultimately to the consumer market (PCs, consoles, mobile devices), which could watch DRAM, until now plentiful and cyclical, turn into a bottleneck.

The staggering numbers: from wafers to chips, and from there to $120 billion

Translated into chips, 900,000 wafers per month corresponds, under typical density and yield assumptions, to between 1.5 and 1.7 billion LPDDR5 or DDR5 dies. JPMorgan estimates that this level of demand would equate to roughly 130 trillion gigabits of memory and generate up to $120 billion in revenue, depending on the mix between standard DRAM and HBM (high-bandwidth stacked memory, key for AI).
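As a sanity check, the back-of-the-envelope sketch below reproduces the capacity-share calculation and shows what the reported chip range implies per wafer. The edge loss and yield values are illustrative assumptions, not figures from the agreement or the reports.

```python
import math

WAFERS_PER_MONTH  = 900_000      # reported Stargate reservation
GLOBAL_CAPACITY   = 2_250_000    # projected wafers/month by late 2025
WAFER_DIAMETER_MM = 300          # standard DRAM wafer size

wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2   # ~70,686 mm^2
print(f"Share of global capacity: {WAFERS_PER_MONTH / GLOBAL_CAPACITY:.0%}")  # 40%

EDGE_LOSS, DIE_YIELD = 0.10, 0.85   # assumed edge exclusion and die yield
usable_area = wafer_area * (1 - EDGE_LOSS) * DIE_YIELD

for chips_per_month in (1.5e9, 1.7e9):
    dies_per_wafer  = chips_per_month / WAFERS_PER_MONTH
    implied_die_mm2 = usable_area / dies_per_wafer
    print(f"{chips_per_month / 1e9:.1f}B chips/month -> "
          f"{dies_per_wafer:,.0f} dies/wafer, ~{implied_die_mm2:.0f} mm^2 per die")
```

Under these assumptions, the reported range implies roughly 1,700 to 1,900 good dies per wafer, i.e. dies of about 29-32 mm². Whether that matches shipping DDR5/LPDDR5 parts depends on node and design; the point is that the headline chip count follows directly from a handful of such assumptions.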

The impact on capex is also enormous: $160 billion in new fabs and equipment to sustain the expansion, with ASML, Applied Materials, Lam Research, and Tokyo Electron among the clear beneficiaries of EUV/DUV scanner and processing tool sales. The question isn't whether equipment suppliers will target this market; it's whether they can scale at the speed the client requires.

Samsung and SK Hynix on board; Micron out (for now)

The two South Korean manufacturers have been identified as signatories of the framework agreement. By contrast, Micron, the third global DRAM player and the only American company among the trio, appears to be excluded, based on available reports. The exclusion is significant: in an environment of political tension and reshoring of critical supply chains, the fact that the world's largest AI buyer is committing such volumes to foreign suppliers is likely to trigger high-level political and business discussions in Washington.

The alliance with Samsung extends beyond chips: it includes Samsung SDS in data center design, Samsung C&T and Samsung Heavy Industries in infrastructure (including floating facilities), and the distribution of ChatGPT Enterprise within Korea. But the core mission remains the same: ensure that OpenAI never lacks the critical memory needed to train and serve ever-larger models.

Why is DRAM becoming the focal point?

Until now, the AI hardware conversation has revolved around GPUs: more FLOPS, more NVLink, more coherence. The next bottleneck, already visible, is memory: the bandwidth and capacity needed to feed those GPUs, serve inference over contexts of hundreds of thousands of tokens, and sustain massive batches during training.

HBM (3D-stacked memory on interposers) has become the de facto standard for AI GPUs, but traditional DRAM (DDR5/LPDDR5) remains the lifeblood of servers, CPUs, and accelerators that do not use HBM, as well as the foundation for memory storage systems, buffers, and caches at scale. Reserving almost half of global DRAM production forces the entire industry to recalculate and secure its supply chains.
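To make the memory pressure concrete, here is a minimal sketch of KV-cache sizing for long-context inference. The configuration (80 layers, 8 KV heads, head dimension 128) is hypothetical, chosen only to have roughly the shape of a large grouped-query transformer.

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Keys and values cached for every layer and token (fp16 by default)."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# Assumed configuration, roughly the shape of a large grouped-query transformer.
gib = kv_cache_bytes(n_layers=80, n_kv_heads=8, head_dim=128,
                     seq_len=200_000) / 2**30
print(f"KV cache per 200k-token sequence: {gib:.1f} GiB")   # ~61.0 GiB
```

At roughly 61 GiB per 200,000-token sequence, a server with 1 TiB of memory holds only about 16 such requests at once, no matter how much compute sits idle beside it.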

Domino effects: prices, availability, and planning

If DRAM industry capacity doesn't keep pace with Stargate's demands, the market faces a classic scarcity scenario: rising prices, longer lead times, and priority for the highest-paying customers, namely hyperscalers and AI, over consumer products. A DDR5 kit for PCs or LPDDR for mobile could shift from commodity to a line item that weighs heavily on the bill of materials (BoM). Talk of DDR6 for upcoming platforms moves from a standardization announcement to a question of manufacturing capacity and affordable pricing.
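A toy calculation illustrates the BoM exposure; the prices and totals below are invented placeholders, not market data.

```python
DRAM_GB        = 32      # memory in the hypothetical laptop
PRICE_PER_GB   = 3.0     # assumed $/GB for DDR5 (placeholder, not a market quote)
OTHER_BOM_COST = 360.0   # assumed cost of every other component

for multiplier in (1.0, 2.0):
    dram_cost = DRAM_GB * PRICE_PER_GB * multiplier
    total     = OTHER_BOM_COST + dram_cost
    print(f"x{multiplier:.0f} DRAM price: ${dram_cost:.0f} of a ${total:.0f} "
          f"BoM ({dram_cost / total:.0%})")
```

In this invented example, a doubling of DRAM prices pushes memory from about 21% to about 35% of the machine's BoM, which is the kind of shift that forces repricing or respeccing.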

Companies that rely on cloud services will also feel the effects indirectly: hyperscalers may pass costs and constraints through to their catalogs, and data center hardware expansions could slow if memory modules come under pressure.

Can the industry respond in time?

Scaling DRAM capacity doesn't happen overnight: building or expanding fabs takes years and requires permits, funding, guarantees of long-term demand, and a supply ecosystem (from chemicals to reticles) that must also scale up. ASML (EUV/DUV lithography), Applied Materials, Lam Research, and Tokyo Electron (deposition, etching, metrology), among many others, will need to operate at full capacity. It's feasible with incentives and long-term contracts, but the timeline risk is real.

Furthermore, the DRAM vs. HBM mix shapes the response: HBM involves die stacking, interposers, and advanced packaging (CoWoS, FO-PLP), currently a global bottleneck controlled by a few providers. If OpenAI and its partners push HBM in large volumes, that bottleneck could tighten further.
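A rough sketch of why an HBM-heavy mix multiplies die demand; the stack heights and capacities below are assumptions in line with current HBM3E-class parts, not details from the agreement.

```python
DIE_GBIT       = 24   # assumed capacity per DRAM die in the stack (gigabits)
DIES_PER_STACK = 12   # assumed 12-high stack -> 36 GB per stack
STACKS_PER_GPU = 8    # assumed stacks on one flagship accelerator

gb_per_stack = DIE_GBIT * DIES_PER_STACK / 8    # gigabits -> gigabytes: 36 GB
dram_dies    = DIES_PER_STACK * STACKS_PER_GPU  # 96 DRAM dies per GPU
print(f"{gb_per_stack * STACKS_PER_GPU:.0f} GB of HBM built from {dram_dies} DRAM dies")
```

At roughly a hundred dies per accelerator, every million GPUs absorbs on the order of a hundred million DRAM dies before a single DDR5 module is built.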

What does OpenAI stand to gain—or risk?

Securing massive memory supply gives OpenAI a strategic advantage: prioritized scale and cost predictability for a critical resource. But it also concentrates risk: multi-year commitments worth billions, dependence on third-party roadmaps and ramp-ups, and the threat of regulatory reaction if shortages hit key sectors. And reserving memory isn't the same as receiving it: the industry must hit demanding schedules for years under exceptional circumstances.

A “three-layer” agreement: memory, infrastructure, and ecosystem

The agreement with Samsung would also include the design and construction of data centers, even floating infrastructure (a concept studied in Korea for housing computing power with distinct cooling and energy sources), and the distribution of ChatGPT Enterprise domestically. Still, the economic core is unchanged: securing unprecedented volumes of DRAM and, depending on the final mix, HBM.

Other sectors moving: GPUs, servers, cloud, and consumer

  • GPU manufacturers: NVIDIA and its competitors must ensure that memory supply scales in line with compute across upcoming architecture transitions.
  • Servers and storage: design choices for motherboards, sockets, and backplanes could become more modular to buffer price/availability fluctuations in DRAM.
  • Public cloud providers: hyperscalers without guaranteed reserved DRAM will hedge their bets by adjusting contracts and catalogs.
  • Consumer devices: PCs and mobile devices may see price increases and delays if capacity expansion plans falter. A price-increase cycle for DDR5/LPDDR5 and a DDR6 rollout under supply constraints both seem plausible.

The political game: critical chains and “America First”

The reported exclusion of Micron from the agreement will likely put the issue on Washington's agenda: DRAM is a critical supply chain, alongside logic chips and lithography. With Samsung and SK Hynix potentially supplying over 40% of global production to a flagship U.S. program, expect pressure to diversify and repatriate parts of the chain through incentives or parallel commitments.

AI paradox: no memory, no future

AI requires memory in volumes and formats that seemed excessive only two years ago. The “deal of the century” in DRAM sketches a landscape of conditional abundance: if capacity grows on schedule, the AI curves keep climbing; if not, expect rationing, price pressure, and delays.

The industry faces a monumental opportunity ($100 billion+ in additional revenue, record capex, new factories), but with a colossal obligation: deliver at scale, flawlessly, over the coming years.


FAQs

What exactly is Stargate AI, and why does it need so much DRAM?
Stargate AI is the name given to OpenAI's large-scale data center program dedicated to training and serving next-generation AI models. That leap demands extraordinary volumes of memory, both standard DRAM (DDR5/LPDDR5) and likely HBM, to feed GPUs and accelerators handling large contexts and batches.

Where does the “900,000 wafers per month” figure come from, and what does it mean in terms of global supply?
Published reports cite roughly 900,000 wafers per month through 2029. Compared with the projected global capacity of about 2.25 million wafers per month by late 2025, that represents around 40% of all DRAM production dedicated to a single client.

How will this impact DDR5/DDR6 prices and availability for PCs and mobile devices?
If capacity doesn’t keep pace with Stargate’s demand, expect a cycle of rising prices and longer lead times for DDR5/LPDDR5; the launch of DDR6 would occur under supply constraints. The exact impact depends on the DRAM/HBM mix, manufacturing ramp success, and the availability of alternative supply contracts.

Why is Micron out of the picture, and can that change?
According to reports, Micron is not part of the agreement with OpenAI, unlike Samsung and SK Hynix. Policy and national security considerations in the U.S. could drive medium-term adjustments toward supply diversification. Even so, additional capacity takes years to materialize: building fabs and securing equipment and materials is a matter of years, not months.

via: news.samsung
