Memory heats up: DRAM surges over 170% year-over-year, and the rally could last until 2026

Memory has once again become a “commodity” market that dictates the pulse of the entire tech industry. After the inventory correction of 2023–2024, DRAM has entered an exceptional bullish cycle: in the third quarter of 2025, contract prices surged roughly 171.8% year-over-year, outpacing the appreciation of safe-haven assets such as gold. Meanwhile, spot prices have climbed sharply, and some major sellers have postponed or split their offers to avoid locking in commitments in a market that changes week to week.

The reason is no longer a mystery: AI demand has reshaped manufacturing priorities. Manufacturers are allocating wafers and dies to HBM (high-bandwidth memory) and advanced DDR5 nodes for servers, strangling conventional DRAM supply and pushing prices higher across the chain. The perfect storm first hits data centers — where RDIMM and MRDIMM have become “gold” — but it spreads to PCs, laptops, and mobile devices, and is starting to reach embedded systems and automotive applications.

What’s happening in the supply chain

  • HBM priority. The demand spike from training and inference of foundation models has saturated advanced packaging lines and cutting-edge process nodes. Every wafer dedicated to HBM is a wafer that doesn’t produce conventional DDR5.
  • Disciplined CapEx. After two challenging years, the big three (Samsung, SK Hynix, and Micron) haven’t aggressively opened the floodgates on capacity expansion. Instead of overbuilding, they are maximizing margins in higher-value products (HBM, server DRAM) and keeping consumer categories under tension.
  • Month-to-month contracts. Due to volatility and scarcity, part of the market has shifted from quarterly to monthly or even biweekly negotiations, with upward revisions as demand evolves.
  • Decreasing fulfillment rates. In October, several channel players reported order fulfillment rates close to 70% for server DRAM customers: backlogs, rationing, and clear prioritization of hyperscalers over PC OEMs.

The signal from Korea and the US is clear: Q4 2025 will be more expensive than expected, and there is no tangible short-term relief. RDIMM prices for data centers have recorded double-digit increases in fall contract addenda, and enterprise SSDs have also risen in response to tight NAND supply.

Why this is more painful for servers (and why it will eventually affect everyone)

AI has driven servers into a profile shift: where the CPU was once the bottleneck, memory bandwidth and capacity now are. A next-generation AI server can mount terabytes of DRAM and hundreds of GB of HBM; clusters amplify this effect. With so much memory weight in the bill of materials, small price increases translate into thousands or tens of thousands of euros per node. Companies are reacting by securing supply: long-term 2–3-year contracts with manufacturers and pre-purchases. This removes supply from the spot market and further tightens prices.
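
Purely as an illustration of that arithmetic, the back-of-the-envelope sketch below applies an assumed price increase to an assumed per-node DRAM configuration. The capacity, the baseline price per GB, the increase, and the cluster size are placeholders, not figures from the article or from any vendor.

```python
# Back-of-the-envelope sketch of how a DRAM price increase hits a server's
# bill of materials. All figures below are illustrative assumptions.

def memory_cost_delta(capacity_gb: float, price_per_gb: float, increase: float) -> float:
    """Extra cost per node when DRAM price rises by `increase` (e.g. 0.40 = +40%)."""
    return capacity_gb * price_per_gb * increase

node_dram_gb = 2048          # assumed: 2 TB of RDIMM per AI server
baseline_eur_per_gb = 4.0    # assumed baseline contract price in EUR
price_increase = 0.40        # assumed rise, in line with the 40-50% range cited below

delta_per_node = memory_cost_delta(node_dram_gb, baseline_eur_per_gb, price_increase)
cluster_nodes = 64           # assumed cluster size

print(f"Extra DRAM cost per node:    {delta_per_node:,.0f} EUR")
print(f"Extra DRAM cost per cluster: {delta_per_node * cluster_nodes:,.0f} EUR")
```

Even under conservative assumptions, the per-cluster figure makes it clear why buyers are trying to lock in supply ahead of time.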

In consumer products, the escalation comes with a lag. Still, since September/October, there’s been a shift toward UDIMM and DDR5 SO-DIMM: 2×16 GB and 2×32 GB kits have risen in price within a few weeks, and laptops aimed at AI/creator use are starting to recalibrate configurations or retail prices. High-end workstations and gaming PCs will feel it especially when upgrading from 32 GB to 64/128 GB.

How long could this rally last (and what might stop it)

  • 2026 horizon. Most analysts don’t see relief until well into 2026. Some project quarterly increases of 30–50% between late 2025 and the first half of 2026 in certain categories if AI demand remains strong.
  • Physical bottlenecks. The availability of HBM modules, back-end packaging, and EUV capacity on cutting-edge DRAM nodes won’t expand instantly. Industrial inertia will take several quarters to ease.
  • Potential cooling factors: moderation in hyperscaler orders, a recession slowing data center capex, or a new wave of suppliers with actual capacity (currently unlikely). Also, a shift in mix from training to optimized inference (less need to train huge models) could partially relieve DRAM/HBM demand.

Practical impact for companies

  1. Budgets. AI projects, VDI, or in-memory databases need to recalibrate TCO: rises of 40–50% in RDIMM and 15–35% in enterprise SSDs alter workload cost profiles.
  2. Architecture. Evaluate densities per socket (1 vs. 2 DIMMs per channel), MRDIMM, and CXL as memory expanders where suitable. Optimizing NUMA and process affinity can save DIMMs.
  3. Procurement strategy. Volume and duration contracts improve your bargaining position. For co-location providers and medium-sized companies, scheduled purchases with baseline volumes and configuration flexibility (e.g., 384 GB vs. 512 GB; see the sketch after this list) mitigate risk.
  4. “Lock-in” risk. Long-term agreements guarantee supply, but can become costly if the cycle turns. It’s wise to stagger orders and maintain multi-sourcing when possible.
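
To make points 2 and 3 concrete, here is a minimal sketch that enumerates DIMM populations reaching a capacity target and compares their cost. Reading the 384 GB / 512 GB figures as per-socket targets, as well as the channel count, DPC limit, and module prices, are illustrative assumptions, not data from the article.

```python
# Minimal sketch for comparing DIMM populations against a per-socket capacity
# target. Channel count, DPC limit, and EUR prices are illustrative assumptions.

CHANNELS = 12          # assumed: 12 memory channels per socket
MAX_DPC = 2            # assumed: platform supports up to 2 DIMMs per channel

# Assumed RDIMM prices in EUR by module density (GB).
DIMM_PRICES = {32: 180.0, 48: 290.0, 64: 420.0}

def populations(target_gb: int):
    """Yield (dimm_gb, count, total_gb, cost) combos that meet the target."""
    for dimm_gb, price in DIMM_PRICES.items():
        # Populate full channel sets only (1 DPC or 2 DPC).
        for count in range(CHANNELS, CHANNELS * MAX_DPC + 1, CHANNELS):
            total = dimm_gb * count
            if total >= target_gb:
                yield dimm_gb, count, total, count * price

for target in (384, 512):
    print(f"Target {target} GB per socket:")
    for dimm_gb, count, total, cost in sorted(populations(target), key=lambda x: x[3]):
        print(f"  {count} x {dimm_gb} GB = {total} GB -> {cost:,.0f} EUR "
              f"({cost / total:.2f} EUR/GB)")
```

In practice, the same comparison would be driven by the platform’s actual channel count and its validated DIMM list, not by these placeholder numbers.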

Impact on consumer market and channel

  • DDR5 kits. Retail price hikes are arriving in waves. Stores and brands have limited orders or raised retail prices to avoid selling below restocking cost.
  • Laptops. OEMs are adjusting launches and base configurations. Models with 64 GB as standard are moving upmarket or seeing delayed stock availability.
  • DDR4. Stock is depleting unevenly; it’s not a guaranteed refuge, because many production lines no longer operate and its price gap with DDR5 may shrink.

What’s being said in the field (and what the roadmaps indicate)

The ecosystem expects faster DDR5 and next-gen MRDIMM modules in 2026–2028; DDR6 is tentatively projected around 2029/2030 on the roadmaps. Meanwhile, the HBM roadmap (HBM4 and beyond) attracts investment and talent. There are lab-stage ideas, such as 3D DRAM or HBF (high-bandwidth flash), for the next decade. None of this unlocks relief for 2025–2026: tensions will persist as long as AI spending maintains momentum.

Actionable tips (no magic here)

  • Businesses: if the use case permits, align deployments with milestones (not “all at once”) and defer memory expansions to negotiated price points. Consider CXL for workloads where scalability is key.
  • Integrators: secure batches for signed projects and offer validated alternatives (e.g., 12×32 GB versus 8×64 GB if price per GB exceeds the agreed limit; see the sketch after this list).
  • Enthusiasts/creators: if you need RAM in the next 3–6 months, go ahead with the purchase. For nice-to-have upgrades, wait or set a target (e.g., 32 GB now, 64 GB when the market calms).
  • Avoid the gray market: it thrives during tense cycles. Check part numbers, warranties, and compatibility (QVL) so that a small saving doesn’t cost you time later.
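
As a concrete version of the integrators’ tip, here is a minimal sketch that checks the price per GB of two kits against a limit. The quotes and the limit itself are placeholder assumptions, not market prices.

```python
# Quick sketch of the kind of check an integrator might run before swapping a
# quoted configuration for a validated alternative. Prices are assumptions.

def price_per_gb(total_price: float, modules: int, gb_per_module: int) -> float:
    """EUR per GB for a kit of identical modules."""
    return total_price / (modules * gb_per_module)

LIMIT_EUR_PER_GB = 6.0   # assumed budget ceiling agreed with the customer

options = {
    "8 x 64 GB":  price_per_gb(3400.0, 8, 64),   # assumed original quote
    "12 x 32 GB": price_per_gb(2100.0, 12, 32),  # assumed validated alternative
}

for name, eur_per_gb in options.items():
    verdict = "OK" if eur_per_gb <= LIMIT_EUR_PER_GB else "over limit"
    print(f"{name}: {eur_per_gb:.2f} EUR/GB -> {verdict}")
```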

Frequently Asked Questions

Why are DRAM prices rising so much if manufacturers claim margins have improved?
Because cost isn’t the only factor; it’s about mix and usable capacity: the same fab that makes DDR5 can also produce HBM and high-ASP DRAM. In an AI demand surge, manufacturers prioritize higher-margin products and limit conventional DRAM, driving prices up across the range.

When will the market ease?
The current consensus places relief sometime in 2026. It depends on new capacities, yield improvements, and load mix (training vs. inference). Until then, it’s prudent to plan for high prices and less predictable timelines.

Does it make sense to move to CXL or MRDIMM to “save” memory?
It depends on your workload. CXL can extend capacity with attached memory (higher latencies, but competitive cost per GB). MRDIMM offers high densities and frequencies through multiplexed-rank signaling. For bandwidth-sensitive loads, topology and NUMA affinity matter more than raw GB.

Should I buy RAM “just in case”?
Only if you have a confirmed project or an immediate upgrade. Stockpiling introduces obsolescence risk (PCB/IC revisions, BIOS updates) and opportunity cost. Better to secure availability with your supplier and stagger deliveries.

via: ctee