SK hynix Gains Ground in Microsoft’s AI Chip Strategy

Microsoft is strengthening its relationship with SK hynix at a critical time for AI infrastructure. According to reports published in South Korea and picked up by TrendForce, Kwak Noh-Jung, CEO of SK hynix, plans to meet this week with Bill Gates and Satya Nadella during the Microsoft CEO Summit 2026, a private gathering in Redmond of global business executives, technology leaders, and public officials.

This meeting is not merely a formal courtesy call. It comes as Microsoft accelerates the deployment of its own AI accelerators, like Maia 200, and as high-bandwidth memory has become one of the most contested resources in the industry. In practical terms, the race for AI power is no longer just about who buys more NVIDIA GPUs but about who secures supplies of HBM, advanced packaging, DRAM, and NAND to support increasingly dense data centers.

This makes SK hynix a strategic player. The South Korean company is one of the world's leading memory manufacturers and has taken a particularly prominent role in HBM, the high-performance stacked memory used alongside AI accelerators. According to Chosun Biz, Microsoft already uses SK hynix's fifth-generation HBM3E in its Maia 200 accelerator, in addition to purchasing DRAM and NAND from the company.

Maia 200 and Microsoft’s Effort to Take Greater Control of Infrastructure

Microsoft introduced Maia 200 in January 2026 as a proprietary AI accelerator focused on inference. The chip is already deployed in Azure's US Central region, near Des Moines, Iowa, with plans to expand to US West 3 near Phoenix, Arizona, and additional regions. The company has also launched a preview of the Maia SDK, which integrates with PyTorch and the Triton compiler, ships optimized kernel libraries, and exposes a low-level programming language for the chip.
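Microsoft has not published the SDK's interfaces in full detail, so as a rough sketch of what Triton integration means in practice, here is a minimal, entirely generic Triton kernel of the kind such a toolchain compiles. Nothing below is Maia-specific, and the idea that this exact code would target Maia is an assumption for illustration only.

```python
import torch
import triton
import triton.language as tl

# Generic Triton vector-add kernel: the kind of portable kernel a
# Triton-based toolchain (as the Maia SDK preview is described) compiles.
@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                      # one program per block
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                      # guard the ragged tail
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)                   # enough blocks to cover n
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

The point of a Triton front end is exactly this portability: the same Python-level kernel can, in principle, be lowered to different back ends without hand-written assembly.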

This move aligns with a broader trend among large cloud providers. Microsoft, Google, Amazon, and Meta are developing their own chips to reduce costs, better control their platforms, and avoid over-reliance on NVIDIA. The goal is not to replace GPUs entirely—since they remain essential—but to reserve them for specific workloads and deploy internal ASICs where they can deliver better cost per token, energy efficiency, or operational control.

In this context, memory is as crucial as the accelerator itself. An AI chip might have immense computing power, but without fast data delivery, actual performance suffers. HBM addresses this bottleneck by placing high-bandwidth memory very close to the processor through stacking and advanced packaging techniques.
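A back-of-envelope roofline estimate makes the bottleneck concrete. All figures below are illustrative assumptions, not published Maia 200 or HBM3E specifications:

```python
# Illustrative roofline sketch; the numbers are assumptions, not real specs.
peak_compute_tflops = 800      # assumed accelerator peak (low precision)
hbm_bandwidth_tb_s = 4.0       # assumed aggregate HBM bandwidth

# Decoding one token of a large transformer streams every weight once:
# roughly 2 FLOPs per parameter against ~1 byte per parameter (8-bit
# weights), i.e. an arithmetic intensity of about 2 FLOPs per byte.
arithmetic_intensity = 2.0

# Machine balance: FLOPs the chip could execute per byte memory delivers.
machine_balance = (peak_compute_tflops * 1e12) / (hbm_bandwidth_tb_s * 1e12)
print(f"machine balance: {machine_balance:.0f} FLOPs/byte")   # 200

# 2 << 200, so decode is memory-bound: throughput = bandwidth x intensity.
delivered_tflops = hbm_bandwidth_tb_s * arithmetic_intensity
print(f"delivered: ~{delivered_tflops:.0f} of {peak_compute_tflops} TFLOP/s")
```

Under these assumed numbers, the chip would sustain about 1% of its peak compute while decoding, which is why every additional byte per second of HBM bandwidth translates almost directly into tokens per second.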

This is where SK hynix enters. The company has benefited significantly from the demand for HBM in AI and aims to secure long-term agreements with strategic clients. For Microsoft, strengthening ties with SK hynix could ensure supply stability in a market where available capacity is reserved years in advance and component prices are rising, impacting data center budgets.

| Element | Relevance for Microsoft |
|---|---|
| Maia 200 | Proprietary AI inference accelerator for Azure |
| SK hynix HBM3E | High-bandwidth memory powering the chip |
| DRAM and NAND | Fundamental components for AI servers and storage |
| Iowa and Arizona | Initial regions selected for Maia 200 deployment |
| CapEx 2026 | TrendForce estimates $190 billion for Microsoft |

Memory Becomes a Bottleneck in AI

TrendForce estimates that Microsoft has increased its CapEx forecast for 2026 to $190 billion, roughly a 130% year-over-year increase. About $25 billion of that is related to rising component costs. This reveals that a significant portion of AI investment is not just about building more data centers but also about paying more for memory, chips, servers, and critical components.
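A quick arithmetic check, using only the figures reported above, shows what they imply:

```python
# Sanity check of the TrendForce figures cited above (USD billions).
capex_2026 = 190          # reported 2026 CapEx forecast for Microsoft
yoy_increase = 1.30       # "roughly a 130% year-over-year increase"
component_costs = 25      # portion attributed to rising component prices

implied_2025 = capex_2026 / (1 + yoy_increase)
print(f"implied 2025 base: ~${implied_2025:.0f}B")                    # ~$83B

component_share = component_costs / capex_2026
print(f"component-cost share of 2026 CapEx: {component_share:.0%}")   # ~13%
```

In other words, roughly one dollar in eight of the 2026 budget would go to higher prices rather than new capacity, which is exactly the distinction raised later in this article.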

This pressure is not limited to Microsoft. Major North American cloud service providers are expanding investments in GPU clusters, proprietary ASICs, high-performance networking, and advanced memory. However, the faster demand grows, the tighter certain parts of the supply chain become. HBM is one such area: manufacturing conventional memory is not enough, because stacking, interconnection, testing, and packaging make production vastly more complex.

For SK hynix, this creates a huge industrial opportunity. The company already competes with Samsung and Micron in advanced memory and holds a strong position in HBM, which allows it to negotiate favorable terms with key AI infrastructure buyers. If Microsoft wants to deploy more Maia 200 units and future ASIC generations, it must secure compatible, stable, and validated memory capacity.

The upcoming meeting among Kwak Noh-Jung, Bill Gates, and Satya Nadella should be seen in this context. It’s not just about selling memory chips; it’s about participating in the design of next-generation AI infrastructure—from accelerators to memory, packaging, and integration into data centers. The relationship between memory manufacturers and big cloud providers is increasingly strategic rather than purely transactional.

There’s also a geopolitical angle. South Korea hosts two critical memory players, SK hynix and Samsung, while the U.S. seeks to boost its semiconductor independence. Large hyperscalers aim to diversify suppliers. Amid restrictions on China, growing AI demand, and limited advanced packaging capacity, forging alliances with Korean manufacturers has become a priority.

Less Dependence on NVIDIA, but Greater Dependence on the Memory Supply Chain

Microsoft cannot reduce reliance on NVIDIA merely by designing its own chips. An internal ASIC needs memory, software, packaging, manufacturing capacity, rack integration, networking, and model optimization. The dependency shifts but does not vanish; the supply chain remains highly specialized.

Maia 200 exemplifies this transition. Deploying the chip at scale in Azure allows Microsoft to gain more control over certain inference workloads and optimize costs, performance, and energy consumption. Still, for this strategy to succeed, suppliers like SK hynix must deliver ample HBM today and future generations capable of handling larger models, longer contexts, and more inference traffic.

The financial implications are significant. A CapEx forecast of $190 billion is enormous even for Microsoft. If much of this increase is due to inflation in component costs, investors will need to distinguish between spending that truly expands capacity and spending that reflects higher prices for the same capacity. In this view, memory shifts from a technical component to a central factor in AI profitability.

For SK hynix, the opportunity is clear but carries risks. The company must scale production, maintain quality, manage pricing cycles, and avoid over-reliance on a few large clients. AI demand remains strong, but history shows memory markets can experience tough cycles when supply outpaces demand. The difference now is that HBM demand is more linked to strategic contracts and reserved capacity, potentially smoothing volatility.

The partnership with Microsoft could become a model. If Maia 200 is successful and future Azure chips continue to use SK hynix’s advanced memory, the South Korean firm will be more than just a component supplier; it will be part of Microsoft’s architecture in its next AI phase—a phase less dependent on standard accelerators and more focused on custom infrastructure design.

The AI race was initially centered on models and GPUs. Now, closed-door discussions about HBM, packaging, and long-term supply play an equally crucial role, and SK hynix holds a significant voice in that conversation.

Frequently Asked Questions

What’s been reported about SK hynix and Microsoft?

South Korean media and TrendForce report that Kwak Noh-Jung, CEO of SK hynix, plans to meet with Bill Gates and Satya Nadella during the Microsoft CEO Summit 2026 in Redmond.

Why is SK hynix important to Microsoft?

Because Microsoft needs high-performance memory for its AI infrastructure. SK hynix supplies DRAM, NAND, and HBM3E, including memory used in the Maia 200 accelerator.

What is Maia 200?

Maia 200 is a proprietary Microsoft AI inference accelerator deployed in Azure. It’s already active in the US Central region near Des Moines, Iowa, with plans to expand to US West 3 near Phoenix.

What role does HBM play in AI?

High-bandwidth memory (HBM) enables AI accelerators to process large data volumes rapidly. It is essential for advanced GPUs and ASICs used in AI data centers.

via: TrendForce
