Stargate Gets Stuck and OpenAI Reworks Its “Compute” Map: More Cloud, More Partners, Less Certainty

OpenAI faces a paradox inherent to the AI race: the more its demand grows, the more it relies on an infrastructure it doesn’t fully control. The company that popularized ChatGPT promised — alongside Oracle and SoftBank — a historic leap in data center capacity under the Stargate umbrella. However, the “compute” market has turned into a battle over electrical power, industrial land, and chips that doesn’t wait for anyone.

The ambition was huge: publicly announced in January 2025, Stargate was launched as an initiative to invest up to $500 billion over four years to build a network capable of providing 10 GW of AI capacity in the United States.

A mega-project, multiple realities

Officially, OpenAI has portrayed Stargate as an accelerated expansion: in September 2025, it announced five new data center sites in the U.S., aiming to reach nearly 7 GW and stay on track for the 10 GW target.

Yet, industry insiders suggest a different story: The Information reported that the project, conceived as a “joint venture” with its own governance and structure, has faced internal delays and disagreements over leadership and financial responsibilities, pushing OpenAI to explore alternative arrangements.

This tension between public announcements and the real challenges of deploying infrastructure at this scale explains why Stargate looks less like a single, straightforward project and more like an "umbrella" grouping bilateral agreements, local partners, energy providers, and specialized contractors.

The solution: “control without ownership” and a more diverse cloud ecosystem

The picture emerging in 2026 is clear: OpenAI is diversifying its suppliers to reduce dependence on a single source. Recently, the company strengthened its presence on AWS with a multi-year deal, while maintaining partnerships with Oracle and adding capacity via third-party providers like Google Cloud.

This shift has direct implications for clients and partners: AI infrastructure is no longer reliant solely on “a big data center,” but on a distributed network where each provider contributes different elements (chip access, locations, network capacity, energy, and deployment speed).

Reuters reported in 2025 that OpenAI took the unusual step of bringing Google Cloud into its provider ecosystem to meet surging demand, after Microsoft Azure ceased to be its exclusive infrastructure partner.

The bottleneck: electricity, land, and GPUs

Beyond the “more cloud” narrative lies a physical reality: the AI race is also a competition for energy and hardware. For example, in Texas, SB Energy (linked to SoftBank) proposed a 1.2 GW installation related to Stargate, reminding us that these figures are measured in grid-scale capacity, not just racks.
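To make "grid-scale capacity" concrete, here is a hedged back-of-envelope sketch: given a site's power budget, how many accelerators could it plausibly host? The per-accelerator draw and the PUE (cooling and overhead multiplier) used below are illustrative assumptions, not figures from the article.

```python
# Back-of-envelope: accelerators supportable by a data center site,
# derived from its facility power budget. All constants are assumptions.

def accelerators_supported(site_power_gw: float,
                           watts_per_accelerator: float = 1_200.0,
                           pue: float = 1.3) -> int:
    """Estimate accelerator count from facility power.

    watts_per_accelerator: assumed all-in draw per accelerator
        (board plus a share of host CPU, memory, and networking).
    pue: assumed power usage effectiveness (cooling/overhead factor);
        usable IT power = facility power / PUE.
    """
    usable_watts = site_power_gw * 1e9 / pue
    return int(usable_watts // watts_per_accelerator)

# A hypothetical 1.2 GW site, on these assumptions:
print(accelerators_supported(1.2))  # 769230, i.e. roughly three quarters of a million
```

The point of the sketch is the order of magnitude: a single gigawatt-class site corresponds to hundreds of thousands of accelerators, which is why the bottleneck is measured in grid capacity rather than racks.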

Meanwhile, hyperscalers continue to accelerate investments: the market debate now revolves around how much the industry can spend on AI in a single year. A report cited by Reuters estimated that combined investments by major tech firms in AI infrastructure could reach around $650 billion in 2026.

Not just NVIDIA: OpenAI explores “Plan B” (and Plan C) for chips

The pressure isn’t only about data centers but also about silicon. OpenAI has begun reducing its dependence on NVIDIA for certain scenarios, especially inference, exploring alternatives from other manufacturers and architectures.

In January 2026, OpenAI announced a partnership with Cerebras to deploy 750 MW of low-latency computing infrastructure, phased in starting that year.

Additionally, Reuters reported in early February 2026 that OpenAI was seeking alternatives to certain recent NVIDIA chips for specific needs, engaging with AMD, Cerebras, and Groq, among others.

This industrial strategy reflects the reality that when global bottlenecks exist, ensuring supply requires avoiding reliance on a single technological route. It also helps mitigate schedule risks: if a GPU generation is delayed or a supplier prioritizes other clients, operations can continue with less disruption.

The bill: “compute” consumes the future

The logical consequence is cost. Reuters recently reported that OpenAI expects to spend around $600 billion on “compute” through 2030.

In other words, although Stargate was conceived as the major structural solution, the immediate reality is buying capacity wherever it is available: signing deals with financially robust suppliers and builders, even if that increases operational complexity and reduces direct control.

What it means for the market (and why Europe should care)

For the tech ecosystem, this sends an uncomfortable message: competitive advantage in AI no longer depends solely on models and talent but also on energy contracts, permits, location, and hardware. By 2026, these pressures are already shaping decisions: alliances, pre-orders, cross-investments, and multi-track agreements all aim to avoid being left behind.

There is also a geopolitical dimension: if infrastructure is concentrated among a few capable actors who can deploy gigawatts of capacity, the balance between innovation and dependency becomes more fragile. In practice, AI “sovereignty” isn’t solely decided by laws or industrial strategies — it’s shaped by data centers that can be built, powered, and filled with chips.


Frequently Asked Questions

What is OpenAI’s Stargate project and what capacity does it aim to achieve?
Stargate is an initiative announced in 2025 to boost large-scale AI infrastructure in the U.S., with a goal of reaching 10 GW capacity and a total investment of up to $500 billion.

Why does OpenAI use AWS and Google Cloud if they compete in AI?
Because “compute” is the limiting factor: to train and operate large-scale models, OpenAI needs capacity and chip supply across multiple clouds. Diversification reduces availability risk and shortens deployment timelines.

What does the OpenAI-Cerebras partnership contribute regarding ChatGPT and inference?
It provides low-latency, high-availability compute focused on inference, with a phased deployment plan involving 750 MW of infrastructure, critical for speed-sensitive workloads.

What is the impact of massive data center spending on companies aiming to adopt AI?
It can improve medium-term capacity access but also puts pressure on prices, timelines, and availability in the short term. Many organizations’ AI project success will depend on whether they can secure cloud resources in time.

via: digitimes
