OpenAI secures 10 GW of AI capacity ahead of schedule

OpenAI has delivered a clear message to the digital infrastructure market: the race for artificial intelligence will not slow down for lack of ambition. The company claims to have already exceeded its initial goal of securing 10 GW of AI infrastructure capacity in the United States before 2029, just over a year after announcing Stargate, its plan to deploy massive-scale computing together with partners in technology, energy, and finance, and with data center operators.

The figure is impressive, but it’s important to read it carefully. OpenAI speaks of secured or committed capacity, not necessarily capacity that is already operational. The difference is significant. A data center can be contracted, planned, under construction, or pending electrical connection long before it serves models in production. The company had previously indicated that it closed 2025 with 1.9 GW of computing capacity, which suggests that much of those 10 GW corresponds to future agreements, cloud contracts, ongoing projects, or capacity phased into service.

Nonetheless, the announcement marks a scale shift. OpenAI states that in just the last 90 days, it has added more than 3 GW to its infrastructure footprint. Its thesis remains unchanged: AI demand is growing faster than expected, and the only way to sustain better models, lower latency, more users, and lower unit costs in the long term is to build more computing capacity.

Stargate is no longer just a mega-project—it’s becoming a network of agreements

When OpenAI announced Stargate in January 2025, the plan was presented as a $500 billion investment in AI infrastructure over four years, aiming to secure 10 GW of capacity in the U.S. by 2029. Since then, investment figures have varied across estimates, and the strategy has evolved considerably in practice.

Initially, Stargate was interpreted as a bet on large, proprietary or highly controlled facilities. Over the months, OpenAI has expanded its approach to a more hybrid model: dedicated data centers, cloud capacity, agreements with neo-cloud providers, partner infrastructure, and multi-year contracts with providers like Microsoft, Oracle, CoreWeave, and Amazon Web Services.

One of the most notable moves was the agreement to use AWS’s Trainium chips, with an associated capacity of 2 GW and a large contractual expansion. OpenAI also maintains a longstanding relationship with Microsoft, has partnered with Oracle Cloud Infrastructure in facilities like Abilene, Texas, and has relied on specialized providers to accelerate access to GPUs and next-generation systems.

Key element | Announced or contextual data
Initial Stargate goal | 10 GW of AI capacity in the U.S. by 2029
Capacity OpenAI claims to have secured | More than 10 GW
Capacity added in the last 90 days | Over 3 GW
Initial announced investment for Stargate | Up to $500 billion
Recent projection cited in the market | Around $600 billion over four years
Operational capacity reported at the end of 2025 | 1.9 GW
Partners and providers mentioned | Oracle, Microsoft, AWS, CoreWeave, among others

OpenAI acknowledges that funding models and collaboration structures may change but emphasizes that the key is to achieve capacity at scale, on time, and with flexibility. This captures the current pressure on the sector well: no one knows precisely which architecture will dominate in three years, which chips will have the best cost per token, how much inference autonomous agents will generate, or what proportion of spending will go to training versus inference services. But underestimating infrastructure can be more dangerous than overbuilding.

Compute, energy, and terrain: the new battle in AI

AI infrastructure is no longer measured solely by the number of GPUs. OpenAI openly discusses a compute-driven economy, where the ability to build data centers, secure energy, obtain permits, deploy electrical transmission, train labor, and negotiate with local communities becomes a competitive advantage.

The company says it is evaluating new sites in the U.S. with its partners. For a project to be viable, it requires a complex combination: land, electrical power, permits, transmission, qualified workforce availability, local support, and partner preparedness. In practice, AI is leading large labs to negotiate with utilities, data center developers, construction unions, chip manufacturers, state governments, and cloud operators.

OpenAI also aims to bolster Stargate’s social narrative. In its communication, it highlights local employment, investment in schools, municipal revenue, responsible energy planning, and careful water management. The example of Abilene, Texas, is cited, where Stargate’s site operates on Oracle Cloud Infrastructure using NVIDIA GB200 systems. According to OpenAI, that site was used to train GPT-5.5, its most advanced model to date.

The water usage detail is also noteworthy. OpenAI states that Abilene uses closed-loop cooling rather than traditional evaporative cooling towers. The initial fill of each building is roughly equivalent to two Olympic swimming pools, and the annual water consumption at full capacity would be comparable to that of a medium-sized office building or about four average homes. This addresses one of the major criticisms leveled at AI data centers: their energy and water consumption.
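To put the "two Olympic pools" figure in concrete units, a quick back-of-envelope conversion helps. The pool volume below is a standard approximation, not a number from OpenAI's announcement:

```python
# Back-of-envelope estimate of the closed-loop initial water fill per building.
# Assumption (not from OpenAI): an Olympic pool holds ~2,500 m^3 (50 m x 25 m x 2 m).
OLYMPIC_POOL_M3 = 2_500

# The article states the initial fill is roughly two Olympic pools per building.
fill_per_building_m3 = 2 * OLYMPIC_POOL_M3

liters = fill_per_building_m3 * 1_000  # 1 m^3 = 1,000 liters
print(f"Initial fill per building: ~{fill_per_building_m3:,} m^3 "
      f"(~{liters / 1e6:.0f} million liters)")
```

The key point of closed-loop cooling is that this fill is largely a one-time cost: the water circulates rather than evaporating away, which is why the ongoing annual consumption can be compared to an office building instead of a power plant.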

Committed capacity does not always mean available capacity

The big question is what exactly “exceeding 10 GW” means. In data center terms, capacity can be in different states: contracted, reserved, under construction, energized, with servers installed, or already operational for real workloads. OpenAI does not specify how much of that figure is currently operational and how much will come online in the coming years.

This ambiguity does not invalidate the announcement but does warrant caution. Securing capacity ahead of competitors can be a highly strategic move, especially if energy, chips, transformers, substations, and construction bottlenecks worsen. But execution remains the challenge. Signing contracts is not enough—you must build, connect, cool, equip, and operate.

The other open issue is cost. Stargate started with a reference of $500 billion, but the market has already seen higher estimates for OpenAI’s expected compute spending over the coming years. The company does not clarify whether reaching the 10 GW goal already commits that initial investment or how costs are shared among OpenAI, cloud partners, data center vendors, external financiers, and long-term deals.

The bottom line is that AI is becoming a capital-intensive industry. Model labs are no longer just competing for researchers, data, or algorithms—they are competing for gigawatts, chips, energy, land, cooling, debt, cloud contracts, and industrial capacity. Generative AI started as software, but its next phase relies on physical infrastructure at a scale more reminiscent of energy, telecom, or semiconductors than a traditional internet company.

For OpenAI, surpassing the 10 GW threshold before 2029 is a way to build confidence among clients, partners, and investors. It aims to show it can grow with demand, avoid dependence on a single provider, and have strategies to support increasingly capable models. For competitors, the message is equally clear: the race for AI is also being decided on the ground, in substations, and through capacity contracts signed years in advance.

The real challenge now is turning committed capacity into useful compute. If OpenAI and its partners manage to do so on time, Stargate could become one of the largest AI infrastructure networks worldwide. If delays, costs, or local tensions accumulate, the gap between announcing gigawatts and getting them operational could become the true bottleneck.

Frequently Asked Questions

What has OpenAI announced about Stargate?
OpenAI states it has already exceeded its initial goal of securing 10 GW of AI infrastructure capacity in the U.S. before 2029.

Are those 10 GW already operational?
It’s unclear. Much of it appears to be contracted or committed capacity, not necessarily data centers already running workloads.

Why does AI require so much electrical and computing capacity?
Because training and serving advanced models require enormous amounts of servers, accelerators, memory, networking, energy, and cooling, especially as usage increases among consumers, businesses, developers, and governments.

What role do OpenAI’s partners play?
Stargate relies on a network of cloud providers, data center operators, chip manufacturers, utilities, construction firms, investors, and local communities. OpenAI recognizes that no single company can build this infrastructure alone.

via: openai
