OpenAI is not building a company; it’s constructing a civilization of silicon. That, at least, is what Sam Altman, its CEO, suggested in a recent post announcing that the company will bring well over one million GPUs online by the end of 2025. It is an impressive milestone, which Altman shared on social media with a mix of pride and audacity: “very proud of the team but now they better get to work figuring out how to 100x that lol.”
The phrase sounds casual, but in Altman’s world, and in OpenAI’s, nothing is accidental. The company behind ChatGPT is positioning itself as the world’s largest computational force, with infrastructure that not only redefines what a tech company can become but also raises profound questions about energy, governance, and digital power.
The era of one million GPUs
For those unfamiliar, a GPU (graphics processing unit) is the key component in AI development: it’s what makes it possible to train models like GPT-4, DALL·E, or Codex. While companies like Elon Musk’s xAI have only just reached around 200,000 GPUs, OpenAI is about to field five times that number.
Why? Because the limit is no longer the algorithm but the hardware, and Altman knows it. In fact, he admitted in February that the company had to slow the rollout of GPT‑4.5 due to GPU shortages. Since then, OpenAI has launched into an unprecedented expansion: new data centers, strategic agreements with Microsoft, Oracle, and possibly Google, and a logistics project that looks more like national infrastructure than a private enterprise.
The Texas giant
One emblem of this expansion is its mega data center in Texas, the largest AI data center in the world, which already draws 300 megawatts of power and aims to reach 1 gigawatt in 2026. That is enough to power an entire city. The growth has strained local grid operators, who warn of the technical challenges of keeping a grid stable under such a digital beast.
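To put those figures in perspective, here is a rough back-of-envelope sketch. The household figure is an assumption drawn from typical US utility averages, not from the article:

```python
# Back-of-envelope: how many average homes could the Texas site's power supply?
# Assumption (not from the article): an average US household draws ~1.2 kW.
AVG_HOUSEHOLD_KW = 1.2

def homes_powered(megawatts: float) -> int:
    """Convert a facility's draw in megawatts to an equivalent number of homes."""
    return int(megawatts * 1_000 / AVG_HOUSEHOLD_KW)

print(homes_powered(300))    # today's 300 MW -> ~250,000 homes
print(homes_powered(1_000))  # planned 1 GW   -> ~833,000 homes
```

Under that assumption, the planned gigawatt would indeed supply a mid-sized city’s worth of households.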
In other words, we are looking at AI that consumes energy on the scale of heavy industry, with a carbon footprint and an infrastructure impact that cannot be ignored.
100 million GPUs?
The idea of scaling from 1 million to 100 million GPUs sounds absurd… yet that is exactly what Altman has proposed. The economic calculation is dizzying: about $3 trillion on hardware alone. But the goal isn’t solely to build that many GPUs; it’s to find alternative paths, such as custom chips, more efficient architectures, optical storage, and silicon photonics; in short, anything that allows scaling without costs exploding.
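As a sanity check on that figure, here is a minimal sketch assuming a unit price of roughly $30,000 per data-center GPU (in the neighborhood of commonly quoted NVIDIA H100 prices; the article itself gives no unit cost):

```python
# Sanity check on the "$3 trillion just on hardware" estimate.
# Assumption (not from the article): ~$30,000 per data-center GPU.
GPU_UNIT_PRICE_USD = 30_000
TARGET_GPUS = 100_000_000  # Altman's "100x" of the one million now online

hardware_cost_usd = GPU_UNIT_PRICE_USD * TARGET_GPUS
print(f"${hardware_cost_usd / 1e12:.1f} trillion")  # -> $3.0 trillion
```

And that covers chips alone: networking, buildings, power, and cooling would come on top, which is exactly why cheaper paths per unit of compute matter.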
This isn’t just about spending more; it’s about thinking differently. The aim is Artificial General Intelligence (AGI), a system that thinks like a human. And if reaching it requires rethinking the global energy model, OpenAI seems willing to do so.
Computing as a competitive advantage
In this new landscape, infrastructure is the real differentiator. It’s no longer enough to have the best model; you need to have the capacity to train, deploy, and scale faster than anyone else. With its one million GPUs, OpenAI is setting a new standard that leaves much of the industry behind.
And it’s not alone. Meta, Amazon, Google, and even Apple are working on proprietary chips and specialized data centers. The war for talent has spilled into the physical realm: control over energy sources, materials, silicon, and assembly capacity.
An unsustainable future?
What worries many is that all of this is happening without a broad enough debate about its consequences. Who manages these resources? What happens when a single company commands more computing power than many countries? And where do transparency, regulation, and equitable access fit into all of this?
Sam Altman, with his informal tone, keeps pushing the frontier. But what’s at stake goes far beyond a big number. OpenAI’s million GPUs mark the beginning of a new phase in technological history, one in which infrastructure becomes power, and in which AI, far from being ethereal, is measured in tons of metal, megawatts, and cubic meters of cooled air.
The real question is no longer whether we can build more powerful AI. It’s whether we can—and should—sustain it.
“we will cross well over 1 million GPUs brought online by the end of this year! very proud of the team but now they better get to work figuring out how to 100x that lol” — Sam Altman (@sama), July 20, 2025
Source: Noticias inteligencia artificial