Investor enthusiasm for artificial intelligence is flooding the world with projects built around massive data centers: GPUs, specialized chips, and hundreds of megawatts of power. According to IBM CEO Arvind Krishna, however, the numbers don’t add up. And he says so with a conviction that is rare in a sector accustomed to promising nearly unlimited returns.
In a conversation on The Verge’s Decoder podcast, Krishna warned that the current buildout of infrastructure for frontier AI models (the ones chasing so-called artificial general intelligence, or AGI) could prove simply unsustainable from an economic standpoint.
His calculation is straightforward: if the industry commits around $8 trillion to AI data centers, it would require about $800 billion in annual profits just to cover capital costs. “There’s no way that’s going to yield a return,” the executive summarizes.
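The arithmetic can be checked in a few lines. The ~10% annual cost of capital used below is not a number Krishna states explicitly; it is the ratio implied by his two figures, treated here as an assumption:

```python
# Back-of-the-envelope check of Krishna's figures.
# Assumption: a ~10% annual cost of capital, which is the ratio
# implied by his $8T capex / $800B-per-year numbers.

total_capex = 8e12        # ~$8 trillion in planned AI data center investment
cost_of_capital = 0.10    # assumed annual return needed to cover capital costs

required_annual_profit = total_capex * cost_of_capital
print(f"Required annual profit: ${required_annual_profit / 1e9:,.0f}B per year")
# -> $800B per year, matching the figure Krishna cites
```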
$80 Billion to Fill a 1 GW Data Center
Krishna’s starting point is the cost of equipping a next-generation data center designed solely for large-scale AI workloads. By his estimate, outfitting a campus with 1 gigawatt of capacity (roughly the power draw of a medium-sized city) takes around $80 billion in accelerators, servers, networking, and cooling systems.
The problem is that the major tech companies are no longer talking about a single gigawatt. Krishna notes that some leading AI players are internally planning for 20 to 30 gigawatts over the coming years. That alone, for a single company, is enough to push investments past the $1.5 trillion mark.
Adding up public announcements and sector figures, the executive estimates the total planned capacity for “AGI-class” workloads to be around 100 gigawatts. This figure not only strains electrical grids but also challenges any notion of reasonable investment returns.
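Those capacity figures multiply out directly from the per-gigawatt cost. A minimal sketch, using only the numbers quoted above:

```python
# Scale Krishna's ~$80B-per-gigawatt estimate to the capacity figures he cites.

cost_per_gw = 80e9            # ~$80B to outfit 1 GW of AI capacity

single_company_gw = (20, 30)  # internal plans at some leading AI players
industry_gw = 100             # estimated total "AGI-class" capacity

low, high = (gw * cost_per_gw for gw in single_company_gw)
print(f"One company:   ${low/1e12:.1f}T - ${high/1e12:.1f}T")      # ~$1.6T - $2.4T
print(f"Industry-wide: ${industry_gw * cost_per_gw / 1e12:.0f}T")  # ~$8T
```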
Accelerated Depreciation: A Five-Year Replacement Cycle
Beyond the headline numbers of trillions, Krishna highlights an element he believes the market underestimates: the depreciation of AI chips.
In practice, large accelerators are typically depreciated over about five years. But the rapid succession of chip generations, with performance-per-watt jumps of several multiples each time, makes it very hard to stretch their useful life beyond that window without falling behind the competition.
The executive’s conclusion is clear: each cycle forces companies to write off, for both accounting and competitive reasons, much of their installed hardware and replace it with new equipment. In a scenario involving trillions of dollars of investment, that replacement cycle implies an annualized capital cost that is only sustainable with extraordinary and very stable profits for years on end, something far from guaranteed in a still-immature market.
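A rough illustration of what that cycle costs per year: the five-year lifespan and the $80 billion per gigawatt come from the figures above, while the straight-line depreciation schedule is a simplifying assumption (real accounting treatments vary):

```python
# Illustrative only: annualized hardware cost of a 1 GW AI campus under
# straight-line depreciation. The 5-year lifespan comes from the article;
# the straight-line schedule is a simplifying assumption.

hardware_cost = 80e9     # ~$80B of accelerators, servers, networking, cooling
useful_life_years = 5    # typical depreciation window for AI accelerators

annual_depreciation = hardware_cost / useful_life_years
print(f"~${annual_depreciation/1e9:.0f}B/year per gigawatt, before energy,")
print("staffing, and financing costs - and before the next chip generation")
print("forces an early write-off.")
```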
The Push for AGI and Pressure on Hyperscalers
Krishna’s warnings come at a time when big AI firms and cloud platforms are competing to demonstrate who can train and deploy the most powerful models on the market.
Meanwhile, governments and investment funds are backing “AI factory” projects spanning tens of gigawatts, betting on a potential AGI that, its supporters argue, could unleash an unprecedented productivity boom. For now, though, much of that future revenue remains more promise than reality.
Krishna stresses that many of these plans assume a highly optimistic combination of token prices, sustained demand, and relatively contained energy costs. If any of those variables shifts, whether through supply saturation, regulation, the physical limits of the power grid, or public opposition to new infrastructure, the return on investment could plummet.
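A toy sensitivity check makes the fragility concrete. Every input below (the per-gigawatt revenue, the size of the price shock) is a hypothetical placeholder, not a figure from Krishna or the interview; the only point is how quickly a margin built on optimistic pricing flips negative:

```python
# Hypothetical sensitivity check: every input here is an illustrative
# placeholder, not a figure from the interview.

annual_cost_per_gw = 16e9    # annualized hardware cost from the sketch above

def margin(revenue_per_gw: float, price_multiplier: float) -> float:
    """Annual profit per GW after a hypothetical shift in token prices."""
    return revenue_per_gw * price_multiplier - annual_cost_per_gw

baseline_revenue = 20e9      # hypothetical: $20B/year of inference revenue per GW
print(f"Baseline margin:   ${margin(baseline_revenue, 1.0)/1e9:+.0f}B/GW")  # +$4B
print(f"Token prices -30%: ${margin(baseline_revenue, 0.7)/1e9:+.0f}B/GW")  # -$2B
```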
His message is particularly directed at the so-called hyperscalers: large cloud and digital service platforms. He warns that the race for size could become a trap if not accompanied by a truly solid business model.
Less Concrete, More Efficiency?
Krishna’s skepticism isn’t a denial of AI; rather, it questions the current growth model based on relentless rack stacking. His comments align with a broader debate emerging in the sector: how much does it make sense to continue scaling solely by adding more chips and megawatts, instead of better utilizing existing infrastructure?
This involves various fronts: from software optimizations and more efficient architectures to new generations of specialized chips and smarter resource-sharing strategies among companies and workloads.
It also intersects with growing concern over the energy and water footprint of AI data centers, and over whether power grids can actually absorb tens of gigawatts devoted to workloads that, in many cases, are still searching for their economic and social fit.
A Wake-Up Call for the Market
Krishna’s statements are not a definitive diagnosis—long-term technology forecasts rarely unfold exactly as predicted—but they serve as a wake-up call for investors, regulators, and industry leaders.
If today’s announced plans materialize without fundamental improvements in efficiency, business models, and regulation, the industry could find itself within a few years sitting on a mountain of hard-to-monetize assets, right when social and political pressure over AI’s impact is at its peak.
In an environment where new data center projects and multi-billion-dollar chip deals are announced almost daily, the fact that a veteran leader like Krishna is urging caution and insisting that the numbers must add up is, at the very least, a sign that the debate over the economic and energy sustainability of AI at scale is here to stay.

