Anthropic ties Claude’s energy future to TPUs and targets 2027

Anthropic has made one of the most significant moves in AI infrastructure this year. The company has signed an agreement with Google and Broadcom to secure multiple gigawatts of next-generation TPU capacity, computing power that will begin coming online in 2027 and will be used to train and serve future Claude models. This is no minor announcement: in a market increasingly constrained by the real-world availability of chips, energy, and data centers, locking in capacity years in advance is becoming as strategic as launching a new model.

The announcement is accompanied by another figure that helps explain the urgency of this move. Anthropic claims that its run-rate revenue has already surpassed $30 billion, up from around $9 billion at the end of 2025. It also states that it now has more than 1,000 enterprise customers spending over $1 million on an annualized basis, roughly double the more than 500 it cited in February when it announced its Series G. It’s worth clarifying that run-rate refers to an annualized projection based on the current pace of business, not confirmed closed revenue, but it still reflects a very strong commercial acceleration.

The battle is no longer just about models, but about megawatts

Over the past two years, public discourse around AI has focused on benchmarks, reasoning, agents, and context windows. But beneath that product layer, another, much more physical war is taking shape: supply. Training and deploying frontier models requires stable access to accelerators, high-performance interconnects, cooling, and above all, electricity. When Anthropic talks about multiple gigawatts, what it is really saying is that Claude’s next competitive leap will depend not only on software but also on already-reserved energy and industrial infrastructure to sustain ongoing scaling.

This move is especially significant because it diversifies Anthropic’s hardware landscape. The company itself explains that Claude is currently trained and served on AWS Trainium, Google TPUs, and NVIDIA GPUs, with the idea of assigning each workload to the most suitable chip. In practice, this reduces dependence on any single supply chain and gives Anthropic rare flexibility at a time when access to cutting-edge accelerators remains one of the sector’s major bottlenecks.

The agreement with Google and Broadcom does not imply a break with Amazon. Anthropic clarifies that AWS remains its main cloud provider and training partner, and that the company will continue working with Amazon on Project Rainier. What is being built is not a replacement but a broader supply architecture: AWS as the central partner, Google TPUs to gain scale and predictability, and NVIDIA as another critical piece of the compute mix. From both a technical and an operational perspective, this blend makes a lot of sense: it lets the company optimize for cost, availability, and workload fit without being tied to a single vendor.

Claude strengthens its multi-cloud position

Another important detail in this announcement goes beyond chips. Anthropic asserts that Claude remains the only frontier model available across the three largest cloud platforms: Amazon Web Services, Google Cloud, and Microsoft Azure. Beyond the commercial branding of the word frontier, it is documented that Claude is offered in Amazon Bedrock, that Anthropic models are available as managed models in Vertex AI, and that Microsoft Foundry already integrates Claude models into Azure. This simultaneous presence across the big three gives Anthropic a unique position in the enterprise market.

For a technical audience, this point is probably more important than the financial headline. Being officially present on Bedrock, Vertex AI, and Foundry not only broadens Claude’s distribution; it also reduces adoption friction for large accounts already working across different environments and needing to bring the models into their own compliance perimeters, private networks, observability tools, and internal workflows. In other words, Anthropic is not just selling AI; it’s trying to become an interoperable layer within existing enterprise infrastructure.

Google Cloud, for example, documents Claude in Vertex AI as a managed, serverless service, with no infrastructure provisioning needed to consume the API. AWS does something similar in Bedrock, positioning Claude as a family of models for reasoning, vision, code generation, and enterprise workflows. Microsoft, meanwhile, has gone beyond initial announcements: it has published documentation for using Claude models in Foundry and even specific guides for configuring Claude Code on its infrastructure. This indicates that Anthropic’s multi-cloud strategy is not just marketing talk but tangible integration.
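As a rough illustration of that portability, the sketch below sends the same request to Claude through two of those clouds using the official anthropic Python SDK, which ships Bedrock and Vertex AI clients. The project, region, and model identifiers are illustrative placeholders to check against each provider’s current catalog, and the Azure Foundry path is omitted for brevity.

```python
# Minimal multi-cloud sketch: one prompt, two managed Claude endpoints.
# All IDs below (region, project, model names) are placeholders, not
# values taken from the announcement.
from anthropic import AnthropicBedrock, AnthropicVertex

prompt = [{"role": "user", "content": "Summarize our Q3 infrastructure spend."}]

# Claude served through Amazon Bedrock; AWS credentials are resolved from the environment.
bedrock = AnthropicBedrock(aws_region="us-east-1")
bedrock_reply = bedrock.messages.create(
    model="anthropic.claude-sonnet-4-20250514-v1:0",  # placeholder Bedrock model ID
    max_tokens=512,
    messages=prompt,
)

# The same request against Claude on Google Cloud Vertex AI.
vertex = AnthropicVertex(project_id="my-gcp-project", region="us-east5")
vertex_reply = vertex.messages.create(
    model="claude-sonnet-4@20250514",  # placeholder Vertex AI model ID
    max_tokens=512,
    messages=prompt,
)

print(bedrock_reply.content[0].text)
print(vertex_reply.content[0].text)
```

The point is less the code itself than the fact that the same Messages-style call works behind each cloud’s own IAM, billing, and compliance perimeter, which is exactly the adoption friction this multi-cloud presence is meant to remove.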

More than growth: an industrial reserve for the next phase

The real significance of the announcement lies in its timeline. Anthropic is not just signing to meet the immediate demand of 2026 but reserving future capacity starting in 2027. That nuance changes the perspective completely. The company does not merely want to scale Claude with whatever inventory is available at any given time; it wants to protect itself for the next phase of the market, where serving more capable models and more complex agents will demand even greater elasticity in inference, training, and infrastructure.

Anthropic describes this agreement as its largest compute commitment to date. It also emphasizes that most of the new capacity will be located in the US, linking it to the commitment announced in November 2025 to invest $50 billion in US computing infrastructure. In practice, this positions the company not only as a model developer but as an actor beginning to engage in large-scale industrial planning.

From a colder perspective, this is also a defensive move. Frontier AI is becoming increasingly dependent on long-term contracts, vertical integration, and capacity control. Those who arrive late to energy, silicon, and data center space will find it harder to keep pace, no matter how good their models are. Anthropic appears to have decided that it does not want to be improvising a solution to this problem a year from now, but to address it today. That is the true value of the announcement: fewer fireworks, more strategic infrastructure reservation.

In an environment where much of the industry still measures progress in the AI race through launches, demos, and benchmark tables, Anthropic is reminding us of a much more uncomfortable and tangible reality: the next generation of models will be decided not only in labs but also in electricity contracts, chip factories, and gigawatt-scale capacity planning. Claude will keep competing on model quality, yes, but this agreement makes clear that Anthropic also intends to compete in the layer that is hardest to replicate: infrastructure.

Frequently Asked Questions

What does it mean that Anthropic has signed “multiple gigawatts” of TPU capacity?
It means the company has reserved a large amount of computing infrastructure based on next-generation TPUs, measured by the electrical power associated with its deployment. It is a way of expressing the industrial scale of the deal in terms of power draw rather than a specific number of chips.
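For a sense of scale, here is a purely illustrative back-of-envelope calculation; the per-accelerator power figure is an assumption for the sketch, not a number disclosed by Anthropic or Google.

```python
# Back-of-envelope only: the all-in draw per accelerator (chip plus cooling and
# networking overhead) is an assumed figure, not one from the announcement.
watts_per_accelerator = 1_500
gigawatts_reserved = 1

accelerators = gigawatts_reserved * 1_000_000_000 / watts_per_accelerator
print(f"~{accelerators:,.0f} accelerators per gigawatt under these assumptions")
```

Under those assumptions, a single gigawatt maps to hundreds of thousands of accelerators, which is why the industry increasingly quotes capacity in power rather than chip counts.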

Has Anthropic already made $30 billion?
Not exactly. The company references run-rate revenue, which is an annualized projection based on current business momentum. It indicates commercial scale but does not equate to confirmed revenues over a full fiscal period.

Is Claude truly available on AWS, Google Cloud, and Azure?
Yes. AWS documents Claude models in Amazon Bedrock, Google Cloud offers them in Vertex AI, and Microsoft integrates them into Foundry. That official presence across the three biggest clouds is a key strategic point for Anthropic.

Why does this agreement matter so much for Claude’s future?
Because it guarantees compute capacity for 2027 and beyond at a time when access to chips, energy, and data centers is one of the main factors shaping frontier model development. Without guaranteed infrastructure, scaling advanced AI becomes significantly more difficult.