Anthropic has taken another giant leap in the race for computing power. The company has expanded its partnership with Amazon to secure up to 5 gigawatts of new capacity dedicated to training and deploying Claude, as part of a commitment to spend over $100 billion on AWS technologies over the next decade. The agreement includes Trainium2-based capacity coming online in the first half of the year and nearly 1 GW of total capacity across Trainium2 and Trainium3 before the end of 2026.
The magnitude of this announcement makes it clear that the AI race is no longer just about models, but about who secures the infrastructure needed to power them first. Anthropic says that over 100,000 customers are already running Claude on Amazon Bedrock and that it currently uses more than a million Trainium2 chips to train and serve its models. Amazon, for its part, pairs the deal with an immediate $5 billion investment in Anthropic and the option to add up to $20 billion more in the future, on top of the $8 billion it had already invested.
What matters here is not just the size of the check, but what it reveals about the current state of the market. Anthropic acknowledges that its growth is already straining its infrastructure. The company announced in early April that its annualized revenue run rate exceeds $30 billion, up from around $9 billion at the end of 2025, and admitted that this growth, especially in Claude usage among free, Pro, Max, and Team users, has hurt reliability and performance during peak hours. This new agreement with Amazon aims precisely to relieve that pressure, with "significant" capacity arriving over the next three months and nearly 1 GW before the end of the year.
An agreement that goes far beyond renting GPUs
The most interesting takeaway from the announcement is that Anthropic is not just buying servers. It is betting on a long-term, structural relationship with AWS. The agreement covers everything from Graviton to Trainium2, Trainium3, and Trainium4, with an option to acquire future generations of Amazon’s custom silicon as they become available. That makes AWS not only a cloud provider but a long-term technological partner for Claude’s roadmap. Anthropic states bluntly: AWS will remain its primary training and cloud provider for critical workloads.
This dependency carries significant implications. On one hand, it reinforces the idea that large model laboratories need to stick with hyperscalers to stay competitive. On the other, it confirms that Amazon aims to play a much more central role in AI development—not just with its own models, but through its infrastructure and custom silicon. Reuters highlighted that Anthropic will use the agreement to strengthen its capacity while Amazon seeks to capitalize more commercially on Trainium and establish itself as an indispensable provider in the AI boom.
Additionally, there is a notable commercial layer. Amazon and Anthropic announced that the entire Claude platform will be available directly within AWS, with existing accounts, controls, and billing—no additional credentials or contracts required. This integration, currently in private beta, makes it easier for large organizations to adopt Claude within their existing governance and compliance frameworks. It also helps Amazon make Claude an even more native part of its ecosystem.
The computing war enters a new phase
Amazon’s move also comes at a particularly aggressive moment for Anthropic on the infrastructure front. Just a few days ago, the company announced an expansion of its work with Google Cloud and Broadcom to secure multiple gigawatts of next-generation compute capacity, another push to support Claude’s growth. This makes the strategy clearer: Anthropic does not want to depend on a single supplier, but to distribute workloads across different chips and partners to avoid capacity shortages that affect its products.
In this context, the 5 GW agreed with Amazon holds symbolic and practical value. Symbolic because it places Anthropic in the top tier of resource competition. Practical because the sector’s bottleneck is no longer just talent or algorithms, but electricity, data centers, silicon, and the ability to deploy all these before competitors do. It’s no coincidence that the company itself talks about “record demand” and the need to build infrastructure to keep Claude “at the forefront.”
There is also a significant geographic message. The agreement includes expansion of inference capabilities in Asia and Europe to better meet international demand for Claude. This indicates that Anthropic is no longer only thinking about scaling in the US but also about strengthening its global footprint—an approach consistent with a company aiming to establish itself as a worldwide enterprise and developer platform.
What this means for Anthropic and what Amazon gains
For Anthropic, the benefit is clear: more capacity, greater flexibility to keep training and serving Claude, and less risk that reliability issues turn into a competitive disadvantage. For Amazon, the move is equally advantageous. It not only deepens its investment in one of the most important startups of the moment but also locks in a decade of spending from one of the sector’s biggest compute consumers, with Trainium as a central element of the relationship.
Ultimately, what’s at stake is much more than a commercial alliance. This agreement confirms that the AI industry has entered a phase where leadership will depend equally on model quality and on guaranteeing supply of computing resources, integrating them into stable products, and supporting a demand growing faster than many expected. Anthropic doesn’t want to fall behind; Amazon doesn’t want to miss that train. The market has just received yet another signal that the AI war will increasingly be a war of infrastructure.
Frequently Asked Questions
How much has Anthropic committed to AWS in this deal?
Anthropic has committed to spending over $100 billion on AWS technologies over the next ten years, in a deal to secure up to 5 GW of compute capacity.
What will Amazon invest in Anthropic?
Amazon will invest $5 billion immediately and can add up to $20 billion more in the future, in addition to the $8 billion already invested previously.
What capacity will be available by the end of 2026?
Anthropic aims to have nearly 1 GW of total capacity with Trainium2 and Trainium3 before the end of 2026, with new Trainium2 capacity already coming online in the first half of the year.
Why is this deal so important for Claude?
Because Anthropic recognizes that its rapid growth in users and revenue is already putting pressure on its infrastructure and affecting service reliability. Therefore, it must urgently expand capacity to sustain training, inference, and global deployment.
via: anthropic