Anthropic relies on CoreWeave to scale Claude in the AI cloud

Anthropic has signed a multi-year agreement with CoreWeave to use its cloud infrastructure for the development and deployment of the Claude model family, a move that strengthens the role of specialized computing providers in AI relative to more traditional cloud services. CoreWeave confirmed that the contracted capacity will come online in 2026.

The news is significant not only because of the client’s name but also for what it reflects about the market. Anthropic already maintains a strategic partnership with Amazon, which in November 2024 increased its total investment in the company to $8 billion and established AWS as its primary cloud provider. Nevertheless, the lab behind Claude continues to expand its infrastructure base with other partners, including a new alliance announced earlier this week with Google and Broadcom for access to TPU-based capacity starting in 2027.

This makes the agreement with CoreWeave more than just a capacity contract. What is materializing is a multi-cloud and multi-infrastructure architecture for large AI models, where each provider covers a distinct part of the map: training, inference, rapid capacity scaling, or access to specific hardware. In a context of high demand for GPUs and accelerators, relying on a single partner has become an increasingly risky gamble.

CoreWeave strengthens its position in the specialized cloud space

For CoreWeave, the deal with Anthropic represents another important step in its effort to establish itself as one of the leading names in specialized AI cloud services. The company stated in its announcement that, with Anthropic on board, nine of the top ten AI model providers now use its platform. Reuters added that the market viewed the announcement as a positive sign of commercial traction, and the company’s shares rose after the news broke.

The timing is no coincidence. Just a day earlier, Reuters reported an extension of the agreement between Meta and CoreWeave worth $21 billion through 2032, adding to other recent major contracts and reinforcing the idea that CoreWeave is moving beyond its close association with Nvidia to become a key provider of large-scale AI compute.

CoreWeave’s proposition rests on a very specific idea: offering a cloud built from the ground up for AI workloads, rather than a later adaptation of general-purpose infrastructure. The company emphasizes that its value lies in a technology stack optimized for modern training and inference, citing its MLPerf results and its top positions in SemiAnalysis’s ClusterMAX rankings as proof of performance and reliability, though these references are part of its commercial narrative.

The competition is no longer just about models

What makes this move interesting is that it clearly demonstrates that AI competition is no longer solely about who has the best models, but also about the infrastructure capable of running those models at scale. Claude, GPT, Gemini, and large enterprise models require sustained access to data centers, GPUs, network interconnects, storage, and power. This is where specialized cloud providers are beginning to challenge the traditional hyperscalers.

For Anthropic, the agreement offers more room to grow without being tied to a single expansion route. For CoreWeave, it means a deeper presence in the operational core of one of the most prominent labs in the market. And for the broader cloud sector, the message is quite clear: AI infrastructure is becoming more fragmented, specialized, and strategic than ever before.

If a few years ago the debate centered on who had the best model, now the question is shifting to who can provide the right compute earliest, in the necessary quantities, and at the right cost. In this landscape, CoreWeave aims to be a top-tier player, and the agreement with Anthropic directly supports that goal.

Frequently Asked Questions

What has Anthropic signed with CoreWeave?
A multi-year agreement to use CoreWeave’s cloud platform for workloads related to the Claude family of models, with capacity starting to activate later in 2026.

Does this mean Anthropic is leaving AWS?
No. AWS remains Anthropic’s main cloud provider within its strategic partnership with Amazon, but the company is expanding its infrastructure ecosystem with other partners.

Why is this agreement important for CoreWeave?
Because it strengthens its portfolio of major clients, enhances diversification, and solidifies its role as a specialized AI infrastructure provider.

What does this move reveal about the AI cloud market?
It shows that advanced model labs are distributing their workloads across multiple providers and that specialized cloud services are gaining ground over more generalist cloud models.
