Akamai Technologies has announced new details of a four-year, $200 million service agreement signed with a major U.S. technology company specializing in high-performance computing for artificial intelligence.
Under the terms of the contract, the client will deploy a cluster of thousands of NVIDIA Blackwell GPUs in a data center optimized for high energy density and efficiency. The infrastructure will integrate with other cloud services on Akamai's distributed cloud platform, which is designed to support high-intensity AI workloads.
The project will be one of the world's largest GPU clusters built on the NVIDIA RTX PRO 6000 Blackwell Server Edition, reflecting growing corporate demand for platforms capable of developing, training, and deploying large-scale AI models.
The infrastructure will be backed by an AI-optimized Ethernet networking platform delivering high-performance connectivity without bottlenecks or data loss, a requirement for so-called "AI factories" and GPU-accelerated computing. The system will also incorporate high-performance parallel file storage based on NVMe over Fabrics (NVMe-oF), facilitating linear scalability for AI and high-performance computing (HPC) workloads.
“This $200 million commitment is a strong validation of our strategy to build a global platform covering the entire AI lifecycle,” explained Adam Karon, Chief Operating Officer and head of Akamai’s Cloud Technology Group. According to the executive, the company will provide a key technological foundation through a dedicated cluster of NVIDIA Blackwell GPUs integrated with the world’s most distributed cloud, capable of delivering the predictable, reliable performance that current AI applications demand.
The agreement comes at a time when Akamai is significantly strengthening its AI infrastructure. The company has recently expanded its global Infrastructure as a Service (IaaS) presence to 41 data centers, leveraging partnerships with data center operators to accelerate its expansion.
Among its latest initiatives is Akamai Inference Cloud, announced in October 2025, a platform aimed at bringing AI inference closer to end users and devices. The company has also expanded its Managed Container Service, enabling applications to scale across its distributed infrastructure.
Akamai also recently announced the acquisition of thousands of NVIDIA Blackwell GPUs to strengthen its global distributed cloud and build a unified platform for AI research and development, model fine-tuning, and post-training optimization. With this new contract, the company reinforces its commitment to becoming a key player in the infrastructure behind the next generation of AI-based applications.

