The American company will offer Blackwell GPU clusters and global support on the new distributed computing platform for AI developers.
GMI Cloud, a company specializing in GPU-as-a-Service infrastructure, has announced that it is among the first providers to integrate with DGX Cloud Lepton, the new platform NVIDIA has launched to give AI developers global access to high-performance computing resources.
Based in Mountain View, California, GMI Cloud brings to the collaboration its state-of-the-art GPU infrastructure, including clusters built on the new NVIDIA Blackwell GPUs and optimized for large-scale AI workloads, from real-time inference to long-running training under digital sovereignty requirements.
What is DGX Cloud Lepton?
Introduced during Computex 2025, DGX Cloud Lepton is an NVIDIA initiative aimed at addressing one of the main challenges faced by AI developers: unified, scalable, and reliable access to high-performance GPU resources. Through this platform, developers will be able to access distributed capabilities worldwide to prototype, train, scale, and deploy AI models without limitations.
The platform is natively integrated with key tools from the NVIDIA ecosystem such as:
- NVIDIA NIM microservices
- NVIDIA NeMo (foundation models and LLMs)
- NVIDIA Blueprints (enterprise AI templates)
- NVIDIA Cloud Functions
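Of these, NIM microservices are the piece developers interact with most directly: each one exposes an OpenAI-compatible HTTP chat-completions API. As a minimal sketch, the payload a client would POST to such an endpoint might look like the following (the model name and endpoint path are illustrative placeholders, not details from the announcement):

```python
import json


def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat-completions payload,
    the request format NIM-style microservices accept."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


# Placeholder model name; a real deployment would substitute the model
# served by the cluster, and POST this to the service's /v1/chat/completions.
payload = build_chat_request(
    "example/llm-model",
    "Summarize DGX Cloud Lepton in one sentence.",
)
print(json.dumps(payload, indent=2))
```

Because the request shape matches the OpenAI API, existing client libraries and tooling can typically be pointed at such an endpoint by changing only the base URL.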
GMI Cloud’s Proposal in the Lepton Ecosystem
As part of the DGX Cloud Lepton launch, GMI Cloud offers developers 16-node clusters with Blackwell GPUs and multi-regional support. Its key contributions include:
- Direct access to optimized GPU infrastructure for cost, performance, and scalability.
- Strategic regional availability designed to meet regulatory and low-latency requirements.
- Proprietary full-stack infrastructure, enabling competitive pricing and rapid deployments.
- Advanced toolchain and accelerated deployments, thanks to integration with the complete NVIDIA software stack.
Alex Yeh, CEO of GMI Cloud, stated:
“DGX Cloud Lepton reflects everything we believe in: speed, sovereignty, and uncompromising scalability. We built our infrastructure from silicon to the cloud so developers can build AI without limits.”
Implications for the AI Ecosystem
The collaboration between GMI Cloud and NVIDIA reinforces a growing trend: the need for distributed and accessible computing environments that lower barriers for the development of generative AI, autonomous agents, and large-scale inference.
With this alliance, GMI Cloud positions itself as a strong alternative for AI teams that need total control over costs, latency, and performance, without relying solely on traditional hyperscalers.
GMI Cloud emerges as a key player in the new era of AI infrastructure, integrating power, sovereignty, and speed into a single distributed platform.