Dell and NVIDIA Unveil the New AI Factory: Comprehensive Infrastructure for Scaling AI Deployment

The new generation of servers, storage, and managed services from Dell Technologies promises to accelerate AI deployments in companies of all sizes.

During Dell Technologies World, the U.S. company announced an ambitious set of innovations that solidify its Dell AI Factory with NVIDIA as one of the most comprehensive solutions for deploying end-to-end enterprise AI. The proposal includes new PowerEdge servers with NVIDIA Blackwell GPUs, high-performance storage, optimized software, and 24/7 managed services.

“We are on a mission to bring AI to millions of customers around the world,” stated Michael Dell, CEO of Dell Technologies. “With Dell AI Factory with NVIDIA, businesses can manage the entire AI lifecycle at any scale.”

Liquid-cooled servers with up to 256 GPUs per rack

Among the key highlights is the new line of Dell PowerEdge XE9780 and XE9785 servers, available in air-cooled or direct liquid cooling configurations. These platforms can be configured with up to 192 NVIDIA Blackwell Ultra GPUs, and scale up to 256 GPUs per custom IR7000 rack. They are designed for training large language models (LLMs) at scale and deploying agent-based AI inference.

Another notable model is the PowerEdge XE9712, which uses the NVIDIA GB300 NVL72 architecture and promises up to a 50-fold increase in inference output and a 5-fold improvement in throughput. Additionally, Dell confirmed that its XE7745 server will be available with RTX PRO 6000 Blackwell Server Edition GPUs in July, suited to multimodal applications, digital twins, and robotics.

The company also announced its intention to support the upcoming NVIDIA Vera CPUs and Vera Rubin platform, as well as their integration into Dell's integrated rack scalable systems.

Optimized storage and networks for AI data

The new Dell ObjectScale improves efficiency for large AI deployments with a dense architecture and support for S3 over RDMA, reducing latency by up to 80%, more than doubling throughput, and cutting CPU usage by 98%.
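From an application's point of view, ObjectScale presents a standard S3-compatible object API, so existing data pipelines can point at it with an ordinary S3 client. The sketch below illustrates that pattern with boto3; the endpoint URL, credentials, and bucket name are placeholders, and the RDMA-accelerated data path announced by Dell sits below this API rather than being configured here.

```python
# Minimal sketch: reading and writing objects against an S3-compatible
# ObjectScale endpoint using boto3. Endpoint, credentials, and bucket are
# placeholders; the RDMA acceleration is handled beneath the object API.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectscale.example.internal",  # placeholder endpoint
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

BUCKET = "training-data"  # hypothetical bucket used by an AI data pipeline

# Upload a dataset shard, then stream it back for preprocessing.
s3.upload_file("shard-0001.parquet", BUCKET, "datasets/shard-0001.parquet")
obj = s3.get_object(Bucket=BUCKET, Key="datasets/shard-0001.parquet")
data = obj["Body"].read()
print(f"Fetched {len(data)} bytes from ObjectScale")
```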

A high-performance distributed inference solution was also presented, combining PowerScale, Project Lightning, and PowerEdge XE servers with support for NVIDIA NIXL libraries and KV cache management. This is intended to enable more efficient handling of complex inference workloads in data centers.
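The KV cache referenced here is the standard mechanism that makes autoregressive inference affordable: keys and values for tokens already processed are kept in memory so only the newest token has to be projected at each step. The following toy sketch illustrates that general idea; it is a generic, simplified illustration, not Dell's or NVIDIA's implementation.

```python
# Conceptual sketch of key-value (KV) caching in autoregressive inference:
# previously computed keys and values are kept and appended to, so each
# decoding step only projects the newest token instead of re-encoding
# the whole sequence. Toy single-head attention, generic illustration only.
import numpy as np

d = 64                      # head dimension (toy size)
rng = np.random.default_rng(0)
W_q, W_k, W_v = (rng.standard_normal((d, d)) * 0.02 for _ in range(3))

k_cache = np.empty((0, d))  # grows by one row per generated token
v_cache = np.empty((0, d))

def decode_step(x):
    """Attend the new token embedding x over all cached keys and values."""
    global k_cache, v_cache
    q = x @ W_q
    k_cache = np.vstack([k_cache, x @ W_k])   # append, don't recompute
    v_cache = np.vstack([v_cache, x @ W_v])
    scores = k_cache @ q / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ v_cache                  # attention output for this step

for _ in range(5):                            # five toy decoding steps
    out = decode_step(rng.standard_normal(d))
print("cache length:", k_cache.shape[0])      # 5 entries, one per token
```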

In terms of networking, Dell introduces new PowerSwitch SN5600 and SN2201 Ethernet switches (part of NVIDIA Spectrum-X) and Quantum-X800 InfiniBand switches. These solutions, backed by Dell ProSupport, are aimed at ensuring low-latency, high-bandwidth connectivity for AI workflows.

Managed services and software to accelerate agent-based AI deployment

The Dell AI Factory with NVIDIA now includes managed services that cover the entire stack: from infrastructure to NVIDIA AI Enterprise software. The service manages monitoring, updates, patches, and 24/7 support, facilitating access to advanced AI for companies with fewer technical resources.

Regarding software, users can deploy tools such as NVIDIA NIM, NeMo microservices, NeMo Retriever for RAG, and Llama Nemotron models directly from Dell. Support for Red Hat OpenShift is also included, offering a secure and flexible hybrid environment.
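In practice, NIM microservices generally expose an OpenAI-compatible HTTP API once deployed, which is what makes them straightforward to wire into existing applications. A minimal sketch follows, assuming a NIM container is already running locally; the host, port, and model identifier are assumptions for illustration, not values confirmed in the announcement.

```python
# Minimal sketch of querying a deployed NIM microservice over its
# OpenAI-compatible chat completions endpoint. Host, port, and model name
# are placeholders assumed for illustration.
import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local deployment

payload = {
    "model": "meta/llama-3.1-8b-instruct",   # hypothetical model identifier
    "messages": [
        {"role": "user", "content": "Summarize our Q2 support tickets."}
    ],
    "max_tokens": 256,
}

resp = requests.post(NIM_URL, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```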

Dell and NVIDIA strengthen their joint leadership in enterprise AI

According to Jensen Huang, CEO of NVIDIA, “AI factories are the infrastructure of the new industry, generating intelligence for sectors like healthcare, finance, and manufacturing. Together with Dell, we offer the widest range of Blackwell systems for enterprises, clouds, and edge environments.”

The Dell AI Factory with NVIDIA is currently the only comprehensive enterprise solution for AI covering everything from computing and storage to networking and software, validated by NVIDIA under its Enterprise AI Factory architecture.
