Dell Strengthens Its “AI Factory” for Businesses: Increased Automation, Enhanced Performance, and a Clear Shift to On-Premises

Dell Technologies leveraged the SC25 Supercomputing Conference in St. Louis to unveil a series of new developments with a very clear message for companies: if they want to take AI seriously, they can do so using their own infrastructure, without relying solely on public cloud.

The company, positioning itself as the world’s largest provider of AI infrastructure, expands its Dell AI Factory strategy with new automation tools, servers optimized for generative models, RAG-ready storage and vector search, high-capacity networking, and liquid cooling solutions for extremely dense AI workloads.

At the same time, Dell emphasizes a key data point that helps explain market direction:

  • 85% of companies plan to move their AI workloads on-premises within the next 24 months,
  • and 77% are looking for a single infrastructure provider to cover their entire AI journey.

The company’s goal is to position itself as that single partner, combining hardware, software, and services within a unified architecture.


Dell Automation Platform: reducing friction from testing to full deployment

The core announcement revolves around the expansion of the Dell Automation Platform within the Dell AI Factory. The aim is to give organizations a more automated and repeatable way to deploy AI use cases, avoiding hand-crafted projects that remain stuck in the pilot phase indefinitely.

The platform:

  • Deploys validated, optimized solutions on secure infrastructure,
  • Allows for consistent, measurable results,
  • And reduces the need to “guess” configurations or architectures.

Highlights include:

  • Integration with tools like an AI code assistant (in collaboration with Tabnine) and an AI agent platform (with Cohere North), now automated to accelerate production deployment.
  • Dell’s professional services providing turnkey pilots for interactive use cases, built on real customer data and with clear ROI metrics (KPIs) defined before large-scale investment.

The core message is that enterprise AI cannot rely solely on enthusiastic developers and isolated tests: it requires method, automation, and governance.


Data as a driver: PowerScale and ObjectScale optimized for AI and RAG

Dell emphasizes that in the race for AI, the difference will come not only from the models themselves but also from how the avalanche of data that feeds them is managed. It is therefore enhancing its AI Data Platform with improvements to its storage engines: PowerScale and ObjectScale.

Announced advancements include:

  • PowerScale as standalone software: soon available as software license on qualified PowerEdge servers, such as the PowerEdge R7725xd. This offers greater flexibility for cloud service providers and organizations wanting to combine Dell’s high-performance storage with the latest servers and networking.
  • pNFS support with Flexible File Layout on PowerScale: enabling more efficient communication between metadata servers and clients, distributing data in parallel across multiple nodes and I/O paths. The goal is to deliver greater linear scalability and performance for demanding AI workflows.
  • ObjectScale AI-Optimized Search: integrates two specialized APIs, S3 Tables and S3 Vector, designed for high-speed access to complex data directly on ObjectScale. This is targeted at use cases such as advanced analytics, inference, and retrieval augmented generation (RAG), where fast search and access to large datasets are critical.

Practically, these improvements aim to enable companies to discover, store, and query data for AI without building ad hoc architectures for each project.
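The retrieval-augmented generation (RAG) pattern these storage features target is conceptually simple: documents are stored alongside embedding vectors, and at query time the most similar vectors are fetched to ground the model's answer. The sketch below illustrates the retrieval step in generic Python with toy vectors; it is not Dell's ObjectScale API, and the document names and dimensions are invented for illustration.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec, doc_vecs, docs, k=2):
    # Rank stored documents by similarity to the query embedding and
    # return the top-k matches to feed into the model's prompt as context.
    scores = [cosine_sim(query_vec, d) for d in doc_vecs]
    ranked = sorted(zip(scores, docs), key=lambda p: p[0], reverse=True)
    return [doc for _, doc in ranked[:k]]

# Toy 3-dimensional "embeddings" standing in for a real embedding model.
docs = ["GPU cooling guide", "Storage sizing notes", "Network fabric design"]
doc_vecs = [np.array([0.9, 0.1, 0.0]),
            np.array([0.1, 0.9, 0.1]),
            np.array([0.0, 0.2, 0.9])]

query = np.array([0.85, 0.15, 0.05])  # a query about cooling
print(retrieve(query, doc_vecs, docs, k=1))  # → ['GPU cooling guide']
```

A vector-optimized store moves this similarity search server-side and indexes it, so it stays fast at billions of vectors instead of a handful.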


PowerEdge for the era of giant models

On the compute side, Dell strengthens its PowerEdge lineup to cover training and distributed inference, as well as the most demanding scenarios.

PowerEdge XE9785 and XE9785L

These servers are designed for next-generation AI and intensive HPC workloads:

  • Feature dual-socket AMD EPYC processors.
  • Include eight AMD Instinct MI355X GPUs per node, focused on generative AI and large model training.
  • Utilize AMD Pensando Pollara 400 AI NICs and the Dell PowerSwitch AI fabric, aimed at reducing traffic bottlenecks between nodes.

Both air-cooled and direct liquid-cooled versions will be available, targeting data centers seeking higher density with lower energy consumption.

PowerEdge R770AP

Meanwhile, the PowerEdge R770AP is a more general-purpose platform, but optimized for:

  • intensive parallel processing,
  • low memory latency,
  • and a high number of PCIe lanes to accelerate networks and storage.

This server is based on Intel Xeon 6 processors from the 6900 series, featuring high core counts, large caches, and support for memory expansion via CXL, a key point for applications where memory is the bottleneck.


102.4 Tb/s networks for AI farms with over 100,000 accelerators

Large-scale AI not only requires GPUs: it demands networks capable of moving huge volumes of data with low latency. Dell strengthens this aspect with new PowerSwitch Z9964F-ON and Z9964FL-ON switches, based on the Broadcom Tomahawk 6 chip.

These devices offer:

  • 102.4 terabits per second switching capacity,
  • support for large-scale AI and HPC deployments,
  • capability to connect more than 100,000 accelerators.

They integrate with:

  • Enterprise SONiC Distribution by Dell Technologies, promoting open networking,
  • and SmartFabric Manager, which automates setup, lifecycle management, and monitoring of these complex networks.

SmartFabric Manager now also aligns with the Dell AI Factory, providing automated “blueprints” to deploy AI infrastructure with less manual intervention, and scales to rack-level integration via OpenManage Enterprise, ensuring end-to-end visibility of GPU infrastructure.
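To put the 102.4 Tb/s figure in perspective, a quick back-of-the-envelope calculation shows how many front-panel ports a single switch of that capacity could serve at common Ethernet speeds (the port speeds below are standard rates used for illustration, not Dell's published configurations):

```python
# Back-of-the-envelope port math for a 102.4 Tb/s switching ASIC.
CAPACITY_TBPS = 102.4

for port_speed_gbps in (400, 800, 1600):
    ports = CAPACITY_TBPS * 1000 / port_speed_gbps
    print(f"{port_speed_gbps} GbE -> {ports:.0f} ports per switch")
# 400 GbE -> 256, 800 GbE -> 128, 1600 GbE -> 64
```

Even with that radix, connecting 100,000+ accelerators requires a multi-tier fabric of many such switches, which is where the automated blueprints in SmartFabric Manager come in.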


Unified management, liquid cooling, and racks ready for 150 kW

Dell also enhances the less visible but critical aspect of enterprise AI: physical infrastructure management and cooling.

The new features in the Integrated Rack Scalable Solutions (IRSS) program include:

  • OpenManage Enterprise (OME) for unified management from individual servers to full rack-scale environments, capable of automating up to 25,000 devices from a single console, including supervision of power, cooling, and leak detection.
  • Integrated Rack Controller (IRC), a hardware-and-software combination that provides fast, automated leak detection inside racks, integrated with OME and iDRAC to minimize risk and downtime.
  • Dell PowerCool RCDU, a rack-mounted coolant distribution unit in a compact 4U format, supporting densities of up to 150 kW per rack. Compatible with 19-inch racks (IR5000) and 21-inch racks based on OCP standards (IR7000).

Integration with Dell ProSupport aims to keep this cooling and management infrastructure operating optimally, reducing surprises in operations where every minute of downtime incurs significant costs.
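The 150 kW per rack figure implies substantial coolant flow. A rough sketch using the standard heat-transfer relation Q = ṁ · c · ΔT shows the order of magnitude; the 10 °C coolant temperature rise is an assumed, illustrative value, not a Dell specification:

```python
# Coolant flow needed to remove 150 kW of rack heat with water cooling.
heat_load_w = 150_000      # rack heat load, W (150 kW)
specific_heat = 4186       # specific heat of water, J/(kg*K)
delta_t = 10               # assumed coolant temperature rise, K

mass_flow = heat_load_w / (specific_heat * delta_t)  # kg/s
liters_per_min = mass_flow * 60                      # ~1 kg ≈ 1 L for water
print(f"{mass_flow:.2f} kg/s ≈ {liters_per_min:.0f} L/min")
```

Sustaining hundreds of liters per minute per rack, with leak detection and redundancy, is exactly the kind of operational burden the RCDU and IRC are meant to absorb.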


AI in the PC: expanded support for Ryzen AI

Beyond data centers, Dell emphasizes that AI on the workstation is also part of the picture. The company extends support for its AI PCs ecosystem to include more silicon, notably AMD Ryzen AI processors.

The goal is for developers and technical teams to create and run AI applications directly on the device, with optimized workflows and reduced dependence on the cloud for certain tasks, something increasingly relevant for cost, privacy, and latency reasons.


Market message: fewer experiments, more AI in production

With this wave of announcements, Dell aims to position itself as a comprehensive provider for companies transitioning from experimentation to production in AI. Automation, storage tailored for RAG, massive networks, optimized servers, rack-scale management, and AI-enabled PCs all form part of a unified narrative:

enterprise AI is no longer an isolated pilot but a factory of use cases that requires a solid, orchestrated technological foundation, increasingly located within the company’s own infrastructure.


Frequently Asked Questions

What exactly is the Dell AI Factory?
It is Dell’s strategy to offer a full ecosystem of infrastructure, software, and services aimed at enterprise artificial intelligence. It includes optimized PowerEdge servers for AI, storage solutions like PowerScale and ObjectScale, PowerSwitch networks, automation tools like Dell Automation Platform, and professional services to help companies design, test, and deploy AI use cases in a repeatable manner.

Why does Dell emphasize deploying AI on-premises so much?
According to the data the company has, a large majority of firms plan to move their AI workloads to their own infrastructure within the next two years. Reasons range from data sovereignty and regulatory compliance to the need to control costs, latency, and security. Dell aims to meet this demand with solutions that enable building “AI factories” within the corporate data center.

What role do new networking and cooling solutions play in this strategy?
Advanced AI models require thousands of interconnected GPUs and consume vast amounts of energy. This compels the use of networks capable of moving data at over 100 Tb/s and efficient cooling systems like liquid cooling, supporting densities of up to 150 kW per rack. Without these elements, scaling AI enterprise-wide becomes technically or financially unfeasible.

Is a complete infrastructure redesign necessary to adopt these AI solutions?
Not necessarily. Dell’s approach is based on a modular, scalable architecture. Many of the innovations, such as software like PowerScale on PowerEdge servers or new rack controllers and management tools, can be integrated gradually. The idea is that organizations can start with well-defined pilots and, if successful, scale to production reusing the same technology base and automation processes.

via: dell
