Schneider Electric Raises the Bar in Liquid Cooling with a 2.5 MW Coolant Distribution Unit Designed for the AI Era

The race in artificial intelligence is no longer solely measured by chips, models, or parameter counts. By 2026, the bottleneck separating “conventional” data centers from new AI factories increasingly lies in less glamorous elements: water, pipes, pumps… and the actual capacity to extract heat constantly and predictably.

In this context, Motivair, Schneider Electric's integrated liquid cooling specialist, has introduced a new system with a very clear message for the industry: thermal infrastructure needs to scale at the same pace as GPUs. The company announced the MCDU-70, a Coolant Distribution Unit (CDU) rated at 2.5 MW, designed to cool high-density data centers and, more importantly, to grow through modular deployments to 10 MW or more in next-generation designs.

Why has a CDU become a “strategic” component?

A CDU might seem like just another element of a data center's mechanical plant, but in practice it functions as the distribution node of the liquid cooling circuit: it regulates flow rates, helps maintain stable pressures, and delivers coolant to equipment in a controlled way in environments where air alone is no longer sufficient.
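To make that "distribution node" role concrete, here is a toy sketch of the kind of control loop such a unit runs internally: holding a supply-pressure setpoint by trimming pump speed. It is purely illustrative; the setpoint, gain, and limits below are invented for the example, and real CDUs rely on vendor firmware.

```python
# Toy proportional control loop of the kind a CDU runs to hold the
# secondary-loop supply pressure by trimming pump speed.
# All values here (setpoint, gain, clamp) are invented for illustration.

SETPOINT_KPA = 250.0   # assumed target supply pressure
KP = 0.005             # assumed gain: pump-speed fraction per kPa of error

def pump_speed_correction(measured_kpa: float) -> float:
    """Bounded speed adjustment (fraction of full speed) from pressure error."""
    error = SETPOINT_KPA - measured_kpa
    return max(-0.05, min(0.05, KP * error))  # clamp each control step

# Pressure sagging below setpoint -> positive correction (speed up the pump).
print(pump_speed_correction(244.0))  # 0.005 * 6 kPa = 0.03
```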

The jump is significant. In accelerated computing deployments (HPC, large-scale training and inference, generative model clusters), the challenge isn't just the installed power but the concentrated thermal density. The more performance is packed into less space, the harder it becomes to remove heat without sacrificing availability, efficiency, or maintenance windows.
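A back-of-envelope figure shows the scale involved. Using the single-phase relation Q = ṁ·c_p·ΔT and assuming plain water with a 10 K supply/return delta (neither value comes from the announcement; real loops often use glycol mixes and different deltas), a single 2.5 MW unit implies roughly 60 kg/s of coolant:

```python
# Back-of-envelope coolant flow to remove a given heat load, from
# Q = m_dot * c_p * delta_T. Assumes single-phase water (c_p ~ 4186 J/(kg*K))
# and a 10 K loop delta-T; neither figure comes from the announcement.

def required_flow_kg_s(heat_load_w: float, delta_t_k: float,
                       cp_j_per_kg_k: float = 4186.0) -> float:
    """Mass flow (kg/s) needed to carry heat_load_w at the given delta-T."""
    return heat_load_w / (cp_j_per_kg_k * delta_t_k)

flow = required_flow_kg_s(2.5e6, delta_t_k=10.0)        # one 2.5 MW unit
print(f"{flow:.1f} kg/s  (~{flow * 3.6:.0f} m^3/h of water)")
# -> 59.7 kg/s, roughly 215 m^3/h
```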

That’s why the market is shifting toward architectures where liquid cooling isn’t an extra but an operational necessity. Enter Motivair’s proposal: a high-capacity CDU that doesn’t force a redesign every time demand rises.

2.5 MW per unit with a goal: scaling to 10 MW and beyond

The MCDU-70 is Motivair’s highest-capacity CDU to date, with a technical approach built around scalability: units designed to operate in coordination as a centralized system.

According to the company’s disclosures, integration with EcoStruxure — Schneider Electric’s management and operation software platform — allows multiple CDUs to function as a “set” capable of meeting current needs and expanding as deployments grow. The approach is focused on next-generation infrastructure, targeting AI factories and large-scale accelerated computing environments.

The company offers a concrete example: deployments targeting 10 MW as a step before scaling to gigawatt-class facilities. In that scenario, six MCDU-70 units could be configured in a 4+2 redundancy strategy (four in operation and two on standby), a common approach to improving resilience without oversizing indiscriminately.
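The arithmetic behind that configuration is simple enough to sketch. The unit size and the 4+2 split come from the announcement; the helper below is only an illustration, not a vendor sizing tool:

```python
# N+M redundancy sizing for the configuration cited in the article:
# six 2.5 MW CDUs, four active (N) and two on standby (M).

UNIT_MW = 2.5  # per-unit capacity from the announcement

def n_plus_m(active: int, standby: int, unit_mw: float = UNIT_MW):
    """Return (usable, installed) capacity in MW for an N+M layout."""
    usable = active * unit_mw                 # available under normal operation
    installed = (active + standby) * unit_mw  # total hardware deployed
    return usable, installed

usable, installed = n_plus_m(active=4, standby=2)
print(f"usable: {usable} MW, installed: {installed} MW")
# -> usable: 10.0 MW, installed: 15.0 MW
```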

What this means for operators and engineering teams

Beyond the headline, what matters most to engineering, operations, and purchasing managers is that this kind of “thermal” infrastructure is beginning to influence design decisions from the outset:

  • True modular planning: if cooling scales in blocks (2.5 MW), data center growth can better align with commercial demand and the gradual addition of equipment.
  • Fewer expansion challenges: in environments where each upgrade impacts availability, reducing complex interventions is nearly as critical as efficiency.
  • Enhanced operational control: when systems are conceived holistically, operational software shifts from mere “monitoring” to a tool for standardizing policies, alarms, and capacity management.
  • Design for next-generation GPUs: ultimately, the silicon roadmap is pushing infrastructure to change scale, and cooling needs to evolve proactively to avoid becoming the bottleneck.

In essence: the debate is no longer whether liquid cooling is coming, but how to industrialize it so that every deployment doesn’t become a bespoke, handcrafted project.

A portfolio from 105 kW to 2.5 MW

The MCDU-70 also sits at the top of a broader range. Schneider Electric emphasizes that its liquid cooling portfolio spans CDUs from 105 kW up to 2.5 MW, with an integrated approach that uses software coordination to match capacity to operator needs. The company also highlights global availability through advanced manufacturing centers in North America, Europe, and Asia.

This detail points to a key trend: liquid cooling is shifting from an exotic solution to an industrial catalog of scalable options, a necessity as the sector aims to deploy capacity at the pace announced for AI factories across multiple regions.


Frequently Asked Questions

What is a CDU (Coolant Distribution Unit) and what is its purpose in a data center?
A CDU distributes and regulates the coolant in liquid cooling systems. In high-density environments, it maintains the flow and pressure needed to remove heat steadily and reliably.

Why are 10 MW or more of cooling capacity being discussed for AI?
Because accelerated computing deployments are being designed in large power blocks. Cooling must support this scale to sustain intensive workloads without compromising availability.

What does a 4+2 redundancy mean in this type of system?
It’s a resilience approach: four units carry the operating load and two serve as standby, allowing the system to absorb failures or maintenance without downtime.

Does liquid cooling completely replace air cooling?
It depends on the design. Many data centers combine both approaches, but as density increases, liquid cooling becomes more prominent due to its more efficient heat removal in certain scenarios.

via: Schneider Electric
