2025, the Year Liquid Cooling Stopped Being Optional in Data Centers

For years, the debate over data center cooling was resolved with a simple mantra: if the air can handle it, don’t touch anything. But 2025 has changed that script. The expansion of Artificial Intelligence—with increasingly dense GPUs and accelerators—has pushed many operators into an uncomfortable spot: heat is no longer just an isolated technical issue but a factor that influences capacity, delivery timelines, and ultimately, profitability.

This trend can be explained by a combination of pressures. On one side, the electrical demand of data centers continues to grow, and European regulators have started demanding more transparency, requiring facilities above certain power thresholds to report energy indicators and practices. On the other, the race to deploy AI infrastructure keeps raising thermal density within racks, pushing it beyond what air cooling can reliably support.

Consolidation: When the Market Chooses to Buy Know-How Instead of Inventing It from Scratch

In this new context, 2025 has also been a year of acquisitions. Liquid cooling has shifted from being a “complement” to a strategic focus, and this has been reflected in corporate activity. One of the most notable moves was Eaton’s agreement to acquire Boyd Thermal for $9.5 billion, explicitly aiming to strengthen its liquid cooling portfolio for data centers. Other deals in the sector point to the same idea: whoever controls the power and cooling chains will have an advantage in deploying AI at scale.

The logic is straightforward: the market is filling up with similar solutions, many originating as niche products, and consolidation allows major players to integrate product lines, global support, manufacturing, and delivery capacity. In an environment where customers demand deadlines and guarantees, scale becomes critical again.

From Kilowatts to Megawatts: The Era of “Real” Cooling Distribution Units (CDUs)

The most visible sign of this technical maturity is the leap in capacity of CDUs (cooling distribution units), which distribute coolant within liquid cooling architectures. In 2025, CDUs engineered for 2 MW are becoming common, and the bar has been raised further: Schneider Electric and Motivair announced CDUs capable of managing 2.5 MW, while Flex and JetCool demonstrated modular approaches that can be combined to reach 1.8 MW. Operators like Switch are also backing hybrid (air + liquid) systems of comparable capacity, now in production.

This shift is not cosmetic. Sizing CDUs in megawatts acknowledges that the bottleneck is no longer just how many servers fit in a room, but how much thermal power can be extracted safely, reliably, and in a way operations teams can manage. In other words, the market is designing cooling for AI racks and pods that until recently seemed like science fiction.

Open Standards: Google Sets the Bar and the Sector Follows

The other major sign of “industrialization” comes with standards. In 2025, Google pushed the concept of open specifications by releasing details of its internal CDU design, Project Deschutes, to the Open Compute Project (OCP). This gesture is significant because it reduces friction: if the sector converges on reference architectures, manufacturers, integrators, and operators can deploy faster and with less risk.

Google didn’t just share plans: at the OCP Europe Summit in Dublin, the company detailed its liquid cooling deployments, indicating that about half of its global footprint already has liquid cooling enabled or deployed, with nearly 1 GW of capacity across 2,000 pods based on its TPUs, aiming for 99.999% availability.

Immersion, Microchannels, and “Ice Batteries”: Innovation Across the Spectrum

While direct-to-chip cooling (liquid on cold plates) has become the most widely adopted path, immersion cooling continues to grow as an alternative. In 2025, Submer signed a memorandum of understanding with the government of Madhya Pradesh to develop up to 1 GW of liquid-cooled AI data centers, and other vendors introduced immersion pods and announced new platform development agreements.

Similarly, more radical approaches have emerged. Microsoft and the Swiss startup Corintis have developed microfluidic cooling “inside the chip,” circulating coolant through microscopic channels etched into the silicon. According to reports, this method can improve heat extraction compared to conventional cold plates, and Corintis has secured funding to accelerate industrial deployment.

Energy efficiency has also inspired less visible but potentially decisive solutions, from “ice batteries” (thermal storage) that shift cooling loads to off-peak hours (providers claim they can make up to 40% of peak load flexible) to technologies that reuse waste heat.
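As a rough illustration of how an “ice battery” shifts load, the sketch below sizes the ice needed to absorb an hour of cooling demand using the latent heat of fusion; the 500 kW load and the other figures are hypothetical assumptions, not values from any of the projects mentioned above.

```python
# Rough illustration only: how much ice "stores" one hour of cooling load.
# The 500 kW load is a hypothetical assumption, not a figure from the article.

COOLING_LOAD_W = 500_000      # assumed cooling load to shift off-peak, in watts
SHIFT_HOURS = 1               # hours of load absorbed by melting the ice
LATENT_HEAT_ICE = 334_000     # latent heat of fusion of water ice, J/kg
ICE_DENSITY = 917             # kg/m^3

energy_j = COOLING_LOAD_W * SHIFT_HOURS * 3600   # thermal energy to absorb, in joules
ice_mass_kg = energy_j / LATENT_HEAT_ICE
ice_volume_m3 = ice_mass_kg / ICE_DENSITY

print(f"Ice needed: {ice_mass_kg / 1000:.1f} t (~{ice_volume_m3:.1f} m^3)")
# ~5.4 tonnes (~5.9 m^3) per hour of a 500 kW load: freeze it off-peak, melt it at peak.
```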

Waste Heat: From Marketing Promise to Social and Regulatory Demand

The reuse of waste heat is becoming an uncomfortable issue for those who haven’t addressed it. Local communities and authorities are pushing to prevent excess heat from being wasted, and the EU has introduced reporting schemes and metrics that, in practice, professionalize the conversation: measure, compare, justify, and, when feasible, reuse. Projects announced in 2025 include district heating pilots in Germany, agreements to heat industrial facilities, and plans to use supercomputer heat to warm dozens of buildings.

In this landscape, cooling stops being just a line item in the facilities budget and becomes a key element of the digital business: it determines how much AI can be deployed, how fast, at what energy cost, and with what level of social and political acceptance.


Frequently Asked Questions

What is a CDU, and why are 2 MW CDUs being discussed in AI data centers?
A CDU (Cooling Distribution Unit) distributes coolant to racks or pods. The reference to 2 MW reflects the jump in thermal density: large-scale AI workloads need infrastructure that can manage and control very large cooling flows.
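To give a sense of scale, here is a minimal back-of-the-envelope sketch based on the sensible-heat relation Q = ṁ · c_p · ΔT; the water coolant and the 10 K temperature rise are assumptions chosen for illustration, not specifications of any particular CDU.

```python
# Illustrative sizing only: coolant flow needed to carry away a 2 MW heat load.
# The coolant (water) and the 10 K temperature rise are assumed values.

HEAT_LOAD_W = 2_000_000    # 2 MW thermal load handled by the CDU
CP_WATER = 4186            # specific heat of water, J/(kg*K)
DELTA_T_K = 10             # assumed coolant temperature rise across the loop
DENSITY_WATER = 1000       # kg/m^3, roughly 1 kg per litre

# Q = m_dot * c_p * delta_T  ->  m_dot = Q / (c_p * delta_T)
mass_flow_kg_s = HEAT_LOAD_W / (CP_WATER * DELTA_T_K)
volume_flow_l_min = mass_flow_kg_s / DENSITY_WATER * 1000 * 60

print(f"Mass flow:   {mass_flow_kg_s:.1f} kg/s")
print(f"Volume flow: {volume_flow_l_min:.0f} L/min")
# ~48 kg/s, i.e. roughly 2,900 litres of water per minute for 2 MW at a 10 K rise.
```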

When does air cooling cease to be sufficient in a data center?
When rack density and accelerator concentration make it impossible for air to remove heat reliably and efficiently. In such scenarios, liquid cooling offers better heat transfer and allows power to scale without increasing operational risk.
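The difference becomes concrete with a quick comparison of the flows involved; all figures below (an 80 kW rack, the temperature rises) are assumptions chosen for illustration, not data from the article.

```python
# Illustrative comparison only: volumetric flow of air vs. water needed to remove
# the same heat from one rack. The 80 kW load and temperature rises are assumptions.

RACK_LOAD_W = 80_000   # assumed high-density AI rack

# Air: c_p ~1005 J/(kg*K), density ~1.2 kg/m^3, assumed 15 K temperature rise
air_mass_flow_kg_s = RACK_LOAD_W / (1005 * 15)
air_volume_flow_m3_s = air_mass_flow_kg_s / 1.2

# Water: c_p ~4186 J/(kg*K), density ~1000 kg/m^3, assumed 10 K temperature rise
water_mass_flow_kg_s = RACK_LOAD_W / (4186 * 10)
water_volume_flow_l_min = water_mass_flow_kg_s / 1000 * 1000 * 60

print(f"Air:   {air_volume_flow_m3_s:.1f} m^3/s (~{air_volume_flow_m3_s * 2119:.0f} CFM)")
print(f"Water: {water_volume_flow_l_min:.0f} L/min")
# Air needs several cubic metres per second through a single rack; water does the
# same job with a flow on the order of a hundred litres per minute.
```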

What are the practical differences between direct-to-chip liquid cooling and immersion?
Direct-to-chip cools specific components via cold plates and circulating liquid; immersion submerges servers in a dielectric fluid. The former is easier to integrate with standard designs; the latter can offer advantages at extreme densities but requires operational changes and supply chain adjustments.

Why is waste heat reuse gaining importance in Europe?
Because social and regulatory pressure to justify the energy footprint of data centers is increasing. Reusing heat (for example, in district heating networks or industrial processes) improves overall efficiency and can ease licensing and local acceptance.
