Just over ten years ago, liquid cooling in data centers was seen as a technological curiosity, mostly limited to supercomputers, research labs, or niche projects. Today, however, it has become a central industry topic. What once seemed a distant future has rapidly materialized: hyperscalers, AI companies, and even traditional operators are no longer questioning if they will adopt it, but rather how and when.
How did we reach this point so quickly? The answer lies in the combined force of three unstoppable factors: exponential growth in computing demand, the energy density of new chip generations, and the push for sustainability.
The perfect storm: more power, more heat
For years, air cooling was enough. Fans, cold aisles, and hot-aisle containment could dissipate the heat generated by CPUs and servers. But the advent of generative artificial intelligence has changed everything.
New GPU accelerators like NVIDIA’s Blackwell GB200 or AMD’s MI300X draw on the order of 1,000 watts per chip.
AI data center rack configurations now require densities of 50, 75, or even 100 kW per rack, whereas just five years ago, 10–15 kW was considered high.
International Energy Agency projections suggest data centers could consume up to 12% of the total electricity in the US by 2028, largely due to AI.
Air alone simply can’t keep up anymore. The physics is relentless: water can carry roughly 3,500 times more heat per unit volume than air. In a world where high-performance computing (HPC) and AI demand dissipating hundreds of kilowatts in compact spaces, the transition to liquid cooling is no longer optional; it’s unavoidable.
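To make the physics concrete, here is a minimal back-of-the-envelope sketch in Python comparing the volume of air versus water needed to carry away the heat of a hypothetical 100 kW rack at a 10 K temperature rise. The fluid properties are rounded textbook values, and the rack load and temperature rise are illustrative assumptions, not figures from any vendor.

```python
# Back-of-the-envelope comparison of air vs. water as a heat-transport medium.
# Fluid properties are rounded textbook values near 25 °C; the 100 kW rack and
# the 10 K temperature rise are illustrative assumptions, not vendor figures.

AIR_DENSITY = 1.2        # kg/m^3
AIR_CP = 1005.0          # J/(kg*K)
WATER_DENSITY = 997.0    # kg/m^3
WATER_CP = 4186.0        # J/(kg*K)


def volumetric_flow(heat_w: float, density: float, cp: float, delta_t_k: float) -> float:
    """Volume flow (m^3/s) needed to remove `heat_w` watts at a `delta_t_k` temperature rise."""
    return heat_w / (density * cp * delta_t_k)


if __name__ == "__main__":
    rack_heat_w = 100_000.0   # hypothetical 100 kW AI rack
    delta_t = 10.0            # assumed coolant/air temperature rise in kelvin

    air_flow = volumetric_flow(rack_heat_w, AIR_DENSITY, AIR_CP, delta_t)
    water_flow = volumetric_flow(rack_heat_w, WATER_DENSITY, WATER_CP, delta_t)

    # Ratio of volumetric heat capacities: heat carried per litre per kelvin.
    ratio = (WATER_DENSITY * WATER_CP) / (AIR_DENSITY * AIR_CP)

    print(f"Air:   {air_flow:.1f} m^3/s (about {air_flow * 2119:.0f} CFM)")
    print(f"Water: {water_flow * 1000:.1f} L/s (about {water_flow * 60_000:.0f} L/min)")
    print(f"Water carries roughly {ratio:.0f}x more heat per unit volume than air")
```

With these assumptions, air would need several cubic meters per second where water needs only a couple of liters per second, which is the gap that makes 100 kW racks impractical to cool with air alone.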
From HPC to mainstream: accelerated leap
Liquid cooling isn’t new. Supercomputers like IBM Blue Gene/Q or Barcelona’s MareNostrum have used partial liquid solutions for over a decade. But these were exceptions.
The radical shift has occurred in the past three years:
AI-driven hyperdensity
Clusters with 24,000 GPUs or more, such as those operated by Meta, Microsoft, or Google, produce heat that air can’t handle alone.
Industrial standardization
The Open Compute Project (OCP), alongside manufacturers like NVIDIA, Intel, AMD, and Meta, has established rack and connector standards that facilitate large-scale deployment of liquid cooling.
Existing infrastructure adaptation
Operators such as Equinix, Digital Realty, and NTT are investing in retrofitting their data centers with liquid-to-chip or rear-door heat exchangers, avoiding total rebuilds.
Sustainability and regulatory pressure
The EU and US are beginning to require energy and water efficiency. Liquid systems, especially closed-loop ones, consume significantly less water than traditional evaporative systems.
Liquid cooling technologies
The industry isn’t moving in a single direction. Several technologies are being deployed, each with its own advantages and challenges (a rough density-based sketch follows the list):
Liquid-to-chip (L2C): The most widespread, with cold plates delivering liquid directly to CPUs and GPUs.
Rear Door Heat Exchanger (RDHx): Heat exchangers located at the back of racks, ideal for quick retrofits.
Immersion cooling: Servers fully submerged in dielectric fluids, offering maximum efficiency but greater adoption complexity.
Spray cooling and hybrid solutions: Emerging technologies seeking a balance between cost and density.
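As a rough illustration of how these options line up against rack density, the sketch below encodes the density ranges mentioned in this article as a simple selector. The thresholds are indicative assumptions drawn from the figures above, not an engineering standard; real choices also depend on facility water temperatures, available airflow, and hardware support.

```python
# Rough rule-of-thumb selector mapping rack power density to a cooling approach,
# based only on the density ranges cited in this article. The thresholds are
# indicative assumptions, not an engineering standard: real designs also depend
# on facility water temperature, available airflow, and hardware support.

def suggest_cooling(rack_kw: float) -> str:
    if rack_kw <= 15:
        return "Air cooling with hot/cold aisle containment is usually sufficient"
    if rack_kw <= 50:
        return "Enhanced air or a rear-door heat exchanger (RDHx) retrofit"
    if rack_kw <= 120:
        return "Liquid-to-chip (L2C) cold plates on CPUs and GPUs"
    return "L2C with facility water loops, or immersion cooling"


if __name__ == "__main__":
    for density in (10, 40, 75, 250):
        print(f"{density:>4} kW/rack -> {suggest_cooling(density)}")
```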
Major players align
Meta has announced that its Catalina pods built around the NVIDIA GB200 will use air-assisted liquid cooling (AALC) across its data centers.
Microsoft has been testing immersion cooling since 2021 at its Quincy, Washington site.
Equinix and Vantage Data Centers are deploying campuses designed for 250 kW racks, with liquid cooling as standard equipment.
China’s Huawei and Tencent report that over 30% of their AI workloads are now cooled with liquid.
The takeaway is clear: liquid cooling is no longer experimental; it’s standard for AI and HPC.
Beyond heat: efficiency and sustainability
A key argument in favor of liquid cooling is that it’s not only more effective at cooling but also more sustainable.
Immersion and L2C systems can cut water usage by up to 90% compared to evaporative towers.
Residual heat can be reused for urban heating or industrial processes—a practice already explored in countries like Sweden and Denmark.
Reducing cooling loads brings PUE (Power Usage Effectiveness) closer to the theoretical ideal of 1.0, with values of 1.1 or even lower within reach (a worked example follows below).
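For readers less familiar with the metric, PUE is simply total facility energy divided by IT equipment energy, so a value of 1.0 would mean zero cooling and distribution overhead. The short sketch below works the arithmetic with illustrative overhead figures; the numbers are assumptions chosen to show the direction of the effect, not measurements from any facility.

```python
# PUE (Power Usage Effectiveness) = total facility energy / IT equipment energy.
# A value of 1.0 would mean zero cooling and distribution overhead.
# The overhead figures below are illustrative assumptions, not measured data.

def pue(it_kwh: float, cooling_kwh: float, other_overhead_kwh: float) -> float:
    """Return PUE for a given IT load and its cooling/other overheads (same units)."""
    return (it_kwh + cooling_kwh + other_overhead_kwh) / it_kwh


if __name__ == "__main__":
    it_load = 1000.0  # kWh of IT load over some period (illustrative)

    air_cooled = pue(it_load, cooling_kwh=350.0, other_overhead_kwh=80.0)
    liquid_cooled = pue(it_load, cooling_kwh=60.0, other_overhead_kwh=50.0)

    print(f"Air-cooled facility:    PUE ~ {air_cooled:.2f}")     # ~1.43
    print(f"Liquid-cooled facility: PUE ~ {liquid_cooled:.2f}")  # ~1.11
```

Under these assumed overheads, shifting most of the cooling work to liquid drops the example facility from roughly 1.43 to about 1.11.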
Transition challenges
However, the shift comes with hurdles:
Deployment costs: Upgrading existing data centers requires significant investment.
Specialized maintenance: Leaks, corrosion, and fluid handling demand new skills from staff.
Hardware compatibility: Not all servers are ready for liquid cooling; the ecosystem is still maturing.
Perception of risk: Although the technology is safe, many operators still fear “water near electronics.”
What does the future hold?
All signs indicate that liquid cooling will not only dominate AI and HPC but will gradually extend into cloud and traditional enterprise environments.
Analysts forecast that by 2030, over 50% of new hyperscale data center capacity will come equipped with built-in liquid cooling.
This trend could speed up further with 2,000-watt chips in upcoming accelerators and the development of modular fusion systems and fuel cells, enabling unprecedented energy densities.
Conclusion
What began as a technological curiosity is now an industrial imperative. Liquid cooling has moved from “the future” to the necessary present for supporting the boom in AI and HPC.
The question is no longer whether adoption will increase but rather which models will prevail and who will lead this transition—will it be the usual hyperscalers, or will new efficiency-focused players emerge as key drivers in this new era?
Frequently Asked Questions (FAQ)
Why is liquid cooling more efficient than air?
Because water can carry roughly 3,500 times more heat per unit volume than air, allowing much higher loads to be dissipated in smaller spaces.
What rack densities require liquid cooling?
Starting at around 50 kW per rack, air becomes insufficient. AI environments typically run racks from 100 to 250 kW.
What is the most common liquid cooling model today?
Liquid-to-chip (L2C), with cold plates mounted directly on processors and GPUs, is the most widespread among hyperscalers.
Is liquid cooling more expensive?
The initial investment is higher, but operational costs (energy and water) fall over time and efficiency improves (lower PUE), offsetting the upfront expense.

