For years, “cooling” in a data center was treated as an operational matter: preventing alarms from triggering, keeping temperatures within range, and making sure PUE didn’t slip too far. That approach is now outdated. The combination of cloud computing, the explosion of Artificial Intelligence, and rising compute density is turning cooling into a strategic decision that directly impacts three fronts: real growth capacity, energy costs, and environmental sustainability.
The reason is simple and unromantic: each new generation of servers generates more heat. It’s no longer just a matter of pushing cold air down one aisle and exhausting hot air out of another. Rack power is rising, and at a pace that complicates life for new projects and, especially, for existing data centers. The Uptime Institute’s 2024 global survey describes a sustained trend toward higher-powered racks and notes that almost one-third of operators report rapid growth in power per rack, with colocation showing the strongest rebound.
A market that’s booming because heat has become a business problem
When the problem is structural, money follows. The data center cooling market reflects this: several market research firms agree that the sector is in a phase of accelerated growth, with projections to double (or more) by 2032.
- Fortune Business Insights estimates the “data center cooling” market will grow from $17.1 billion (2024) to $42.5 billion (2032), more than doubling in eight years.
- MarketsandMarkets offers a more conservative but still upward scenario: from $15.1 billion (2024) to $24.2 billion (2032).
- Global Market Insights projects an even sharper rise over the next decade: from $20.8 billion (2025) to $49.9 billion (2034), a compound annual growth rate of 10.2%. At that pace, the market would already be around $41.0 billion by 2032, in the “more than doubling” zone (a quick compound-growth check follows this list).
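As a sanity check on those trajectories, the compound-growth arithmetic is straightforward. The sketch below only reuses the figures cited above (Global Market Insights’ $20.8 billion base for 2025 and the 10.2% CAGR); it is illustrative arithmetic, not additional market data.

```python
# Back-of-the-envelope check of the Global Market Insights trajectory cited above:
# a $20.8B market in 2025 compounding at 10.2% per year. The figures come from the
# article; the code is plain compound-growth arithmetic, not extra market data.

def project(base_billions: float, cagr: float, years: int) -> float:
    """Market size after `years` of compound growth at rate `cagr`."""
    return base_billions * (1 + cagr) ** years

print(f"2032: ${project(20.8, 0.102, 7):.1f}B")  # ~ $41.1B, i.e. roughly $41B by 2032
print(f"2034: ${project(20.8, 0.102, 9):.1f}B")  # ~ $49.9B, matching the cited 2034 figure
```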
These aren’t hype-driven figures: they reflect that heat has shifted from a technical variable to a capacity constraint.
Paradigm shift: from room-level air to row-, rack-, and liquid-based solutions
Technological evolution is pushing cooling in two clear directions:
1) “Row / Rack-Based” Cooling (In-Row, In-Rack): When cooling near the heat source stops being optional
In high-density deployments, cooling an entire room has a practical limit: air is a poor heat-transfer medium once power per rack rises, and the tolerance for thermal peaks drops. That’s why approaches that bring cooling closer to the heat source are growing: per row or directly per rack.
In reality, this isn’t just about efficiency; it’s about operational capacity. Especially in existing data centers, redesigning aisles, plenums, electrical distribution, and containment can be more expensive, or simply unfeasible, compared with placing cooling directly in the critical zones.
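To make the “air is a poor heat-transfer medium” point concrete, here is a minimal sketch of the sensible-heat relation (power = density × airflow × specific heat × temperature rise) for air at roughly standard conditions. The rack powers and the 10 °C temperature rise are assumed, illustrative values, not figures from the article.

```python
# Illustrative sketch: airflow needed to carry away a rack's heat for a given air
# temperature rise, using P = rho * V * cp * dT. The rack powers and the 10 C
# delta-T below are assumed examples, not data from the article.
AIR_DENSITY = 1.2   # kg/m^3, air near sea level at ~20 C
AIR_CP = 1005.0     # J/(kg*K), specific heat of air

def airflow_m3_per_s(rack_kw: float, delta_t_c: float = 10.0) -> float:
    """Volumetric airflow (m^3/s) needed to remove `rack_kw` at a `delta_t_c` rise."""
    return (rack_kw * 1000.0) / (AIR_DENSITY * AIR_CP * delta_t_c)

for kw in (10, 40, 80):  # legacy rack, dense rack, AI/HPC-class rack (assumed values)
    flow = airflow_m3_per_s(kw)
    print(f"{kw:>3} kW rack -> {flow:4.1f} m^3/s (~{flow * 2119:,.0f} CFM)")

# Airflow scales linearly with power, which is why room-level air distribution runs
# out of headroom as rack density climbs. Water, by contrast, carries roughly 3,500x
# more heat per unit volume per degree, which is the usual argument for liquid cooling.
```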
2) Liquid Cooling: The answer when air is no longer sufficient
Liquid cooling (direct-to-chip, rear-door heat exchangers, and even immersion cooling in some scenarios) is becoming central to projects focused on AI, HPC, and other intensive workloads. And not on a whim: as density increases, removing heat with air gets harder, driving up energy consumption, noise, airflow complexity, and operational risk.
Here, many teams are starting to recognize that energy efficiency is no longer just an improvement; it is a financial and regulatory requirement. Cooling directly drives costs and also shapes sustainability goals and permitting.
The big bottleneck isn’t just technical: energy, water, and timelines
The cooling debate can no longer ignore three increasingly important factors:
- Energy: Having floor space and fiber isn’t enough; you also need available power, and in many markets that is the scarce resource. In Europe, scrutiny of the electrical impact of data centers has intensified, with countries debating limits, planning, and system resilience as the sector grows.
- Water: In some designs, thermal efficiency has historically relied on evaporative solutions or indirect water consumption. The problem is that water is becoming a social and political issue in many regions, which makes it a risk and reputation factor as well as a cost.
- Costs and retrofits: The hardest scenario is often the existing data center. Changing the thermal architecture of a live facility involves maintenance windows, complex engineering, construction, and sometimes accepting that the limit isn’t cooling alone but how power is supplied and distributed within the building. Uptime Institute also warns against assuming continuous, drastic improvements in PUE, pointing to factors, such as climatic conditions and the economic incentive to extend the lifespan of existing facilities, that could hold those gains back.
In “non-traditional” environments, the challenge is threefold: cool, power, and scale
The expansion of edge nodes and infrastructure near end-users adds another layer: operating data centers in locations not originally designed for them. The problem is not just “how do I cool,” but also:
- how to guarantee stable and sufficient power,
- how to maintain the system with limited resources,
- how to scale without turning each expansion into civil works,
- and how to do it all at a total cost that makes sense over 5–10 years.
In this context, modular solutions, efficient containment, localized cooling, and integration with energy storage are increasingly critical, since cooling ceases to be a “subsystem” and begins to define project viability.
Quick list: 7 questions to ask before designing (or expanding) a data center today
- What rack density is expected today, and what’s the likely scenario in 24 months?
- Is the design optimized for AI/HPC or for traditional loads?
- Which part of the growth will require In-Row / In-Rack cooling without exception?
- When does liquid cooling (and which specific technology) enter the roadmap?
- What is the actual limit: cooling, electrical distribution, or available power?
- What strategies exist to minimize impact on water and meet environmental requirements?
- How is cooling cost measured: per kW, per rack, per service, or per customer?
Frequently Asked Questions
What is In-Row cooling, and why is it used in high-density data centers?
It’s an approach that places cooling units close to the rack rows to remove heat more efficiently and precisely. It’s used when density increases and room-level cooling becomes insufficient or too costly to operate.
When does it make sense to move from air to liquid cooling in a data center?
It’s generally considered when rack densities rise steadily (AI, HPC, accelerators) and air cooling starts demanding more energy, airflow complexity, and thermal safety margin to stay stable.
Why does cooling impact a data center’s energy bill so much?
Because heat is the “tax” of computing: more compute means more heat, and removing that heat consumes additional energy. Small inefficiencies, at scale, turn into significant recurring costs.
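A minimal, illustrative calculation makes the scale effect visible. It uses PUE as the ratio of total facility energy to IT energy; the 1 MW IT load, the PUE values, and the $0.10/kWh price are hypothetical assumptions, not figures from the article.

```python
# Illustrative sketch: how a modest PUE difference becomes a recurring cost at scale.
# PUE = total facility energy / IT energy; everything above 1.0 is overhead, much of
# it cooling. The load, PUE values, and energy price below are assumed examples.
HOURS_PER_YEAR = 8760

def annual_energy_cost(it_load_kw: float, pue: float, usd_per_kwh: float) -> float:
    """Yearly facility energy cost implied by an IT load and its PUE."""
    return it_load_kw * pue * HOURS_PER_YEAR * usd_per_kwh

baseline = annual_energy_cost(1000, 1.5, 0.10)   # ~$1.31M/year at PUE 1.5
improved = annual_energy_cost(1000, 1.3, 0.10)   # ~$1.14M/year at PUE 1.3
print(f"Savings from PUE 1.5 -> 1.3 on a 1 MW IT load: ${baseline - improved:,.0f}/year")
# Prints roughly $175,200/year, for a single megawatt of IT load.
```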
What is the biggest challenge when upgrading cooling in existing data centers?
The combination of construction and operation: changing the thermal architecture without downtime, within the building’s physical constraints, and while managing potential bottlenecks in electrical distribution and expansion capacity.

