The race for Artificial Intelligence is no longer just measured in teraflops. By 2026, the most common bottleneck in high-performance data centers will be heat. As racks fill with increasingly dense CPUs and GPUs, air cooling starts to fall short, and the discussion shifts toward thermal efficiency, power consumption, and total operating costs. In this context, ASUS has introduced its Optimized Liquid-Cooling Solutions alongside a strategic partner framework aimed at standardizing liquid cooling deployment for next-generation AI and HPC infrastructures.
The proposal rests on a fundamental idea: data centers designed for rack-scale AI platforms, explicitly including environments built on future NVIDIA Vera Rubin NVL72 systems, will require thermal solutions that are both more aggressive and easier to industrialize. Cooling alone isn't enough; it must be repeatable, scalable, and supported globally, because the challenge is no longer a lab problem but one of mass deployment.
A “full-stack” catalog: direct-to-chip, in-row CDUs, and hybrids
ASUS structures its offering as a comprehensive portfolio with three main approaches:
- Direct-to-chip (D2C): direct liquid cooling on components, designed to extract heat precisely where it’s generated, reducing reliance on large air flows.
- In-row CDU: coolant distribution units placed “in row” (beside the racks) to manage heat exchange and enable dense deployments without turning each installation into a custom project.
- Hybrid designs: combinations of air and liquid cooling for cases where a full transition to liquid isn’t immediate or where architecture favors a mixed approach.
In its communications, ASUS emphasizes that this approach aims to address three critical pressures: density, power, and energy efficiency. The message is pragmatic: as rack computing density increases, the thermal margins shrink, and making mistakes in design becomes more costly.
A “validated ecosystem” for large-scale deployment
One of the most interesting parts of the announcement is ASUS’s focus on the partner framework. ASUS doesn’t just want to sell cooling components; it aims to act as an integrator capable of orchestrating end-to-end deployments with infrastructure and hardware providers.
This includes names like Schneider Electric and Vertiv as infrastructure partners, along with precision component manufacturers such as Auras Technology and Cooler Master, plus “other industry leaders.” The clear message is that liquid cooling — especially when dealing with ultra-dense racks — requires fine coordination across mechanical, hydraulic, sensor, material, and operational aspects, and this coordination is easier when there’s a known, tested integration ecosystem in place.
ASUS backs the announcement with figures meant to reinforce its technical credibility: it claims 2,156 “Number 1” records in SPEC CPU® and 248 “Number 1” results in MLPerf™. The company presents these as proof of “real-world validation” and as evidence that its expertise extends beyond chassis design to performance and computational density.
Deployment case: a liquid-cooled AI supercomputer in Taiwan
To add concrete weight to its story, ASUS points to a real example: its deployment at the National Center for High-performance Computing (NCHC), part of the National Institutes of Applied Research (NIAR) in Taiwan. In this project, ASUS describes a dual-compute architecture combining a Nano4 NVIDIA HGX H200 cluster and an NVIDIA GB200 NVL72 system, claimed to be the first liquid-cooled AI supercomputer deployment with this architecture in Taiwan.
The key metric ASUS highlights is a PUE of 1.18, a figure that signals high energy efficiency for a high-density facility. ASUS attributes it to its implementation of direct liquid cooling (DLC), designed from the ground up to balance performance with operational sustainability.
Additional documentation mentions that the Nano4 system achieved 81.55 PFLOPS and ranked #29 on the TOP500, illustrating that the focus isn’t just on cooling — it’s about competing in total performance at scale.
GTC 2026: ASUS flexes its muscle at NVIDIA’s flagship event
The announcement follows a specific schedule: NVIDIA GTC 2026, scheduled for March 16–19 in San Jose, USA. ASUS is participating as a Diamond Sponsor and will be present at booth #421 under the theme “Trusted AI, Total Flexibility”. The goal is to unveil a “new generation” liquid-cooling ecosystem in partnership with NVIDIA and other infrastructure providers.
This isn’t incidental. GTC has become the key platform where the AI ecosystem validates, compares, and shapes the immediate future of infrastructure. And with the market fixated on thermal and electrical limits, liquid cooling has shifted from a luxury to an essential element of the product offering.
The industry’s stakes: less air, more liquid, and less improvisation
Behind ASUS’s announcement is a clear trend: the move toward rack-scale AI platforms with increasingly dense accelerators is pushing the market to a model where liquid cooling is no longer “premium” but a viability condition. In this transition, value isn’t just in thermal design but in the ability to deploy it through partners, proven components, and repeatable procedures.
If the next wave of AI data centers is to operate under power, space, and sustainability constraints, the performance battle will be fought in the same arena: balancing heat, energy, and cost. ASUS aims to position itself precisely there.
Frequently Asked Questions (FAQ)
What is direct-to-chip (D2C) liquid cooling in AI racks, and when does it make sense?
It’s an approach that delivers coolant directly to CPUs and GPUs to extract heat at the source. It typically pays off in high-density racks where air alone can’t remove heat effectively without disproportionate increases in power consumption and noise.
What is an in-row CDU in rack-scale GPU data centers?
A Coolant Distribution Unit (CDU) placed “in row” helps manage coolant flow, temperature, and distribution close to the racks, enabling scalable deployments while reducing hydraulic complexity per rack.
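The control idea behind a CDU can be sketched in a few lines. The Python below is purely illustrative (the `next_flow_lpm` function, gains, and limits are made up for this sketch; real CDUs use far more sophisticated control): it adjusts pump flow proportionally to how far the measured coolant supply temperature sits from a target.

```python
# Illustrative sketch of a CDU's core control loop: raise coolant flow when
# the supply runs hotter than target, lower it when cooler, within pump limits.
# All numbers here are hypothetical, not taken from any real product.
def next_flow_lpm(current_flow: float, supply_temp: float,
                  target_temp: float = 30.0, gain: float = 5.0,
                  min_flow: float = 20.0, max_flow: float = 300.0) -> float:
    """Proportional flow step in liters/minute, clamped to pump limits."""
    adjusted = current_flow + gain * (supply_temp - target_temp)
    return max(min_flow, min(max_flow, adjusted))

print(next_flow_lpm(100.0, 34.0))  # hotter than target -> flow rises to 120.0
print(next_flow_lpm(100.0, 28.0))  # cooler than target -> flow drops to 90.0
```

Keeping this loop in one in-row unit, rather than per rack, is exactly the “reduced hydraulic complexity per rack” the answer above describes.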
What does a PUE of 1.18 mean in a liquid-cooled supercomputing facility?
PUE (Power Usage Effectiveness) compares total data center energy use to the energy consumed by the IT equipment alone. A PUE of 1.18 means auxiliary systems such as cooling and power delivery add only 18% overhead on top of the IT load, a low figure for a high-density environment.
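The metric reduces to a single ratio. A minimal Python sketch (with made-up figures, not ASUS measurements) shows how a 1.18 PUE could arise:

```python
# Illustrative PUE calculation; the kW figures below are hypothetical.
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# A 1 MW IT load plus 180 kW of cooling and power-delivery overhead:
print(round(pue(1180.0, 1000.0), 2))  # 1.18
```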
Why is liquid cooling critical for systems like NVL72 and future AI platforms?
Because they concentrate large amounts of power per rack. As accelerator density increases, efficient heat dissipation becomes a practical limit, especially when power and space are tight constraints.
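That practical limit can be made concrete with a back-of-the-envelope energy balance, Q = ṁ·c_p·ΔT. The sketch below is illustrative only: the 120 kW rack load and 10 K coolant temperature rise are assumed numbers, not NVL72 specifications.

```python
# Rough estimate of the water flow needed to carry away a rack's heat load.
# Energy balance: Q = m_dot * c_p * delta_T  =>  m_dot = Q / (c_p * delta_T)
WATER_CP = 4186.0  # specific heat of water, J/(kg·K)

def coolant_flow_kg_per_s(heat_load_w: float, delta_t_k: float) -> float:
    """Water mass flow required for a given heat load and coolant temperature rise."""
    return heat_load_w / (WATER_CP * delta_t_k)

# Hypothetical 120 kW rack with a 10 K coolant temperature rise:
flow = coolant_flow_kg_per_s(120_000, 10.0)
print(round(flow, 2))       # kg/s
print(round(flow * 60))     # ≈ liters/minute (water ≈ 1 kg/L)
```

Even this crude estimate shows why liquid wins at rack scale: water's volumetric heat capacity is roughly 3,500 times that of air, so moving the same heat with air would demand enormous airflow, fan power, and noise.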

