ASUS took advantage of GTC 2026 to showcase one of its most ambitious bets in AI infrastructure: a full range of systems spanning from rack-scale AI factories to desktop, edge, and enterprise solutions, all orchestrated around the NVIDIA Vera Rubin platform. The Taiwanese company’s message is clear: the next wave of AI won’t be fought solely in large data centers, but through a combination of high performance, advanced cooling, deployment flexibility, and local data control.
The most visible innovation is its fully liquid-cooled infrastructure, with which ASUS aims to address an increasingly evident reality: new AI clusters consume so much energy and produce so much heat that traditional air cooling is becoming insufficient in many scenarios. The company claims the approach is designed to reduce both PUE and TCO. However, as with all such announcements, it remains to be seen how these promises translate into real, comparable deployments across manufacturers.
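For readers less familiar with the metric, PUE (Power Usage Effectiveness) is the ratio of total facility energy to the energy that actually reaches the IT equipment, so values closer to 1.0 mean less overhead lost to cooling and power delivery. A quick worked example (the figures here are illustrative, not ASUS data):

\[
\text{PUE} = \frac{E_{\text{facility}}}{E_{\text{IT}}}, \qquad \text{e.g. } \frac{1.3\ \text{MW}}{1.0\ \text{MW}} = 1.3
\]

Direct liquid cooling attacks the numerator: if cooling overhead drops from 0.3 MW to 0.1 MW for the same IT load, PUE falls from 1.3 to 1.1.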
An AI Factory Designed for Massive Loads
The centerpiece of the announcement is the ASUS AI POD based on NVIDIA Vera Rubin NVL72, specifically the XA VR721-E3 system, which ASUS describes as a 100% liquid-cooled platform for large-scale AI workloads. According to the company, the system can operate at a TDP of up to 227 kW in MaxP mode or 187 kW in MaxQ, targeting billion-parameter models and environments where compute density is a critical factor. ASUS also claims the platform can deliver up to 10 times more performance per watt, a figure best read as a manufacturer’s statement tied to the generational leap of the Rubin platform.
To support this approach, ASUS has confirmed partnerships with companies like Vertiv and Schneider Electric, aiming to provide a comprehensive layer of power and cooling solutions for different deployment types. This is no minor detail. In large-scale AI, the server is no longer just a box with GPUs but part of a larger system where power, cooling, redundancy, and data center design are almost as important as raw computational power.
Alongside the AI POD, ASUS has also introduced new servers based on NVIDIA HGX Rubin NVL8. The strategy here is more flexible: on one side, the XA NR1I-E12L is a hybrid option combining direct-to-chip liquid cooling for the HGX Rubin NVL8 baseboard with air cooling for two Intel Xeon 6 processors; on the other, the XA NR1I-E12LR is a fully liquid-cooled system. The goal is to ease the transition for clients not yet ready to move the entire rack to liquid cooling.
From Data Center to Desktop and Edge
ASUS isn’t limiting its vision to supercomputing. Another key focus is bringing advanced AI capabilities to developers’ desktops and the industrial edge. In this arena, it showcased the ASUS ExpertCenter Pro ET900N G3, a desktop system built on NVIDIA Grace Blackwell Ultra, and the ASUS Ascent GX10, a smaller form factor likewise aimed at local development and experimentation. Both aim to provide enough memory and performance to handle large models and autonomous agents without relying solely on cloud infrastructure.
These devices integrate with NVIDIA NemoClaw, the open-source stack NVIDIA introduced this week to facilitate always-on assistants based on OpenClaw. NemoClaw installs OpenShell, an open runtime that adds isolation, permission controls, and privacy safeguards so autonomous agents can run more securely. This layer is especially relevant because running agents is no longer just about intelligence: it means controlling access to tools, data, and execution environments without introducing unnecessary risks in corporate and government settings.
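The announcement doesn’t detail OpenShell’s actual configuration surface, but the idea of a permission layer between an agent and its tools can be sketched generically. The snippet below is purely illustrative: every class, policy, and tool name is invented for the example, and none of it is the OpenShell API.

```python
# Hypothetical sketch of deny-by-default tool gating, in the spirit of
# what a runtime layer like OpenShell is described as providing. All
# names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ToolPolicy:
    allowed_tools: set[str] = field(default_factory=set)  # tools the agent may call
    readable_paths: tuple[str, ...] = ()                   # filesystem scope

class SandboxedAgent:
    def __init__(self, policy: ToolPolicy):
        self.policy = policy

    def call_tool(self, name: str, **kwargs):
        # Deny by default: anything outside the allowlist is refused.
        if name not in self.policy.allowed_tools:
            raise PermissionError(f"tool '{name}' not permitted by policy")
        if name == "read_file":
            path = kwargs["path"]
            # An empty readable_paths tuple means no file access at all.
            if not path.startswith(self.policy.readable_paths):
                raise PermissionError(f"path '{path}' outside readable scope")
        ...  # dispatch to the real tool implementation

# An always-on assistant would then run under a minimal policy:
policy = ToolPolicy(allowed_tools={"read_file"}, readable_paths=("/srv/docs/",))
agent = SandboxedAgent(policy)
```

The relevant design point is that the policy lives outside the model: even a fully compromised prompt can only reach the tools and paths the runtime explicitly grants.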
At the edge, ASUS has introduced the PE3000N, a rugged inference engine powered by NVIDIA Jetson Thor, which, according to the company, reaches 2,070 TFLOPS and is designed for sensor fusion, autonomous navigation, and physical AI workloads. The goal is clear: build a complete chain from model training and tuning to deployment at the edge, where latency, physical robustness, and autonomy are more critical than marketing gloss.
ASUS Aims to Offer a Full Stack, Not Just Hardware
The third layer of the announcement centers on software and enterprise positioning. ASUS has strengthened its message around ASUS AI Hub, an on-premises platform it first introduced in late 2025 as a turnkey solution for deploying enterprise assistants, RAG workflows, and document intelligence with local data control. This week, the company reports that the platform has already been used internally by over 10,000 employees, handling peaks of more than 600 requests per hour, achieving over 80% OCR accuracy, and improving efficiency by more than 30%. These are ASUS’s internal metrics: useful for understanding the use cases, but not equivalent to independent market validation.
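ASUS hasn’t published AI Hub’s API, but the on-premises RAG workflow it describes follows a well-known pattern: embed documents locally, retrieve the most similar ones for a query, and feed them to a locally served model. A minimal sketch, with every model and function name a generic stand-in rather than anything from AI Hub:

```python
# Minimal illustrative RAG flow. The embed() placeholder stands in for a
# locally hosted embedding model, which is what keeps documents on-prem.
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    # Placeholder embeddings; a real deployment would call a local model.
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 384))

documents = ["Q3 revenue grew 12%...", "Onboarding checklist..."]
doc_vecs = embed(documents)

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed([query])[0]
    # Cosine similarity against the local index; top-k documents win.
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(sims)[::-1][:k]]

context = retrieve("How did revenue do last quarter?")
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
# The prompt then goes to a locally served LLM rather than a cloud API.
```

The data-sovereignty argument rests on the fact that both the embedding step and the generation step run inside the organization’s own infrastructure.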
Additionally, ASUS has partnered with NVIDIA-certified storage providers like IBM, DDN, WEKA, and VAST Data to complete its offerings for AI and HPC environments where storage becomes a bottleneck. ASUS discusses block, object, JBOD, and software-defined systems, emphasizing their aim to compete not just as a server manufacturer but as an integrator of a complete AI architecture.
Ultimately, ASUS is pursuing the same direction as other major providers featured at GTC 2026: selling AI factories as an integrated concept. The difference is its effort to cover nearly every tier, from rack-scale Rubin systems to desktop solutions with Grace Blackwell and local agents with OpenShell and NemoClaw. This approach could be attractive for companies seeking a single technological reference from lab development all the way to production, but it also requires ASUS to prove in practice that such a comprehensive catalog can translate into homogeneous deployments, robust support, and real integration across distinct components.
What this announcement makes clear is that enterprise AI in 2026 is no longer sold as just a server with accelerators. It’s positioned as an ecosystem where power, cooling, software, storage, data sovereignty, and agent security are part of the same package. And ASUS wants to play a role in that conversation.
Frequently Asked Questions
What has ASUS presented at GTC 2026 for AI?
A new range of AI infrastructure based on NVIDIA Vera Rubin, including liquid-cooled rack-scale systems, hybrid and fully liquid servers, desktop systems for local AI development, and edge and enterprise solutions.
What is ASUS AI POD with NVIDIA Vera Rubin NVL72?
It’s ASUS’s main solution for large-scale AI factories. Based on the NVIDIA Vera Rubin NVL72 platform, it’s designed for massive training and inference workloads with full liquid cooling.
What are NVIDIA NemoClaw and OpenShell used for in ASUS equipment?
They enable safer local execution of autonomous agents. NemoClaw installs OpenShell, adding isolation, permission controls, and privacy safeguards for always-on assistants and other agents based on OpenClaw.
Is ASUS AI Hub intended for companies that prefer not to use the cloud?
Yes. ASUS positions it as an on-premises solution for enterprise assistants, RAG workflows, and document AI, providing greater data sovereignty and local deployment within organizations.
via: press.asus

