NVIDIA Grace CPU C1 Drives Efficient AI Deployment in Edge, Telecommunications, and Storage

Taipei. At the COMPUTEX 2025 trade show this week, NVIDIA confirmed growing support for its Grace C1 CPU, particularly in sectors such as edge computing, telecommunications, and storage. The announcement reinforces the company's commitment to a high-performance, energy-efficient architecture tailored to the current challenges of artificial intelligence.

The Grace CPU line, which includes configurations such as the Grace Hopper Superchip and the flagship Grace Blackwell platform, delivers substantial gains in efficiency and computing power. These advances are particularly relevant for demanding tasks such as training large language models and running advanced physical simulations.

A design focused on energy efficiency

The single-processor Grace CPU C1 model is attracting particular interest for its energy efficiency, delivering twice that of traditional CPUs in power-constrained distributed environments. This makes it well suited to deployments in remote locations, edge nodes, telecommunications data centers, and high-performance storage systems.

Manufacturers integrating this technology into their platforms include Foxconn, Jabil, Lanner, MiTAC Computing, Supermicro, and Quanta Cloud Technology, demonstrating rapid adoption of the chip across the professional hardware ecosystem.

AI for telecommunications, energy, and beyond

In the telecommunications sector, NVIDIA highlights the use of the Grace CPU C1 in its Compact Aerial RAN Computer, a system that also incorporates an NVIDIA L4 GPU and a ConnectX-7 SmartNIC. The combination is designed to bring distributed AI to radio access networks (AI-RAN), meeting the strict power, performance, and size requirements of base station installations.

The Grace CPU has also been adopted by leading companies in the energy and technology sectors. ExxonMobil is using the Grace Hopper platform for seismic imaging, accelerating the processing of large volumes of geological data. Meta, meanwhile, employs the same chip to improve the efficiency of ad filtering and delivery, leveraging its high-speed NVLink-C2C interconnect between CPU and GPU.

Additionally, high-performance computing centers like the Texas Advanced Computing Center and the National Center for High-Performance Computing in Taiwan (NCHC) are already using the Grace CPU for AI research and scientific simulation.

Outlook and next steps

The Grace architecture represents a solid step toward more sustainable, efficient, and scalable AI systems. With configurations that adapt to both extreme performance and energy-constrained environments, NVIDIA aims to solidify its presence from data centers to the edge of the network.

The deployment of Grace C1-based solutions will continue to expand in 2025, alongside new AI applications and advancements that will be showcased at NVIDIA GTC Taipei, from May 21 to 22, as part of COMPUTEX.

Source: NVIDIA