Nokia and Hypertec connect the Nibi supercomputer in Canada: a leap in networking, cooling, and capacity for research in health, climate, and artificial intelligence

The debate over technological sovereignty often focuses on chips, models, and talent. In practice, however, a country’s scientific competitiveness is also decided in less visible places: the computing centers where models are trained, physical phenomena are simulated, and biomedical data is analyzed at scale. Into this context comes the deployment of Nibi, the supercomputer housed at the University of Waterloo (Ontario, Canada). Its network infrastructure and data center design have been driven by Nokia and Hypertec Group with the aim of strengthening research capabilities in health, climate science, engineering, and Artificial Intelligence (AI).

The announcement was formalized on January 22, 2026, positioning Nibi as a project with national ambitions: the system is integrated into SHARCNET (Shared Hierarchical Academic Research Network), a Canadian high-performance computing environment serving 19 academic institutions. Due to its membership count, it is considered the largest HPC consortium in the country. The declared goal is clear: to provide computing power so that faculty, students, PhD candidates, and research staff can run demanding workloads without relying on external infrastructure, with an expected benefit to more than 4,000 researchers annually.

The Strategic Shift: Moving to Ethernet-Based Interconnection

Beyond simply adding “more power,” the significant move lies in the architecture. The launch of Nibi is accompanied by a technical decision that reflects the future direction of HPC in the AI era: the switch to an Ethernet-based interconnect. SHARCNET’s chief technology officer, John Morton, emphasizes that the leap to Ethernet, along with the system design and integration, aims to ensure scalability, reliability, and performance across a wide range of workloads: from traditional simulations to AI models that strain network and storage capacity.

This is where Nokia’s proposal fits in: the company provides its Data Center Fabric and IP network infrastructure for the cluster, a deployment that, according to Nokia, is its first in North America of this class of data center network tailored for AI-HPC workloads. Hypertec, for its part, has acted as system architect and primary integrator, focusing on data center design and advanced cooling technologies.

What’s Inside Nibi: CPUs, GPUs, Storage, and an AI-Ready Network

Nibi’s configuration isn’t defined solely by node count. Various industry sources describe a system built around Intel Xeon 6 processors and NVIDIA H100 accelerators, complemented by a storage subsystem exceeding 25 petabytes. The cluster is interconnected with 400 Gb/s Ethernet, a noteworthy detail indicating a design optimized for the low-latency, high-throughput data flows typical of AI workloads (training, large-scale inference, big data analytics), while maintaining compatibility with traditional HPC tasks.
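To put the 400 Gb/s figure in perspective, a back-of-envelope calculation shows what that link speed means for moving AI-scale datasets. The efficiency factor below is an assumption for illustration, not a measured Nibi number.

```python
# Back-of-envelope: time to move a dataset across a single
# 400 Gb/s Ethernet link. Illustrative figures only -- not
# measured Nibi performance.

LINK_GBPS = 400          # line rate in gigabits per second
EFFICIENCY = 0.9         # assumed usable fraction after protocol overhead

def transfer_seconds(dataset_tb: float) -> float:
    """Seconds to move `dataset_tb` terabytes at the assumed goodput."""
    bits = dataset_tb * 1e12 * 8               # TB -> bits (decimal units)
    goodput = LINK_GBPS * 1e9 * EFFICIENCY     # usable bits per second
    return bits / goodput

# A 1 TB training shard crosses one link in roughly 22 seconds;
# a 100 TB dataset takes about 37 minutes per link.
print(f"1 TB shard: {transfer_seconds(1.0):.1f} s")
print(f"100 TB: {transfer_seconds(100.0) / 60:.1f} min")
```

In a real fabric, many such links run in parallel between nodes and storage, which is what makes large-scale training and checkpointing practical.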

Image: Nibi supercomputer server room

The University of Waterloo had announced months earlier that the project represented a significant upgrade to its computing capacity: Nibi replaced the previous supercomputer, Graham, and is described as a system with more than 700 nodes and around 140,000 CPU cores. It also includes GPU nodes aimed at AI models (including configurations with eight H100 GPUs per node) and high-capacity flash storage to improve performance and reliability.
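The quoted numbers are internally consistent: dividing cores by nodes gives a per-node figure plausible for dual-socket Xeon 6 machines. The two-socket assumption below is illustrative, not an official hardware breakdown.

```python
# Sanity-check the publicly quoted numbers: ~140,000 CPU cores
# across ~700 nodes implies roughly 200 cores per node.
# (Illustrative arithmetic; the 2-socket layout is an assumption.)

TOTAL_CORES = 140_000
NODES = 700

cores_per_node = TOTAL_CORES / NODES
print(f"~{cores_per_node:.0f} cores per node")    # ~200

# With two sockets per node, that would mean ~100 cores per socket,
# in line with high-core-count Xeon 6 parts.
print(f"~{cores_per_node / 2:.0f} cores per socket (assuming 2 sockets)")
```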

Immersion Cooling and Heat Reuse: When “Sustainable” Design Is More Than Marketing

If there’s one element making Nibi a breakthrough for the data center industry, it’s its thermal approach. The deployment incorporates immersion cooling (with several single-phase tanks) and direct-to-chip liquid cooling for the GPU subsystems. In energy terms, this engineering aims to reduce the cooling costs associated with a machine that is, in effect, a continuous heat generator.

The narrative goes beyond savings: one of the most striking aspects of the project is the reuse of residual heat to provide climate control for the Mike and Ophelia Lazaridis Quantum-Nano Centre, a building dedicated to quantum research on campus. In other words, part of the energy normally lost as heat is recovered and repurposed as a valuable resource for the university complex.

Cultural considerations are also part of the story. Waterloo explained that the name Nibi means “water” in Anishinaabemowin (Ojibwe). The name was chosen after consultations with local Indigenous communities, linking the concept of water to the role liquid cooling plays in the system’s efficiency.

“Sovereign” Infrastructure and Momentum: Why These Pieces Matter

In its corporate release, Nokia frames Nibi as an example of Canada’s ability to “design, deploy, and operate” competitive AI research infrastructure based on domestic expertise. The message aligns with a broader trend: countries aspiring to leadership in science need local capacity not only to “do science,” but to train talent and retain projects.

Furthermore, the project reinforces Nokia’s presence in Canada. The company links this deployment to the recent expansion of its Ottawa campus (Kanata North Tech Park), a move aimed at accelerating innovation in AI networking, next-generation connectivity, and quantum technologies. Nokia also emphasizes its long-standing presence in the country since the late 1980s and a workforce of more than 2,700 employees nationwide.

What Nibi Means for HPC in 2026

The value of Nibi isn’t just in “being faster.” Rather, it represents a pattern: AI accelerators, high-performance Ethernet networking, massive storage, and advanced liquid cooling as baseline requirements. Practically, this means the research community can run more experiments, iterations, and simultaneous projects. Industrially, HPC infrastructure increasingly resembles that of large AI data centers, and vice versa.


Frequently Asked Questions

What types of research will the Nibi supercomputer support?

It is designed for intensive computing workloads in health, climate sciences, engineering, and Artificial Intelligence, including simulations, advanced analytics, and GPU-accelerated tasks within the Canadian academic ecosystem.

What does a 400 Gb/s Ethernet network bring to an AI-focused supercomputer?

It provides bandwidth and scalability to move large data volumes between nodes (CPUs/GPUs) and storage, which is critical for training and large-scale analysis. It also facilitates integration of common data center tools and operations.

Why is immersion cooling important in HPC and AI?

Because it improves thermal efficiency in high-density systems: dissipating heat more effectively than air, reducing cooling energy use, and in some designs, reclaiming residual heat for other building applications.

What is SHARCNET, and why is Nibi integrated there?

SHARCNET is a Canadian high-performance computing consortium with 19 member institutions. Incorporating Nibi into SHARCNET broadens computing resources for researchers across multiple universities, not just a single campus.

via: nokia
