AMD and Celestica Accelerate Helios: Their Open Bet on AI Racks

AMD wants Helios to stop being just a roadmap promise and start taking shape as a real platform for AI data centers. On March 16 the company announced a strategic partnership with Celestica to bring this next-generation rack architecture, designed for large-scale AI deployments and based on open standards, to market. The agreement makes Celestica responsible for the R&D, design, and manufacturing of the system’s internal interconnect switches, a key component in any large-scale AI infrastructure.

At first glance, it might look like one more announcement in the flood of AI industry partnerships. But Helios points to something deeper. AMD isn’t just selling GPUs or CPUs; it is packaging a complete rack architecture for training and inference with an industrial-grade focus, ready to deploy in cloud, enterprise, and research environments. Availability to customers is targeted for late 2026, confirming the company’s intention to ride the next major wave of AI infrastructure with a more integrated and, above all, more open approach than many of its competitors.

Helios is no longer just a name on AMD’s roadmap

Helios didn’t appear out of nowhere this month. AMD has been building the narrative around this platform for over half a year. In June 2025, it introduced Helios as its next big rack-scale solution for AI, based on next-generation Instinct GPUs, EPYC “Venice” CPUs, and Pensando “Vulcano” NICs. In October, it linked Helios to Meta’s newly promoted Open Rack Wide format within the Open Compute Project, and by January 2026 it was describing Helios as the “blueprint” for yotta-scale infrastructure, with up to 3 exaflops of AI performance per rack according to its own estimates. In other words, the partnership with Celestica doesn’t launch Helios from scratch; it marks the transition from technical vision to system industrialization.

This nuance matters because the AI market no longer moves chip by chip but rack by rack. Large training workloads and distributed inference demand complete platforms where GPU, CPU, memory, network, and software work seamlessly together. AMD understands this and has long sought to sell not just accelerators but a more tightly integrated infrastructure, albeit one open in its standards. This is where Helios fits in: as a complete system combining silicon, networking, ROCm software, and physical rack design under a unified narrative.

Celestica’s role goes far beyond assembly

Within this plan, Celestica plays a less visible but arguably just as critical role as AMD’s. According to the joint announcement, the Canadian company will handle the R&D, design, and manufacturing of the scale-up switches for the Helios architecture, based on the OCP Open Rack Wide format. These switches will use advanced networking silicon to interconnect AMD Instinct MI450 Series GPUs at high speed over the Ultra Accelerator Link over Ethernet (UALoE) architecture.

This is not a minor technical detail. In the new generation of AI systems, interconnection inside the rack has become one of the major bottlenecks. Powerful accelerators alone are not enough; they must communicate with minimal latency and high bandwidth, over topologies that don’t turn the system into a maze of inefficiencies. UALink and its Ethernet-based variants aim precisely to open up an interconnect space that for years has been closed and tied to proprietary technologies. The UALink Consortium advocates this approach as a way to deliver high bandwidth and low latency within a specification governed by industry standards rather than a single vendor.
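Why bandwidth and latency dominate the design can be made concrete with a rough cost model. The sketch below estimates the time for a ring all-reduce, the collective operation typically used to synchronize gradients across GPUs; the GPU count, link speeds, payload size, and hop latency are illustrative assumptions, not Helios specifications.

```python
# Back-of-envelope: how long does one gradient all-reduce take inside a rack?
# All numbers below are illustrative assumptions, NOT Helios specifications.

def ring_allreduce_seconds(payload_bytes: float, n_gpus: int,
                           link_gbytes_per_s: float, hop_latency_s: float) -> float:
    """Classic ring all-reduce cost model: each GPU transfers
    2*(N-1)/N of the payload and incurs 2*(N-1) latency hops."""
    bw = link_gbytes_per_s * 1e9                      # bytes per second
    transfer = 2 * (n_gpus - 1) / n_gpus * payload_bytes / bw
    latency = 2 * (n_gpus - 1) * hop_latency_s
    return transfer + latency

# Assumed scenario: 72 GPUs in a rack syncing 200 GB of gradients
# (roughly a 100B-parameter model in BF16) over per-GPU scale-up links.
payload = 200e9
for gb_s in (100, 400, 800):                          # assumed link speeds
    t = ring_allreduce_seconds(payload, n_gpus=72,
                               link_gbytes_per_s=gb_s, hop_latency_s=2e-6)
    print(f"{gb_s:>4} GB/s per GPU -> {t:.2f} s per all-reduce")
```

Under these assumptions, synchronization alone takes about four seconds at 100 GB/s per GPU and drops roughly in proportion as bandwidth rises. That scaling, repeated at every training step, is why the switch fabric Celestica is building is as consequential as the accelerators it connects.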

Thus, Celestica is not just an assembler here; it’s a partner in realizing a crucial part of the rack: the internal network that connects GPUs and enables system scalability. This aligns with one of AMD’s major messages in 2025 and 2026: reducing deployment times, improving supply chain resilience, and offering an open standards-based alternative built through external manufacturing partners.

The real battleground is the infrastructure model

The announcement also clarifies AMD’s strategic focus. The company openly states that its AI strategy now revolves around delivering a complete rack architecture rather than winning accelerator-by-accelerator comparisons. In previous agreements, such as those with Oracle and HPE, AMD tied Helios to specific deployments: Oracle announced in October 2025 that its future AI superclusters would be based on Helios, with a first public deployment of 50,000 GPUs planned for Q3 2026, and HPE said in December that a version of the system with MI455X accelerators could reach up to 2.9 exaflops of FP4 performance per rack, according to AMD’s and the partners’ own figures.
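As a plausibility check on that headline figure, a short calculation shows what 2.9 FP4 exaflops per rack implies per accelerator. The 72-GPU rack configuration follows AMD’s public Helios descriptions, but treat both inputs here as assumptions for illustration.

```python
# Quick sanity check: what does 2.9 FP4 exaflops per rack imply per GPU?
# The 72-GPU count follows AMD's public Helios descriptions; treat both
# inputs as assumptions for illustration rather than confirmed specs.

RACK_EXAFLOPS_FP4 = 2.9          # HPE/AMD figure cited above
GPUS_PER_RACK = 72               # assumed Helios configuration

per_gpu_pflops = RACK_EXAFLOPS_FP4 * 1e3 / GPUS_PER_RACK
print(f"Implied per-GPU FP4 throughput: ~{per_gpu_pflops:.0f} PFLOPS")
# -> roughly 40 PFLOPS of FP4 per accelerator, the order of magnitude
#    AMD has signaled for this class of chip.
```

The point is not precision but plausibility: the rack-level claim is an aggregation of per-device peak throughput, before any networking or software efficiency losses.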

This context helps explain why Helios matters now. AMD aims to build not just a product family but a repeatable, rack-scale ecosystem: a reference design that can be deployed across public clouds, private data centers, scientific environments, or sovereign AI platforms, without relying entirely on proprietary architectures. The Open Rack Wide format, introduced by Meta at OCP 2025 as an open standard for double-width racks built around demanding power, cooling, and serviceability requirements, aligns well with this vision of interoperable, scalable infrastructure.

There’s also an underlying industrial message. AMD recognizes that competing in large-scale AI requires more than a good GPU. It takes a credible platform story, strong manufacturing partners, mature software, and a network capable of supporting ever-growing clusters. Celestica provides part of that muscle: experience in design, manufacturing, supply chain management, and data center infrastructure solutions. The partnership aims to solve a practical challenge many customers are starting to face: how to deploy complex AI systems without each project turning into a slow, bespoke, and risky integration exercise.

Nevertheless, caution is warranted. Helios remains largely a forward-looking platform. Its key pieces, from the MI450 and large-scale UALoE to the scale-up switches and the broad availability planned for late 2026, belong to an ambitious roadmap, not to a product already installed at scale in hundreds of data centers. That is precisely why the partnership with Celestica matters: it signals that AMD has moved beyond the conceptual phase and begun striking industrialization agreements with partners capable of turning its design into deployable hardware.

Frequently Asked Questions

What exactly is AMD Helios for AI?
It’s a rack-scale architecture for AI developed by AMD that integrates next-generation Instinct accelerators, EPYC “Venice” CPUs, Pensando networking, and an internal interconnect designed for large-scale training and inference deployments. AMD presents it as an open platform based on Open Compute Project standards.

What will Celestica’s role be in the Helios project?
Celestica will be responsible for the R&D, design, and manufacturing of the internal interconnect (scale-up) switches for Helios, a critical piece for high-speed GPU interconnection within the system.

When will Helios be available to customers?
AMD and Celestica have indicated that Helios will be available to customers by late 2026. Previous AMD announcements also anticipate major Helios deployments in the second half of 2026.

Why is there so much talk about open standards in AMD’s platform?
Because Helios relies on elements like OCP Open Rack Wide and UALoE, aiming to provide a more interoperable, scalable infrastructure that’s less dependent on proprietary technologies at the core of AI racks.
