AMD used its CES 2026 showcase in Las Vegas to lay out an ambitious narrative: Artificial Intelligence “everywhere and for everyone” will not be possible without a new generation of open, modular, and efficient infrastructure capable of scaling from data centers to PCs and industrial edge devices. In her opening keynote, the company’s President and CEO, Lisa Su, described a leap in scale that goes beyond product announcements: the industry is entering an era of yotta-scale computing, driven by simultaneous growth in training and inference.
The centerpiece of this message was a first look at “Helios”, a large-scale rack platform that AMD aims to offer as a “blueprint” for massive AI infrastructure. The company takes as its starting point a current global AI compute capacity of around 100 zettaflops and projects a leap to more than 10 yottaflops within the next five years, an evolution that would require rethinking not just raw power, but system design, networking, energy efficiency, and the software that orchestrates it all.
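Those two figures imply a concrete growth rate, and a back-of-envelope calculation makes the scale of the claim easier to grasp. The sketch below simply works through the arithmetic of AMD’s stated numbers (1 yottaflop = 1,000 zettaflops); it is illustrative, not an AMD projection model.

```python
# Back-of-envelope check on AMD's projection (illustrative only):
# from ~100 zettaflops today to >10 yottaflops within five years.
current_zflops = 100          # ~100 zettaflops of global AI compute (AMD's figure)
target_zflops = 10 * 1000     # 10 yottaflops = 10,000 zettaflops

growth_factor = target_zflops / current_zflops    # 100x overall
annual_growth = growth_factor ** (1 / 5)          # compound annual rate

print(f"Total growth: {growth_factor:.0f}x")              # 100x
print(f"Implied annual growth: ~{annual_growth:.2f}x/yr") # ~2.51x per year
```

In other words, even taking AMD’s own numbers at face value, global AI compute would need to multiply by roughly two and a half every year for five years running.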
Helios: a “blueprint” for the rack of the future
AMD defines Helios as an open, modular rack architecture designed to evolve across future generations without forcing operators to reinvent the data center every cycle. As presented at CES, Helios rests on three components: AMD Instinct MI455X accelerators, EPYC “Venice” CPUs, and AMD “Vulcano” Ethernet cards that interconnect thousands of accelerators into a unified system. All of this, the company emphasizes, is tied together by the AMD ROCm software ecosystem, reinforcing an open-platform approach in contrast to more closed alternatives on the market.
In numbers, AMD claims Helios can deliver up to 3 exaFLOPS of AI performance in a single rack, targeting the training of models with trillions of parameters while maximizing bandwidth and efficiency per watt. The figure captures the industry’s central concern today: adding GPUs is not enough; they must be fed and coordinated without network, memory, or operational-management bottlenecks.
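Setting the rack-level claim against the industry-level projection gives a sense of the scale involved. The naive division below is purely illustrative: the two figures likely assume different numeric precisions and utilization levels, so treat the result as an order-of-magnitude curiosity.

```python
# Naive scale estimate: how many 3-exaFLOPS racks would 10 yottaflops imply?
# (Illustrative only; the rack and industry figures may use different precisions.)
rack_eflops = 3                   # Helios: up to 3 exaFLOPS per rack (AMD's claim)
target_eflops = 10 * 1_000_000    # 10 yottaflops = 10,000,000 exaFLOPS

racks_needed = target_eflops / rack_eflops
print(f"~{racks_needed:,.0f} Helios-class racks")  # ~3,333,333 racks
```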
Instinct MI440X: a nod to enterprises that don’t want to redesign the data center
Alongside Helios, AMD expanded its accelerator family with the AMD Instinct MI440X, a model tailored for on-premise enterprise deployments. The idea is pragmatic: many companies want to train, fine-tune, and run models within their facilities — for control, cost, compliance, or sovereignty reasons — but are not always able (or willing) to transform their data centers into AI factories from scratch.
The MI440X is designed to slot into existing infrastructure in a compact eight-GPU form factor, aimed at training, fine-tuning, and inference workloads. Meanwhile, AMD reaffirmed that its roadmap for data centers and supercomputing rests on the Instinct MI400 family, pointing to deployments in laboratories and national projects.
MI500 (2027): the promise of “up to 1,000 times” more AI performance
The most forward-looking announcement was a preview of the AMD Instinct MI500 series, scheduled for 2027. AMD says this generation aims to deliver up to 1,000 times the AI performance of the MI300X introduced in 2023, leveraging the AMD CDNA 6 architecture, a 2 nm process, and HBM4E memory. As with all long-cycle announcements, AMD frames these figures as projections and engineering targets, not final product results.
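Taken at face value, the 1,000x target implies a very steep year-over-year improvement. The short calculation below treats 2023 to 2027 as a four-year span; it is a worked illustration of AMD’s stated figures, nothing more.

```python
# Implied yearly improvement behind "up to 1,000x the MI300X by 2027".
# Treats 2023 -> 2027 as a 4-year span; purely illustrative.
total_gain = 1000
years = 2027 - 2023

yearly_gain = total_gain ** (1 / years)
print(f"Implied improvement: ~{yearly_gain:.1f}x per year")  # ~5.6x/yr
```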
In practical terms, the message is clear: AMD wants its AI story to be read not as “just another GPU alternative,” but as a sustained multi-generation race in which efficiency and software, especially ROCm, will be as decisive as the silicon itself.
AI reaches the PC: Ryzen AI 400, 60 TOPS, and a developer platform
The keynote also addressed the end user, where AMD sees a massive market: AI-capable PCs. The company introduced the Ryzen AI 400 Series and Ryzen AI PRO 400 Series platforms, featuring a 60 TOPS NPU and full ROCm support, with the aim of enabling “cloud-to-client” scaling for developers and businesses. AMD says the first systems will launch in January 2026, with broader OEM availability over the first quarter of 2026.
The most striking announcement for local inference was the Ryzen AI Max+ 392 and Ryzen AI Max+ 388 platforms, equipped with 128 GB of unified memory and capable of handling models of up to 128 billion parameters. Beyond the marketing, these figures reflect a real trend: part of AI is “decentralizing” onto devices, driven by privacy, latency, cost, and operational-continuity considerations.
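The pairing of 128 GB of memory with 128-billion-parameter models is no coincidence: at roughly one byte per parameter, the weights alone fill the memory. The sketch below works through the footprint at common precisions; the quantization levels are our illustrative assumptions, as AMD did not specify one.

```python
# Rough weight-memory footprint for a 128B-parameter model at common
# precisions. (The precision choices are illustrative; AMD did not state one.)
params = 128e9  # 128 billion parameters

for name, bytes_per_param in [("FP16", 2), ("INT8/FP8", 1), ("INT4", 0.5)]:
    gib = params * bytes_per_param / 2**30
    print(f"{name:>8}: ~{gib:,.0f} GiB of weights")
# FP16     : ~238 GiB -> does not fit in 128 GB
# INT8/FP8 : ~119 GiB -> fits, with little headroom for the KV cache
# INT4     : ~60 GiB  -> fits comfortably
```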
To round out the ecosystem, AMD previewed the Ryzen AI Halo Developer Platform, a compact small-form-factor (SFF) desktop aimed at developers and expected to be available in Q2 2026. The idea is that if AI is to be “everywhere,” creators need accessible kits to test, optimize, and deploy without always depending on large remote clusters.
AI in the physical world: edge, automotive, and robotics
AMD reiterated its commitment to the edge with the announcement of Ryzen AI Embedded processors, x86-based embedded parts aimed at applications such as automotive digital cockpits, smart healthcare, and physical AI in autonomous systems, including humanoid robotics. The unifying theme remains the same: moving inference and decision-making close to the physical environment, where latency, power consumption, and reliability matter as much as raw performance.
The political and social component: Genesis Mission and $150 million for education
The keynote also included a moment with an institutional tone. Lisa Su was joined by Michael Kratsios, director of the White House Office of Science and Technology Policy (OSTP), to discuss the Genesis Mission, a public-private initiative aimed at strengthening US leadership in AI and accelerating scientific discovery. The conversation also touched on the creation of two supercomputers at Oak Ridge National Laboratory: Lux and Discovery.
Additionally, AMD announced a commitment of $150 million to expand AI access to more classrooms and communities, closing with recognition of over 15,000 students participating in the AMD AI Robotics Hackathon with Hack Club. For an industry often criticized for energy impact and concentration of technological power, this gesture aims to embed the “AI for everyone” message in tangible educational initiatives.
An ecosystem of partners supporting the vision
AMD reinforced its positioning with the presence of partners already using its technology to advance AI: OpenAI, Luma AI, Liquid AI, World Labs, Blue Origin, Generative Bionics, AstraZeneca, Absci, and Illumina, among others. On the competitive front, the CES stage made another point clear: AMD is betting that its open-platform, co-innovation approach is the way to gain ground in a market where NVIDIA still sets the pace and demand for compute remains very high.
Frequently Asked Questions
What is AMD’s Helios platform and how does it serve AI data centers?
Helios is a large-scale rack architecture AMD presents as a “blueprint” for yotta-scale AI infrastructure, combining Instinct MI455X, EPYC “Venice,” Vulcano network cards, and ROCm software to connect thousands of accelerators.
How does the AMD Instinct MI440X differ from solutions used by hyperscalers?
The MI440X targets on-premise enterprise deployments, designed to fit into traditional infrastructures with a compact eight-GPU format for training, fine-tuning, and inference, without requiring a complete data center redesign.
When will the AMD Instinct MI500 series arrive, and what does it promise compared to MI300X?
Scheduled for 2027, AMD projects up to 1,000 times more AI performance than MI300X (2023), leveraging CDNA 6 architecture, 2 nm process, and HBM4E memory; these are early projections and subject to change before launch.
What does it mean for a PC to have a 60 TOPS NPU, and what real uses does this enable?
TOPS (tera, or trillions of, operations per second) measures raw throughput for AI tasks. A 60 TOPS NPU is meant to accelerate local inference (transcription, image and video enhancement, assistants, creative workflows), reducing latency and dependence on the cloud, though real-world performance depends on the models and software involved.
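For a rough sense of scale, the sketch below compares a 60 TOPS peak against the approximate compute cost of generating one token with a mid-sized local model. The 7-billion-parameter model and the two-operations-per-parameter rule of thumb are our assumptions, and real throughput is usually limited by memory bandwidth, not peak TOPS.

```python
# What 60 TOPS could mean for local LLM inference (rule of thumb only).
# Assumes ~2 operations per parameter per generated token; real systems
# are typically bound by memory bandwidth rather than peak compute.
npu_tops = 60          # peak ops/sec, in trillions
model_params = 7e9     # hypothetical 7B-parameter local model

ops_per_token = 2 * model_params                       # ~14 GOPs per token
peak_tokens_per_sec = npu_tops * 1e12 / ops_per_token
print(f"Theoretical ceiling: ~{peak_tokens_per_sec:,.0f} tokens/s")  # ~4,286
```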
via: amd

