AMD accelerates global supercomputing: from Alice Recoque to Lux and Discovery AI factories

The revolution in supercomputing is no longer just about solving impossible equations; it’s about training AI models that help design new materials, understand complex diseases, or manage entire electrical grids. In this realm, AMD has positioned itself at the center by combining high-performance hardware with an open software layer to scale AI across businesses and research centers.

Its latest announcement brings together several key components: a new exascale supercomputer in Europe, two “AI factories” in the United States, increased presence in the Top500 and Green500 rankings, and a software platform, AMD Enterprise AI Suite, designed to take AI from the lab to production in minutes.


Alice Recoque: France enters the exascale league

Europe makes a giant leap with Alice Recoque, the new exascale supercomputer being built in France. It will be the country’s first and Europe’s second to surpass the exaflop barrier in double-precision floating-point performance (HPL). The system will be hosted by France’s GENCI agency, operated by CEA, and built alongside Eviden, with AMD’s next-generation technology in both CPUs and GPUs.

Alice Recoque will combine AMD’s upcoming EPYC processors, codenamed “Venice,” with AMD Instinct MI430X accelerators, specifically designed for high-performance workloads and frontier AI models. The goal is to exceed one exaflop of HPL performance, placing it among Europe’s most powerful supercomputers for traditional scientific simulation, while also serving as a European AI factory for climate, energy, industry, and health projects.

Beyond raw power, the system emphasizes energy efficiency and technological sovereignty: reducing power consumption per calculation while ensuring critical computing capacity is deployed within Europe, with components and architecture under European and trusted partner control.


Four of the world’s top ten supercomputers… and a growing footprint

This announcement comes at a favorable moment for AMD in supercomputing. The company states its technology now powers four of the top ten fastest supercomputers globally, including Frontier—the world’s first exascale system—and El Capitan, which currently leads the Top500 list.

Overall, 177 systems on the Top500 rely on AMD hardware, accounting for about 35% of the world’s most powerful supercomputers. In the Green500 ranking, which measures energy efficiency (performance per watt), 26 of the top 50 systems also use AMD technology.

This leadership extends beyond national supercomputing centers. AMD is promoting its EPYC architecture in the cloud with dedicated HPC instances: Microsoft Azure's HBv5 virtual machines are generally available for CPU-based high-performance workloads, while Google Cloud offers H4D instances, which almost quadruple the performance of the previous C2D generation across typical HPC tasks, from industrial simulations to weather prediction and health applications. Meanwhile, providers like AWS have launched specialized EPYC-based instances for intensive technical computing.

The idea is clear: that the same hardware ecosystem powering supercomputers like Frontier and El Capitan is also available as a “service” in the cloud, allowing companies and labs to rent thousands of cores for days or hours without building their own systems.


Lux and Discovery: sovereign AI factories in the U.S.

On the frontier of AI, the most notable move is AMD’s agreement with the U.S. Department of Energy (DOE) to build two new AI-focused supercomputers at Oak Ridge National Laboratory (ORNL): Lux and Discovery. Valued at around $1 billion, the project aligns with the U.S. strategy of “Sovereign AI” and the American AI Stack initiative, which aims to develop domestic infrastructure for training next-generation models.

Lux is set to arrive first, with deployment expected from 2026. It is described as the first "AI Factory" dedicated to science in the U.S.: a system designed to train and deploy foundation models that advance materials, biology, clean energy, and national security. It will use AMD's 5th generation EPYC processors, AMD Instinct MI355X accelerators, and AMD's programmable networking, in an architecture optimized for both training and inference of large models.

Discovery will be the next flagship supercomputer at ORNL, inheriting the legacy of Frontier. It will leverage AMD Instinct MI430X GPUs and EPYC “Venice” CPUs, integrated into HPE Cray’s new supercomputing platform. Its deployment is planned for 2028–2029 and aims to push performance and energy efficiency further, combining traditional computing with large-scale AI workloads across applications from fusion energy to advanced material simulations and cybersecurity.

Both systems are envisioned as pillars of U.S. sovereign AI infrastructure, accessible for shared use among national labs, universities, and industry partners.


From the lab to the enterprise data center: AMD Enterprise AI Suite

The other piece of AMD's announcement is software. The company introduced AMD Enterprise AI Suite, an open, unified platform that lets businesses of any size develop, deploy, and manage AI workloads at scale on AMD Instinct GPUs.

The suite is built as a comprehensive, Kubernetes-native stack connecting compute infrastructure (bare-metal or cloud) with orchestration, inference, and lifecycle management tools. Key components include:

  • AMD Inference Microservices (AIMs): prebuilt inference containers that package models, engines, and optimized hardware configurations, with OpenAI-compatible APIs and support for open-weight models.
  • AMD AI Workbench: a development environment for data scientists and AI teams, featuring low-code fine-tuning workflows, notebook or IDE-based workspaces, and tools to manage AIM deployments.
  • AMD Resource Manager: a control plane to manage GPU clusters with intelligent workload scheduling, team quotas, metrics monitoring, and access policies.

The value proposition is to enable a company to go from “raw” servers to a production AI platform in minutes, utilizing open components and avoiding vendor lock-in. For IT leaders, this means more control over costs, governance, and security; for data teams, a more direct path to GPU access and deploying models into production.


Accelerated science: proteins, materials, and beyond

Beyond infrastructure, AMD offers concrete examples of how this computing power translates into scientific breakthroughs. In collaboration with Lawrence Livermore National Laboratory and Columbia University, AMD ran one of the largest and fastest protein structure prediction workflows ever executed on El Capitan, harnessing the machine's full capabilities.

Such simulations are crucial for speeding up drug discovery, understanding complex diseases, and designing enzymes and synthetic proteins for industrial and environmental applications. Combining AI models with traditional simulations allows exploring vast design spaces infeasible with classical methods.

Meanwhile, AMD announced a collaboration with Japan’s RIKEN Research Institute for joint research and technological development in supercomputing and AI. The agreement includes researcher exchanges, joint projects, and related activities, reinforcing AMD’s role as a technological partner in major scientific ecosystems.


A leadership mix of hardware, efficiency, and open software

The overarching message is that AMD aims to play a structural role in the new era of supercomputing: not just selling chips but orchestrating a combination of high-performance hardware, energy efficiency, and open software that enables governments, labs, and companies to build their own AI “factories.”

With supercomputers like Alice Recoque, Lux, and Discovery, the company secures a presence in some of Europe’s and North America’s most strategic scientific infrastructures. Its growing influence on the Top500 and Green500 lists underscores that energy efficiency will be as important as raw power. And with AMD Enterprise AI Suite, it seeks to bring that same approach to the corporate world, where the battle over control of enterprise AI platforms remains open.

Behind every figure—a single exaflop, four of the top ten systems, a billion-dollar investment—is a common goal: to shorten the time from idea to experiment, from model to discovery. The outcome won’t just be benchmarks but new materials, medicines, energy breakthroughs, and solutions to today’s seemingly intractable problems.


FAQs about AMD, supercomputing, and enterprise AI

What is an exascale supercomputer like Alice Recoque and why is it important?
An exascale supercomputer can perform at least one exaflop, that is, one quintillion (10^18) floating-point operations per second. Systems like Alice Recoque enable extremely complex scientific simulations (climate, plasma dynamics, materials, proteins) and the training of massive AI models within reasonable times. Their importance extends beyond science: they affect industrial competitiveness, technological sovereignty, and national security.
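To put one exaflop in perspective, a few lines of arithmetic suffice; the laptop figure below is a rough illustrative assumption, not a number from the article:

```python
EXAFLOP = 1e18  # one quintillion floating-point operations per second

def speedup_vs(flops_per_second: float) -> float:
    """How many times faster a 1-exaflop system is than a machine at the given rate."""
    return EXAFLOP / flops_per_second

# Assumed ballpark: ~100 GFLOPS (1e11 FLOP/s) for a modern laptop CPU.
LAPTOP_FLOPS = 1e11
print(speedup_vs(LAPTOP_FLOPS))  # → 10000000.0, i.e. ten million times faster
```

Put differently, one second of exascale computation corresponds to roughly four months of that laptop running flat out.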

What does it mean that Lux and Discovery are “sovereign AI factories”?
The term “AI Factory” describes infrastructure specifically designed to continuously train, tune, and deploy large-scale AI models. Calling them “sovereign” implies that key compute capacity, data, and models are controlled within a specific jurisdiction (here, the U.S.), reducing reliance on external vendors and strengthening autonomy in strategic areas like energy, defense, and health.

How does AMD Enterprise AI Suite differ from other enterprise AI platforms?
AMD Enterprise AI Suite is a fully open and modular platform built on Kubernetes, optimized for AMD Instinct GPUs. Unlike more closed approaches, it leverages open-source components and standard APIs, avoiding vendor lock-in and enabling integration of models and tools from various sources. It includes microservices for inference (AIMs), a development workbench, and resource management tools to govern GPU clusters at scale.

How can this AMD infrastructure benefit companies outside of research centers?
While the most visible examples are national supercomputers, the same technologies—EPYC processors, Instinct GPUs, and the Enterprise AI Suite—are being adapted for cloud and enterprise data centers. This allows companies in automotive, energy, finance, healthcare, and retail sectors to access advanced simulation and AI capabilities that were once only available to large laboratories—such as optimizing supply chains, training internal language models, or creating digital twins.

via: amd
