AMD Prepares the Instinct MI450X IF128: Its Massive Bet to Challenge Nvidia in Rack-Scale AI Computing

The new system with 128 GPUs is shaping up to be a direct competitor to Nvidia’s VR200 NVL144 in 2026.

AMD is taking a significant step to compete in the highest league of artificial intelligence hardware. According to SemiAnalysis, the company is preparing two new accelerators in the Instinct series for 2026: the MI450X IF64 and MI450X IF128, capable of scaling up to 64 and 128 GPUs respectively in rack-scale architectures.

With this ambitious proposal, AMD aims to directly challenge Nvidia’s powerful VR200 NVL144, a 72-GPU system expected to set the benchmark for high-performance AI training and inference. AMD’s new design promises greater scalability, advanced interconnectivity, and higher bandwidth between GPUs.


The Instinct MI450X IF128: Architecture and Power

The MI450X IF128 will be AMD’s first system to distribute its AI accelerators across two racks, connected via Infinity Fabric over Ethernet, representing a significant leap in the company’s scaling strategy.

Key Components:

  • 16 x 1U servers, each with:

    • 1 AMD EPYC ‘Venice’ CPU
    • 4 Instinct MI450X GPUs
    • Dedicated LPDDR memory
    • PCIe x4 SSD storage
  • 128 GPUs in total, each with:
    • Over 1.8 TB/s of unidirectional internal bandwidth for inter-GPU communication
    • Up to 3 x 800GbE network cards per GPU for inter-rack scaling
    • A total of 2.4 Tb/s of outgoing network bandwidth per GPU

This will enable AMD to build much larger and denser AI clusters than it has achieved so far with the MI300 series.


A More Compact Alternative: MI450X IF64

To minimize technical risk and facilitate initial deployment, AMD is also preparing a simplified variant: the MI450X IF64. This model will be limited to a single rack, with a more contained architecture but based on the same design and scalability principles.

This phased approach would allow AMD to launch a more stable and controlled system first, before expanding to the 128 GPU version, which presents significant technical challenges in integration, power consumption, and cooling.


What Sets AMD Apart from Nvidia?

While Nvidia leads the market with its GB200 NVL72 and VR200 NVL144 solutions, AMD’s approach has some distinct features:

  • Passive copper cabling instead of Nvidia’s active optical cables. This could reduce cost and power consumption, though it may limit signal integrity and reach over longer cable runs.

  • Higher GPU density, with 128 units versus Nvidia’s 72, which could provide an advantage in highly parallelizable workloads, provided thermal and power constraints can be managed.

  • Interconnection based on extended Infinity Fabric, marking an evolution from AMD’s traditional designs and offering a more modular and extensible architecture.

The Challenge of Execution

The true challenge for AMD is not only technical but also logistical: manufacturing, assembling, and deploying a system of this complexity requires precise coordination among hardware, firmware, software, and suppliers.

While AMD has demonstrated in recent years that it can compete with Nvidia on many fronts (thanks to the success of EPYC and MI300X GPUs), reaching the level of scale and complete system integration that Nvidia offers with GB200 remains an open race.


The Instinct MI450X IF128 is not just another accelerator; it’s a bold move by AMD to position itself in the competitive AI infrastructure market at scale. If it successfully executes this ambitious design, the company could become a key player not only in model training but also in enterprise inference.

With 2026 around the corner and a new generation of increasingly demanding language models, the showdown between AMD and Nvidia in AI promises to reach a new level… quite literally, at the rack level.

Source: AMD, Tom’s Hardware, and SemiAnalysis.
