The race for hyper-scale artificial intelligence is even changing the language tech companies use to describe their purchases. Where once they spoke of “number of GPUs” or “number of servers,” it is now increasingly common to measure deployments in power consumption. By this metric, the announcement from AMD and Meta makes a bold statement: the two companies have reached a strategic, multi-year, multi-generation agreement to deploy up to 6 gigawatts of AMD Instinct GPUs dedicated to Meta’s next-generation AI infrastructure.
The release, published on February 24, 2026, presents the agreement as an extension of an existing collaboration and, above all, as an effort to align roadmaps across three layers: silicon, systems, and software. The clear message is: to train and serve increasingly demanding models, purchasing accelerators alone is no longer enough; the entire platform—from rack to runtime—must be optimized for the types of workloads run at scale.
From “gigawatt” to rack: Helios architecture as the backbone
The initial deployment phase will be built on the AMD Helios architecture, a “rack-scale” design announced at the Open Compute Project (OCP) Global Summit 2025, developed jointly by AMD and Meta within the OCP ecosystem. Helios was created with a specific goal: enable scalable AI infrastructure at rack level based on open design principles, which is increasingly important for companies wanting to control costs, avoid supply bottlenecks, and tailor hardware to their internal needs.
In the current agreement, the first “gigawatt-scale” delivery is scheduled to begin shipping in the second half of 2026, driven by a custom AMD Instinct GPU based on the MI450 architecture, optimized for Meta’s workloads. The platform will be complemented by 6th-generation AMD EPYC CPUs, codenamed “Venice”, and AMD’s open ROCm software stack.
Meta, for its part, frames this as a step in its compute diversification strategy. Mark Zuckerberg openly discussed “efficient inference compute” and linked the deal to his vision of advancing toward what he calls “personal superintelligence,” emphasizing that AMD will be a “major” partner for many years to come.
Why this announcement matters: scale, efficiency, and technological dependence
The value of the agreement isn’t just in volume. Six gigawatts implies a scale leap that requires rethinking multiple aspects simultaneously: energy, cooling, networking, supply chain, and operations. But there is also a strategic reading: Meta isn’t just buying GPUs; it’s seeking roadmap alignment so that its evolution doesn’t depend on a single technological pathway.
For AMD, the announcement carries clear reputational significance. AMD CEO Lisa Su described the deal as a collaboration capable of delivering “high-performance and energy-efficient infrastructure” optimized for Meta, positioning it among the sector’s largest AI deployments. From a financial perspective, CFO Jean Hu stated the partnership should drive revenue growth over several years and positively impact earnings per share (non-GAAP), signaling this agreement is more than a one-off contract.
What caught Wall Street’s attention: performance-linked warrant
The agreement also includes a component unusual for a deal of this size: AMD has issued Meta a performance-based warrant for up to 160 million AMD common shares. The instrument vests in tranches as milestones linked to GPU shipments are met: the first tranche activates with the shipment of the initial 1 gigawatt, and subsequent tranches scale up to the full 6 gigawatts.
Furthermore, vesting is tied to specific AMD stock price thresholds, and exercise depends on Meta meeting certain technical and commercial milestones. In essence, this incentive aligns industrial execution (deliveries), technological goals (milestones), and market value creation.
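The tranche mechanics described above can be sketched as a simple rule: a tranche vests only when both its shipment threshold and its stock price hurdle are met. The tranche sizes, gigawatt thresholds, and price hurdles below are purely hypothetical placeholders — the actual terms have not been fully disclosed.

```python
# Illustrative sketch of milestone-based warrant vesting.
# All thresholds and tranche sizes are hypothetical, not disclosed figures.

TRANCHES = [
    # (gigawatts shipped, stock price hurdle in USD, shares vesting)
    (1, 150, 40_000_000),
    (3, 200, 60_000_000),
    (6, 250, 60_000_000),
]

def vested_shares(gw_shipped: float, stock_price: float) -> int:
    """Total shares vested once both shipment and price conditions are met."""
    return sum(
        shares
        for gw_req, price_req, shares in TRANCHES
        if gw_shipped >= gw_req and stock_price >= price_req
    )

print(vested_shares(1.2, 160))  # only the first tranche's conditions are met
```

The design choice worth noting is the conjunction: shipments alone (industrial execution) or a high stock price alone (market value) vests nothing; each tranche requires both, which is what aligns the two companies’ incentives.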
Some media outlets have speculated that, if all conditions are met, Meta could end up holding a significant stake in AMD, potentially valued at more than $100 billion. Neither AMD nor Meta has disclosed direct financial terms of the deal beyond the warrant structure and deployment milestones in the official announcement.
Beyond GPUs: EPYC “Venice” and the next step, “Summer”
The message from the release emphasizes a renewed focus on CPUs for large-scale platform design. AMD highlights that Meta has deployed “millions” of EPYC processors over several generations and currently uses GPUs from the MI300 and MI350 series. Now, with this new partnership, Meta will be a “leading customer” for the 6th-generation EPYC (“Venice”) processors and also for “Summer”, an upcoming EPYC CPU tailored with workload-specific optimizations aiming to improve performance per dollar and per watt.
The rationale is straightforward: at hyper-scale, CPUs are not mere accessories. They orchestrate data flows, drive networking and storage, and are a key lever for total cluster efficiency. When deploying AI at power levels measured in gigawatts, every percentage point of efficiency matters.
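To see why single percentage points matter at this scale, a quick back-of-envelope calculation (using only the 6 GW figure from the announcement) shows the absolute power each point of efficiency represents:

```python
# At gigawatt scale, small relative efficiency gains are large absolute numbers.
TOTAL_POWER_MW = 6_000  # 6 GW, expressed in megawatts

for pct in (1, 2, 5):
    saved_mw = TOTAL_POWER_MW * pct / 100
    print(f"{pct}% efficiency gain ≈ {saved_mw:,.0f} MW")
```

A single percentage point here is 60 MW — roughly the power budget of a sizable standalone data center.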
Industrial subtext: from “buying chips” to “buying platforms”
This announcement signals a phase shift in the market. AI is no longer purchased solely as accelerators; it’s bought as integrated platforms:
- Custom silicon (a Meta-optimized MI450-based Instinct GPU),
- Rack architecture (Helios),
- Aligned CPU and GPU (EPYC + Instinct),
- Software stack (ROCm),
- and a financial arrangement designed to enforce execution.
For Meta, the reward is faster scaling with more control. For AMD, it’s an opportunity to establish Instinct and ROCm as competitive alternatives in the major AI procurement landscape. And for the industry, the message is simple: future computing contracts will be decided not only by maximum performance but by the ability to deploy, operate, and evolve complete platforms over years.
Frequently Asked Questions (FAQ)
What does a “6 gigawatt deployment” of GPUs for AI infrastructure mean?
In this context, “gigawatts” is used as a measure of electrical power associated with massive computing capacity. It indicates a hyper-scale deployment where energy, cooling, networking, and operations are as critical as the chip itself.
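As a rough, purely illustrative sketch of what such a figure implies: if one assumes a per-accelerator draw on the order of 1 kW and an overhead factor for CPUs, networking, and cooling (both numbers below are assumptions, not disclosed figures), 6 GW translates into millions of accelerators.

```python
# Back-of-envelope: how many accelerators might 6 GW imply?
# Per-GPU power and overhead factor are illustrative assumptions only.

TOTAL_POWER_W = 6e9       # 6 gigawatts, the full agreement
GPU_POWER_W = 1_000       # assumed per-accelerator draw (~1 kW class)
OVERHEAD_FACTOR = 1.5     # assumed share for CPUs, networking, cooling

power_per_installed_gpu = GPU_POWER_W * OVERHEAD_FACTOR
approx_gpus = TOTAL_POWER_W / power_per_installed_gpu
print(f"~{approx_gpus:,.0f} accelerators under these assumptions")
```

Under these assumptions the total comes to roughly 4 million accelerators, which illustrates why energy and operations dominate planning at this scale.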
What is the AMD Helios rack-scale architecture and why is it associated with Meta?
Helios is a rack-level AI infrastructure design developed in collaboration with Meta within the Open Compute Project ecosystem. It aims to enable scalable, open platforms for large deployments with design and efficiency control.
What is the performance-based warrant for up to 160 million shares, and why is it “performance-based”?
It’s an instrument that links share delivery to achievement milestones: it vests in tranches as AMD meets shipment targets (from 1 gigawatt to 6 gigawatts) and is also tied to AMD’s stock price thresholds and technical/business milestones.
What role do the EPYC “Venice” and “Summer” CPUs play in this AI partnership?
AMD and Meta aim to align CPUs and GPUs within the same platform. EPYC “Venice” (6th generation) is mentioned as part of the initial deployment, while “Summer” is an upcoming optimized EPYC processor designed to improve performance per dollar and per watt in AI environments.

