AMD Wants to Be the Default CPU in the Cloud: How It’s Gaining Ground in 2025 (and Why It Matters in 2026)

AMD is leveraging the momentum of cloud computing to position its EPYC processors as a key player in the new wave of workloads, from artificial intelligence and HPC to databases and general-purpose computing. In a review published by the company, AMD emphasizes that AWS, Microsoft Azure, Google Cloud, and Oracle Cloud expanded their catalogs in 2025 with EPYC-based instances, with more growth expected in 2026.

AMD’s narrative isn’t just about “more instances”: it’s about how hyperscalers tailor specific families to particular needs (memory bandwidth, per-thread performance, memory footprint, local NVMe, confidential computing…), while the market becomes more demanding on efficiency, cost, and security.

AWS: new families and a clear message on x86 performance

In the case of AWS, AMD highlights the expansion of instances with EPYC and even notes that Amazon attributes “the highest x86 performance” within AWS to these configurations.
Some of the most notable examples include:

  • C8a (compute-optimized): AWS reports up to 33% higher memory bandwidth compared to the previous generation in this line.
  • X8aedz: aimed at EDA and also useful for relational databases that benefit from high per-thread performance and large memory.

The core message is straightforward: CPUs are once again central in migrating legacy “on-premises” workloads to the cloud without performance becoming a bottleneck.

Google Cloud: EPYC as a foundation for general-purpose VMs, HPC, and AI

Google Cloud also appears as an active player expanding families with EPYC, offering VMs tailored for different use profiles.

  • C4D: Google states that these VMs can deliver up to 80% more throughput for web servers compared to the previous generation.
  • N4D: launch documentation and release notes show the availability of this family and how it is being integrated into the cloud catalog.
  • H4D: positioned for HPC, with performance and memory benchmarks available on their solution page.

For AMD, the key message is that the same “block” (EPYC) can scale from web and enterprise workloads to HPC, serving as a foundation for broad portfolios—crucial when standardizing infrastructure is a goal.

Microsoft Azure: broad catalog with a focus on “confidential computing”

Azure exemplifies a “large catalog for specific use cases”: families suited for general purpose, HPC, local NVMe storage, and confidential options. AMD emphasizes Confidential Computing and the role of SEV (Secure Encrypted Virtualization) in protecting data “in use” within trusted execution environments (TEE).

Meanwhile, AMD highlights families and launches tied to performance (e.g., HPC lines, local NVMe storage solutions), and Azure has published technical details on families like HBv5.

Oracle Cloud: EPYC as a lever for enterprise and data

Within OCI, the relevance is twofold: compute instances (including bare metal and flexible options) and, importantly, the “data world.” AMD underscores the use of EPYC in services such as Exadata and managed databases, as part of a strategy to modernize critical platforms while maintaining consistency and efficiency.


Quick table: what EPYC brings to each cloud (based on announcements and public data)

| Provider | Mentioned examples | Main focus | Key public data |
| --- | --- | --- | --- |
| AWS | C8a, X8aedz | Compute, EDA, memory | C8a: up to 33% more memory bandwidth; X8aedz: EDA focus |
| Google Cloud | C4D, N4D, H4D | General purpose, cost/performance, HPC | C4D: up to 80% higher web-server throughput |
| Microsoft Azure | Various families, HBv5; confidential VMs | HPC, local storage, confidential computing | Emphasis on Confidential Computing and SEV; detailed info on HBv5 published by Azure |
| Oracle Cloud (OCI) | Exadata/DB services, EPYC instances | Enterprise, data, scaling | Exadata Database Service with EPYC highlighted in public docs |

What’s behind this: why this “CPU war” is once again relevant

For years, the media spotlight was on GPUs, driven by the explosion of AI. But 2025 signals another trend: efficiency per watt, sustained performance, and hardware-level security are reshaping platform decisions. AMD emphasizes four key ideas: performance, efficiency, security (confidential computing), and scalability—the reasons major providers are expanding their EPYC catalogs.

Practically, this translates into a very “enterprise-oriented” reality: more options to select instance types tailored to actual workloads (web, analytics, databases, EDA, HPC), less over-provisioning, and a more serious discussion about protecting data in use when the cloud is used for sensitive processes.
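
To make the “choose the right family” point concrete, here is a minimal sketch (not an official AMD or AWS tool) that inventories the vCPU and memory shapes of one EC2 instance family via the EC2 API. It assumes boto3 is installed and AWS credentials are configured; the AMD-based “c7a” family and the region are purely illustrative choices.

```python
import boto3

# Minimal sketch: list the vCPU/memory shapes of one EC2 instance family so
# they can be matched against measured workload requirements instead of
# over-provisioning. Family prefix and region are illustrative assumptions.
ec2 = boto3.client("ec2", region_name="us-east-1")


def list_family(prefix: str):
    """Yield (instance type, vCPUs, memory in GiB) for types matching a prefix."""
    paginator = ec2.get_paginator("describe_instance_types")
    # The "instance-type" filter accepts wildcards such as "c7a.*".
    pages = paginator.paginate(
        Filters=[{"Name": "instance-type", "Values": [f"{prefix}.*"]}]
    )
    for page in pages:
        for it in page["InstanceTypes"]:
            yield (
                it["InstanceType"],
                it["VCpuInfo"]["DefaultVCpus"],
                it["MemoryInfo"]["SizeInMiB"] / 1024,
            )


if __name__ == "__main__":
    for name, vcpus, mem_gib in sorted(list_family("c7a"), key=lambda row: row[1]):
        print(f"{name:<16} {vcpus:>4} vCPU  {mem_gib:>8.1f} GiB")
```

The same kind of per-family inventory can be run before a migration to size instances against observed utilization rather than guesswork.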


Frequently Asked Questions (FAQ)

Does this mean AMD “beats” Intel in the cloud?
Not necessarily. But the announcements and the pace of new families indicate that AMD is establishing a very strong position in x86 catalogs, especially in segments where performance/efficiency and total cost matter a lot.

What is “confidential computing,” and why is it so important now?
It’s an approach that aims to protect data while in use (not just in transit or at rest) through hardware isolation (TEE) and encrypted memory. AMD links this to SEV as part of that ecosystem.
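
For teams that want to verify rather than assume, a small sketch like the following (an illustration, not part of AMD’s announcement) checks from inside a Linux system for public kernel hints that SEV/SEV-SNP memory encryption is in play; the paths and flag names are assumptions that vary by kernel version and distribution.

```python
from pathlib import Path

# Minimal sketch: look for public kernel hints of AMD SEV / SEV-SNP support.
# Paths and flag names are assumptions that vary by kernel and distribution.


def sev_hints() -> dict:
    hints = {}
    # SEV-SNP guests usually expose /dev/sev-guest for attestation reports.
    hints["snp_guest_device"] = Path("/dev/sev-guest").exists()
    # On AMD hosts, the CPU flag list in /proc/cpuinfo advertises "sev",
    # "sev_es" and "sev_snp" support.
    flags = set()
    for line in Path("/proc/cpuinfo").read_text(errors="ignore").splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    hints["cpu_sev_flags"] = sorted(f for f in flags if f.startswith("sev"))
    return hints


if __name__ == "__main__":
    for key, value in sev_hints().items():
        print(f"{key}: {value}")
```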

Which types of companies benefit most from these new instances?
Those handling intensive workloads (databases, analytics, EDA, HPC) or those seeking to improve efficiency without soaring costs. Also, high-security organizations that can take advantage of confidential environments.

Why are there so many different “families” (C, N, H, X…)?
Because the cloud is no longer optimized only for “generic CPUs”: it’s tailored to specific bottlenecks (memory, per-thread performance, throughput, local NVMe, etc.). Specialization usually improves performance and cost-effectiveness if the right family is chosen.

via: amd
