The Rise of AI Servers: The Battle for Computing Among CPUs, GPUs, and New Accelerators

In the global race for dominance in artificial intelligence, servers designed specifically for these workloads have become the heart of data centers. According to a recent report from Bank of America, these systems—known as AI servers—are defined as basic computing configurations composed of a CPU, one or more accelerators (such as GPUs, ASICs, or XPUs), and the associated memory. Although their design may seem simple, the market surrounding them is increasingly complex, competitive, and strategic.
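The composition the report describes — a CPU, one or more accelerators, and associated memory — can be sketched as a small data model. This is purely illustrative; all class and field names below are assumptions, not anything defined in the report:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the AI server composition described above:
# a CPU, one or more accelerators (GPU, ASIC, or XPU), and memory.
# All names and example values here are hypothetical.

@dataclass
class Accelerator:
    kind: str   # e.g. "GPU", "ASIC", "XPU"
    model: str

@dataclass
class Server:
    cpu: str
    accelerators: list[Accelerator] = field(default_factory=list)
    memory_gb: int = 0

    def is_ai_server(self) -> bool:
        # Under this definition, a server counts as an "AI server"
        # when its CPU is paired with at least one accelerator.
        return len(self.accelerators) > 0

server = Server(
    cpu="x86 dual-socket",
    accelerators=[Accelerator("GPU", "generic-model")] * 8,
    memory_gb=2048,
)
print(server.is_ai_server())
```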

CPUs Remain Essential, but Their Dominance Is Declining

Every server architecture begins with a CPU, but its relative weight in the performance and total value of the system has diminished significantly in recent years. Still, CPU shipments continue to grow, driven by the global increase in the number of servers. Shipments are expected to rise from 12.3 million units in 2023 to over 16 million by 2027.
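The growth implied by those shipment figures works out to a mid-single-digit compound annual rate. A quick check, taking 16 million as a stand-in for the report's "over 16 million":

```python
# Implied compound annual growth rate (CAGR) of server CPU shipments,
# from the figures cited above: 12.3M units (2023) to ~16M units (2027).
units_2023 = 12.3   # millions
units_2027 = 16.0   # millions; lower bound of "over 16 million"
years = 2027 - 2023

cagr = (units_2027 / units_2023) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 6.8% per year
```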

Most traditional servers use a dual-socket configuration, meaning two CPUs per server. However, this setup is increasingly complemented by accelerators that handle the more intensive workloads, especially in artificial intelligence and machine learning.

Intel Loses Ground to AMD and ARM in the Data Center Wars

The BofA report details the transformation of the competitive landscape in the server CPU market, highlighting how Intel (INTC), once the undisputed leader, has lost ground to AMD and ARM architecture. Here’s a breakdown by supplier:

Intel (INTC)
Until 2017, Intel controlled over 99% of the market by value, thanks to its established ecosystem and technical leadership. However, after product and manufacturing delays began around 2018, the company started losing market share to AMD. In cloud computing, where Total Cost of Ownership (TCO) is critical, Intel now holds less than 50% of the market, although it remains strong in traditional enterprise environments where switching costs are high.

AMD
The launch of EPYC processors based on the Zen architecture in 2017 marked a turning point. AMD has leveraged its more agile product cycles and chiplet-based designs—with a higher core count—to take market share from Intel, especially in the cloud sector. It currently exceeds 50% market share by value in this segment and is expected to reach a global share of 40% by 2027. Even in the enterprise market, where Intel has a favorable installed base, AMD continues to gain customers with its competitive products.

ARM
Unlike Intel and AMD, which use the x86 architecture, ARM processors implement an Instruction Set Architecture (ISA) developed by ARM Holdings, based in the UK. Their main advantage is energy efficiency, which makes them attractive in contexts where TCO has become a priority. While ARM platforms were historically seen as less flexible and adaptable, recent developments are changing that perception.
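One way to see why energy efficiency moves the needle on TCO is a toy model that sums acquisition cost and electricity cost over a server CPU's service life. Every figure below is a hypothetical placeholder for illustration, not a number from the report:

```python
# Toy TCO model: acquisition cost plus electricity over the service life.
# All prices, wattages, and rates are hypothetical placeholders.
HOURS_PER_YEAR = 8760

def tco(price_usd: float, avg_watts: float, years: int,
        usd_per_kwh: float = 0.10) -> float:
    """Total cost of ownership: purchase price + lifetime energy cost."""
    energy_kwh = avg_watts / 1000 * HOURS_PER_YEAR * years
    return price_usd + energy_kwh * usd_per_kwh

# A slightly pricier but more efficient CPU can still win on TCO
# over a 5-year service life:
x86_tco = tco(price_usd=9_000, avg_watts=350, years=5)
arm_tco = tco(price_usd=9_200, avg_watts=250, years=5)
print(f"x86: ${x86_tco:,.0f}  ARM: ${arm_tco:,.0f}")
```

At higher electricity prices or higher utilization, the efficiency term grows relative to the purchase price, which is the dynamic hyperscalers weigh when TCO drives procurement.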

Major data center operators (hyperscalers) have already begun adopting internally developed ARM CPUs. This includes Amazon with Graviton, Google with Axion, and Microsoft with Cobalt. Additionally, Nvidia has introduced configurations like the GB200 rack, which combines a Grace CPU (ARM) with its new Blackwell GPUs, positioning itself as a powerful solution for large-scale AI workloads.

The New Balance in Data Centers

The rise of ARM architectures and the consolidation of AMD highlight a paradigm shift in modern data centers. Energy efficiency, scalability, and TCO are reshaping the technological landscape, challenging x86 hegemony and enabling new deployment strategies from key industry players.

This shift impacts not only chip manufacturers but also the entire ecosystem of software, cloud services, and infrastructure management. The trend is clear: servers are no longer monolithic systems dominated by a single brand or architecture. They are heterogeneous platforms optimized for the new era of artificial intelligence.
