Agentic AI Returns the CPU to Center Stage in Data Centers

Over the past two years, the conversation around AI infrastructure has almost entirely revolved around GPUs. NVIDIA became the unavoidable name, hyperscalers competed to secure accelerators, and HBM memory became a strategic resource. But the next phase of AI may shift some focus back to a component many considered secondary in this race: the server CPU.

The reason lies in agentic AI. As models move beyond simply answering questions and begin executing workflows, calling tools, coordinating tasks, querying databases, moving data, and acting on applications, the need for processors capable of orchestrating all that work grows. While GPUs remain essential for heavy training and inference, CPUs are gaining importance as control, coordination, and general execution layers.

UBS sees a much larger CPU market than expected

According to a UBS note picked up by several financial and tech media outlets, the total addressable market for server CPUs could grow from around $30 billion in 2025 to approximately $170 billion in 2030. It’s an aggressive forecast and should be taken as such, but it reflects a shift in perception: AI not only consumes accelerators but also requires more general compute to make those accelerators work effectively.

UBS’s thesis is that agentic workloads could increase the number of CPU cores needed per user and GPU by three to five times. This is not just about hosting models but managing thousands or millions of small decisions around them: preparing context, dividing tasks, executing code, validating results, routing requests, maintaining sessions, applying permissions, and coordinating agents.
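
To put that multiplier in concrete terms, here is a quick back-of-envelope sketch in Python; the baseline core count is an assumed figure for illustration, not a number from the UBS note.

```python
# Back-of-envelope reading of UBS's 3-5x core multiplier. The baseline is an
# assumed figure for illustration, not a number from the UBS note.
baseline_cores_per_gpu = 12  # hypothetical host-CPU cores per GPU today

for multiplier in (3, 4, 5):
    cores = baseline_cores_per_gpu * multiplier
    print(f"{multiplier}x agentic load -> ~{cores} CPU cores per GPU")
```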

In this scenario, UBS considers Arm a potential major winner, with a share that could reach 40% to 45% of the server CPU market by 2030. AMD appears as the second natural beneficiary, thanks to its many-core EPYC processors and their strong performance per watt. Intel would also benefit from the market's growth, but it would need to prove that its upcoming platforms can compete on density, efficiency, and cost.

This outlook is uncomfortable for Intel because the so-called CPU renaissance doesn’t guarantee equal benefit for all providers. If demand shifts toward more cores, lower power consumption, and highly adaptable designs for AI data centers, Arm and AMD could capture a larger share of the growth. Intel, which has dominated server CPUs for decades, faces a market that is growing but also changing shape.

The CPU as the control plane of AI

Intel was among the first companies to try to shift this narrative. In its Q1 2026 earnings presentation, Lip-Bu Tan stated that the CPU is once again establishing itself as an essential foundation of the AI era. According to Intel’s CEO, the CPU acts as the orchestration layer and critical control plane across the entire AI stack.

This idea aligns with what AMD is also saying. Lisa Su argued in the latest earnings call that inferencing and agentic AI are increasing demand for high-performance CPUs and accelerators. AMD reported Q1 2026 revenues of $10.253 billion, a 38% year-over-year increase, with data center revenue up 57% to $5.8 billion. This business is now AMD’s main revenue and profit driver.

AMD emphasizes that the new demand for CPUs does not necessarily cannibalize GPU demand. Their perspective is that accelerators remain necessary for foundational models and heavy workloads, while agents generate additional tasks for CPUs. In traditional setups, a CPU might host multiple GPUs, such as one CPU for four or eight accelerators. With many agents coordinating tasks, that ratio could approach one-to-one in some deployments.
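
A rough sizing sketch shows what that ratio shift would mean at cluster scale; the cluster size and ratios below are hypothetical examples, not vendor guidance.

```python
# Illustrative arithmetic for the shifting CPU:GPU ratio. Cluster size and
# ratios are hypothetical examples, not vendor guidance.
gpus = 1024

for gpus_per_cpu in (8, 4, 1):  # classic hosting vs. agent-heavy deployments
    cpus = gpus // gpus_per_cpu
    print(f"1 CPU per {gpus_per_cpu} GPU(s) -> {cpus} CPU sockets for {gpus} GPUs")
```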

The distinction is important. Classic generative AI could be viewed as querying a model. Agentic AI resembles more a chain of actions: interpret a request, consult documentation, call an API, generate code, review the result, request validation, execute another tool, and deliver a response. Each step involves coordination, memory, input/output, networking, and general compute.
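
The sketch below stubs out such a chain in Python as a rough illustration; every function is a hypothetical placeholder, and in a real deployment only the generation step would touch an accelerator while everything else stays on the CPU.

```python
# A minimal sketch of the agentic chain of actions, every step stubbed out.
# All functions are hypothetical placeholders; in a real system only
# generate_code() would hit an accelerator. The rest is CPU-side work.

def interpret(request):          # CPU: parse the request into a task
    return {"task": request}

def consult_docs(task):          # CPU: retrieval and I/O
    return f"notes for: {task['task']}"

def call_api(task):              # CPU: networking
    return {"status": "ok"}

def generate_code(task, docs):   # GPU in reality: model inference
    return "print('hello from the agent')"

def review(code):                # CPU: validation
    return code.startswith("print(")

def execute_tool(code):          # CPU: execution (sandboxed in practice)
    exec(code)

def run_agent(request):
    task = interpret(request)
    docs = consult_docs(task)
    call_api(task)
    code = generate_code(task, docs)
    if review(code):
        execute_tool(code)
    return "response delivered"

print(run_agent("write a greeting script"))
```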

Why Arm and AMD have an advantage

Arm has been growing in data centers for years, thanks to cloud providers designing their own chips or adopting more efficient architectures. AWS Graviton was the most visible example, but this trend has expanded to other environments where power efficiency is as critical as raw performance. In agentic AI, that advantage may be even more decisive, as many tasks do not require a monolithic CPU running at maximum frequency, but rather many efficient cores working in parallel.
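
A minimal Python sketch captures the pattern: throughput comes from fanning out many small, largely independent agent steps, not from a single fast thread. The task body is a placeholder for I/O and light compute.

```python
# Many-efficient-cores pattern: fan small, independent agent steps out across
# workers instead of relying on one fast thread. agent_step() is a
# hypothetical stand-in for tool calls, I/O, and light compute.
from concurrent.futures import ThreadPoolExecutor
import time

def agent_step(task_id):
    time.sleep(0.01)  # stands in for tool calls, I/O, light compute
    return f"task {task_id} done"

with ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(agent_step, range(256)))

print(len(results), "agent steps completed")
```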

Arm is positioning its data center designs around exactly this: greater efficiency, flexible deployments, and the capacity to absorb agentic workloads. While Arm doesn't manufacture chips itself, it licenses architectures that hyperscalers, server OEMs, and specialized providers can adapt to their needs.

AMD, for its part, enters this shift from a position of strength with EPYC: a many-core strategy, excellent performance per socket, and efficiency gains that have steadily won it ground in servers. It can also combine EPYC CPUs with Instinct accelerators, networking components like Pensando, and rack platforms such as Helios, offering a more complete AI infrastructure solution.

Intel retains relevant strengths: a large installed base, established customer relationships, in-house manufacturing capabilities, advanced packaging, and a roadmap including CPUs like Coral Rapids. However, it needs to execute well. If the market shifts toward more cores per watt and designs that are highly tailored for agentic workloads, Intel’s historical brand alone may not suffice.

The personal computer also enters the picture

UBS’s note introduces another interesting point: agentic AI may increase CPU demand not only in data centers but also in personal computers. Tools like Claude Code, development assistants, personal agents, and automation workflows running on user devices could raise requirements for CPU, memory, and NPU in laptops and desktops.

Not all users will need workstation-level systems, but this indicates a split in the market. Heavy tasks will remain in data centers, while everyday AI workloads might run locally for privacy, latency, or cost reasons. In this environment, AMD and Intel could also benefit, though they will compete with Apple Silicon, Qualcomm, and other Arm-based designs.

The key will be balancing cloud and device capabilities. More complex agents will require access to large models, enterprise data, and cloud tools. Still, many intermediary steps, like context preparation, lightweight automation, local analysis, or code execution, can benefit from more powerful CPUs on the device itself.
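
A simple router makes that split concrete; the step classification and both backends in this sketch are hypothetical placeholders.

```python
# A minimal sketch of the device/cloud split: lightweight intermediary steps
# stay on the local CPU, heavy model calls go to the data center. Step names
# and both backends are hypothetical placeholders.

LOCAL_STEPS = {"context_prep", "lint", "local_search"}

def run_local(step, payload):
    return f"[on-device CPU] {step}: {payload}"

def run_cloud(step, payload):
    return f"[cloud model] {step}: {payload}"  # large-model inference in reality

def route(step, payload):
    if step in LOCAL_STEPS:
        return run_local(step, payload)
    return run_cloud(step, payload)

print(route("context_prep", "gather open files"))
print(route("code_generation", "write the parser"))
```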

A new architecture for AI data centers

The most significant implication is that AI data centers will be measured less by raw GPU count and more by the complete recipe: how many accelerators and CPUs, how much memory, networking, storage, and orchestration software it takes to reliably support thousands of agentic workflows.

This shifts infrastructure economics. A cluster with many GPUs but insufficient CPUs may fall short in orchestration. Conversely, a system with efficient CPUs but too few accelerators won’t be able to run large models at scale. The advantage will come from designing balanced systems with appropriate ratios for different workloads: training, inference, enterprise agents, code generation, document analysis, or operational automation.
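
As a rough illustration, a capacity check along these lines can flag such an imbalance; every workload profile and figure below is invented for the example.

```python
# An illustrative balance check for those ratios. All workload profiles and
# figures are invented for the example, not real sizing guidance.
profiles = {
    "training":          {"gpus": 512, "cpu_cores_per_gpu": 8},
    "enterprise_agents": {"gpus": 64,  "cpu_cores_per_gpu": 40},
}
available_cpu_cores = 4_096

needed = sum(p["gpus"] * p["cpu_cores_per_gpu"] for p in profiles.values())
verdict = "balanced" if needed <= available_cpu_cores else "orchestration bottleneck"
print(f"need {needed} CPU cores, have {available_cpu_cores}: {verdict}")
```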

UBS’s thesis may sound optimistic in figures, but it points in a reasonable direction. Agentic AI makes compute more distributed and complex. It doesn’t eliminate the need for GPUs but layers more components around them. And within those layers, CPUs again play a central role.

For Intel, this presents both an opportunity and a threat. The market could grow strongly but not necessarily follow the rules that historically secured Intel’s dominance. For AMD, it’s a chance to strengthen its EPYC offerings and sell a broader AI platform. For Arm, it could be the moment to solidify its entry into data centers with a proposition focused on efficiency and scale.

The AI era won’t be just a war of accelerators; it will be a battle of complete systems. And in that competition, CPUs will once again matter greatly.

Frequently Asked Questions

Why does agentic AI increase CPU demand?
Because agents do more than generate responses. They coordinate tasks, call tools, move data, execute processes, enforce permissions, and manage parallel flows—all of which require general compute alongside GPUs.

Will CPUs replace GPUs in AI?
No. GPUs will remain essential for intensive training and inference. The thesis is that CPUs will grow as a necessary complement to orchestrate and run agentic workloads.

Why might Arm benefit more than Intel?
Because many agentic workloads value core count and energy efficiency over maximum single-thread performance. In that context, Arm-based designs can be more attractive for hyperscalers and AI servers.

What role does AMD play in this shift?
AMD benefits from its many-core EPYC processors and its comprehensive AI infrastructure strategy, including CPUs, GPUs, networking, and rack platforms, positioning it well to capitalize on these trends.
