NVIDIA and Intel Form a Strategic Partnership to Co-develop x86 CPUs and SoCs with RTX and Native NVLink Integration: $5 Billion Investment and Multi-Generational Roadmap

Santa Clara, September 18, 2025. NVIDIA (NASDAQ: NVDA) and Intel Corporation (NASDAQ: INTC) announced a multi-year collaboration to develop multiple generations of custom products for data centers and PCs, aiming to accelerate applications and workloads in the hyperscale, enterprise, and consumer markets. The agreement is based on a clear technical premise: natively connecting both companies’ architectures via NVIDIA NVLink, merging NVIDIA’s AI and accelerated computing stack with Intel’s x86 CPUs and ecosystem.

In addition to technological commitments, NVIDIA will invest $5 billion in Intel common stock, at a price of $23.28 per share. The deal is subject to customary closing conditions, including necessary regulatory approvals.


What exactly has been announced

The joint official statement outlines two main product lines:

  1. Data Center
    • Intel will design and manufacture customized x86 CPUs for NVIDIA.
    • NVIDIA will integrate these custom CPUs into its AI infrastructure platforms and offer them to the market as part of its accelerated computing catalog.
  2. Personal Computing (PC/Client)
    • Intel will produce and market x86 SoCs that integrate NVIDIA RTX chiplets.
    • These x86 SoCs with RTX are aimed at a broad range of PCs that require a world-class CPU and GPU in a single package.

In both cases, NVLink connectivity emerges as the high-performance backbone connecting CPU and GPU, reducing bottlenecks, optimizing data movement between chips, and enabling shared-memory topologies and efficient, scalable AI and graphics workloads.


Voices of the CEOs: “two world-class platforms” and a stack reinvention

NVIDIA’s founder and CEO, Jensen Huang, framed the deal as part of a broader transformation of the entire computing stack:

“AI is driving a new industrial revolution and reinventing every layer of the stack, from silicon to software. At the heart of this reinvention is CUDA. This historic collaboration tightly couples NVIDIA’s AI and accelerated computing stack with Intel’s CPUs and the extensive x86 ecosystem. Together, we will expand our ecosystems and lay the groundwork for the next era of computing.”

Meanwhile, Intel’s CEO, Lip-Bu Tan, emphasized the combination of data center and client platforms with Intel’s advanced manufacturing and packaging capabilities:

“Intel’s x86 architecture has been fundamental to modern computing for decades. Our leading data center and client platforms, combined with our process technology, manufacturing, and advanced packaging capabilities, will complement NVIDIA’s leadership in AI and accelerated computing to enable new advances. We appreciate Jensen and NVIDIA’s trust through their investment, and we look forward to innovating together for our customers.”


What it means in practice: from cleanroom to rack and laptop

1) Data center: Custom x86 CPUs for AI stacks

  • AI-oriented co-design: Intel’s x86 processors designed for NVIDIA can align with inference and training needs (memory bandwidth, telemetry, I/O accelerations) and with the orchestration logic NVIDIA already manages in its infrastructure platforms.
  • NVLink as backbone: the low latency and high throughput of NVLink are key to avoiding bottlenecks during high traffic between CPU and GPU (LLM prefill, vectorization, data pipelines).
  • Integrated delivery: by integrating and marketing these CPUs within their AI platforms, NVIDIA can optimize the entire stack (silicon, drivers, frameworks, runtime, deployment), reducing integration steps for hyperscale and enterprise customers.

2) PC/Client: x86 SoC with RTX chiplets

  • A single chip, two worlds: the x86 SoC with RTX chiplets aims to bring RTX-class graphics to the broad PC market inside the processor itself, with the potential to improve efficiency, latency, and integration cost compared with discrete solutions.
  • Scalable designs: the chiplet approach suggests modular configurations (ranging from premium laptops and light workstations to compact desktops) and unified drivers aligned with the RTX ecosystem.
  • Use cases: content creation, gaming, local AI, and professional applications (CAD, DCC) could benefit from RTX acceleration included, without needing a separate GPU in every SKU.

Technical implications: why NVLink and why now

The main bottleneck in generative AI and advanced rendering is not just FLOPS, but efficient data movement between CPU and GPU, and increasingly between GPU and GPU. In this context:

  • NVLink acts as a high-speed backbone for shared memory, kernel coordination, and feeding the accelerators the right data at the right time.
  • Early architecture-level integration (not just at the motherboard level) allows for minimized latency, simplified communication software, and better hardware utilization in heavy workloads like AI, data science, graphics, and media.
  • In PCs, an x86+RTX SoC with an optimized internal interconnect can bring local generative-AI performance (model-assisted video editing, upscaling, synthesis) closer to the end user without heavy reliance on cloud services.
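To make the "data movement, not just FLOPS" point concrete, the back-of-the-envelope sketch below compares how long it takes to move a large model's weights between CPU and GPU memory over links of different speeds. The bandwidth and payload figures are illustrative assumptions for the sake of arithmetic, not numbers from the announcement:

```python
# Back-of-the-envelope: CPU -> GPU transfer time as a function of link bandwidth.
# All bandwidth figures below are rough, assumed values for illustration only.

def transfer_time_s(payload_gb: float, bandwidth_gb_s: float) -> float:
    """Seconds needed to move `payload_gb` gigabytes over a `bandwidth_gb_s` GB/s link."""
    return payload_gb / bandwidth_gb_s

# Hypothetical payload: ~140 GB of fp16 weights for a large language model.
payload_gb = 140.0

links = {
    "PCIe 5.0 x16 (~64 GB/s, assumed peak)": 64.0,
    "NVLink-class link (~450 GB/s, assumed order of magnitude)": 450.0,
}

for name, bandwidth in links.items():
    print(f"{name}: {transfer_time_s(payload_gb, bandwidth):.2f} s")
```

The arithmetic is trivial, but it shows why a faster CPU-GPU interconnect matters: for workloads that repeatedly stream weights or activations between host and accelerator, transfer time shrinks roughly in proportion to link bandwidth.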

Market perspective: an alliance reshaping incentives

  • For hyperscale and enterprise customers: access to AI platforms where CPU and GPU are designed to work together over NVLink reduces integration risks and accelerates AI project time-to-value.
  • For OEMs and channel partners: x86 SoCs with RTX can simplify thermal designs, motherboards, and validation, enabling more streamlined (and potentially more efficient) product lines for laptops and desktops with local AI.
  • For developers: an ecosystem in which CUDA/RTX runs natively alongside x86 can lead to deeper optimizations in frameworks, engines, and creative applications.

The investment: $5 billion at $23.28 a share

The official release specifies that NVIDIA will invest $5 billion in Intel common stock at a price of $23.28 per share. The transaction is subject to regulatory approvals and customary closing conditions.

Important note: Both NVIDIA and Intel caution that the announcement includes forward-looking statements subject to risks and uncertainties (product acceptance, timelines, approvals, competitive developments, etc.), referring readers to the risk factors detailed in their 10-K and 10-Q SEC filings.


What remains to be known: schedule, nodes, packaging, and software

The announcement details the architecture and collaboration model but leaves open several aspects that will determine its real impact:

  • Availability schedule for the first customized x86 CPUs and x86 SoCs with RTX.
  • Manufacturing nodes and advanced packaging technologies Intel will employ (key for chiplets, interposers, and thermal budgets).
  • Delivery models: which products NVIDIA will directly include in its infrastructure portfolio and which ones Intel will market under its brand for OEMs and channels.
  • Software and drivers: how the compute stacks (CUDA/RTX, AI libraries) and the x86 toolchain will be integrated to maximize performance and simplify development.

Strategic context: two platforms, one goal

The alliance involves no merger or change of control: it is a technological collaboration backed by a minority investment. The value proposition lies in combining Intel’s x86 scale and manufacturing with NVIDIA’s accelerated computing stack and CUDA/RTX ecosystem:

  • For NVIDIA, access to tailored x86 CPUs and to client SoCs with RTX inside the processor opens new form factors and segments.
  • For Intel, co-designing data center CPUs for a broader AI infrastructure and offering x86 SoCs with RTX enhances its relevance across both data center and PC markets.

Press conference and live broadcast

Both CEOs will hold a press conference today at 10:00 a.m. PT / 1:00 p.m. ET. The broadcast will be accessible to the public via NVIDIA’s official link.


Conclusion: a turning point for “native CPU+GPU” AI

This news is not just a business deal: it is a joint announcement of two leading platforms to redefine how AI systems and high-performance PCs are built and deployed in the coming years. If the NVLink-native integration and delivery model live up to expectations, we will see:

  • Data centers featuring custom x86 CPUs designed to coexist with accelerated GPUs within the same interconnection architecture.
  • PCs where x86 SoCs with RTX chiplets erase the traditional boundaries between integrated and discrete graphics, enabling local AI and advanced graphics “out of the box”.

The real success will depend on execution: timing, manufacturing, drivers, tooling, and OEM capacity to turn technical promise into reliable, attractive products. Nonetheless, the announcement sets the stage for a new phase in which CPU and GPU stop being separate worlds to become a joint design solution, from silicon to software.


Frequently Asked Questions

What exactly did NVIDIA and Intel announce?
A multi-generational collaboration to develop custom x86 CPUs for NVIDIA (data center) and x86 SoCs with NVIDIA RTX chiplets (PC). The NVLink interconnection will be the high-performance fabric connecting both architectures natively.

Who will manufacture what?
Intel will design and produce the custom x86 CPUs for NVIDIA’s AI systems and also manufacture and supply x86 SoCs with RTX chiplets for the personal computing market. NVIDIA will integrate the data center CPUs into its AI platforms and market these solutions.

Is there financial investment involved?
Yes. NVIDIA will invest $5 billion in Intel common stock at $23.28 per share, subject to regulatory approvals and customary closing conditions.

Why is NVLink important in this deal?
Because it reduces latency and increases bandwidth between CPU and GPU, which is critical for fueling workloads in AI, graphics, and media without bottlenecks. Integrating it from the design stage allows for predictable performance and simpler software.

When will the first products be available?
The press release does not specify dates. Timelines will depend on development, manufacturing, and approvals. NVIDIA and Intel have announced a press event today for more details.

What risks have the companies identified?
Both include forward-looking statements and refer to their SEC filings. Risks include product acceptance, competition, timelines, regulatory approvals, and manufacturing and software dependencies.

How will this impact the PC market?
x86 SoCs with RTX could enable local AI and advanced graphics “out of the box” for more formats (laptops and desktops), with lower integration complexity for OEMs and better efficiency for end-users.

via: nvidianews.nvidia
