NVIDIA Brings CUDA to RISC-V and Opens a New Era of Open Architectures for Artificial Intelligence

At the RISC-V Summit China, NVIDIA announced a significant strategic move: its CUDA software stack will begin supporting processors based on the RISC-V architecture. The milestone marks a major step in the evolution of accelerated computing and opens a new chapter for more open, efficient, and customizable AI platforms.

Until now, CUDA environments have been limited to x86 and Arm host architectures, reinforcing dependence on Intel and AMD on one side and on Arm and its licensees on the other. With this announcement, RISC-V moves from an emerging architecture found mainly in academia and embedded systems to a serious contender for high-performance deployments in artificial intelligence, data centers, and edge infrastructure.

RISC-V stands out for its open, license-free instruction set, which allows companies and organizations to design their own processors without paying royalties. This flexibility enables chips optimized for specific workloads, such as AI model inference, data processing, or network management, with significantly higher energy efficiency than general-purpose solutions.

With CUDA support, these RISC-V processors can serve as the main CPU in AI systems, enabling direct integration with NVIDIA GPUs. This not only facilitates the development of more efficient AI platforms but also breaks away from the proprietary silicon model that has dominated the industry for decades.
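In practice, the division of labor is the familiar CUDA host-device model: the CPU, whatever its instruction set, runs the CUDA runtime and orchestrates the attached GPUs. The sketch below uses only standard CUDA runtime calls (cudaGetDeviceCount, cudaGetDeviceProperties); the assumption, based on the announcement, is that once NVIDIA ships a RISC-V build of the toolchain and runtime, host code like this would simply be recompiled for a RISC-V CPU.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Host-side code: plain C++ that runs on the CPU (x86, Arm, or, per this
// announcement, RISC-V) and uses the CUDA runtime to discover and drive GPUs.
int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("CUDA error: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("GPU %d: %s, %d SMs, %.1f GiB memory\n",
                    i, prop.name, prop.multiProcessorCount,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```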

From a technical standpoint, bringing the CUDA toolchain and runtime to RISC-V hosts will allow existing libraries and GPU kernels to be reused on open platforms. NVIDIA aims to ensure a consistent development experience, easing the transition to more flexible and controllable hardware. Designers of RISC-V-based chips can tailor their architectures to the precise requirements of each application, optimizing both performance and power consumption.
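As a concrete illustration of why reuse should be straightforward, consider a minimal SAXPY program in standard CUDA C++. This is a generic sketch, not code from NVIDIA's port: the kernel is compiled to GPU code (PTX/SASS) and is independent of the host ISA, so under the assumptions of the announcement only the host-side portion would need to be rebuilt with a compiler targeting RISC-V.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Device kernel: compiled for the GPU, so it is unchanged whether the host
// CPU launching it is x86, Arm, or RISC-V.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x = nullptr, *y = nullptr;

    // Host-side logic: the part that would be recompiled for a RISC-V host
    // once the CUDA toolchain supports it.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    std::printf("y[0] = %f (expected 4.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```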

Strategically, this evolution poses a direct challenge to traditional architectures. While x86 maintains a longstanding dominance in servers and Arm has grown significantly in cloud and mobile environments, RISC-V now emerges as a third pathway with high disruptive potential—especially in regions and sectors seeking technological sovereignty.

The ability to combine RISC-V processors with NVIDIA accelerators via CUDA could redefine next-generation server design. Large-scale data center operators—where every watt matters—may choose lighter, customized architectures without licensing constraints. This advantage is particularly attractive to governments, hyperscalers, and cloud service providers seeking greater control over their infrastructure.

Moreover, against the backdrop of global geopolitical tensions and technological market fragmentation, RISC-V offers a neutral, open, and scalable model, now strengthened by one of the world’s most powerful software ecosystems: CUDA.

NVIDIA’s move is more than a technical gesture; it’s a statement about the future direction of high-performance computing. With RISC-V, the company not only broadens options for AI developers but also fosters a more competitive and innovative environment.

As the first AI systems built on RISC-V CPUs and NVIDIA GPUs mature, the real impact of this initiative will become clearer. Even so, the combination of open architectures, energy efficiency, and CUDA compatibility is poised to rewrite the rules across many technology sectors.

What was once a marginal alternative is now emerging as a fundamental pillar for building the next generation of intelligent infrastructures. With this step, NVIDIA reaffirms its leadership in AI and actively contributes to diversifying the global tech ecosystem.
