The race to “bottle a star” has taken a significant leap: NVIDIA and General Atomics, along with top-tier academic and supercomputing partners, have built an interactive digital twin of a fusion reactor that accelerates the simulation and testing of critical scenarios from weeks to seconds. Announced at GTC Washington, the project combines the NVIDIA Omniverse platform, CUDA-X libraries, and data center GPUs with real data and physical models to predict and control plasma behavior in near real time.
This development relies on high-performance supercomputing resources, Polaris (ALCF, Argonne) and Perlmutter (NERSC, Berkeley Lab), used for large-scale training of three surrogate AI models. The goal: speed up the science and reduce risk before moving to the actual machine at the U.S. Department of Energy’s DIII-D National Fusion Facility, which serves a consortium of 700 scientists from 100 organizations.
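To give a concrete, if simplified, picture of what that training involves, the sketch below shows the general pattern: a small neural network learns to map archived diagnostic signals to quantities a physics code would otherwise compute. The model size, data shapes, and hyperparameters here are assumptions for illustration, not the project’s actual pipeline.

```python
import torch
import torch.nn as nn

# Minimal sketch of surrogate training on archived shots. The data
# layout, model size, and hyperparameters are assumptions, not the
# project's actual pipeline.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 8))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-ins for decades of archived diagnostics (inputs) and the
# physics-code outputs the surrogate learns to reproduce (targets).
signals = torch.randn(4096, 128)
targets = torch.randn(4096, 8)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(signals), targets)
    loss.backward()
    optimizer.step()
print(f"final loss: {loss.item():.4f}")
```

Once trained on real archives, a model like this replaces an expensive physics solve with a single forward pass.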
“Exploring scenarios virtually with an interactive digital twin is a game changer,” said Raffi Nazikian, fusion data science lead at General Atomics. “We can test, refine, and verify ideas orders of magnitude faster, accelerating the path toward practical fusion.”
From Weeks to Seconds: AI That Learns Decades of Operation
Simulating plasma — a “fourth state” of matter at hundreds of millions of degrees — has historically required weeks, even on the most advanced supercomputers. Now, the team uses surrogate AI models trained with decades of experimental data to predict key variables in the reactor in seconds:
- EFIT: estimates the plasma equilibrium.
- CAKE: delineates the plasma boundary.
- ION ORB: predicts the heat flux density of escaping ions.
Running on NVIDIA GPUs, these models offer accurate predictions at operational speed, enabling operators to maintain plasma stability and prevent damage to the facility while exploring “what-if” scenarios that would never be attempted on the physical machine.
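To make that concrete, here is a minimal inference-side sketch in PyTorch, assuming a small trained network of the kind sketched earlier. The architecture and sizes are illustrative assumptions, not the actual EFIT, CAKE, or ION ORB models.

```python
import torch
import torch.nn as nn

# Inference-side sketch: a network of the same shape as the training
# example above. In practice trained weights would be loaded; here
# they are random, purely for illustration.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 8))
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()

# One time slice of sensor readings stands in for the live feed.
signals = torch.randn(1, 128, device=device)

with torch.no_grad():
    equilibrium = model(signals)  # milliseconds per call vs. a full solve
print(equilibrium.shape)  # torch.Size([1, 8])
```

The speedup comes from the fact that a forward pass is a fixed, GPU-friendly computation, regardless of how expensive the physics it approximates would be.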
A Digital Twin Merging Physics, Sensors, and Engineering
The DIII-D twin is built on NVIDIA Omniverse and runs on NVIDIA RTX PRO Servers and NVIDIA DGX Spark. It synchronizes in real time:
- Sensor data from the reactor.
- Physics-based simulations and engineering models.
- Surrogate AI models to accelerate prediction.
The result is a unified, interactive environment where designers, physicists, and operators can test control strategies, optimize coils and profiles, or simulate entire campaigns risk-free. When a hypothesis works in the twin, it’s transferred to the real machine with greater confidence and fewer iterations.
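As a rough illustration of how such an environment stays synchronized, the sketch below implements a sense-predict-update cycle. All function names, the stability threshold, and the loop rate are assumptions; the actual DIII-D twin runs on Omniverse with far richer models.

```python
import random
import time

# Conceptual sketch of the twin's sense-predict-update cycle.
# Function names, the stability threshold, and the 10 Hz rate are
# illustrative assumptions, not the DIII-D implementation.
def read_sensors() -> dict:
    """Stand-in for the reactor's live diagnostic feed."""
    return {"boundary_gap_cm": random.uniform(2.0, 6.0)}

def surrogate_predict(state: dict) -> dict:
    """Stand-in for fast GPU inference (an EFIT-style surrogate)."""
    return {"stable": state["boundary_gap_cm"] > 3.0}

def update_twin(state: dict, prediction: dict) -> None:
    """Push the latest state into the shared interactive scene."""
    print(f"twin state: {state} -> {prediction}")

for _ in range(20):                        # bounded here; continuous in practice
    state = read_sensors()                 # 1. ingest live diagnostics
    prediction = surrogate_predict(state)  # 2. predict in milliseconds
    update_twin(state, prediction)         # 3. refresh the twin for operators
    if not prediction["stable"]:
        print("warning: predicted instability, flag for operator review")
    time.sleep(0.1)                        # ~10 Hz loop for illustration
```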
Why It Matters for Commercial Fusion
- Scientific speed: cutting the simulation cycle from weeks to seconds compresses the hypothesis → test → adjust loop.
- Safety and cost: avoiding maneuvers that could damage the reactor and prioritizing what truly warrants machine time.
- Design transfer: insights from the twin inform future reactor engineering, accelerating the leap from demonstrators to commercial plants.
This paradigm shift moves fusion from a physics-only challenge to a cyber-physical problem, where computing, data, and algorithms come together with diagnostics and plasma theory.
The Partners Making It Possible
In addition to General Atomics and NVIDIA, technical support has come from the San Diego Supercomputer Center (UC San Diego), the Argonne Leadership Computing Facility (ALCF), and the National Energy Research Scientific Computing Center (NERSC). This public-private collaboration aims to industrialize digital twin capabilities so the fusion ecosystem can access design and control tools comparable to those of other complex industries.
What’s Next
The team will continue training and refining the surrogate models with new data and operational campaigns, expanding the twin to include more subsystems of the reactor. As models achieve real-time response with safety margins and explainability, their integration into control loops and experiment planning could bring fusion closer to viability and commercialization.
Frequently Asked Questions
What is a “digital twin” in fusion?
It’s a virtual replica of the reactor that combines physical models, real data, and AI, synchronized with the facility. It allows control strategies to be tested and optimized risk-free before they run on the physical machine.
Why use AI models if physical simulations already exist?
Plasma physics is costly to simulate and can take weeks. Surrogate AI learns from historical data and approximates the results in seconds, enabling interactive exploration and near real-time control.
Does AI replace high-fidelity physics?
No. AI complements it: surrogates accelerate exploration and narrow the solution space, while physics-based models remain the benchmark for validating hypotheses and understanding causality.
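A common pattern for this division of labor, sketched below with purely hypothetical scoring functions, is two-stage screening: the surrogate cheaply ranks many candidate scenarios, and only a shortlist goes to expensive high-fidelity validation.

```python
import random

# Illustrative two-stage workflow; both functions and the criterion
# are stand-ins, not real plasma physics.
def surrogate_score(scenario: dict) -> float:
    """Cheap approximate figure of merit (seconds on a GPU)."""
    return scenario["heating_mw"] / (1.0 + abs(scenario["density"] - 0.5))

def high_fidelity_check(scenario: dict) -> bool:
    """Expensive physics-based validation (hours to weeks in reality)."""
    return surrogate_score(scenario) > 8.0  # stand-in acceptance criterion

candidates = [
    {"heating_mw": random.uniform(1, 10), "density": random.uniform(0, 1)}
    for _ in range(10_000)
]

# The surrogate narrows thousands of candidates to a handful...
shortlist = sorted(candidates, key=surrogate_score, reverse=True)[:5]
# ...and high-fidelity physics remains the final arbiter.
validated = [s for s in shortlist if high_fidelity_check(s)]
print(f"screened {len(candidates)}, validated {len(validated)}")
```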
What impact can this have on commercial fusion?
It shortens R&D timelines and costs, reduces operational risk, and supports reactor design and control with faster, quantitative evidence: an accelerator on the journey to viable, large-scale plants.
via: blogs.nvidia

