NVIDIA has made a significant push in its area of expertise: accelerated computing for artificial intelligence. The company has launched its new Jetson Thor module for robots and physical AI systems worldwide, introduced the DRIVE AGX Thor Developer Kit for automotive applications, reinforced its AI factory strategy with new networking technologies, and formed a partnership with Japan to develop the FugakuNEXT supercomputer.
This wave of announcements underlines NVIDIA’s ambition to extend its Blackwell platform from edge devices to exascale data centers, covering autonomous vehicles, humanoid robots, smart operating rooms, and national research systems.
Jetson Thor: real-time AI for future robotics
Robots are becoming smarter and more autonomous with the general availability of Jetson Thor. The new module delivers 7.5 times the AI compute of its predecessor, Jetson Orin, along with 3.1 times the CPU performance and double the memory.
Its key feature is the ability to run multimodal reasoning models spanning vision, language, and action in real time at the edge, without relying on the cloud. This opens up possibilities for humanoid robots such as Digit from Agility Robotics or Atlas from Boston Dynamics, both of which have announced Jetson Thor integrations.
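To make the edge-inference idea concrete, here is a minimal sketch of running a small vision-language model entirely on-device with PyTorch and Hugging Face Transformers. The model name, image path, and prompt are illustrative assumptions, not a reference Jetson Thor workload.

```python
# Minimal sketch of on-device multimodal inference: the camera frame never
# leaves the robot and no cloud service is involved. Model name, image path,
# and prompt are illustrative assumptions.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

device = "cuda" if torch.cuda.is_available() else "cpu"
model_id = "HuggingFaceTB/SmolVLM-Instruct"  # assumed small vision-language model

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(
    model_id,
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

image = Image.open("frame_from_robot_camera.jpg")  # assumed camera frame
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Is the path ahead clear of obstacles?"},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```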
The leap is not limited to industrial or logistics robotics. AI-assisted surgical systems, smart agricultural tractors, and visual agents monitoring factory safety could all benefit from its compute.
Peggy Johnson, CEO of Agility Robotics, explained, “With Jetson Thor, we’re giving robots the ability to perceive and reason with the immediacy of a human.”
DRIVE Thor: the foundation of next-generation autonomous vehicles
Meanwhile, NVIDIA launched the DRIVE AGX Thor Developer Kit, a platform designed to accelerate the development of autonomous vehicles and intelligent mobility. Built on the Blackwell architecture, it includes Arm Neoverse V3AE CPUs and is certified to stringent safety and cybersecurity standards such as ISO 26262 ASIL-D and ISO/SAE 21434.
Early customers include manufacturers such as Volvo, BYD, and Xiaomi, as well as autonomous truck startups like Aurora and Waabi.
With up to 2,000 TOPS of FP4 compute and 64 GB of LPDDR5X memory, DRIVE Thor promises to handle the complex reasoning models required for driverless cars, processing real-time data from cameras, radar, and lidar.
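As a rough way to read that headline number, the back-of-the-envelope sketch below converts peak TOPS into a per-frame compute budget; the frame rate and sustained-utilization figures are assumptions chosen for illustration only.

```python
# Back-of-the-envelope: what does a 2,000-TOPS FP4 budget mean per frame?
# Frame rate and sustained utilization are illustrative assumptions.
peak_ops_per_s = 2_000e12   # 2,000 TOPS = 2e15 FP4 operations per second
frame_rate_hz = 30          # assumed fused camera/radar/lidar update rate
utilization = 0.4           # assumed realistic sustained utilization

ops_per_frame = peak_ops_per_s * utilization / frame_rate_hz
print(f"~{ops_per_frame / 1e12:.0f} trillion FP4 ops available per ~33 ms frame")
# Roughly 27 trillion operations per cycle to spend on detection,
# tracking, and reasoning across all sensor streams.
```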
AI factories: the new engine of the digital economy
Perhaps NVIDIA’s most strategic announcement concerns “AI factories”: massive data centers designed not just to deliver traditional services but to “manufacture” intelligence. These facilities consist of tens of thousands of GPUs interconnected with ultra-high-speed networks, functioning as a single digital brain.
The challenge lies in interconnection, where technologies such as NVLink, InfiniBand Quantum-X, and Spectrum-X Ethernet come into play, enabling up to 130 TB/s of aggregate GPU-to-GPU bandwidth within a single rack at ultra-low latency.
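To put that figure in perspective, here is a back-of-the-envelope calculation of how quickly a trillion-parameter model's weights could, in the ideal case, be moved across a rack at that aggregate bandwidth; the model size and precision are assumptions chosen for illustration.

```python
# Back-of-the-envelope: ideal time to move the weights of a
# 1-trillion-parameter model across a rack at 130 TB/s aggregate bandwidth.
# Model size and precision are illustrative assumptions; real collective
# operations add protocol and synchronization overhead.
params = 1e12               # 1 trillion parameters
bytes_per_param = 1         # assume FP8: one byte per parameter -> ~1 TB
model_bytes = params * bytes_per_param
aggregate_bw = 130e12       # 130 TB/s within a single rack

transfer_time_ms = model_bytes / aggregate_bw * 1e3
print(f"Weights: {model_bytes / 1e12:.1f} TB, "
      f"ideal transfer time: {transfer_time_ms:.1f} ms")
# -> roughly 7.7 ms under ideal conditions
```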
The vision is clear: scale up to factories housing a million GPUs, capable of supporting applications ranging from research assistants to real-time translation services for entire populations.
FugakuNEXT: Japan and NVIDIA reshape supercomputing
In Tokyo, RIKEN and Fujitsu announced, together with NVIDIA, the start of construction for FugakuNEXT, the successor to the Fugaku supercomputer. This hybrid system will combine Fujitsu MONAKA-X CPUs with NVIDIA Blackwell GPUs via NVLink Fusion, enabling a unified environment for scientific simulation and AI workloads—HPC+AI.
Research priorities include modeling terrestrial systems for better disaster preparedness, drug discovery, advanced manufacturing, and optimized industrial design.
Satoshi Matsuoka, director of RIKEN’s supercomputing center, emphasized, “This isn’t just about more brute force, but redefining how Japan tackles its most urgent scientific challenges.”
An integrated ecosystem: from CUDA to Nemotron
All these developments are underpinned by NVIDIA’s software ecosystem. CUDA, TensorRT-LLM, Dynamo, and NIM microservices, which support models such as Llama 3.1, Gemma, and DeepSeek, allow the same programming stack to be deployed from Jetson Thor robots to data centers.
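As an illustration of how that stack is typically consumed, the sketch below calls a NIM microservice through the OpenAI-compatible API such services expose; the endpoint URL, model identifier, and API key are assumptions, and a self-hosted NIM container would use its own local base URL.

```python
# Minimal sketch of querying a NIM microservice via its OpenAI-compatible
# API. The base URL, model identifier, and API key are illustrative
# assumptions; a self-hosted NIM container exposes its own local endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed hosted endpoint
    api_key="YOUR_NVIDIA_API_KEY",                   # placeholder credential
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",              # assumed model name
    messages=[{"role": "user", "content": "Summarize what an AI factory is."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```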
NVIDIA’s own model families, such as Nemotron, complement open-source efforts to ensure transparency and flexibility. The company maintains over 1,000 open-source projects on GitHub and more than 450 models on Hugging Face, solidifying its role as a pillar of the global AI community.
Core message: AI everywhere
The deployment of Jetson Thor, DRIVE Thor, AI factories, and FugakuNEXT reflects a unified strategy: bringing artificial intelligence to every corner, from small devices to national infrastructure.
NVIDIA positions itself not just as a chipmaker but as the backbone of the new intelligence economy, with the ability to unify hardware, software, and ecosystem within a coherent vision.
The race for sovereignty in AI, smart robots, and autonomous cars has only begun, but everything suggests Jensen Huang’s company will continue setting the pace.
FAQs
What is NVIDIA Jetson Thor, and what is it used for?
Jetson Thor is a computing module conceived as the “brain” for robots and physical AI systems. It offers up to 7.5 times the AI performance of its predecessor, Jetson Orin, with 3.1 times more CPU performance and double the memory. Its main advantage is running sophisticated AI models in real time directly on the device (“edge computing”), eliminating dependence on cloud services. This makes it ideal for humanoid robots, surgical assistants, smart tractors, or visual security agents needing quick decisions in dynamic environments.
How does DRIVE AGX Thor differ from Jetson Thor?
While both are based on the latest NVIDIA Blackwell architecture, they are tailored for different purposes.
- Jetson Thor targets general robotics and physical AI, such as humanoids, drones, or industrial robots.
- DRIVE AGX Thor is focused on automotive and intelligent mobility, with certifications like ISO 26262 ASIL-D and ISO/SAE 21434.
With up to 2,000 TOPS of FP4 compute, DRIVE Thor can process sensor data in real time, making it suitable for autonomous vehicles and advanced driver-assistance systems (ADAS). In summary, Jetson Thor is geared toward robotics, while DRIVE Thor centers on automotive applications, although both rely on the same core technology.
What are AI factories, and why are they so important?
AI factories are a new class of data centers designed not merely to provide traditional services but to “produce” intelligence. They incorporate tens of thousands of interconnected GPUs operating as a collective “digital brain.”
These are essential because current models—large language models (LLMs) and multimodal reasoning systems—handle billions of parameters and require processing immense data sets in parallel. Without such infrastructure, training or deploying intelligent assistants, real-time translators, or predictive health systems on a national scale would be impossible. Technologies like NVLink, InfiniBand Quantum-X, and Spectrum-X Ethernet are central, enabling data transfer speeds of up to 130 TB/s within a single rack.
What will FugakuNEXT bring to Japan and the global scientific community?
FugakuNEXT, successor to the Fugaku supercomputer, results from collaboration between RIKEN, Fujitsu, and NVIDIA. It combines scientific HPC simulation capabilities with AI in a hybrid system, integrating Fujitsu MONAKA-X CPUs with NVIDIA Blackwell GPUs through NVLink Fusion. This setup will address critical challenges like climate modeling, materials physics, earthquake prediction, drug discovery, and sustainable manufacturing, advancing both scientific progress and technological sovereignty.
What advantages does NVIDIA Blackwell architecture provide in these projects?
Blackwell is the unifying platform powering everything from humanoid robots to exascale supercomputers, offering:
- Enhanced energy efficiency via low-precision formats like NVFP4, reducing power and memory needs without sacrificing accuracy (see the rough memory sketch below).
- Scalability from desktops to large interconnected GPU racks.
- Compatibility with CUDA and NVIDIA’s software ecosystem, easing deployment for millions of developers.
- Real-time reasoning capabilities critical for autonomous systems and health applications.
Designed as the backbone of the AI era, Blackwell facilitates innovation and scientific research at scale.
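As a rough illustration of the memory point above, the sketch below compares the weight footprint of a model at different precisions; the 70-billion-parameter size is an assumed example, and real deployments also store scaling factors and activations.

```python
# Rough illustration of why low-precision formats shrink memory footprints:
# weight storage for an assumed 70-billion-parameter model at several
# precisions. Real deployments also keep scaling factors and activations.
params = 70e9
bytes_per_weight = {"FP16": 2.0, "FP8": 1.0, "FP4 (e.g. NVFP4)": 0.5}

for fmt, size in bytes_per_weight.items():
    gigabytes = params * size / 1e9
    print(f"{fmt:>16}: ~{gigabytes:,.0f} GB of weights")
# FP16 ~140 GB vs FP4 ~35 GB: a 4x reduction in weight memory alone.
```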

