Robotics is at a turning point: it’s no longer just about machines programmed to repeat a task, but about systems capable of seeing, reasoning, and acting in real-world environments with increasingly human-like flexibility. During CES 2026, NVIDIA introduced a package of open models, frameworks, and new infrastructure components aimed at what the company calls “Physical Artificial Intelligence,” alongside robots and autonomous machines from partners such as Boston Dynamics, Caterpillar, Franka Robotics, Humanoid, LG Electronics, and NEURA Robotics.
The announcement carries practical significance: building “generalist-specialist” robots—capable of learning multiple tasks and adapting—requires a volume of data, simulation, and training that many teams struggle to manage. NVIDIA’s strategy aims to lower this barrier by releasing base models and tools that let developers skip some pretraining steps and focus on fine-tuning, evaluation, and hardware integration.
Open models for robots to “understand” the world
At the core of the launch are the new models NVIDIA Cosmos and Isaac GR00T, designed to accelerate robotic learning and reasoning. NVIDIA announced Cosmos Transfer 2.5 and Cosmos Predict 2.5 as “fully customizable” world models for generating physically consistent synthetic data and evaluating policies in simulation. Additionally, Cosmos Reason 2 is a vision-language reasoning model that enables machines to “see, understand, and act” in the physical world.
The most notable piece for humanoids is NVIDIA Isaac GR00T N1.6, described as a VLA (vision-language-action) model focused on full-body control and supported by Cosmos Reason to enhance contextual reasoning. According to NVIDIA, several manufacturers are already using these workflows to simulate, train, and validate new behaviors before deploying them in physical robots.
In practical terms, this approach aims to reduce reliance on single-function robots, which are expensive and difficult to reprogram, and to push toward more versatile machines capable of picking up new skills with less friction.
From lab to industry: evaluation, orchestration, and an open ecosystem
The company has also targeted a familiar challenge in robotics: the often fragmented pipeline involving simulation, data generation, training, testing, and deployment. To address this complexity, NVIDIA announced open-source tools on GitHub.
On one hand, Isaac Lab-Arena aims to standardize the evaluation and benchmarking of robotic policies in simulation, connecting with existing benchmarks and enabling large-scale testing before deployment on real hardware.
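To make the idea of large-scale policy evaluation concrete, here is a minimal sketch of the rollout-and-score loop that a benchmarking harness like Isaac Lab-Arena is meant to automate. It uses a generic gymnasium-style environment and a stand-in random policy; none of this is Isaac Lab-Arena's actual API.

```python
# Hypothetical sketch of a policy-evaluation loop; the environment and the
# policy interface are illustrative, not Isaac Lab-Arena's actual API.
import gymnasium as gym
import numpy as np

def evaluate_policy(policy, env_id="CartPole-v1", episodes=50, seed=0):
    """Roll a policy out for several episodes and report mean/std return."""
    env = gym.make(env_id)
    returns = []
    for ep in range(episodes):
        obs, _ = env.reset(seed=seed + ep)
        done, total = False, 0.0
        while not done:
            action = policy(obs)  # perception -> action
            obs, reward, terminated, truncated, _ = env.step(action)
            total += reward
            done = terminated or truncated
        returns.append(total)
    env.close()
    return float(np.mean(returns)), float(np.std(returns))

if __name__ == "__main__":
    # A trivial random policy stands in for a trained controller.
    rng = np.random.default_rng(0)
    mean_ret, std_ret = evaluate_policy(lambda obs: int(rng.integers(0, 2)))
    print(f"mean return {mean_ret:.1f} ± {std_ret:.1f}")
```

In a real setup, the environment would be a physics-accurate simulated task and the policy a trained controller, with many such evaluations run in parallel before anything touches hardware.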
On the other hand, OSMO is presented as a “cloud-native” orchestration framework intended to unify workflows such as synthetic data generation, training, and software-in-the-loop testing across various computing environments—from workstations to hybrid cloud infrastructure. In essence: less custom glue, more of a “command center” to run complete development cycles.
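As a rough illustration of what "less custom glue" means, the toy pipeline below chains synthetic data generation, training, and software-in-the-loop testing into a single driver. The stage names and structure are purely hypothetical and do not reflect OSMO's actual workflow format; an orchestrator's value is running these stages reliably across clusters and clouds, not on a laptop.

```python
# Purely hypothetical pipeline sketch: stage names and structure are
# illustrative only and do not reflect OSMO's real workflow format.

def generate_synthetic_data() -> str:
    print("generating synthetic episodes...")
    return "dataset-v0"

def train_policy(dataset: str) -> str:
    print(f"training on {dataset}...")
    return "policy-ckpt-001"

def run_sil_tests(checkpoint: str) -> bool:
    print(f"software-in-the-loop tests for {checkpoint}...")
    return True

def run_pipeline() -> None:
    """Run the stages in order, passing artifacts along, stopping on failure."""
    dataset = generate_synthetic_data()
    checkpoint = train_policy(dataset)
    if not run_sil_tests(checkpoint):
        raise RuntimeError("software-in-the-loop tests failed")
    print("pipeline complete: candidate ready for hardware validation")

if __name__ == "__main__":
    run_pipeline()
```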
Meanwhile, NVIDIA is strengthening the community side through an integration with Hugging Face that brings Isaac models and libraries into the LeRobot ecosystem, one of the fastest-growing open-source robotics frameworks. The goal is for developers to fine-tune and evaluate policies in fewer steps within a more integrated workflow.
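For a sense of what that loop looks like from a developer's seat, the sketch below pulls model weights from the Hugging Face Hub with huggingface_hub's snapshot_download (a real, stable call) and leaves a placeholder where the framework-specific loading code would go; the repo ID and the load_policy helper are hypothetical.

```python
# Fetching artifacts from the Hugging Face Hub is the common first step.
# snapshot_download is a real huggingface_hub call, but the repo ID below is
# a placeholder and load_policy() is a hypothetical stand-in for the
# LeRobot/Isaac-specific loading code.
from huggingface_hub import snapshot_download

def fetch_checkpoint(repo_id: str = "your-org/your-groot-finetune") -> str:
    """Download a model snapshot locally and return its path."""
    return snapshot_download(repo_id=repo_id)

def load_policy(checkpoint_dir: str):
    """Hypothetical: wire the downloaded weights into your policy class."""
    raise NotImplementedError("replace with your framework's loading code")

if __name__ == "__main__":
    ckpt = fetch_checkpoint()
    print(f"weights available at {ckpt}")
```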
Hardware for the “edge”: Jetson T4000 and the leap to Blackwell in robotics
If software and models provide the brain, the body needs muscle—and energy efficiency. NVIDIA announced that the Jetson T4000 module, based on the Blackwell architecture, is now available, positioning it as an upgrade for those currently using Jetson Orin in robotics and autonomy. The announcement highlights up to a 4× increase in energy efficiency and AI computing power for this segment.
The accompanying technical documentation describes the T4000 as delivering up to 1,200 FP4 TFLOPS, with 64 GB of memory and a configurable power envelope of 40 to 70 W, tailored to scenarios where power and cooling are constraints (industrial robotics, smart infrastructure, automation).
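A quick back-of-envelope calculation from those figures puts the efficiency claim in perspective; these are theoretical peaks derived from the numbers quoted above, not sustained throughput on real workloads.

```python
# Back-of-envelope efficiency from the figures quoted above; actual
# throughput depends on the workload and the configured power mode.
PEAK_FP4_TFLOPS = 1200          # "up to", per the documentation cited above
POWER_ENVELOPE_W = (40, 70)     # configurable range

for watts in POWER_ENVELOPE_W:
    print(f"{watts} W mode: ~{PEAK_FP4_TFLOPS / watts:.0f} peak FP4 TFLOPS per watt")
# 40 W -> ~30 TFLOPS/W, 70 W -> ~17 TFLOPS/W (theoretical peaks, not sustained)
```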
The roadmap also includes NVIDIA IGX Thor, expected “later this month,” extending high-performance computing to industrial edge applications, with enterprise software support and a focus on functional safety.
Robots “for each industry”: from factories and mines to homes
The list of partners illustrates where robotics is headed in 2026: from humanoids and mobile manipulators to autonomous machines for heavy sectors. NVIDIA includes companies like Caterpillar in the mix to bring autonomy and advanced AI to construction and mining equipment, while other ecosystem players showcase robots aimed at logistics, industry, or household tasks.
Additionally, NVIDIA highlights adoption examples in sensitive areas like healthcare, where robots not only automate tasks but also assist with precision and context-aware actions in real time. The overall message is clear: the leap won't depend solely on a "star robot," but on the combination of models, simulation, evaluation, and efficient hardware that enables rapid iteration and confident deployment.
Frequently Asked Questions
What does “Physical Artificial Intelligence” mean, and how does it differ from a chatbot?
It refers to models and systems that, besides understanding language, can interpret their environment (vision), reason about what’s happening, and execute actions in the physical world via robots and sensors—often supported by simulation and synthetic data.
What advantage does a VLA model like Isaac GR00T have over a traditional LLM?
A VLA connects perception (vision), instructions (language), and execution (action), translating what the robot “sees” and “understands” into movements and physical control, which is especially important in humanoid robots.
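As a toy illustration of that pipeline, the sketch below shows a VLA-style inference step: a camera frame plus a text instruction go in, and a short sequence of joint commands comes out. The class, names, and shapes are illustrative only and are not GR00T's interface.

```python
# Toy illustration of the VLA idea: image + instruction in, joint commands out.
# The class, shapes and names are illustrative; this is not GR00T's interface.
import numpy as np

class ToyVLAPolicy:
    def __init__(self, num_joints: int = 7, horizon: int = 8):
        self.num_joints = num_joints
        self.horizon = horizon

    def act(self, image: np.ndarray, instruction: str) -> np.ndarray:
        """Return a short sequence of joint targets for the given instruction."""
        # A real VLA would encode the image and text and decode actions;
        # here we simply return zeros with the right shape.
        return np.zeros((self.horizon, self.num_joints), dtype=np.float32)

policy = ToyVLAPolicy()
frame = np.zeros((224, 224, 3), dtype=np.uint8)   # stand-in camera frame
actions = policy.act(frame, "pick up the red cup")
print(actions.shape)  # (8, 7): 8 timesteps of 7 joint targets
```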
In which scenarios is the Jetson T4000 suitable for robotics and edge applications?
It’s ideal for autonomous robots and systems requiring local model inference with constraints on energy and thermal dissipation—such as industrial automation, mobile robots, and real-time vision—where deploying in the cloud isn’t feasible due to latency, connectivity, or privacy considerations.
How does integration with Hugging Face LeRobot benefit small teams and startups?
It simplifies reuse of tools, environments, and evaluation workflows already integrated into an open-source ecosystem, reducing setup time and making robotic policy experimentation and benchmarking more accessible.
via: nvidianews.nvidia

