Jensen Huang has been insisting for months that the next big wave of Artificial Intelligence won’t come solely from question-answering models, but from systems capable of acting on their own—using tools, reading documentation, writing code, and executing complete tasks. This week, he took it a step further with a striking yet debatable comparison: according to NVIDIA’s CEO, OpenClaw achieved in just three weeks an adoption level that took Linux about 30 years.
The statement, made during the Morgan Stanley Technology, Media & Telecom Conference 2026, is meant to grab attention. And it succeeds, not because Linux needs any introduction, but because equating the adoption speed of an AI agent project with the historical trajectory of the world’s most influential open-source software is, at best, provocative. Beyond the headline, though, the interesting question isn’t whether Huang is exaggerating; it’s why he chose that particular analogy and what message he’s sending to the market.
OpenClaw is real; it is not an NVIDIA rhetorical invention. Its official site explains that the project, now renamed OpenClaw, surpassed 100,000 stars on GitHub by the end of January and attracted 2 million visitors in a single week. Reuters reported weeks ago that its founder, Peter Steinberger, joined OpenAI to advance the next generation of personal agents, while the project continues as an open-source initiative supported by a foundation. So there is a phenomenon behind the name, even if the literal equivalence with Linux is more of a metaphor than a technical fact.
The comparison to Linux says more about internet speed than about Linux itself
It’s important to differentiate contexts. Huang isn’t claiming that OpenClaw is more important than Linux in historical, technical, or industrial terms. What he’s suggesting is that the cycle of software adoption has changed radically. And here, he hits a real note. Linux emerged in a world without GitHub, without viral distribution through social media, without global communities operating in real time, and, of course, without the massive traction that today propels every tool connected to the AI boom.
Therefore, the comparison should be viewed more as a metaphor for the current rate of software proliferation than as a technical equivalence. Over the years, Linux has become the backbone of servers, cloud computing, Android, supercomputing, and a large part of the internet. OpenClaw, on the other hand, operates in a different realm: autonomous AI agents that promise to do things for the user, not just respond. Different leagues, different eras, and adoption metrics that are very hard to compare directly.
That said, Huang didn’t pick OpenClaw by chance. In the event transcript, he presents it as one of the biggest phenomena of the moment and uses it to illustrate that software is entering a new phase. The old prompts of “what,” “who,” or “when” are giving way to commands like “create,” “do,” “build,” or “write.” In other words, moving from query to action. When this shift happens, infrastructure consumption changes completely.
NVIDIA’s true message isn’t about OpenClaw but about computing
This is the core of Huang’s message. He doesn’t dwell so much on the symbolic value of OpenClaw as on the implications that software agents will have for infrastructure. In the same talk, he explained that the jump from basic generative AI to models with more reasoning significantly increased token consumption. By his own logic, the shift toward agents that research, read manuals, use tools, and work in the background multiplies that demand even further.
The most important statement in his speech probably isn’t about Linux. It’s what’s coming next. Huang asserts that these agents consume orders of magnitude more tokens and operate continuously, not just as on-demand queries. In other words, it’s no longer about asking a question and receiving a response but about software working constantly for the user or the company. This compels a rethinking of what a data center truly is.
NVIDIA has long promoted the idea of the “AI factory,” a token-producing powerhouse. It might sound like marketing jargon, but it encapsulates a concrete thesis: future data centers won’t just store data or handle conventional cloud loads but will be optimized to produce tokens profitably and sustainably. If the industry fills with agents operating in the background on a permanent basis, the demand for computing power, memory, networking, cooling, and energy will vastly increase from current levels.
It’s no coincidence that Huang links this discourse to the need for more compute. Nor that he emphasizes each company will end up consuming many more tokens than today. From NVIDIA’s perspective, agents justify the next major infrastructure investment leap. If every piece of software becomes “agentic,” as he said during the event, then demand for accelerators, inference systems, and data centers prepared for continuous workloads will not only maintain current growth but potentially accelerate.
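Huang’s “orders of magnitude” claim can be made concrete with a toy calculation. The sketch below uses purely illustrative numbers (the step counts and token sizes are assumptions for the sake of the example, not figures from the talk or from OpenClaw) to show why an agent that chains many steps over a growing context consumes vastly more tokens than a single chatbot exchange:

```python
# Back-of-envelope sketch: why agentic workloads multiply token demand.
# All figures are invented, illustrative assumptions, not real measurements.

def chatbot_tokens(prompt=200, answer=500):
    """One on-demand Q&A exchange: a prompt in, an answer out."""
    return prompt + answer

def agent_tokens(steps=20, context=4000, reasoning=1500, tool_output=2000):
    """One agent task: each step re-reads its working context,
    produces reasoning tokens, and ingests tool results."""
    return steps * (context + reasoning + tool_output)

q = chatbot_tokens()
a = agent_tokens()
print(f"Chatbot exchange: {q:,} tokens")
print(f"Agent task:       {a:,} tokens (~{a // q}x)")
```

Even with these modest assumptions, a single agent task lands two orders of magnitude above a chatbot query, and an agent running continuously in the background repeats that cost all day. That is the arithmetic behind NVIDIA’s “AI factory” pitch.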
The industry has a problem if software advances faster than infrastructure
This brings us to a more uncomfortable question than the simple Linux comparison: is our infrastructure ready to support a massive wave of AI agents operating continuously? Today, it’s difficult to think so. The sector already faces tensions in energy, memory, storage, civil works, deployment timelines, and electrical capacity. Add a software layer that works far more intensively per task, and the pressure could escalate quickly.
Indeed, the rise of OpenClaw illustrates this dynamic, not just for what it is today but for what it represents: a type of software that installs itself, adapts, uses tools, and makes AI more operational. Its popularity has already brought security issues, vulnerabilities, and malicious versions, a sign that the ecosystem is moving as fast as it always does when something goes viral.
In this context, Huang’s statement may seem exaggerated, but it’s not just a haphazard comment. It functions as a warning wrapped in a slogan. What NVIDIA is trying to convey to investors, clients, and developers is that AI agents won’t be just an incremental improvement over current chatbots. If they really take hold, they will change the software people use and also the infrastructure needed by the world to support it.
The real question isn’t whether OpenClaw can be compared to Linux. Instead, it’s whether the energy industry, data centers, and hardware supply chains can keep pace with a new generation of software that no longer waits for questions but begins to work autonomously. In that scenario, Huang’s exaggeration might not aim at describing the current reality precisely but at preparing the market for the size of the bill that might come next.
FAQs
What is OpenClaw and why does Jensen Huang compare it to Linux?
OpenClaw is an open-source project focused on AI personal agents capable of executing tasks, using tools, and operating across various channels and devices. Huang compared it to Linux not to equate their historical importance but to highlight the rapid adoption potential that an AI project can have today.
Did NVIDIA really say OpenClaw surpassed Linux in three weeks?
Yes, Jensen Huang stated this during the Morgan Stanley Technology, Media & Telecom Conference 2026. However, this comparison should be seen as a very bold statement about adoption speed, not as an exact technical measure of the impact of either project.
Why do AI agents consume more resources than traditional chatbots?
Because they do more than respond to queries. They research, reason, read documentation, use tools, perform chained steps, and work in the background for longer periods. All of this increases token usage and, consequently, data center computational demands.
What impact could AI agents have on data centers and NVIDIA?
If this type of software becomes widespread, the demand for computational power, memory, energy, and storage could grow even further. For NVIDIA, this reinforces the thesis that the next major market expansion will depend not just on training models but on continuously executing millions of agent-driven tasks.

