The Artificial Intelligence market has been running for months on spectacular figures, but also on nuances that change the meaning of headlines. The latest episode involves NVIDIA and OpenAI, whose alliance announced in 2025 (a commitment once described as up to $100 billion, tied to "multi-gigawatt" computational capacity) is now under scrutiny amid a lack of clarity and emerging skepticism on the hardware provider's side.
The key isn't just the size of the check. It's the type of infrastructure being promised: power capacity at grid scale, not just "a big cluster." And when the conversation is measured in gigawatts, the business stops looking like software and starts resembling energy, construction, and heavy industry.
A monumental announcement… with fine print
The starting point was the joint statement published in September 2025: both companies announced a “strategic partnership” to deploy at least 10 gigawatts of AI data centers with NVIDIA systems, “representing millions of GPUs,” aimed at the next-generation infrastructure of OpenAI. The same announcement detailed that the first gigawatt would be deployed in the second half of 2026 on the NVIDIA Vera Rubin platform, with NVIDIA’s planned investment being phased “as each gigawatt is deployed,” up to a maximum of $100 billion.
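The announced structure (investment phased "as each gigawatt is deployed," capped at $100 billion across at least 10 gigawatts) can be made concrete with a back-of-envelope sketch. The even $10B-per-gigawatt split below is an illustrative assumption, not a disclosed deal term:

```python
# Back-of-envelope sketch of a phased, per-gigawatt investment schedule.
# The even split per gigawatt is an assumption for illustration only;
# the actual tranche sizes have not been disclosed.
TOTAL_COMMITMENT_B = 100  # announced maximum, in billions of USD
TOTAL_CAPACITY_GW = 10    # "at least 10 gigawatts" from the joint statement

tranche_b = TOTAL_COMMITMENT_B / TOTAL_CAPACITY_GW  # hypothetical even phasing

for gw in range(1, TOTAL_CAPACITY_GW + 1):
    cumulative_b = tranche_b * gw
    print(f"After {gw} GW deployed: up to ${cumulative_b:.0f}B invested (cumulative)")
```

The point of the phasing is visible even in this toy version: capital is released against physical milestones, so either side can slow the schedule without tearing up the headline number.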
On paper, this staged architecture seemed like a risk control mechanism: investments linked to physical milestones. In practice, the big question is whether the chain of agreements — funding, system leasing, deployment of capacity, supply contracts — can be finalized smoothly in a market that’s changing weekly.
“On hold,” according to WSJ: doubts at NVIDIA and competitive pressure
The shift came with a report from The Wall Street Journal: the mega-deal is “on ice” and progressing much more slowly than expected. The report states that doubts have arisen within NVIDIA about the approach and that, despite expectations of formalizing the agreement shortly after the announcement, progress has been limited. It also notes that the memorandum was non-binding, a detail that, at this scale, makes the difference between “intention” and “contract.”
Within this context, the paper adds a sensitive point: alleged concern from NVIDIA's CEO about OpenAI's "business discipline" and the pressure from competitors like Anthropic and Google. The industry reading of this argument is less emotional than it sounds: when infrastructure costs tens of billions, the provider needs confidence that the customer has a sustained plan to monetize the computing it will use.
Jensen Huang dismisses it: “that’s nonsense”
The public response was swift. Reuters reported that Jensen Huang denied being dissatisfied with OpenAI and called the idea of trust issues "nonsense." According to Reuters, he reaffirmed support for OpenAI and described the future investment as "huge" and potentially the largest in NVIDIA's history, with a clear limit: not exceeding $100 billion.
This back-and-forth isn't unusual in Silicon Valley: a deal being contractually "on pause" is one thing; a breakup in the business relationship is another. NVIDIA can continue to be a priority system provider while renegotiating the financial scope or timeline. In a market where major labs compete for capacity and talent, no one wants to appear to be "turning off the tap" without a clear alternative.
The real issue: signing gigawatts in an industry still learning how to monetize
The debate has both a technical and a business layer. The technical layer is well known: training and serving cutting-edge models requires amounts of power approaching city-scale levels. The business layer is the uncomfortable one: how long can an expansion plan be sustained if the economic return depends on millions of users and companies paying for inference, APIs, licenses, and services?
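The "city-scale" comparison can be sanity-checked with a rough conversion. The ~1.2 kW average continuous household demand below is an assumed, US-style figure (it varies considerably by country):

```python
# Rough scale check: how many average households does 1 GW of continuous
# electrical demand correspond to? The 1.2 kW household figure is an
# assumption (roughly the US residential average); adjust per region.
GIGAWATT_W = 1e9
AVG_HOUSEHOLD_W = 1200  # assumed average continuous household demand, in watts

households_per_gw = GIGAWATT_W / AVG_HOUSEHOLD_W
print(f"1 GW is roughly {households_per_gw:,.0f} average households")

# The announced "at least 10 GW" plan, under the same assumption:
print(f"10 GW is roughly {10 * households_per_gw:,.0f} households")
```

Under this assumption, one gigawatt of sustained draw is on the order of 800,000 households, which is why a 10 GW plan reads less like a data-center buildout and more like utility-scale energy planning.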
This is what the "discipline" mentioned by the WSJ points to: not a moral judgment, but a requirement of financial engineering. The higher the capex, the more predictability is needed in demand, pricing, and margins. If competitors apply pressure with alternative products and aggressive pricing models, infrastructure investment stops being just a tech race and turns into a cash flow contest.
Advertising in ChatGPT: a sign that monetization matters
Meanwhile, OpenAI has introduced a move that sheds light on the pressure: its plan to test advertising in the United States for adult users on the free and ChatGPT Go plans, with clearly labeled ads at the end of responses when relevant to the conversation. The company emphasizes that, for now, "there are no ads" in ChatGPT, but says it will begin internal and gradual testing.
From a technology-press perspective, this isn't a cosmetic change but a clue: even the sector's most mass-market product is seeking additional ways to cover its computational costs. And those costs are precisely at the core of any "multi-gigawatt" deal with a provider like NVIDIA.
An alliance that may evolve, not necessarily break
Based on what’s been published, the most likely scenario isn’t a breakup but a recalibration. The 2025 announcement’s design was modular: phased investment per gigawatt and an initial milestone in 2026 on Vera Rubin. This allows adjusting pace and scope without undermining the “strategic alliance” narrative.
What does change is the market’s tone. In 2023 and 2024, the debate was “how many GPUs are still available.” By 2026, it’s about “who can afford and profit from an AI factory measured in gigawatts.” When that question enters the room, negotiation shifts from promises to contracts, milestones, guarantees, and above all, business credibility.
Frequently Asked Questions
What does it mean that the NVIDIA–OpenAI deal is “non-binding,” and why is that important?
It indicates that it may reflect intent and a collaboration framework but doesn’t obligate investment or deployment on the announced terms until a final contract is in place.
Why is an AI deal measured in gigawatts rather than number of GPUs?
Because the limit isn’t just the chip anymore; it’s the data center’s capacity (energy, cooling, network, and space) needed for large-scale training and inference.
What role does the NVIDIA Vera Rubin platform play in this story?
According to official statements, it’s the platform planned to deploy the first gigawatt of systems in the second half of 2026, serving as the first major industrial milestone of the agreement.
What does advertising in ChatGPT say about AI’s economy?
It suggests monetization is becoming central: maintaining massive AI services involves enormous computational costs, and exploring additional revenue streams helps sustain free or low-cost access.