NVIDIA has become more than just the dominant provider of chips for artificial intelligence. Jensen Huang’s company is using its massive cash flow generation and market capitalization to finance much of the ecosystem it needs to keep growing: model developers, neoclouds, data center operators, fiber-optic manufacturers, photonics companies, networking firms, and silicon partners.
This move is not minor. According to CNBC, NVIDIA has already committed over $40 billion in investments for 2026, a figure that confirms its transformation into a sort of industrial bank for AI. The logic is clear: if the bottleneck in artificial intelligence is in computing power, energy, optical connectivity, networking, and data centers, NVIDIA cannot just wait for others to build that infrastructure. It has incentives to accelerate it.
The strategy can be seen as a brilliant move to control the ecosystem or as a warning sign of financial circularity. In many cases, NVIDIA invests in companies that may end up acquiring or deploying its own technology. This relationship does not invalidate the genuine growth of AI but does require a closer look at how much of the demand is organic and how much is driven by the profit motives of the provider most benefiting from that demand.
From GPU supplier to supply chain architect
NVIDIA closed its fiscal year 2026 with record revenues of $215.9 billion, up 65% from the previous year, and a data center division that reached $62.3 billion in the fourth quarter — a 75% increase year over year. This size grants it a level of freedom few semiconductor manufacturers have had in recent history.
The $5 billion investment in Intel, announced in September 2025, was one of the clearest signals of this shift. It was not just a financial move. The agreement included Intel developing custom x86 CPUs for NVIDIA’s AI infrastructure platforms and SoCs for PCs with RTX chiplets. According to CNBC, this stake might have increased in value to over $25 billion in just a few months, driven by the rise in Intel’s stock price.
Then came a much wider wave. In February 2026, OpenAI announced a $110 billion funding round with a pre-money valuation of $730 billion, including $30 billion from NVIDIA, $30 billion from SoftBank, and $50 billion from Amazon. Meanwhile, NVIDIA continued signing agreements with companies that expand computational capacity, connectivity, and manufacturing infrastructure.
The key point is that the company is not investing randomly. It is putting money into areas that could limit AI growth: data center access, energy, fiber optics, photonics, interconnection, custom chips, and specialized cloud providers. This strategy aims to expand its future market and make it more difficult for clients and competitors to escape its architecture.
| Company | Committed Investment or Investment Rights | Role in NVIDIA’s Strategy |
|---|---|---|
| OpenAI | $30 billion | Strategic model customer and mass computing consumer |
| Intel | $5 billion | Custom x86 CPUs, RTX PCs, infrastructure collaboration |
| IREN | Up to $2.1 billion | Data centers and deployment of up to 5 GW of DSX infrastructure |
| Corning | Up to $3.2 billion | Fiber optics and connectivity for AI data centers |
| CoreWeave | $2 billion | Specialized neocloud for GPU capacity and data centers |
| Marvell, Lumentum, Coherent | Billions in deals, according to market reports | Photonics, interconnection, and critical components for AI farms |
Neoclouds, fiber, and photonics: the AI bottlenecks
The agreements with IREN and CoreWeave make the new priority clear. GPUs are no longer the only bottleneck. Land, electrical power, substations, cooling, fiber, data center operations, and clients capable of contracting large-scale capacity are also needed.
On May 7, IREN and NVIDIA announced a partnership to support the deployment of up to 5 GW of AI infrastructure aligned with NVIDIA DSX. As part of the deal, IREN issued NVIDIA a five-year right to purchase up to 30 million ordinary shares at $70 each, representing an investment right of up to $2.1 billion. The deployment will initially focus on the Sweetwater campus in Texas, with 2 GW planned.
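As a quick sanity check, the value of the share-purchase right follows directly from the terms stated above. This is a back-of-the-envelope sketch of the arithmetic, not part of either company's disclosures:

```python
# Back-of-the-envelope check of NVIDIA's IREN share-purchase right,
# using the figures reported above: up to 30 million ordinary shares at $70 each.
shares = 30_000_000   # maximum shares covered by the five-year right
strike_price = 70     # purchase price per share, in USD

max_investment = shares * strike_price
print(f"Maximum investment right: ${max_investment / 1e9:.1f} billion")
# Prints: Maximum investment right: $2.1 billion
```

The product matches the "up to $2.1 billion" figure reported for the deal.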
In January, CoreWeave received a $2 billion investment from NVIDIA, which became CoreWeave's second-largest shareholder. Having transitioned from crypto mining to AI infrastructure, CoreWeave aims to surpass 5 GW of data center capacity by 2030. It exemplifies the neocloud model: companies specializing in providing GPU capacity to AI laboratories, big tech, and firms reluctant to build all their infrastructure themselves.
The partnership with Corning targets another critical point: optical connectivity. Building AI factories at rack, room, or campus scale requires moving enormous amounts of data with lower energy consumption and higher bandwidth. Corning plans to increase its manufacturing capacity in the U.S. tenfold and boost fiber production by more than 50%, with three new facilities in North Carolina and Texas, according to Data Center Dynamics.
This shift toward optics is not incidental. As clusters grow, copper falls short on reach, density, and power consumption. AI is pushing the industry toward more fiber, more photonics, and greater integration of computing and networking. NVIDIA wants to ensure that its supply chain advances at the pace its platforms require.
The big question: strategic ecosystem or financed demand?
The strategy makes industrial sense. If NVIDIA needs the world to build more AI capacity, investing in those who can build it seems a logical way to accelerate the market. It also helps reinforce its position against a real threat: large hyperscalers developing their own chips, from Google’s TPU to AWS’s Trainium or custom ASICs with partners like Broadcom or Marvell.
The concern is circularity: NVIDIA invests in companies that buy, deploy, or depend on its technology. In some cases, these companies might use the capacity built to sell cloud services to third parties. Others could end up generating more demand for NVIDIA itself. The line between strategic investment and indirect sales financing becomes less clear.
This resembles the vendor financing seen during the dot-com bubble. In the late nineties, some companies financed their own customers to boost sales, inflating revenues that later proved less solid. Today's situation isn't identical: AI demand is real, with increasing spending from actual customers and tangible energy and capacity constraints. Still, over-extrapolating the trend carries risks.
For investors, the question is not only how much NVIDIA sells but who is financing the infrastructure enabling those sales. If neoclouds, data centers, and AI labs secure clients, contracts, and sustainable margins, investments will support a long-term ecosystem. If part of that demand relies excessively on cheap capital, high valuations, and cross-financing, the market may re-evaluate its expectations.
A hard-to-replicate competitive advantage
Beyond financial debates, NVIDIA is building a competitive moat that extends well beyond CUDA and GPUs. The company seeks to control the entire AI stack: accelerators, networks, CPUs, DPUs, switches, software, rack design, photonics, data centers, and cloud partners.
This comprehensive approach offers a clear advantage. When a client wants to deploy AI at scale, they don’t just buy chips. They need a complete infrastructure: compute, networking, storage, cooling, power, orchestration software, support, inference, training, and operations. NVIDIA aims to become the de facto standard for this entire pipeline.
Investing across multiple layers also reduces dependence. If fiber is missing, NVIDIA can accelerate Corning; if data centers are lacking, it can support IREN or CoreWeave. If x86 alternatives or PC integration are needed, it invests in Intel. If clients develop their own chips, it tightens ties with interconnection and photonics players. This is an ecosystem strategy but also a defensive one.
The result is an NVIDIA that functions less as a traditional manufacturer and more as an industrial infrastructure company. It sells essential components, designs complete systems, and funds actors that can expand its market. This position explains its valuation but also increases scrutiny. The more central NVIDIA becomes, the more vital it is to distinguish between healthy growth, concentration of power, and bubble risks.
AI demands a staggering amount of physical capital: chip factories, data centers, power lines, fiber, optical components, cooling systems, and operational software. NVIDIA has chosen not to wait for the market to solve these issues. Instead, it is buying influence along every segment of the supply chain. This could be one of the most effective industrial strategies of the decade—or a potential source of tensions if the cycle cools. For now, the message is clear: NVIDIA no longer just supplies AI; it’s financing the world that makes it possible.
Frequently Asked Questions
How much has NVIDIA committed to AI investments in 2026?
According to CNBC, NVIDIA has already surpassed $40 billion in investment commitments tied to the AI ecosystem during 2026.
Why does NVIDIA invest in companies that buy its chips?
Because these companies help expand the compute capacity, data centers, networks, and cloud services that AI demand requires. They also reinforce NVIDIA's ecosystem.
What are the risks of this strategy?
The main risk is circularity: part of the demand may appear financed by the provider itself. If end customers do not sustain growth, the market might question the durability of future revenues.
Is this comparable to the dot-com bubble?
There are similarities in concerns over cross-financing, but also important differences: AI demand is real, with large clients increasing spending and tangible physical constraints. The comparison serves as a warning, not as an exact equivalence.
via: CNBC

