Oracle borrows $18 billion to accelerate AI data center rollout

Oracle has made a major financial pivot to support its AI cloud ambitions. The company founded by Larry Ellison has executed a bond issuance totaling $18 billion, according to a filing with U.S. regulators, with the explicit goal of financing capital investments (capex) related to a massive rollout of data centers and provision of computational capacity for AI clients. The issuance includes staggered maturities: the first tranche matures in 2030, while others extend out to 2065, as detailed in official documents and industry reports.

The move coincides with Oracle’s emergence as a central player in the recent “AI boom”, especially thanks to its partnership with OpenAI within the Stargate project, an infrastructure platform promising gigawatts of compute capacity to support cutting-edge generative models. Recently, OpenAI, Oracle, and SoftBank announced five new data center locations in the U.S., including expansion of the Abilene (Texas) campus. With these sites—Shackelford County (Texas), Doña Ana County (New Mexico), Milam County (Texas), Lordstown (Ohio), and a yet-unrevealed Midwestern location—Stargate nears 7 GW of planned capacity and over $400 billion in committed investments across the coming years, according to official statements and industry reports.

A multi-decade bet

The debt issuance reveals two key ideas. First, Oracle aims to compete as the “de facto” provider of compute capacity for major AI players, requiring a financial cushion with decades of runway, hence maturities stretching to 2065. Second, the window of opportunity is now: the race to build data centers is capital intensive, reliant on hardware supply chains (GPUs, HBM memory, networking, storage) and conditioned by power availability and local permitting in regions with robust electrical grids.

Oracle has been dropping clues for some time. During its latest analyst call, Safra Catz—who recently stepped down as CEO to become Executive Vice Chair—highlighted that Oracle secured “significant” contracts with leading AI firms, and that three of these agreements were worth hundreds of millions in the first quarter of 2025. This tone translated into impressive figures: Oracle announced $455 billion in remaining performance obligations (RPO)—a metric of future revenue commitments—shocking Wall Street and temporarily elevating Larry Ellison into the top tier of global wealth, driven by the skyrocketing value of his holdings.

Stargate’s ecosystem: gigawatts, geographies, and partners

Stargate has become the main consolidator of AI data centers in the U.S. The roadmap presented by OpenAI and allies aims for 10 GW of capacity—a figure akin to a regional power grid—and total investments projected to reach $500 billion upon full deployment, according to various reports and announcements. The recent expansion of five sites pushes short-term capacity to over 6.5–7 GW, with Abilene (Texas) as the flagship and a build pace aimed at meeting training and inference demands.

Oracle’s role is twofold. It provides OCI capacity—both on its own infrastructure and in third-party data centers (Microsoft, Google)—a “cloud everywhere” strategy letting it place hardware close to demand instead of forcing customers to migrate workloads to distant regions. Additionally, Oracle signs long-term contracts with AI operators (including OpenAI), creating a joint revenue outlook aligned with economies of scale in IT purchasing.

The $300 billion contract and sustainability debates

One of the most sensational headlines has been the reported $300 billion five-year cloud services agreement between OpenAI and Oracle, leaked by financial outlets. While this figure is subject to skepticism—due to its unprecedented scale and OpenAI’s historic funding needs—it signals the *magnitude* of the AI infrastructure race. The ecosystem also sees additional deals (e.g., with CoreWeave, Nvidia) that strengthen chip supply chains, forming a complex financial puzzle that some analysts view as circular and potentially strained.

Simultaneously, analysts and industry outlets have called for “reality checks”: although AI demand is tangible—and growing—monetizing at this level isn’t trivial. Building data centers requires long cycles, with supply bottlenecks for GPUs and memory, plus the challenge of providing reliable, affordable electrical power for facilities drawing 300–600 MW per site. Even with 40-year debt, the mismatch between investment phases and cash flows could stress financial balances if revenue streams don’t materialize as planned.

Why now, debt-wise

The macro environment partly explains the timing. The corporate bond market is currently experiencing robust demand for “investment grade” issues tied to growth stories. Reports indicate Oracle initially considered raising $15 billion but ultimately expanded its offering to $18 billion, driven by investor appetite—reflecting confidence in AI’s growth potential and Oracle’s ability to secure long-term contracts.

The scale of capex also matters. A cutting-edge AI campus with hundreds of megawatts needs billions in civil works, power systems, cooling, networking, physical security, and silicon (GPUs, NVLink/Infiniband interconnects, high-performance storage, HBM memory). Financing that wave of procurement without capital dilution and with competitive cost-of-capital justifies debt issuance, aligning debt maturities (2030–2065) with asset lifespans and multi-year customer contracts.
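The rough arithmetic behind that claim can be sketched as follows. This is a back-of-envelope estimate only — every cost figure below is an illustrative assumption, not Oracle's actual cost structure or any published figure:

```python
# Back-of-envelope capex sketch for an AI campus.
# All per-MW costs below are hypothetical, for illustration only.
def campus_capex(it_load_mw: float, cost_per_mw_usd: dict) -> dict:
    """Estimate capex per category as IT load times an all-in cost per MW."""
    return {item: it_load_mw * per_mw for item, per_mw in cost_per_mw_usd.items()}

# Assumed all-in costs per MW of IT load (USD) -- placeholder values:
assumed_costs = {
    "civil_works_and_power": 10_000_000,   # shell, substations, switchgear
    "cooling": 3_000_000,                  # liquid cooling loops, CDUs
    "networking_and_storage": 2_000_000,   # interconnects, high-perf storage
    "gpu_silicon": 30_000_000,             # accelerators dominate the bill
}

breakdown = campus_capex(400, assumed_costs)   # a hypothetical 400 MW campus
total = sum(breakdown.values())
print(f"Total: ${total / 1e9:.1f}B")           # Total: $18.0B
```

Even under these placeholder numbers, a single 400 MW campus lands in the tens of billions, which is why debt with multi-decade maturities, rather than operating cash flow alone, is the natural financing vehicle.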

Energy, the strategic variable

None of this works without abundant, reliable, and cost-predictable energy. Stargate’s announcements point toward power grid nodes where gigawatts can be assured through combinations of traditional generation, renewables, “sleeved” agreements, and storage. This week, project partners provided numbers: up to 25,000 local jobs during construction and operation, plus an added boost to local supply chains. Yet, the main bottleneck remains grid connection, with wait times ranging from 3 to 7 years in regions lacking existing infrastructure.

For states and counties hosting these projects, balancing economic impact and electrical demand has become a hot political issue. Some jurisdictions demand commitments to energy efficiency (PUE, heat reuse, liquid cooling, BESS) and training programs to attract and retain talent. Oracle, experienced in operating in third-party data centers as well as its own, could adjust deployment strategies and speed up commissioning by leveraging colocation and build-to-suit agreements instead of relying solely on greenfield builds.

How does Oracle fit with Microsoft, Google, and other hyperscalers?

Although perceived as a competitor, Oracle also functions as an infrastructure provider to Microsoft and other hyperscalers. Its “OCI colocated in third-party data centers” approach reduces commercial friction: clients can consume Oracle Database, Exadata, and OCI services where their workloads are—even in Azure or Google centers—while Oracle monetizes hardware and adds RPO. This hybrid approach, unconventional for a “pure cloud,” blends colocation and cloud services with interoperability agreements that were rare five years ago and now form a core part of the AI infrastructure landscape.

Risks: execution, costs, and regulatory scrutiny

The most evident risk is execution failure: deploying several gigawatts in a few years involves parallel work across multiple states, dozens of suppliers, and thousands of subcontractors. Delays in substations, transmission lines, cooling equipment, or GPU supply could derail timelines and inflate costs.

The financial risk is also significant. While Oracle’s historical leverage has helped generate shareholder returns, a volatile interest rate environment and front-loaded contracts (advance payments, milestones) could strain liquidity if AI demand—or prices—decline sooner than expected. Some analysts see signals of exuberance, with interconnected contracts across chip suppliers, cloud providers, and AI labs, creating a circular funding system that could attract antitrust scrutiny in the U.S. and EU.

On the regulatory front, generative AI faces increasing oversight, covering not just privacy and copyright, but also security and critical infrastructure resilience. Large AI campuses compete for electrical capacity with industry, residential, and renewable sources, making them central to energy-policy debates.

What does this mean for the market?

In the short term, the $18 billion issuance gives Oracle fuel to secure land, power, chips, and EPC contracts. In the medium term, if Stargate’s pace stays on track and AI clients continue to consume committed capacity, Oracle will likely solidify its role as “AI landlord”—leasing compute and managed services to labs and corporations that do not wish or cannot build at that scale. Over the long haul, returns will hinge on the elasticity of AI compute prices, advancements in energy efficiency (lower PUE, heat recovery, liquid cooling), and hardware evolution (new GPU generations, NPUs, HBM memory, optical interconnects).

What is already clear is that Oracle has committed to the fast lane. With bonds through 2065, billion-dollar contracts, and hundreds of megawatt campuses, its strategy is not temporary: it is about positioning itself at the physical core of the AI economy for decades to come. The key question—for Oracle, OpenAI, and the entire ecosystem—is whether the revenue curve will keep matching the investment curve at the scale demanded by gigawatts.


Frequently Asked Questions

1) What does Oracle’s $455 billion in “remaining performance obligations” mean, and why is it relevant for AI data centers?
RPOs represent committed revenues from signed contracts not yet recognized. For AI deployment, they provide long-term visibility into demand for compute and cloud services. Practically, this supports debt issuance and multi-year investment plans in data centers, showing that the built capacity has demand under contract.

2) Where will Stargate’s new data centers be located, and what capacity will they add short-term?
OpenAI, Oracle, and SoftBank confirmed five locations: Shackelford County (TX), Doña Ana County (NM), Milam County (TX), Lordstown (OH), and an undisclosed Midwest site. Alongside Abilene (TX) and other projects, planned capacity approaches 7 GW, aiming for 10 GW on Stargate.

3) How does the $18 billion issuance align with the timelines and costs of AI data centers?
The bonds allow Oracle to pre-purchase energy, land, civil works, and hardware (GPUs, networking, storage) at scale. With maturities until 2065, they align with the lifespan of physical assets and multi-year contracts with AI clients, reducing refinancing risks during the investment cycle.

4) Is a $300 billion contract between OpenAI and Oracle realistic? What do analysts think?
Financial media report a five-year $300 billion deal; however, its scale is being questioned due to the financing demands and uncertainty in monetization. Meanwhile, OpenAI has signed additional agreements (e.g., with CoreWeave), and the ecosystem’s interconnected financial dependencies are viewed by some as potentially strained. Consensus: the order of magnitude is high, and execution will be critical.

5) What regulatory and energy risks does a 5–10 GW AI deployment in the U.S. face?
Electricity permits, transmission capacity, PPAs, and environmental requirements (efficiency, water use, noise, heat dissipation) could delay or increase costs. Furthermore, AI’s electrical consumption competes with other grid uses, positioning data centers at the heart of energy and sustainability debates.

6) How does Oracle’s approach differ from other hyperscalers?
Oracle doesn’t just compete—it places hardware and services in third-party data centers (Azure, Google), with an interoperable hybrid model combining colocation and cloud. This allows Oracle to enter markets faster where others are already present and sign AI contracts without building everything from scratch.

7) What is the impact on employment and local economies of new AI campuses?
Partners estimate thousands of direct jobs during construction and operation—up to 25,000 on-site—plus positive effects on suppliers, technical training, and tax revenues. The ultimate impact depends on state incentives and grid connection timelines.

8) Which efficiency technologies will be decisive for controlling operating costs in AI campuses?
Advances include direct liquid cooling, immersion cooling, PUE below 1.2, heat recovery, and BESS storage for peak management. These investments reduce OPEX, increase rack density, and extend infrastructure lifespan amid more powerful, hotter GPUs, as widely discussed in technical literature and industry reports.
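The PUE metric mentioned above is simply total facility power divided by IT equipment power (an ideal facility would score 1.0). A minimal sketch, with illustrative load figures chosen here for the example:

```python
# Power Usage Effectiveness (PUE) = total facility power / IT equipment power.
# The load values below are illustrative, not measurements from any real site.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Compute PUE; values closer to 1.0 mean less cooling/overhead energy."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A hypothetical 100 MW IT load under two cooling regimes:
legacy = pue(140_000, 100_000)   # 1.40 -- typical air-cooled facility
modern = pue(115_000, 100_000)   # 1.15 -- liquid-cooled, sub-1.2 target

# Overhead energy saved per year at that IT load (MW * hours/year):
annual_savings_mwh = (legacy - modern) * 100 * 8760
print(round(annual_savings_mwh))  # 219000 MWh/year less overhead energy
```

At gigawatt scale, shaving a quarter point of PUE translates into hundreds of gigawatt-hours of avoided overhead per year, which is why operators treat cooling efficiency as a first-order cost lever rather than an engineering detail.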

via: datacenterdynamics
