Meta has decided that its artificial intelligence ambitions will no longer be managed as “just another project” within the infrastructure organization. The company has created a new division, Meta Compute, to lead its next phase of data center expansion: a plan that, in Mark Zuckerberg’s words, aims to build out dozens of gigawatts of capacity this decade and aspires to hundreds of gigawatts or more further out. If this roadmap materializes, it would place Meta among the most energy-hungry, highest-capacity computing players on the planet.
The new unit arrives with a clear goal: turning capacity building into a dedicated corporate discipline, with teams focused on long-term planning, vendor agreements, financing models, and government coordination. AI infrastructure — and not just the models — is becoming the competitive advantage.
What is Meta Compute and why does it matter
Meta Compute is the clearest sign that the company is separating two previously intertwined worlds: on one side, the race for models and talent; on the other, the race for land, energy, concrete, networks, and cooling.
The division will be co-led by Santosh Janardhan, responsible for global infrastructure at Meta, and Daniel Gross, who will focus on capacity planning, vendor partnerships, and the economic logic of deployments. Additionally, the company has brought in Dina Powell McCormick to work on agreements with governments and sovereign actors regarding deployments, investments, and infrastructure financing.
This move comes at a time when major labs and tech platforms are confronting an uncomfortable reality: simply buying GPUs is no longer enough. Scaling depends on permits, available electrical power, delivery times, supply chains and, increasingly, on societal acceptance of data centers.
The context: more capacity, more contracts, and greater energy dependence
The announcement aligns with a recurring pattern in the industry: securing capacity at any cost and through any means. Over recent months, compute contracts and infrastructure agreements with third parties have multiplied, while large tech companies accelerate their own data center campus projects.
In Meta’s case, various reports have outlined a map of agreements to secure capacity externally — from specialized GPU cloud providers to major platforms — along with negotiations with other market players to ensure supply and timelines. In parallel, the company is also exploring financing models for large complexes: projects whose costs are no longer measured in tens of millions but in billions.
Quick overview: key elements of Meta’s “scale plan” (based on public information and reports)
| Area | What’s happening | Why it matters |
|---|---|---|
| Organization | Creation of Meta Compute and new leadership roles | Infrastructure becomes a corporate strategy, not just technical execution |
| Capacity | Goal of dozens of GW this decade | Scale comparable to major utilities, with real impact on electric markets |
| Financing | Joint-venture (JV) structures and external capital for mega-projects | Reduces cash pressure but introduces commitments and dependencies |
| External compute | Contracts with cloud/GPU providers | Speeds up deployments while proprietary campuses are built |
| Energy | Agreements and exploration of firm sources (including nuclear) | Electrical power is the number one bottleneck |
The major risk: building fast without eroding public and environmental trust
This massive expansion has a counterpoint: data centers are no longer perceived as “invisible infrastructure.” In many regions, public discourse has grown more critical due to electricity consumption, water use, urban impact, and pressure on local grids. As projects grow from hundreds of megawatts to gigawatts, permitting becomes increasingly politicized.
Meta appears to be getting ahead of this scenario by bringing in people focused on institutional relations, government agreements, and long-term planning. In other words, scaling to gigawatt-size data centers requires more than engineering; it demands diplomacy, sophisticated financing, and a narrative built around local benefits.
What changes with “tens of gigawatts”: AI moves onto an industrial footing
The gigawatt figure marks a category shift. A traditional “large” data center typically operates in the tens of megawatts. AI pushes toward campuses measured in hundreds of megawatts, and now openly in gigawatts.
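To give a rough sense of the jump, here is a minimal back-of-envelope sketch in Python. The per-accelerator power, PUE, and household-demand figures are illustrative assumptions, not numbers Meta has published.

```python
# Back-of-envelope sketch of what "gigawatt scale" means for an AI campus.
# All figures below are illustrative assumptions, not Meta's published numbers.

CAMPUS_POWER_MW = 1_000        # a 1 GW campus
TRADITIONAL_DC_MW = 30         # a "large" traditional data center (tens of MW)
PUE = 1.3                      # assumed power usage effectiveness (cooling, losses)
KW_PER_ACCELERATOR = 1.2       # assumed draw per accelerator incl. server overhead
AVG_HOUSEHOLD_KW = 1.2         # rough average household electricity demand

it_power_kw = CAMPUS_POWER_MW * 1_000 / PUE        # power left for the IT load
accelerators = it_power_kw / KW_PER_ACCELERATOR    # rough accelerator count
households = CAMPUS_POWER_MW * 1_000 / AVG_HOUSEHOLD_KW

print(f"~{CAMPUS_POWER_MW / TRADITIONAL_DC_MW:.0f}x a traditional large data center")
print(f"~{accelerators:,.0f} accelerators under these assumptions")
print(f"average demand comparable to ~{households:,.0f} households")
```

Under those assumptions, a single 1 GW campus is roughly 30 times a traditional large facility, hosts on the order of 600,000 accelerators, and draws power comparable to hundreds of thousands of homes.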
This drives fundamental transformations:
- Electrical planning: substations, lines, long-term contracts, and sometimes dedicated generation.
- Supply chain: from transformers to cooling systems and network equipment.
- Economic model: more structured financing, capacity pre-sales, and long-term agreements.
- Governance: permits, environmental compliance, transparency, and managing local opposition.
In that sense, Meta Compute appears less as an internal reorganization and more as a response to a fact: AI is pushing tech companies to operate as industrial infrastructure firms.
Frequently Asked Questions
What is Meta Compute, and how does it differ from Meta’s traditional infrastructure team?
Meta Compute is a division created to coordinate large-scale AI data center expansion, with a clear focus on capacity planning, vendor partnerships, financing, and deployment strategy. The difference is one of scope: capacity building becomes a corporate discipline with long-term planning, financing, and government coordination, rather than technical execution handled inside the existing infrastructure team.
What does it mean to build “gigawatt-scale” data centers?
It means moving from typical projects of tens of MW to campuses requiring as much power as a medium-sized city. This impacts permits, electrical grid integration, energy agreements, and construction timelines.
Why is Meta signing energy agreements (including nuclear) for data centers?
Because the bottleneck is no longer only hardware but also the availability of stable, firm power to keep huge fleets of GPUs running around the clock, especially when the goal is to grow quickly.
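As a rough illustration of why firmness matters, here is another back-of-envelope sketch. The capacity factors below are generic, technology-level assumptions, not figures tied to any specific Meta agreement.

```python
# Why "firm" power matters: a GW-scale AI campus runs close to flat-out,
# so its annual energy demand is enormous. Illustrative figures only.

campus_gw = 1.0
hours_per_year = 8_760
annual_twh = campus_gw * hours_per_year / 1_000    # ~8.8 TWh per year

# Rough, technology-level capacity factors (assumptions, not project data):
cf_nuclear = 0.90   # runs near-continuously
cf_solar = 0.25     # varies with daylight and weather

# Nameplate capacity needed to supply that energy on average:
gw_nuclear = annual_twh * 1_000 / (hours_per_year * cf_nuclear)
gw_solar = annual_twh * 1_000 / (hours_per_year * cf_solar)

print(f"~{annual_twh:.1f} TWh/year for a 1 GW campus")
print(f"~{gw_nuclear:.1f} GW of nuclear nameplate, on average")
print(f"~{gw_solar:.1f} GW of solar nameplate, plus storage to make it firm")
```

The arithmetic shows the appeal of sources that run around the clock: matching a constant gigawatt of load with intermittent generation alone takes several times the nameplate capacity, plus storage.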
What risks does this data center expansion pose for local communities?
Main friction points often include grid capacity, electricity consumption, water for cooling, urban impact, and the perception of “local benefits” versus social costs.

