Musk shakes up the AI race with a sci-fi idea: orbiting data centers… and a new bottleneck

Elon Musk is once again putting forward a proposal that sounds like something straight out of a science-fiction novel: relocating part of the AI infrastructure to space. Not as a metaphor, but as a plan that, in his telling, would become “economically viable” within just a few years. His reasoning isn’t technological but electrical: the current expansion of AI is running into a very earthly limit, namely the available energy and the actual capacity of the grids to power new data centers.

The idea has gained traction in recent days through two channels. On one hand, Musk argued in a lengthy interview that the industry’s “limiting factor” is shifting: first it will be energy, and once that is overcome, chips will be the new bottleneck. On the other, SpaceX has asked the U.S. Federal Communications Commission (FCC) for permission to deploy a massive constellation of satellites designed to operate as “solar data centers” in orbit, with filings that mention up to 1,000,000 units on paper.

In the interview, Musk lays out an ambitious timeline: “within 36 months, but probably closer to 30,” the cheapest place to deploy AI will be space. He reasons that Earth introduces too many frictions to grow at the pace the models demand: permits, grid-connection issues, industrial bottlenecks, and an electrical system that isn’t expanding fast enough to meet demand. He encapsulates this with a striking figure: the U.S. consumes about 0.5 terawatts of electricity on average, so talking about 1 terawatt means doubling that consumption. The implicit question: how many power plants and data centers would have to be built to sustain this wave?
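Taking the image at face value, the arithmetic checks out; here is a minimal sketch, where the 0.5 TW average is Musk’s figure and the rest is plain unit conversion:

```python
# Back-of-envelope check: average power vs. annual energy, round numbers.
US_AVG_POWER_TW = 0.5   # Musk's figure for average U.S. electric demand
HOURS_PER_YEAR = 8760

# Terawatts multiplied by hours gives terawatt-hours directly.
annual_twh = US_AVG_POWER_TW * HOURS_PER_YEAR
print(f"0.5 TW average ~ {annual_twh:,.0f} TWh per year")  # ~4,380 TWh

# Doubling to 1 TW means adding another ~0.5 TW of round-the-clock supply,
# i.e. the equivalent of ~500 one-gigawatt plants running continuously.
print(f"Extra supply ~ {0.5e12 / 1e9:.0f} plants of 1 GW each")
```

That ~4,380 TWh per year is in the right range for actual U.S. annual electricity consumption (roughly 4,000 TWh), so the 0.5 TW shorthand holds up.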

Musk’s argument is rooted in a debate that’s becoming less theoretical. The International Energy Agency (IEA) estimates that global electricity consumption by data centers could more than double to around 945 TWh by 2030, with an annual growth rate of nearly 15% from 2024 to 2030, driven largely by AI. Put simply: even if the industry manages to produce more accelerators and servers, powering and maintaining them becomes the real challenge.
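The two IEA numbers hang together, as a quick compound-growth check shows (945 TWh and 15% are the agency’s figures quoted above; the 2024 baseline falls out of the arithmetic):

```python
# Sanity check: does ~15% annual growth landing at 945 TWh in 2030
# really imply "more than doubling" from 2024?
growth_rate = 0.15
years = 2030 - 2024  # six compounding steps

multiplier = (1 + growth_rate) ** years
implied_2024_twh = 945 / multiplier

print(f"Growth over {years} years: {multiplier:.2f}x")       # ~2.31x, more than double
print(f"Implied 2024 baseline: ~{implied_2024_twh:.0f} TWh")  # ~409 TWh
```

The implied baseline of roughly 410 TWh matches the ~415 TWh the IEA attributes to data centers in 2024, which is what makes 945 TWh “more than double.”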


Elon Musk – “In 36 months, the cheapest place to put AI will be space”

That’s where the “orbital escape” comes in. In space, Musk argues, energy supply would be simpler thanks to nearly constant solar power. There would be no nights or weather constraints, and the system could scale with less dependence on land, water, or the electrical grid. However, this promise faces a critical obstacle echoed by engineers: heat.
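Before turning to heat, the energy half of the argument can be sized with first-order physics. In the sketch below the solar constant is standard, while the 1 GW target and 30% panel efficiency are illustrative assumptions, not figures from SpaceX’s filing:

```python
# Rough solar-array sizing for a hypothetical orbital data center.
SOLAR_CONSTANT_W_M2 = 1361   # solar irradiance above the atmosphere
panel_efficiency = 0.30      # assumed; high-end space-grade cells are ~30%
target_power_w = 1e9         # assumed target: 1 GW of continuous power

array_area_m2 = target_power_w / (SOLAR_CONSTANT_W_M2 * panel_efficiency)
print(f"Array area for 1 GW: ~{array_area_m2 / 1e6:.1f} km^2")  # ~2.4 km^2
```

A couple of square kilometers of panels per gigawatt is large but not absurd by constellation standards; the harder problem, as the article turns to next, is getting that same gigawatt back out as heat.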

On Earth, data centers shed heat into air and water through large convection-based cooling systems. In a vacuum, convection doesn’t exist: heat can only be dissipated by radiation, which requires large, heavy, and costly radiators. The Associated Press reported skepticism from experts who warn that, without proper cooling, chips in space could overheat quickly, even in an environment popularly imagined as cold. Musk, in his interview, tries to preempt this criticism with a concrete answer: chips designed to tolerate more radiation and to operate at higher temperatures. He even claims that, in large neural networks, random radiation-induced “bit flips” would be less problematic than in conventional programs, thanks to the statistical resilience of models with trillions of parameters. Still, none of this eliminates the core issue: the heat must be removed somehow.
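The scale of that problem follows directly from the Stefan-Boltzmann law, which governs radiative cooling. In the sketch below, the 1 GW heat load, radiator temperature, and emissivity are illustrative assumptions; the T^4 dependence is the point:

```python
# Radiator sizing in vacuum: heat leaves only by radiation,
# P = epsilon * sigma * A * T^4 (per radiating face).
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W * m^-2 * K^-4
emissivity = 0.9     # assumed; typical for radiator coatings
heat_load_w = 1e9    # assumed: 1 GW of compute heat to reject

def panel_area_km2(temp_k: float) -> float:
    # A flat panel radiates from both faces, doubling effective area.
    area_m2 = heat_load_w / (2 * emissivity * SIGMA * temp_k**4)
    return area_m2 / 1e6

print(f"1 GW at 300 K: ~{panel_area_km2(300):.2f} km^2 of radiator")  # ~1.2 km^2
print(f"1 GW at 350 K: ~{panel_area_km2(350):.2f} km^2")              # ~0.65 km^2
```

Because the required area falls with the fourth power of temperature, chips that tolerate hotter operation shrink the radiators dramatically, which is precisely the design lever Musk is pointing at.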

This isn’t just a theoretical discussion. Reuters reported that AWS CEO Matt Garman downplayed the enthusiasm, calling orbital data centers “quite far” from being economically feasible, citing logistical challenges and launch costs. The same coverage noted that other players are exploring similar ideas, indicating that the sector is increasingly examining the problem from extreme angles.

Meanwhile, early experiments are already fueling the narrative: Starcloud announced the deployment of an NVIDIA H100 GPU in orbit aboard a satellite (Starcloud-1), claiming it as a milestone for high-performance computing beyond Earth. The leap from that demonstration, whatever its significance, to the hundreds of gigawatts of orbital infrastructure Musk suggests is immense. But it highlights an important point: the industry is starting to test the waters.

Musk seems most animated when describing the next obstacle once energy is no longer the constraint: the semiconductor supply chain. In the interview, he details how Tesla is reserving capacity on multiple fronts, at TSMC in Taiwan and Arizona plus Samsung in South Korea and Texas, and notes that standing up a fab and reaching high-volume production with good yields can take around five years. That’s why he emphasizes the idea of a “TeraFab”: a massive semiconductor plant capable of producing not only logic chips but also memory and advanced packaging, targeting more than a million wafers per month.
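To give “a million wafers per month” some texture, here is a crude output estimate; the die size and yield are illustrative assumptions (the interview specifies neither), and the dies-per-wafer formula is a standard first-order approximation:

```python
import math

# Crude output estimate for a fab running 1,000,000 wafers per month.
WAFER_DIAMETER_MM = 300      # standard wafer size
die_area_mm2 = 800           # assumed: a large AI-accelerator-class die
yield_rate = 0.70            # assumed functional yield
wafers_per_month = 1_000_000

wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
# First-order dies-per-wafer approximation with an edge-loss correction.
dies_per_wafer = (wafer_area / die_area_mm2
                  - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2))
good_dies = dies_per_wafer * yield_rate

print(f"~{dies_per_wafer:.0f} candidate dies per wafer, ~{good_dies:.0f} good")
print(f"~{good_dies * wafers_per_month / 1e6:.0f} million accelerators per month")
```

Under those assumptions, the output lands in the tens of millions of large accelerators per month, the scale at which the energy and cooling questions above stop being hypothetical.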

This emphasis on the supply chain isn’t accidental. Musk aims to transform Tesla into more than a car manufacturer: his plans for humanoids like Optimus rely on components, actuators, and industrial know-how concentrated mainly in Asia. A recent article detailed how Chinese suppliers are positioning themselves as key players in the ecosystem of components for humanoid robots tied to the project. It’s a sign of the central tension in this new cycle: even those seeking to “reindustrialize” domestically still depend on a global network to build the future.

In summary, the debate about orbital data centers serves as a symptom: the AI race is no longer decided solely by models and algorithms but by electricity, cooling, permits, rockets, and chip factories. What Musk presents — with his usual mixture of provocation and ambition — is an uncomfortable question for the industry: what if the actual limit of AI isn’t computing power alone but the ability to sustain it?

Frequently Asked Questions

What is an “orbital data center” and how does it differ from a regular satellite?
An orbital data center would be a platform in orbit designed to process and store data (including AI workloads), powered by solar energy, with communications links back to Earth. Unlike conventional satellites, its design would be dominated by computing capacity and thermal dissipation.

Why has the electrical supply become the main bottleneck for AI?
Because the demand for data centers is growing faster than grid expansion and power generation. Additionally, connecting new facilities may require construction, permits, and timelines incompatible with AI’s rapid investment pace.

What is the main technical challenge of running AI in space?
Cooling. In a vacuum, there’s no air or water for convection; heat must be radiated away through surfaces and designs that add mass, complexity, and cost.

What is a “TeraFab” and why does Musk mention it as a solution?
It’s the idea of a colossal manufacturing plant capable of producing chips at an unprecedented scale, including logic, memory, and packaging. The goal is to overcome the bottleneck in semiconductor fabrication if the demand for accelerators continues to surge.
