Why Space Data Centers Are (Today) a Bad Idea

In recent months, a striking yet problematic idea has become fashionable: launching data centers into space to power the AI revolution. On paper, it sounds futuristic and sleek: limitless solar energy, natural cooling in the cold of space, zero impact on terrestrial power grids…

The technical reality is quite the opposite. A former NASA engineer with a PhD in space electronics, who later worked at Google precisely on deploying AI capacity in the cloud, summed it up bluntly: it’s a “terrible” idea that “makes no sense” from an engineering standpoint.

And when you look at the details, it’s hard to disagree.


1. No, there is no “infinite energy” in space

One of the most common arguments is that space offers abundant solar energy to power GPUs and TPUs without the limitations of Earth’s atmosphere. But the numbers don’t support this.

The largest solar installation we have deployed outside Earth is on the International Space Station (ISS). It’s a massive system, roughly 2,500 m² of panels, which, under ideal conditions, can produce just over 200 kW of power. Installing it required multiple shuttle missions and extensive work in orbit.

Consider a high-performance GPU like the NVIDIA H200: about 0.7 kW per chip, which in practice approaches 1 kW once power conversion losses and supporting electronics are included. On that basis, an “ISS solar farm” in orbit could power at most around 200 GPUs.

That may seem like a lot until you compare it to a real data center: the new AI megacenter that OpenAI plans to deploy in Norway is estimated to need around 100,000 GPUs. To match just this setup, you’d need to launch roughly 500 satellites the size of the ISS. And that’s ignoring all the supporting systems.
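As a sanity check, here is a minimal back-of-envelope in Python using only the figures quoted above (the ISS array output, ~1 kW per GPU, and the 100,000-GPU target); treat the results as order-of-magnitude, not a design:

```python
# Back-of-envelope using the article's own numbers (order-of-magnitude only).
iss_array_power_kw = 200       # ISS solar output under ideal conditions
iss_array_area_m2 = 2_500      # ISS solar array area
gpu_power_kw = 1.0             # H200: ~0.7 kW chip, ~1 kW with conversion losses

gpus_per_iss = iss_array_power_kw / gpu_power_kw            # ~200 GPUs
target_gpus = 100_000                                       # Norway-scale cluster
iss_equivalents = target_gpus / gpus_per_iss                # ~500 ISS-sized arrays
total_area_km2 = iss_equivalents * iss_array_area_m2 / 1e6  # ~1.25 km^2 of panels

print(f"GPUs per ISS-sized array: {gpus_per_iss:.0f}")
print(f"ISS-sized satellites needed: {iss_equivalents:.0f}")
print(f"Total solar panel area: {total_area_km2:.2f} km^2")
```

Roughly 1.25 square kilometers of solar panels in orbit, before counting any of the supporting systems.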

Nuclear options fare no better. The realistic choice isn’t placing a full nuclear reactor in orbit, but using RTGs (radioisotope thermoelectric generators) like those powering deep-space probes, which typically deliver 50–150 W. That isn’t enough for even a single latest-generation GPU, and it adds the risk of repeatedly launching radioactive material, with significant danger if a rocket fails.


2. The myth of “cold space” and the reality of vacuum cooling

Another common misconception: “Space is cold, so cooling servers there will be easy.”
Short answer: no. Long answer: definitely no.

On Earth, data centers primarily rely on convection: air (or liquid) absorbs heat and moves it elsewhere. Fans, heat exchangers, and increasingly liquid cooling transfer heat from chips to systems that dissipate it into the environment.

In space, there is virtually no air—almost vacuum. This means convection is impossible. Only two mechanisms remain:

  • Conduction: transferring heat within the structure itself.
  • Radiation: emitting heat into space via radiators.

The ISS uses an active thermal control system with ammonia loops and large radiators. This system can dissipate around 16 kW of thermal power—roughly the equivalent of 16 H200 GPUs. Each radiator panel covers about 42.5 m².

To dissipate 200 kW (the same 200 GPUs as before), this system would need to be scaled about 12.5 times: more than 500 m² of radiators. The resulting satellite would be enormous, with thermal panels far exceeding the size of the solar panels needed to power that “mini data center,” which, remember, is equivalent to just three standard racks.
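A short sketch makes the scale concrete: it linearly extrapolates the ISS figures above, then computes an idealized Stefan–Boltzmann lower bound for comparison (the emissivity and 280 K radiator temperature are assumed values, and the bound ignores solar and albedo heating, which is part of why real systems need more area):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

# Linear extrapolation of the ISS figures quoted above.
iss_reject_kw, panel_area_m2, target_kw = 16, 42.5, 200
scale = target_kw / iss_reject_kw           # ~12.5x
scaled_area_m2 = scale * panel_area_m2      # ~530 m^2 of radiators

# Idealized lower bound: a flat panel radiating from both faces,
# assumed emissivity 0.9 at 280 K, with no solar/albedo heat input.
emissivity, t_k = 0.9, 280.0
flux_w_m2 = 2 * emissivity * SIGMA * t_k**4   # ~630 W/m^2
ideal_area_m2 = target_kw * 1e3 / flux_w_m2   # ~320 m^2

print(f"ISS-style system scaled up: {scaled_area_m2:.0f} m^2")
print(f"Ideal physics lower bound:  {ideal_area_m2:.0f} m^2")
```

Even the frictionless ideal case needs hundreds of square meters; pumps, plumbing, and engineering margin push a real system toward the ~500 m² figure above.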

And that assumes perfect orientation to keep the radiators facing deep space, plus managing extreme temperature swings between sunlight and shadow. Spacecraft thermal engineering is already a complex art for loads of as little as 1 W; scaling it to hundreds of kilowatts of GPUs is simply a nightmare.


3. Radiation: the invisible enemy of GPUs

Even if energy and cooling challenges are addressed, another huge obstacle remains: space radiation.

Outside Earth’s atmosphere, and depending on the orbit (inside or outside the Van Allen belts), electronic systems are exposed to a constant flux of high-energy particles from the Sun and deep space: relativistic electrons, protons, and atomic nuclei that tear through silicon at nearly the speed of light.

This causes several effects:

  • SEU (Single Event Upset): a particle strike deposits enough charge to flip bits in memory or logic, producing random errors.
  • Latch-up: a particle triggers a parasitic conduction path between power rails inside a chip, potentially causing permanent damage if power is not cut in time.
  • Accumulated dose effects: over time the silicon degrades; transistors with tiny geometries, like those in modern GPUs, become slower and less efficient, increasing power consumption and lowering maximum stable frequencies.
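To get a feel for the SEU problem alone, here is a minimal sketch of the standard first-order estimate, rate = flux × per-bit cross-section × bit count; the flux and cross-section below are illustrative assumptions, not measured H200 data:

```python
# First-order SEU estimate; every input is an assumed, illustrative value.
flux_per_cm2_s = 5.0        # assumed on-orbit particle flux
sigma_cm2_per_bit = 1e-14   # assumed per-bit upset cross-section
hbm_bits = 141e9 * 8        # ~141 GB of HBM on an H200-class GPU

upsets_per_s = flux_per_cm2_s * sigma_cm2_per_bit * hbm_bits
print(f"Estimated bit flips per hour in HBM alone: {upsets_per_s * 3600:.0f}")
```

Even if these assumptions are off by an order of magnitude, numbers like these are why orbital electronics lean on ECC, scrubbing, and redundancy, all of which cost performance and power.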

The standard solution in space missions is to design radiation-hardened electronics: larger geometries, special gate topologies, circuit-level redundancy, conservative timing margins. These processors perform like CPUs from 15–20 years ago but can survive in orbit for years.

A cutting-edge GPU or TPU, built on sub-7 nm process nodes with giant silicon dies and integrated HBM memory, is almost the worst possible combination for this environment: large exposed area, tiny transistors, and extreme density, all highly susceptible to radiation.

In theory, a “space GPU” could be designed with thicker nodes and RHBD (radiation-hardened by design) techniques… but its performance would be a fraction of what’s achievable on Earth, contradicting the entire purpose of orbital data centers.


4. Communications: an unavoidable bottleneck

A modern AI data center relies on internal networks of 100–400 Gbps per link, with dedicated interconnection fabrics and low latency for distributed training or large-scale inference.

In contrast, a typical satellite communicates with the ground via radio, where around 1 Gbps is a reasonable sustained rate. Optical (laser) communications promise much higher bandwidth, but depend on favorable atmospheric conditions and remain an emerging technology for widespread use.

Even if data rates could be increased dramatically, the physical latency of sitting hundreds or thousands of kilometers above the Earth, plus the limits on parallel links, renders the idea of a “data center in orbit” as part of a commercial AI cloud highly impractical.
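A quick sketch of the arithmetic behind this bottleneck, under assumed numbers (a 1 TB model checkpoint and the link rates mentioned above):

```python
# Transfer time for an assumed 1 TB checkpoint over each link type.
bits = 1.0 * 1e12 * 8  # 1 TB in bits
for name, gbps in [("satellite radio (~1 Gbps)", 1),
                   ("datacenter link (400 Gbps)", 400)]:
    print(f"{name}: {bits / (gbps * 1e9):,.0f} s per TB")

# One-way light delay from typical orbital altitudes.
C_M_S = 3.0e8  # speed of light, m/s
for name, km in [("LEO, ~550 km", 550), ("GEO, ~35,786 km", 35_786)]:
    print(f"{name}: ~{km * 1e3 / C_M_S * 1e3:.1f} ms one-way")
```

Over the radio link, a single terabyte takes more than two hours to move; the same transfer takes 20 seconds across one fabric link inside a terrestrial data center.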


5. Outrageous costs for mediocre performance

Combining all the factors—energy, cooling, radiation, communications, operation, launches, hardware replacements—the economic conclusion is clear: achieving in orbit what amounts to just a few GPU racks would require investments and risks that are unjustifiable compared to building advanced data centers on Earth.

At best, it would be a prohibitively expensive infrastructure, difficult to maintain, with limited performance and a lifespan constrained by radiation degradation and the inevitable failures of rockets or satellites.

Meanwhile, on the surface, the industry continues to develop much more sensible solutions:

  • Data centers near renewable energy sources (hydropower, wind, nuclear).
  • Efficient liquid cooling and, in some cases, immersion cooling.
  • Reusing waste heat for district heating or industrial processes.
  • Optimizing energy consumption and workload management for AI models.

A good marketing idea, a bad engineering one

Can a “space data center” be technically built? With enough money and bright minds, nearly anything is possible. But being possible doesn’t mean it makes sense.

In this specific case, an honest comparison with terrestrial alternatives reveals that orbital data centers are more of a futuristic illusion: eye-catching in headlines and presentations, but deeply inefficient and fragile when examined in technical detail.

This debate reminds us of a simple lesson: before dreaming of literal clouds overhead, it’s better to make the most of what we already do well here on Earth, on a planet that—luckily—still has atmosphere, reasonable gravity… and data centers that don’t need to be launched by rockets.

via: taranis.ie
