Crusoe and Starcloud Bring the Cloud to Orbit: The First Public Cloud to Run AI Workloads in Space

Crusoe has announced a partnership with Starcloud to become the first public cloud provider to run workloads in space. The plan, unveiled on October 22, 2025, involves deploying a Crusoe Cloud module on a Starcloud satellite that will launch by late 2026 and offer limited GPU capacity from orbit in early 2027. The initiative targets an ambitious goal: breaking the energy bottleneck currently hindering the growth of AI data centers and harnessing the sun as an almost limitless energy source.

This venture isn’t coming out of nowhere. Starcloud is a startup focused on building orbital data centers powered by dedicated solar panels and designed to radiate heat into the vacuum of space instead of relying on complex ground-based cooling systems. Its roadmap includes a bold milestone: placing an NVIDIA H100 GPU into orbit in November 2025, which the company claims is a hundred times more powerful than any computing hardware previously sent to space. The alliance with Crusoe, a company specialized in an “energy-first” model that already co-locates computing with unconventional power sources on Earth, extends this philosophy beyond the atmosphere.

Why it matters: energy, cooling, and scale

The rise of generative AI has sent power density in GPU clusters and data-center electricity demand soaring. On Earth, expanding power and cooling capacity has become a complex puzzle of permitting, saturated electrical grids, and politically sensitive water consumption. Space, in principle, offers three advantages:

  1. 24/7 solar energy (depending on orbit and thermal management): in low Earth orbit, satellites can optimize solar capture with a stability that is hard to match on terrestrial sites.
  2. Radiative cooling: in a vacuum there is no convection, so heat must be radiated away as infrared, eliminating cooling towers and water use; well-designed radiators reduce complexity and water footprint.
  3. Modular scalability: by adding orbital modules, a space-based cloud could expand capacity without consuming land or driving up local electricity costs.
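The radiative-cooling point can be made concrete with the Stefan–Boltzmann law. A minimal sketch, assuming an illustrative 100 kW thermal load, a ~300 K radiator temperature, and an emissivity of 0.9 (none of these are published Starcloud figures):

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law.
# All input values are illustrative assumptions, not mission specs.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w: float, temp_k: float = 300.0,
                     emissivity: float = 0.9) -> float:
    """Minimum single-sided radiator area needed to reject `power_w`
    watts at surface temperature `temp_k`, ignoring absorbed sunlight
    and Earth albedo (i.e., a best-case figure)."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# Rejecting 100 kW of GPU waste heat at ~300 K needs roughly 240 m^2
# of ideal radiator surface (about half that if both faces radiate).
area = radiator_area_m2(100_000)
```

The quartic dependence on temperature is why running radiators hotter shrinks them dramatically, and why mass-efficient radiator design is one of the project’s central engineering problems.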

Crusoe and Starcloud propose co-locating compute with the most abundant energy source—the sun—and moving only essential data between orbit and Earth.

What about latency? How low orbit changes things

Running AI workloads “in the sky” raises obvious questions about latency and bandwidth. Here, the technical context helps: Low Earth Orbit (LEO) constellations typically operate with 20–50 ms round-trip latency, comparable to many terrestrial connections. This suggests that some inference and analytics workloads could indeed be feasible from orbit, especially if data is pre-processed on the satellite and results returned as compact summaries. For large-scale training, or for flows requiring heavy data movement, it will still be critical to select what is uploaded, and when.
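The physics behind those latency figures is easy to check. A sketch of the propagation floor for a satellite at an assumed ~550 km altitude (a typical LEO height, not a stated Starcloud parameter):

```python
# Physical lower bound on round-trip time to a LEO satellite.
# The 20-50 ms figures quoted for LEO constellations include routing,
# queuing, and ground-segment hops on top of this propagation floor.

C_KM_PER_S = 299_792.458  # speed of light in vacuum

def min_rtt_ms(altitude_km: float, hops: int = 1) -> float:
    """Straight-up/straight-down propagation delay only, in ms."""
    return 2 * altitude_km * hops / C_KM_PER_S * 1000

# At ~550 km altitude, propagation alone is under 4 ms round trip,
# so most of the observed 20-50 ms budget is network overhead.
rtt = min_rtt_ms(550)
```

In other words, orbit itself adds only a few milliseconds; the rest of the latency budget is the same kind of network overhead terrestrial clouds already manage.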

What they plan to deploy: the “Crusoe Cloud module” onboard

The Starcloud satellite scheduled for late 2026 will incorporate a dedicated module running Crusoe Cloud, enabling select clients to deploy AI workloads on space-based infrastructure. The roadmap indicates limited capacity at first (GPUs in orbit with controlled access), sufficient for demonstrations, “orbital edge” use cases, and initial commercial validation: AI running next to its power source.

Meanwhile, Crusoe and Starcloud envision larger orbital data centers as the architecture matures and demand grows; a logical next step, provided the expectation that orbital power infrastructure and thermal design can scale reliably proves correct.

Likely applications (early phase)

  • Earth observation with AI: detecting fires, floods, or land use changes with pre-processed orbital data and only downloading alerts or inferred maps.
  • Communications and security: pre-filtering and encryption at the orbital edge to ease ground links and enhance data sovereignty.
  • Always-on specialized models: inference of medium-sized models that consume less bandwidth than raw input data (e.g., embeddings, classification, segmentation).
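The bandwidth argument behind these use cases can be sketched with assumed numbers: a Sentinel-2-style multispectral tile downlinked raw, versus a single float32 embedding vector per tile (both sizes are illustrative, not mission specifications):

```python
# Illustrative downlink savings from running inference in orbit and
# sending only compact results. All sizes are assumptions.

def raw_size_bytes(width_px: int, height_px: int,
                   bands: int, bytes_per_sample: int) -> int:
    """Uncompressed size of a multispectral image tile."""
    return width_px * height_px * bands * bytes_per_sample

# A 10,980 x 10,980 px tile, 4 bands, 16-bit samples: ~0.96 GB.
raw_tile = raw_size_bytes(10_980, 10_980, bands=4, bytes_per_sample=2)

# One 1024-dimension float32 embedding per tile: 4 KB.
embedding = 1024 * 4

savings = raw_tile / embedding  # downlink reduction factor
```

Under these assumptions the reduction is several orders of magnitude, which is the whole premise of “download alerts, not pixels.”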

Challenges that the renders don’t show: hurdles to overcome

The project faces considerable challenges:

  • Thermal management: while the vacuum helps radiate heat, designing efficient radiators with reasonable mass and surface area remains an art.
  • Radiation resilience: electronics and memory need to harden or be protected to withstand solar events and the harsh space environment.
  • Operation and maintenance: without hands-on ground support, ensuring fault tolerance and enabling automatic recovery (both hardware and software) are critical.
  • Communications: link windows, bandwidth, and traffic prioritization will dictate which workloads make sense in orbit versus on the ground.
  • Costs and supply chain: integrating cutting-edge GPUs into a satellite bus, then certifying and launching them, is costly; hardware refresh cycles will be longer than on Earth.

Nevertheless, the industry’s signal is clear: investment and talent are moving in this direction. Beyond Starcloud, Axiom Space and Lonestar have already conducted orbital and lunar edge tests with miniature—but real—data centers to validate software, storage, and autonomous operations outside Earth.

What Crusoe gains from jumping into orbit

Crusoe has been building a cloud optimized for AI with an unconventional energy focus, from flare-gas mitigation to renewables and tailored power agreements. With Starcloud, the company extends its edge: if it can demonstrate availability, service quality, and security in orbit, it will have a differentiated offering relative to traditional hyperscalers. While it won’t replace terrestrial data centers, it could shift parts of the innovation and industrial chain (those that are energy-intensive and less bandwidth-sensitive) into orbit.

How it could be contracted (and who it makes sense for)

Initial access will be limited. This makes sense for clients with orbital use cases (observation, defense, telecom) or for those wanting to validate split AI pipelines: pre-processing in orbit, heavy training and deployment on the ground. If latency expectations align and SLAs are clear, the energy benefits may outweigh the costs.

For generalist enterprises, the timetable and logistics advise caution: until technical details, pricing, jurisdictions, and sovereignty controls are clear, the prudent approach is to pilot non-critical workloads first.

A step further into an already existing trend

Bringing compute close to power sources and/or data isn’t new. The novelty is doing it beyond Earth with data-center-class hardware. By 2025, prototypes already exist on the ISS, lunar data-center tests are underway, and now a commercial deal aims to put AI GPUs into orbit as a service. The path is still long, but the vector is clear.


Frequently Asked Questions

When will GPU “rentals” be available in orbit?
According to the announced plan, the satellite with the Crusoe Cloud module will launch by late 2026 and offer limited capacity in early 2027.

What latency can be expected from low Earth orbit (LEO)?
Current LEO connections typically have round-trip latencies of 20–50 ms, similar to many terrestrial networks. Feasibility depends on data volume and the design of each workload (pre-processing in orbit, compact results sent back down).

Why does space help with cooling?
Because in vacuum, there’s no convection: heat is radiated as infrared into the cold of space. With proper radiators and thermal control, this eliminates water and much of the ground cooling infrastructure typical of data centers on Earth.

Will this replace terrestrial data centers?
Not in the short term. A hybrid model seems more realistic: certain energy-intensive, low-bandwidth processes move into orbit; the rest remains on the ground, near users and data.

via: crusoe.ai
