Starcloud aims to bring AWS Outposts to space and targets 88,000 satellites

The race for Artificial Intelligence computing is no longer confined to land, chip factories, fiber networks, and data centers. A new generation of companies is attempting to open another front: low Earth orbit. In this context, the startup Starcloud has proposed a plan as ambitious as it is challenging: deploying AWS Outposts hardware in space and, long-term, building a constellation of up to 88,000 satellites focused on computing workloads.

This announcement is more than a “curious experiment.” It signals a market where the bottleneck is no longer just processing power, but how and where that power is supplied, cooled, and scaled without terrestrial infrastructure (energy, permits, land, water, networks) becoming the real limitation. The promise of orbital computing rests on clear physical advantages, namely near-continuous solar energy and radiative cooling, though moving from demonstration to a massive fleet involves numerous technical, economic, and regulatory uncertainties.

What does AWS Outposts have to do with a satellite story?

AWS Outposts is essentially AWS “at your site”: racks/servers managed by Amazon Web Services to run services and workloads close to the data, delivering an experience similar to the cloud but on-premises or at the edge. Traditionally, it has been positioned for use cases like low latency, data residency, integration with local systems, or industrial plants.

The strategic interpretation of “Outposts in orbit” is clear: if computing moves toward the edge, that edge can literally be… space. For workloads that generate data far from terrestrial data centers (Earth observation, sensors, communications, distributed analytics), processing “up there” reduces the volume of data that must be brought down to Earth and can accelerate decision-making. In other words: less “transporting data,” more “transporting outcomes.”
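The “transporting outcomes” idea can be sketched numerically. The snippet below is purely illustrative: the frame size, the `Detection` type, and the toy `detect_objects` stand-in are assumptions, not any real Starcloud or AWS API. It only shows the shape of the trade: downlink a few bytes of results instead of gigabytes of raw pixels.

```python
# Hypothetical sketch: instead of downlinking raw Earth-observation frames,
# run inference in orbit and downlink only compact detections.
# All names (Detection, detect_objects, frame size) are illustrative.

from dataclasses import dataclass

RAW_FRAME_BYTES = 120_000_000   # ~120 MB per raw multispectral frame (assumed)

@dataclass
class Detection:
    lat: float
    lon: float
    confidence: float

def detect_objects(frame: bytes) -> list[Detection]:
    """Stand-in for an onboard model; a real system would run a GPU model."""
    # Toy heuristic: pretend every non-empty frame yields one detection.
    return [Detection(40.4, -3.7, 0.92)] if frame else []

def downlink_payload(detections: list[Detection]) -> bytes:
    # Serialize only the outcomes (tens of bytes each), not the pixels.
    return "\n".join(
        f"{d.lat:.4f},{d.lon:.4f},{d.confidence:.2f}" for d in detections
    ).encode()

frames = [b"\x00" * 1024 for _ in range(10)]   # 10 simulated captures
payload = b"".join(downlink_payload(detect_objects(f)) for f in frames)

raw_bytes = RAW_FRAME_BYTES * len(frames)
print(f"raw downlink:    {raw_bytes / 1e9:.1f} GB")
print(f"result downlink: {len(payload)} bytes")
```

Under these assumed numbers, ten captures shrink from gigabytes of raw imagery to a couple of hundred bytes of results, which is the whole argument for computing at this kind of edge.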

Moreover, Outposts is not a static product. AWS continually updates its platform and hardware to keep pace with new instance types and enterprise needs — an important detail if a company plans to rely on such a product line to establish a computing layer beyond our planet.

From demos with “data center-grade” GPUs to the orbital dream

Starcloud’s approach isn’t entirely new. The company (whose recent history is tied to the idea of “data centers in space”) has sought to validate, step by step, that high-performance computing can operate under orbital conditions. In November 2025 it launched a satellite carrying an NVIDIA H100, a “data center-class” GPU, followed by reports of running language models in that environment. Such milestones demonstrate basic feasibility (power, thermal dissipation, telemetry, stability), but they still don’t answer the key question: can the model scale reliably, cost-effectively, and with sustainable maintenance?

The declared ambition of a constellation with up to 88,000 satellites pushes the discussion from lab prototypes to heavy industry: scaled design and manufacturing, launch logistics, spectrum management, continuous operation, replacements, radiation degradation, and, importantly, the impact on orbital congestion and space sustainability regulations.

Why the idea is appealing… and why it’s frightening

Orbital computing has a compelling appeal on paper for three reasons:

  1. Energy: Power availability is a major enabler for AI. On Earth, access to electrical power and substation infrastructure often limits new deployments. In space, solar energy and panel sizing open new possibilities, though practical limits and costs remain.
  2. Cooling: Data centers are being redesigned for higher densities and liquid cooling. In vacuum there is no air or water for convective cooling; heat must ultimately be rejected as thermal radiation, so radiator sizing and satellite geometry become the main levers for alternative thermal strategies.
  3. True edge: If data originates in space (via sensors, observation, telecommunications), processing close to the source reduces latency and minimizes downlink needs.

However, the risks are equally evident:

  • Reliability and maintenance: Terrestrial data centers are visited, repaired, and upgraded regularly. In orbit, physical maintenance is rare and costly. Systems must be designed to fail less often… and to be more easily replaceable.
  • Security and trust chain: Sensitive workloads require more than just encryption and access controls; trust in firmware, secure boot, telemetry, and tenant isolation are critical. Attack surfaces change when nodes are hundreds of kilometers away and rely on RF links.
  • Networks and latency: Space-based computing doesn’t bypass physics. There are communication links, coverage windows, handovers, and dependency on ground stations. The experience can be excellent for some workloads… and infeasible for others.
  • Regulation: Deploying tens of thousands of satellites involves licensing, international coordination, and compliance with operational regulations — increasingly coupled with public debates about space debris and congestion.
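The “networks and latency” caveat above is easy to quantify roughly. The figures below are assumptions (a 550 km orbit, a ~95-minute period, a 10-minute usable pass over a single ground station) chosen only to illustrate the orders of magnitude involved.

```python
# Rough numbers behind "space-based computing doesn't bypass physics".
# Assumptions: 550 km altitude, one ground station, 10-min pass per orbit.
C = 299_792_458.0          # speed of light, m/s
altitude_m = 550_000.0     # assumed LEO altitude

one_way_ms = altitude_m / C * 1000
print(f"one-way delay straight down: {one_way_ms:.2f} ms")

orbit_min, pass_min = 95.0, 10.0   # assumed orbital period and pass length
duty = pass_min / orbit_min
print(f"single-station contact fraction: {duty:.0%}")
```

Propagation delay itself is small (a couple of milliseconds one way), so the real constraint for many workloads is the contact fraction: without inter-satellite links or many ground stations, a node is reachable only a small slice of each orbit.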

What these trends could mean for the industry

Although “orbiting data centers” sounds futuristic, its market signal is immediate: AI’s growth is driving exploration of any physical or geographical advantage that can reduce the marginal costs of compute, energy, and cooling.

For sysadmins and developers, the takeaway is significant: if a standardized layer of computing infrastructure (like Outposts) becomes ubiquitous, application deployment could extend into locations currently deemed extreme. This challenges existing paradigms for observability, secure updates, and failure models, and demands stricter adherence to “immutable infrastructure” and “zero trust,” since environments won’t be just “your rack in your room” but remote nodes with variable connectivity.
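The “zero trust” posture for a remote node can be sketched as: never apply an update whose signature does not verify. The sketch below is a minimal stand-in; a real system would use asymmetric signatures (e.g. Ed25519) anchored in a hardware root of trust, with A/B image slots for rollback. HMAC with a shared key here only illustrates the verify-before-apply step, and all names are hypothetical.

```python
# Minimal sketch of verify-before-apply for a remote node's update image.
# HMAC with a shared demo key stands in for real asymmetric signing.
import hashlib
import hmac

SHARED_KEY = b"demo-key-not-for-production"   # placeholder, not real key mgmt

def sign(image: bytes) -> bytes:
    return hmac.new(SHARED_KEY, image, hashlib.sha256).digest()

def apply_update(image: bytes, signature: bytes) -> bool:
    """Apply only if the signature verifies; otherwise refuse."""
    if not hmac.compare_digest(sign(image), signature):
        return False   # refuse: report via telemetry, keep the old image
    # ...flash the new image atomically, keep the previous slot for rollback...
    return True

good = b"firmware-v2"
print(apply_update(good, sign(good)))          # valid image is accepted
print(apply_update(b"tampered", sign(good)))   # tampered image is rejected
```

The timing-safe `hmac.compare_digest` matters even in a toy: signature checks on exposed nodes should not leak information through comparison timing.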

Starcloud’s announcement, therefore, should not be viewed solely as a space anecdote. It’s an attempt to redefine where the “edge” can exist when the planet begins to feel too small for the demands of computation.


Frequently Asked Questions

What is AWS Outposts and what is it typically used for in companies?
It’s hardware managed by AWS (racks/servers) designed to run AWS services and workloads in on-premises or edge environments, with an experience consistent with the cloud. It’s typically used for low latency, data residency, integration with local systems, and consistent operation across cloud and data centers.

What real use cases could orbital computing serve in AI?
Data processing near the source (Earth observation, sensors, telecommunications), filtering/compression before downlink, distributed inference, and analytics where the bottleneck lies in the link rather than in computation.

What are the main technical challenges of a “satellite data center”?
Reliability without regular maintenance, radiation resistance, specialized thermal management, secure boot and firmware, intermittent connectivity, and operation at scale (monitoring, updates, replacements).

Can this replace terrestrial data centers?
In the short to medium term, no. It’s more plausible as a complement for extreme edge niches or workloads with clear advantages in orbit. Most computation will still occur where energy, network, and logistics are more efficient.

via: LinkedIn
