I've always been fascinated by practical solutions to seemingly impossible problems. In my role as a cloud engineer, I'm constantly thinking about infrastructure scaling challenges. The energy demands of modern AI training are staggering, and they're only getting bigger. That's why I was intrigued when I came across Starcloud's whitepaper titled "Why we should train AI in space" (originally published when they were called Lumen Orbit).
I honestly don't remember how I stumbled onto the company's website, but the idea of space-based data centers immediately caught my attention.
The Energy Crunch Problem
We're facing a serious energy crunch that threatens to slow AI development (low key in favor of that tbh). As the whitepaper points out, electrical utilities are being hit by a tidal wave of new demand from various sectors:
AI data centers requiring massive amounts of power
Electrification of transportation
Electrification of heating and industrial processes
Leaders across the tech industry have acknowledged this problem:
Sam Altman has talked about needing cheaper renewable energy
Elon Musk predicted electricity shortages within years
Mark Zuckerberg admitted they would build larger clusters if they could get enough energy
The situation is clear - we can't keep scaling AI the way we have been. But what's the alternative?
The Space-Based Solution
Starcloud proposes a bold but pragmatic solution: move gigawatt-scale data centers to space. According to their research, orbital data centers offer several fundamental advantages over terrestrial counterparts:
1. Superior Power Generation
Solar arrays in space have a massive advantage over those on Earth. While terrestrial solar farms in the US achieve only a 24% capacity factor (and under 10% in northern Europe), space-based solar can achieve over 95% capacity factor. Why? No day/night cycle, no weather, no atmospheric attenuation, and optimal panel orientation. I mean duh.
A space-based solar array generates over 5 times as much energy as the same array on Earth. Their calculations show they could offer energy at approximately $0.002/kWh - roughly 1/22nd of average wholesale electricity costs in the US, UK, and Japan.
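As a sanity check on that "5 times" claim (my own back-of-envelope arithmetic, not the whitepaper's): the capacity-factor gap alone gives roughly a 4x advantage, and the lack of atmospheric attenuation pushes per-panel output higher still. Using the standard 1361 W/m² solar constant and a nominal 1000 W/m² ground irradiance:

```python
# Back-of-envelope check on the space-vs-ground solar advantage.
# Capacity factors quoted in the whitepaper:
cf_space = 0.95   # near-continuous sunlight in a dawn-dusk orbit
cf_us = 0.24      # typical US terrestrial solar farm

# Irradiance: the solar constant above the atmosphere vs. roughly
# 1000 W/m^2 at the ground under clear skies (standard test condition).
irr_space = 1361   # W/m^2, solar constant
irr_ground = 1000  # W/m^2, AM1.5 standard reference

advantage = (cf_space * irr_space) / (cf_us * irr_ground)
print(f"Energy per panel area, space vs ground: {advantage:.1f}x")
```

That comes out around 5.4x, which lines up with the whitepaper's "over 5 times" figure.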
2. Efficient Cooling
Space is cold - really cold. The effective background temperature of deep space is around -270°C. There's no air for convective cooling, but that cold background makes an ideal heat sink for passive radiative cooling.
According to their thermal analysis, a 1m² radiator panel maintained at 20°C can dissipate about 630 W of heat in space. The radiator panels can be designed to be less than half the size of the solar arrays, making the entire system quite efficient.
3. Unlimited Scalability
Terrestrial data centers are hitting scaling limits. A 5GW cluster (which we'll need for future AI models) would exceed the capacity of the largest power plants in the US. These clusters simply aren't possible with today's energy infrastructure.
In space, compute modules, power, cooling, and networking can be assembled in a modular fashion that scales nearly indefinitely without the physical and planning constraints that plague terrestrial projects.
4. Faster Deployment
In Western countries, large-scale energy and infrastructure projects often take a decade or more to complete due to permitting requirements, rights of way issues, and environmental reviews. These bottlenecks are already being felt - the whitepaper mentions that xAI recently had to use natural gas generators when the grid couldn't provide enough power.
Orbital data centers avoid most of these roadblocks, potentially allowing for significantly faster deployment.
How Would This Actually Work?
The whitepaper outlines a detailed technical approach that seems surprisingly feasible:
Physical Architecture
Power: A 5GW data center would require a solar array approximately 4km x 4km in size, using silicon solar cells with 22% efficiency. These would be thin film cells (<25 μm thickness) that achieve power densities >1000 W/kg.
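Those numbers hang together. A 4 km x 4 km array at 22% efficiency under the full solar constant lands right around 5 GW - here's my arithmetic (the 1361 W/m² solar constant is my input, not theirs):

```python
# Sanity check: does a 4 km x 4 km array at 22% efficiency give ~5 GW?
SOLAR_CONSTANT = 1361  # W/m^2 above the atmosphere
side_m = 4000          # array edge length in meters
efficiency = 0.22      # silicon cell efficiency from the whitepaper

area = side_m ** 2  # 16 million m^2
power_w = area * SOLAR_CONSTANT * efficiency
print(f"Array output: {power_w / 1e9:.1f} GW")
```

It comes out just under 5 GW (about 4.8 GW), so their sizing is in the right ballpark before accounting for packing factor and degradation.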
Cooling: Heat would be transferred from compute modules to large radiators using two-phase cooling systems to reduce pumping losses. The radiators would face deep space, which has an effective background temperature of 2.7 Kelvin (the cosmic microwave background).
Compute Modules: Each container would be similar to terrestrial data center containers, with racks containing compute and storage units, networking, power, and cooling infrastructure. A single container could be launched on one heavy-lift rocket.
Network Architecture
The data center would use a daisy-chain-style network for low latency between compute nodes. For Earth connectivity, they would use laser-based links with other satellite constellations like Starlink or Kuiper (I am a fan of Starlink btw).
An interesting approach they mention is "data shuttles" - small docking modules launched from Earth that could transport petabytes or even exabytes of data in a single trip.
Orbit Selection
They've chosen a low-Earth, dawn-dusk sun-synchronous orbit (lol SSO means something different here). In this orbit, the spacecraft follows the day/night line on Earth (the "terminator"), with the orbital plane remaining perpendicular to the sun's rays year-round. This provides near-continuous solar illumination - crucial for consistent power generation.
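The "sun-synchronous" part falls out of Earth's equatorial bulge (the J2 perturbation): at the right slightly retrograde inclination, the orbital plane precesses exactly 360° per year, tracking the sun. Here's a quick sketch of that calculation for a hypothetical 550 km altitude - the altitude is my example, not a figure from the whitepaper:

```python
import math

# Required inclination for a sun-synchronous circular orbit at a
# given altitude, from the standard J2 nodal-precession formula.
MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_E = 6378137.0      # Earth's equatorial radius, m
J2 = 1.08263e-3      # Earth's oblateness coefficient

def sso_inclination_deg(altitude_m: float) -> float:
    a = R_E + altitude_m                          # semi-major axis (circular orbit)
    n = math.sqrt(MU / a**3)                      # mean motion, rad/s
    # The plane must precess 360 degrees per year to follow the sun.
    precession = 2 * math.pi / (365.25 * 86400)   # rad/s
    cos_i = -precession / (1.5 * n * J2 * (R_E / a) ** 2)
    return math.degrees(math.acos(cos_i))

print(f"Required inclination: {sso_inclination_deg(550e3):.1f} deg")
```

At 550 km this gives roughly 97.6°, i.e. slightly retrograde - which is why sun-synchronous satellites all cross the poles at a tilt.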
Launch & Maintenance
With next-generation reusable heavy-lift rockets, they estimate launch costs around $30/kg to LEO. At these prices, launch costs become a relatively small component of the overall system cost.
For a 5GW data center, they estimate fewer than 200 total launches would be required - something that could be accomplished in 2-3 months with the planned cadence of new launch vehicles.
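Putting the launch numbers together (my arithmetic, and the ~100 t per heavy-lift launch is my assumption - it's roughly what the whitepaper's 200-launch figure implies for a station of this mass):

```python
# Rough launch-cost math for the 5 GW station.
COST_PER_KG = 30      # projected $/kg to LEO from the whitepaper
PAYLOAD_KG = 100_000  # assumed ~100 t per heavy-lift launch
launches = 200

total_mass_kg = launches * PAYLOAD_KG
launch_cost = total_mass_kg * COST_PER_KG
print(f"Total mass: {total_mass_kg / 1e6:.0f} kt, "
      f"launch cost: ${launch_cost / 1e9:.1f}B")
```

That's on the order of $600M to launch roughly 20,000 tonnes - genuinely small next to what the GPUs themselves would cost for a 5GW cluster.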
The system is designed for containers to be swapped out when they become outdated or fail. This modular approach allows for incremental updates rather than requiring replacement of the entire data center.
The Economics Make Sense
The whitepaper includes a cost comparison between a 40MW cluster operated for 10 years in space versus on land. The numbers are eye-opening:
Terrestrial: $167 million (primarily energy costs, backup power, and cooling)
Space-based: $8.2 million (primarily launch costs and solar array)
That's a 20x cost reduction for the space-based solution!
A note of caution: whitepaper numbers are theoretical. Real-world costs will probably land much closer to the terrestrial figure at first, and only come down as the approach matures.
Conclusion
While this sounds like science fiction, the underlying principles are sound. We're at the intersection of four key trends:
Drastically falling launch costs
The upcoming electricity demand crunch
Growing demand for large, energy-intensive GPU clusters
Proliferation of low-cost connectivity from mega-constellations
The technology exists today, and the economics appear favorable. As we push toward AGI, we'll need new approaches to scale our computing infrastructure, and space-based data centers could be an elegant solution that's both practical and environmentally responsible.
Anyway, I thought it was interesting and wanted to put it on the radar.
Cheers,
Joe
This blog post summarizes the whitepaper "Why we should train AI in space" by Ezra Feilden PhD, Adi Oltean, and Philip Johnston from Starcloud (formerly Lumen Orbit), published in September 2024.