Standing before a friendly crowd in March, Elon Musk outlined a literal space-age plan: SpaceX, recently merged with his AI company xAI, would put data centers into orbit. “You’re power constrained on Earth,” he said. “Space has the advantage that it’s always sunny.” Musk described swarms of data-crunching satellites powering the AI revolution from above and predicted the cost of deploying AI in space could drop below terrestrial costs in “two or three years.”
Experts call that timeline optimistic. Brandon Lucia, a Carnegie Mellon professor who studies computing on satellites, says the napkin math looks appealing, but many obstacles remain: sunlight may offer abundant power, yet converting and routing that power, cooling the chips, networking vast amounts of data and launching the massive hardware are all major challenges.
Why move off Earth?
AI’s electricity demand is soaring. Global data-center power consumption is projected to roughly double to nearly 1,000 terawatt-hours by the end of the decade, according to the International Energy Agency. Companies are building gas turbines and investing in nuclear to keep up, but some entrepreneurs see orbital data centers as a way to sidestep terrestrial energy constraints.
Philip Johnston, CEO of Starcloud, which aims to build orbital data centers, warns that terrestrial limits on where new energy projects can be sited could leave chips idle. Starcloud launched its first craft last fall with an Nvidia H100 on board and ran a version of Google's Gemini from space; a second craft slated for October will generate about 8 kilowatts — far short of data-center scale, but a working technical demonstration.
Big players and prototypes
Google is pursuing orbital data centers via Project Suncatcher: an envisioned 81-satellite cluster built with Planet. Two prototype satellites are planned for early 2027. Planet CEO Will Marshall says orbital data centers are “an idea whose time has come” though exact economics are unsettled.
Musk showed a first-generation "AI Sat Mini" with solar arrays about 180 meters (roughly 600 feet) wide and has proposed launching very large constellations of up to a million satellites, including some in polar orbits. He sees his Starship rocket as central to the plan; Starship is still in development and would lower launch costs if it succeeds.
Scale and power
One way to grasp the scale needed: the International Space Station’s solar panels produce about 100 kilowatts of average power and take up roughly half a football field. Olivier de Weck, an MIT professor of astronautics, says that’s roughly what a single big car engine produces. A 100-megawatt data center in space would require facilities 500 to 1,000 times larger, depending on orbit. That is feasible in principle but not on the three-year timetable Musk suggested.
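The scaling above can be checked with back-of-envelope arithmetic. A minimal sketch: the 100-kilowatt ISS figure comes from the reporting, but the array area ("half a football field," taken here as about 2,500 square meters) is an illustrative assumption. The simple ratio lands at the upper end of the 500-to-1,000× range; orbits with continuous sunlight would need somewhat less area.

```python
# Back-of-envelope scaling from the ISS baseline cited in the article.
# Assumption (illustrative): ISS arrays cover ~half a football field,
# taken here as ~2,500 square meters.

iss_power_kw = 100          # ISS average power, per the article
iss_area_m2 = 2_500         # assumed array area

target_mw = 100             # target space data-center power
target_kw = target_mw * 1_000

scale_factor = target_kw / iss_power_kw   # how many ISS-sized arrays
array_area_m2 = iss_area_m2 * scale_factor

print(f"Scale factor vs. ISS: {scale_factor:,.0f}x")
print(f"Implied array area: {array_area_m2 / 10_000:,.0f} hectares")
```

The naive ratio gives 1,000× the ISS arrays, roughly 250 hectares of solar panel for a single 100-megawatt facility, before any radiators are counted.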
Cooling in vacuum
Space feels cold, but it’s a vacuum, so heat from running processors can’t convect away — it builds up. Satellites must use radiators: circulating liquids carry heat to large panels that radiate it into space. That means in addition to massive solar arrays, AI satellites would need extensive radiator surfaces. Rebekah Reed, a former NASA official now at Harvard’s Belfer Center, notes that combining big radiators with big solar arrays leads to either very large single satellites or very large constellations.
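The radiator requirement can be sketched with the Stefan-Boltzmann law. The panel temperature, emissivity and two-sided geometry below are illustrative assumptions, not figures from any company:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(heat_w, temp_k=300.0, emissivity=0.9, sides=2):
    """Panel area needed to radiate `heat_w` watts into deep space.

    Assumes the radiator sees a ~0 K background and emits from
    `sides` faces; real panels also absorb sunlight and Earth's
    infrared glow, which this sketch ignores.
    """
    return heat_w / (sides * emissivity * SIGMA * temp_k**4)

# Example: reject 1 megawatt of chip heat at a 300 K panel temperature.
area = radiator_area_m2(1_000_000)
print(f"~{area:,.0f} m^2 of radiator per megawatt")
```

Under these assumptions, each megawatt of waste heat needs on the order of 1,200 square meters of radiator, which is why the radiators can rival the solar arrays in size.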
Constellation trade-offs
Smaller satellites flying in tight constellations could distribute power and cooling demands. But that requires huge inter-satellite data transfers, likely using laser links. Even at light speed, the distances introduce latency that can slow computation. Google’s Suncatcher aims to fly satellites in extremely tight clusters to reduce latency; Musk has suggested dense global constellations and polar coverage.
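The latency penalty is simply light-travel time over the inter-satellite distance. A minimal sketch (the separations are illustrative assumptions, not published constellation figures):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def one_way_latency_ms(distance_km):
    """One-way light-travel time between two satellites, in milliseconds."""
    return distance_km * 1_000 / C * 1_000

# Illustrative separations, from a tight cluster to a spread-out shell:
for km in (1, 100, 1_000):
    print(f"{km:>5} km -> {one_way_latency_ms(km):.3f} ms one way")
```

Kilometer-scale clusters keep hops in the microseconds, while satellites spread hundreds or thousands of kilometers apart add milliseconds per hop — significant for tightly coupled AI workloads, and one reason Suncatcher aims to fly close together.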
Launch costs and logistics
Today launch costs are roughly $1,000 per kilogram to orbit; Google estimates costs must fall to about $200/kg before space data centers begin to make economic sense. Musk hopes Starship — a heavy-lift, reusable rocket — will drive costs down, but Starship remains under development. Launching the enormous arrays and radiators needed would still be expensive even with lower per-kilogram prices.
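The per-kilogram figures translate directly into launch budgets. A minimal sketch using the article's roughly $1,000/kg today and Google's ~$200/kg threshold (the satellite mass is an illustrative assumption, not a published spec):

```python
def launch_cost_usd(mass_kg, usd_per_kg):
    """Launch cost as mass times price per kilogram to orbit."""
    return mass_kg * usd_per_kg

# Assumption (illustrative): 10,000 kg of arrays, radiators and
# servers per large AI satellite.
mass_kg = 10_000
today = launch_cost_usd(mass_kg, 1_000)   # ~$1,000/kg today
target = launch_cost_usd(mass_kg, 200)    # ~$200/kg threshold

print(f"Today:  ${today:,.0f} per satellite")
print(f"Target: ${target:,.0f} per satellite")
```

Even at the $200/kg target, a large constellation multiplies that per-satellite cost by hundreds or thousands of launches, which is why cheap, reusable heavy lift is central to every plan.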
Operational realities
Data centers on Earth are dynamic, requiring constant maintenance, upgrades and physical access. Raul Martynek, CEO of DataBank, which runs 75 mostly U.S.-based data centers, emphasizes that facilities see vendors and technicians every day installing servers, upgrading chips and fixing equipment. DataBank’s Ashburn, Virginia, center draws around 13 megawatts at any moment — about 130 times the ISS’s power output — and relies on hands-on operations.
Space-based centers could rely more on software and pre-flight testing of chips, and some servicing might be done robotically in the future. But many customers want physical access or rapid hardware replacement. Martynek says he isn’t losing sleep over immediate competition from space data centers: “It seems like there’s a lot of ifs and a lot of advancements that would have to occur.”
Who wins if it works?
If launch costs fall, radiators and solar arrays can be scaled, and inter-satellite networking and on-orbit servicing mature, orbital data centers could become a new computing frontier. Companies already experimenting — Starcloud, Google’s Project Suncatcher, and startups — are building prototypes and demonstrating pieces of the puzzle, like running AI models from low-power satellites.
For now, however, most experts view orbital data centers as a long-term possibility rather than an imminent disruption. The physics of power, heat dissipation and latency, plus launch economics and the need for maintenance, mean space data centers remain technically challenging and costly. They may become practical eventually, but “not next year and certainly not in three years,” MIT’s Olivier de Weck says.
Contact Geoff Brumfiel on Signal at gbrumfiel.13
