In a sense, this was inevitable. Elon Musk and his coterie have been talking about AI in space for years, primarily in the context of Iain M. Banks’ Culture novels, set in a far-future universe where sentient spaceships roam and run the galaxy.
Now, Musk sees an opportunity to realize a version of this vision. His company, SpaceX, is seeking regulatory permission to build solar-powered, orbital data centers distributed among as many as 1 million satellites, potentially moving as much as 100 gigawatts of computing power off Earth. He reportedly suggested building some of the AI satellites on the moon.
“The cheapest place to deploy AI is going to be in space within 36 months,” Musk said on a podcast hosted by Stripe co-founder John Collison last week.
He is not alone. xAI’s head of computing reportedly bet his colleagues at Anthropic that 1% of the world’s computing power will be in space by 2028. Google (which holds a significant ownership stake in SpaceX) has announced a space AI effort called Project Suncatcher, with a prototype vehicle launching in 2027. StarCloud, a startup that raised $34 million with backing from Google and Andreessen Horowitz, has filed its own plan. Last week alone, filings proposed satellite constellations totaling 80,000 spacecraft. Even Jeff Bezos says this is the future.
But behind the hype, what does it actually take to set up a data center in space?
On a first-order analysis, today’s terrestrial data centers remain cheaper than orbital ones. Space engineer Andrew McCalip has created a handy calculator comparing the two models. His baseline results show that a 1 GW orbital data center could cost $42.4 billion, nearly three times as much as a ground-based equivalent, thanks to the upfront costs of building the satellites and launching them into orbit.
Changing this equation will require multidisciplinary technology development, significant capital investment, and a serious build-out of the space-grade component supply chain, experts say. The case also depends on terrestrial costs rising as surging demand strains power grids and supply chains on the ground.
Satellite design and launch
The main driver of any space business model is the cost of launching mass into orbit. SpaceX has already lowered the cost of reaching space, but analysts modeling an orbital data center need prices to fall much further for the business case to close. In other words, while an AI data center may sound like a new business line being floated ahead of SpaceX’s IPO, the plan hinges on completing Starship, the company’s longest-running unfinished project.
Consider that the cost to orbit on a reusable Falcon 9 currently runs around $3,600 per kilogram. According to the Project Suncatcher whitepaper, space data centers will require prices close to $200 per kilogram, an 18x improvement the paper expects in the 2030s. At that price, the paper argues, power from satellites like today’s Starlink becomes cost-competitive with terrestrial data centers.
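The launch arithmetic is easy to sketch. Here is a minimal back-of-the-envelope calculation using the article’s figures; the 2,000 kg satellite mass is an illustrative assumption, not a number from the article or the whitepaper.

```python
# Launch-cost sensitivity: today's Falcon 9 price vs. the ~$200/kg target
# cited from the Project Suncatcher whitepaper.

FALCON9_PER_KG = 3600   # $/kg to orbit today (article)
TARGET_PER_KG = 200     # $/kg needed for space data centers (whitepaper)

SAT_MASS_KG = 2000      # assumed mass of one high-power AI satellite

improvement = FALCON9_PER_KG / TARGET_PER_KG
today = FALCON9_PER_KG * SAT_MASS_KG
target = TARGET_PER_KG * SAT_MASS_KG

print(f"Required improvement: {improvement:.0f}x")           # 18x
print(f"Launch cost per satellite today:  ${today:,.0f}")    # $7,200,000
print(f"Launch cost per satellite target: ${target:,.0f}")   # $400,000
```

At the assumed mass, each satellite’s launch bill falls from millions of dollars to hundreds of thousands, which is the scale shift the business case depends on.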
SpaceX’s next-generation Starship rocket is expected to deliver those savings, and no other rocket in development promises anything comparable. But the vehicle is not yet operational and has not even reached orbit; the third version of Starship is expected to fly for the first time in the coming months.
But even if Starship were completely successful, the assumption that SpaceX would immediately offer it to customers at rock-bottom prices doesn’t pass the smell test. Economists at the consultancy Rational Futures make a convincing case that SpaceX has no incentive to charge significantly less than its closest competitor, just as it didn’t with Falcon 9; otherwise, the company would be leaving money on the table. If Blue Origin’s New Glenn rocket retails for $70 million, for example, SpaceX won’t fly Starship missions for external customers for much less than that, which is far more than space data center builders have publicly assumed.
“We don’t have enough rockets to launch a million satellites yet, so we’re a long way from there,” Amazon Web Services CEO Matt Garman said at a recent event. “If you think about the cost of getting a payload into space today, it’s huge. It’s just not economical.”
Still, if launch is the bane of every space business, the second challenge is production costs.
“At this point, everyone takes it as a given that Starship will cost hundreds of dollars per kilogram,” McCalip told TechCrunch. “What people don’t take into account is that satellites currently cost close to $1,000 per kilogram.”
Manufacturing, not launch, is the largest part of a satellite’s price, but if high-power satellites can be built for about half the cost of current Starlink satellites, the numbers start to make sense. SpaceX has made great strides in satellite economics while building Starlink, its record-setting communications network, and hopes to push further through scale. One reason to propose 1 million satellites is undoubtedly the cost reduction that comes with mass production.
Still, the satellites used for these missions must be large enough to meet the demanding requirements of running powerful GPUs, including large solar arrays, thermal management systems, and laser communication links to receive and distribute data.
Project Suncatcher’s 2025 white paper offers one way to compare terrestrial and space data centers: by the cost of power, the fundamental input needed to run chips. On the ground, data centers spend approximately $570 to $3,000 per kilowatt per year, depending on local power prices and system efficiency. SpaceX’s Starlink satellites instead draw power from onboard solar panels, but the cost to acquire, launch, and maintain those spacecraft works out to $14,700 per kilowatt per year. Simply put, to compete with metered grid power, the satellites and their components need to get significantly cheaper.
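The gap the whitepaper describes can be quantified directly from the figures above; this sketch just divides the numbers cited in the article.

```python
# Power-cost comparison from the Suncatcher whitepaper figures cited in the
# article: $/kW-year for terrestrial data centers vs. amortized satellite power.

GROUND_LOW, GROUND_HIGH = 570, 3000   # $/kW-year on the ground (article)
SPACE = 14700                          # $/kW-year from satellites today (article)

gap_vs_cheap = SPACE / GROUND_LOW
gap_vs_expensive = SPACE / GROUND_HIGH

print(f"Orbit vs. cheapest grid power: {gap_vs_cheap:.1f}x")     # 25.8x
print(f"Orbit vs. priciest grid power: {gap_vs_expensive:.1f}x")  # 4.9x
```

So satellite power today is roughly 5x to 26x more expensive than grid power, depending on where the ground data center sits.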
The space environment cannot be fooled.
Proponents of orbital data centers often say that cooling is “free” in space, but that’s an oversimplification. Without an atmosphere, heat is actually harder to shed: it can only be radiated away, not carried off by air or water.
“Just dissipating that heat into the blackness of space relies on very large radiators, so there’s a huge amount of surface area and mass to manage, which we recognize is one of the key challenges, especially in the long run,” said Mike Safyan, an executive at Planet Labs, which is building a prototype satellite for Google’s Project Suncatcher, scheduled to launch in 2027.
In addition to the vacuum of space, AI satellites must contend with cosmic radiation, which degrades chips over time and causes “bit flip” errors that can corrupt data. Chips can be shielded, built from radiation-hardened components, or run in parallel with redundant error checking, but every option carries a costly trade-off in mass. Notably, Google has used particle beams to test radiation effects on its tensor processing units, chips designed explicitly for machine learning, and SpaceX executives said on social media that the company acquired a particle accelerator for exactly that purpose.
Another challenge comes from the solar panels themselves. The logic of the whole project is energy arbitrage: panels in space can generate five to eight times more energy than on Earth, and in the right orbit a satellite sees the sun more than 90% of the time. Since electricity is the main input for running chips, more energy means cheaper compute. But even solar panels get more complicated in space.
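A rough sketch shows where that arbitrage comes from. The capacity-factor and intensity values below are illustrative assumptions consistent with the article’s “more than 90% of the time” claim, not measured data.

```python
# Rough energy-arbitrage sketch: annual energy collected by 1 kW of panel
# on the ground vs. in a mostly-sunlit orbit. All factors are assumptions.

PANEL_KW = 1.0
HOURS_PER_YEAR = 8760
GROUND_CAPACITY_FACTOR = 0.15   # typical terrestrial solar (assumption)
ORBIT_CAPACITY_FACTOR = 0.95    # >90% of the orbit in sunlight (article)
ORBIT_INTENSITY_BOOST = 1.3     # no atmosphere or weather losses (assumption)

ground_kwh = PANEL_KW * GROUND_CAPACITY_FACTOR * HOURS_PER_YEAR
orbit_kwh = PANEL_KW * ORBIT_CAPACITY_FACTOR * ORBIT_INTENSITY_BOOST * HOURS_PER_YEAR

print(f"Ground: {ground_kwh:,.0f} kWh/yr")
print(f"Orbit:  {orbit_kwh:,.0f} kWh/yr ({orbit_kwh / ground_kwh:.1f}x)")  # ~8.2x
```

Under these assumptions the same panel yields roughly eight times the energy in orbit, consistent with the upper end of the article’s five-to-eight-times range.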
Space-rated solar panels built from exotic semiconductors are durable but expensive. Silicon panels are cheap and increasingly common in orbit, used by both Starlink and Amazon’s Kuiper satellites, but they degrade much faster under radiation. That would limit the lifespan of AI satellites to about five years, meaning operators would need to recoup their investment faster.
Still, some analysts think that’s not a big deal, given how quickly new generations of chips come to market. “Even after five or six years, it’s not going to be profitable on a dollars-per-kilowatt-hour basis anyway, because it’s no longer cutting-edge technology,” StarCloud CEO Philip Johnston told TechCrunch.
Danny Field, an executive at Solestial, a startup that makes space-rated silicon solar panels, said the industry sees orbital data centers as a key driver of growth. He said he is talking to several companies about possible data center projects, and that “any company with a big enough dream is at least considering it.” But as a longtime spacecraft design engineer, he doesn’t discount the challenges of these models.
“It’s always possible to extrapolate the physics to a larger scale,” Field said. “I’m looking forward to seeing how some of these companies get to the point where the economics make sense and the business case closes.”
How do space data centers fit in?
One question about these data centers remains unanswered: what would they actually do? Are they for general-purpose compute, inference, or training? Based on existing use cases, they may not map neatly onto their terrestrial counterparts.
The main challenge in training new models is getting thousands of GPUs to work in concert. Today, most model training is not distributed across separate data centers; it happens inside a single facility. Hyperscalers are working to change that in order to train more powerful models, but it hasn’t been achieved yet. Training in space would likewise require coherence between GPUs spread across multiple satellites.
Google’s Project Suncatcher team points out that the company’s terrestrial data centers connect TPU networks at hundreds of gigabits per second of throughput. Currently, the fastest off-the-shelf intersatellite communications links using lasers can only reach up to about 100 Gbps.
That constraint led to Suncatcher’s most interesting architectural choice: flying 81 satellites in formation, close enough to one another to use the same kinds of transceivers found in ground-based data centers. This, of course, brings its own challenges. Each spacecraft needs enough autonomy to hold its station, even when it must maneuver to avoid orbital debris or another spacecraft.
Still, Google’s research offers a caveat: while inference work can tolerate the orbital radiation environment, further study is needed to understand how bit flips and other errors would affect training workloads.
Inference tasks, by contrast, do not require thousands of GPUs working simultaneously; much of that work could probably run on dozens of GPUs aboard a single satellite. That architecture is a kind of minimum viable product, and it could be the starting point for an orbital data center business.
“Training in space is not ideal,” Johnston said. “I think almost all inference workloads will be done in space,” he added, imagining everything from customer-service voice agents to ChatGPT queries being computed in orbit. The company’s first AI satellite, he said, is already generating revenue by performing inference in orbit.
SpaceX’s FCC filing is short on detail, but the company’s orbital data centers are expected to deliver about 100 kW of computing power per ton, roughly twice the figure for current Starlink satellites. The spacecraft would work in concert, sharing data over the Starlink network; the filing claims the constellation’s laser links can achieve petabit-level throughput.
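The filing’s headline numbers can be sanity-checked against each other; this sketch simply combines the figures cited in the article.

```python
# Sanity-checking SpaceX's filing: up to 100 GW of computing across as many as
# 1 million satellites, at roughly 100 kW of compute per ton of spacecraft.

TOTAL_POWER_KW = 100_000_000   # 100 GW (article)
NUM_SATS = 1_000_000           # up to 1 million satellites (article)
KW_PER_TON = 100               # ~100 kW of compute per ton (article)

power_per_sat_kw = TOTAL_POWER_KW / NUM_SATS
mass_per_sat_tons = power_per_sat_kw / KW_PER_TON
fleet_mass_tons = mass_per_sat_tons * NUM_SATS

print(f"Power per satellite: {power_per_sat_kw:.0f} kW")       # 100 kW
print(f"Mass per satellite:  {mass_per_sat_tons:.0f} ton(s)")  # 1 ton
print(f"Fleet mass:          {fleet_mass_tons:,.0f} tons")     # 1,000,000 tons
```

Taken at face value, the numbers imply Starlink-sized satellites of about a ton each, and a total fleet mass of around a million tons to be launched.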
In SpaceX’s case, the company’s recent acquisition of xAI (which is building its own terrestrial data centers) positions it in both ground and orbital computing, letting it pursue whichever supply chain scales faster.
That is the benefit of fungible floating-point operations, if you can make it work. “A flop is a flop, and it doesn’t matter where it is,” McCalip said. “[SpaceX] can scale up until [it] hits permitting and capital-investment bottlenecks on the ground, and then pivot to [their] space expansion.”
Got confidential information or documents about SpaceX? For secure communication, you can contact Tim via Signal (tim_fernholz.21).
