Cooling a datacenter in space isn't really any harder than cooling a Starlink satellite in space; the ratio of solar panel area to radiating area will have to be about the same. There is nothing uniquely heat-producing about GPUs: ultimately, almost all energy collected by a satellite's solar panels ends up as heat in the satellite.
IMO the big problem is the lack of maintainability.
Sure, but cooling a Starlink satellite in space is a lot harder than cooling it on Earth would be. And unlike Starlink, which absolutely must be in space to function, datacenters work just fine on the ground.
According to Gemini, Earth datacenters cost $7m per MW at the low end (excluding compute), and solar power plants cost $0.5-1.5m per MW, giving $7.5-8.5m per MW overall.
Starlink V2 Mini satellites run at around 10 kW and cost $1-1.5m each to launch, which works out to $100-150m per MW.
So if Gemini is right, a datacenter made of Starlinks costs 10-20x more and has a limited lifetime, i.e. it seems unprofitable right now.
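Here's the back-of-envelope math, if anyone wants to poke at it (the per-MW figures are just the rough estimates quoted above, with midpoints taken for the ranges, not vetted data):

```python
# Rough cost-per-MW comparison, using the ballpark figures quoted above.
# All inputs are assumptions/estimates, not vetted data.

GROUND_DC_PER_MW = 7.0e6    # $/MW, datacenter build-out at the low end, no compute
SOLAR_PLANT_PER_MW = 1.0e6  # $/MW, midpoint of the $0.5-1.5m range
ground_total_per_mw = GROUND_DC_PER_MW + SOLAR_PLANT_PER_MW

SAT_POWER_KW = 10           # Starlink V2 Mini, approximate
SAT_LAUNCH_COST = 1.25e6    # $, midpoint of the $1-1.5m range
sats_per_mw = 1000 / SAT_POWER_KW          # 100 satellites per MW
space_total_per_mw = sats_per_mw * SAT_LAUNCH_COST

print(f"ground: ${ground_total_per_mw / 1e6:.1f}m/MW")  # ~$8.0m/MW
print(f"space:  ${space_total_per_mw / 1e6:.1f}m/MW")   # ~$125m/MW
print(f"ratio:  {space_total_per_mw / ground_total_per_mw:.0f}x")  # ~16x
```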
In general it seems unlikely to be profitable until there is no more space for solar panels on Earth.
I think it's not just about the ratio. To me the difference is that Starlink satellites are fixed-scope, miniature satellites that perform a limited range of tasks. With GPUs, though, the goal is maximizing the amount of compute you send up. That means pushing as many GPUs up there as possible, to the point where you'd need huge megastructures with solar panels and radiators that would probably push the limits of what in-space construction can do. Sure, the ratio would be the same, but what about the scale?
And it also needs to make sense not just from a maintenance standpoint, but from a financial one. In what world does launching the equivalent of huge facilities that work perfectly fine on the ground make sense? What's the point? If we had a space elevator and nearly free space deployment, then maybe, but how does this plan square with our current reality?
Oh, and don't forget about good radiation shielding for all those delicate, cutting-edge processors.
> Cooling a datacenter in space isn't really any harder than cooling a starlink in space
A watt is a watt, and cooling isn't any different just because the heat came from a GPU. But a GPU cluster will consume orders of magnitude more electricity, and will require a proportionally larger surface area to radiate that heat compared to a Starlink satellite.
The best estimate I can find is that a single Starlink satellite uses ~5 kW of power and has a radiator of a few square meters.
Power usage for 1000 B200s would be in the ballpark of 1000 kW. At Starlink's ratio of roughly 1 kW of heat per square meter of radiator, that's around 1000 square meters of radiators.
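As a sanity check, here's a minimal Stefan-Boltzmann estimate. The panel temperature, emissivity, and two-sided geometry are my assumptions, and it ignores absorbed sunlight and Earth IR, so treat it as an optimistic lower bound:

```python
# Minimal radiator-area estimate via the Stefan-Boltzmann law.
# Assumptions: flat two-sided radiator, emissivity 0.9, panel at 300 K,
# radiating to deep space; absorbed sunlight and Earth IR ignored.

SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISSIVITY = 0.9
T_RADIATOR = 300.0   # K, a comfortable temperature for electronics
SIDES = 2            # both faces radiate

def radiator_area_m2(heat_watts: float) -> float:
    """Panel area needed to reject heat_watts to deep space."""
    flux = SIDES * EMISSIVITY * SIGMA * T_RADIATOR**4  # ~830 W per m^2 of panel
    return heat_watts / flux

print(radiator_area_m2(5e3))  # Starlink-scale 5 kW: ~6 m^2
print(radiator_area_m2(1e6))  # 1 MW GPU cluster: ~1200 m^2
```

That lands in the same ballpark as scaling up from Starlink, and a colder or single-sided radiator only makes the number bigger.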
Then the heat needs to be dispersed evenly across the radiators, which means a lot of heat pipes.
Cooling GPUs in space will be anything but easy and almost certainly won't be cost competitive with ground-based data centers.