
HarHarVeryFunny | 01/22/2025 | 2 replies

The largest GPU cluster at the moment is X.ai's 100K H100s, which is ~$2.5B worth of GPUs. So something 10x bigger (1M GPUs) would be ~$25B, plus another ~$10B for a 1GW nuclear reactor.
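The back-of-envelope math, as a quick Python sketch (the ~$25K/GPU figure is just what the comment's numbers imply, not a quoted price):

    # Implied per-GPU cost from the comment: ~$2.5B for 100K H100s
    H100_UNIT_COST = 2.5e9 / 100_000   # ~$25K per GPU

    CLUSTER_GPUS = 1_000_000           # hypothetical 10x scale-up
    REACTOR_COST = 10e9                # comment's estimate for 1GW of nuclear power

    gpu_cost = CLUSTER_GPUS * H100_UNIT_COST   # $25B for GPUs alone
    total = gpu_cost + REACTOR_COST            # ~$35B including power

    print(f"GPUs:  ${gpu_cost / 1e9:.0f}B")    # GPUs:  $25B
    print(f"Total: ${total / 1e9:.0f}B")       # Total: $35B

Even at that scale, the total lands around $35B, well short of the budgets being discussed.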

This sort of $100-500B budget doesn't sound like training-cluster money; it sounds more like anticipating massive industry uptake, with multiple datacenters running inference (and all of corporate America's data sitting in the cloud).


Replies

internetter | 01/22/2025

Shouldn't there be a fear of obsolescence?

anonzzzies | 01/22/2025

Don't they say in the article that it is also for scaling up power and datacenters? That's the big cost here.
