> I'd imagine the power and cooling requirements are more specialised than your average datacenter
But are they actually doing things differently than the high-compute parts of hyperscaler datacenters? Are there radical new ways of distributing heat in the datacenter that only make sense at that level of energy usage per square foot? Is AI energy use that much higher per square foot than that of other high-compute parts of datacenters, or is it just that it's now something like 90% of the floor plan versus maybe only 50-60%?
> handle transmitting large amounts of data over the internet
I certainly can't speak for all datacenters, and I've never been in a hyperscaler datacenter. But in all the datacenters I've spent time in, the space for outside network connectivity was small compared to the space for storage and compute. Think a few small office suites dedicated to outside networks coming in and connecting to the clients in the datacenter, versus a medium-to-large warehouse full of compute and storage.
There's "high compute", and then there's proper HPC. AI these days is way more on the HPC end of the scale. The GPUs do much of their arithmetic in low-precision formats like 4-bit and 8-bit rather than 64-bit doubles, but everything else is going to be comparable.
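To make the low-precision point concrete, here's a minimal sketch (not any vendor's actual implementation) of symmetric 4-bit integer quantization, the kind of trick AI accelerators lean on to trade precision for density and throughput:

```python
import numpy as np

def quantize_int4(x: np.ndarray):
    """Map floats to signed 4-bit integers in [-8, 7] plus a scale factor."""
    scale = np.abs(x).max() / 7.0  # 7 is the largest positive int4 value
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values from the 4-bit codes."""
    return q.astype(np.float64) * scale

# Hypothetical weight values, just for illustration
weights = np.array([0.12, -0.53, 0.98, -0.07])
q, s = quantize_int4(weights)
approx = dequantize_int4(q, s)
# The round trip loses precision: that's the trade-off for each value
# fitting in 4 bits instead of 64.
```

The per-element error is bounded by half the scale, which is acceptable for neural-network weights but not for the 64-bit arithmetic traditional HPC simulations demand, and that precision difference is the main place AI workloads diverge from classic HPC.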