It's just the same dynamic as old servers. They still work fine but power costs make them uneconomical compared to latest tech.
It’s far more extreme: old servers are still okay on I/O, memory latency, etc., and that won’t change dramatically, so you can still find productive uses for them. AI workloads are hyper-focused on a single type of work and, unlike most regular servers, are a limiting factor in direct competition with other companies.
Manipulating this for creative accounting seems to be the root of Michael Burry’s argument, although I’m not fluent enough in his figures to map it here. Still, it’s interesting to see IBM argue a similar case (somewhat), and comments ITT hitting the same known facts, in light of Nvidia’s counterpoints to him.
I'm a little bit curious about this. Where does all the hardware from the big tech giants usually go once they've upgraded?
That could change with a power generation breakthrough. If power is very cheap, then running ancient gear till it falls apart starts making more sense.