> This is simply run-of-the-mill supply and demand.
That's definitely NOT run-of-the-mill demand, because it comes from companies buying hardware at an operating loss, a loss they can only recoup by charging inflated prices to an already starved market.
Demand funded by circular financing agreements and off-the-books debt isn't "run-of-the-mill" by any stretch.
What happens if some clever HN programmer develops a new algorithm that lets you do training and inference with 1/10 or even 1/100 of the GPU horsepower?