AI has had a kind of Jevons paradox dynamic with efficiency improvements, unfortunately: if an algorithmic advance halves the compute requirements, you can run a model twice as big.
The large SOTA models have hit sharply diminishing returns on further scaling, I think. So you'd rather double the number of models you can run in parallel.