Improvements in LLM efficiency should be driving hardware prices down.
I agree with everything you've said; I'm just not seeing any material benefit from that statement as of now.
Inference costs falling 2x doesn’t decrease hardware prices when demand for tokens has increased 10x.
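A minimal back-of-the-envelope sketch of that point, using the 2x and 10x figures from the comment above (the numbers are purely illustrative, not measured):

```python
# Illustrative only: why a 2x drop in per-token inference cost need not
# lower hardware prices when token demand grows 10x.

compute_per_token_before = 1.0                       # arbitrary units
compute_per_token_after = compute_per_token_before / 2   # 2x efficiency gain

tokens_before = 1.0                                  # baseline demand (normalized)
tokens_after = tokens_before * 10                    # 10x growth in tokens served

total_compute_before = tokens_before * compute_per_token_before
total_compute_after = tokens_after * compute_per_token_after

print(f"total compute demanded: before={total_compute_before}, after={total_compute_after}")
# before=1.0, after=5.0 -> aggregate demand for hardware still rises 5x,
# so cheaper inference can coexist with rising (or flat) hardware prices,
# a Jevons-paradox-style dynamic.
```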