This is a real danger that I think a lot of people will run into as prices keep climbing.
Setting aside the productivity debate entirely: offloading cognitive tasks to LLMs leaves you less practiced in them and less ready to do them when the LLM isn't available. When you have to delegate only certain tasks to the LLM for financial reasons, you may find yourself very frustrated.
This is the bet of many of the big AI companies, and it's why they're heavily subsidizing the calls. With the latest moves by the US government, it seems Anthropic is starting to reduce those subsidies given their edge in the game. I'm starting to consider local models more seriously beyond just testing, but right now RAM/GPU prices are badly inflated.
Seriously, who isn't planning a local-first strategy?
I'm really hoping locally hosted LLMs get to the point of competing with current-day frontier models, so that we all have "unlimited" usage.