> Most users don't need frontier model performance

Unfortunately, this is not the case.
Any citations? Because that was my impression, too. I want frontier model performance for my coding assistant, but "most users" could do with smaller/faster models.
ChatGPT free falls back to GPT-5.2 Mini after a few interactions.
> unfortunately, this is not the case
Most users are fixing grammar/spelling, summarising/converting/rewriting text, creating funny icons, and looking up simple facts; none of that requires frontier model performance.
I've a feeling that if/when Apple release their on-device LLM/Siri improvements that can call out to a bigger model if needed, the vast majority of people will be happy with what they get for free running on their phone.
"Hey dingus, set timer for 30 minutes"
eh, it's weird how the tech world wants to build trillions of data centers for... what, escaping the permanent underclass?
I think the "need" you speak of is a bit of a colored statement.
It depends. If they're using a small/medium local model as a 1:1 ChatGPT replacement as-is, they'll have a bad time. Even ChatGPT refers to external services to get more data.
But a local model + good harness with a robust toolset will work for people more often than not.
The model itself doesn't need to know who the president of Zambia was in 1968, because it has a tool it can use to look it up on Wikipedia.
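A minimal sketch of what that harness pattern looks like (all names here are illustrative, not a real API; the lookup tool is stubbed out where a real harness would call the Wikipedia API):

```python
def wikipedia_lookup(query: str) -> str:
    # Stub: a real harness would query the Wikipedia/MediaWiki API here.
    facts = {"president of Zambia in 1968": "Kenneth Kaunda"}
    return facts.get(query, "no article found")

# The harness exposes a registry of tools the model is allowed to call.
TOOLS = {"wikipedia_lookup": wikipedia_lookup}

def run_harness(model_output: dict) -> str:
    """Dispatch a structured tool call emitted by the model and return the result."""
    tool = TOOLS[model_output["tool"]]
    return tool(model_output["arguments"]["query"])

# Instead of answering from memory, the local model emits a tool call like:
call = {"tool": "wikipedia_lookup",
        "arguments": {"query": "president of Zambia in 1968"}}
print(run_harness(call))  # Kenneth Kaunda
```

The point is that the knowledge lives in the tool, not the weights, so a small model that can reliably emit the structured call gets you most of the way there.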