Hacker News

AugSun · today at 5:05 AM · 7 replies

"Most users don't need frontier model performance" unfortunately, this is not the case.


Replies

theshrike79 · today at 8:46 AM

It depends. If they're using a small/medium local model as a 1:1 ChatGPT replacement as-is, they'll have a bad time. Even ChatGPT refers to external services to get more data.

But a local model + good harness with a robust toolset will work for people more often than not.

The model itself doesn't need to know who was the president of Zambia in 1968, because it has a tool it can use to check it from Wikipedia.
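The harness pattern described above can be sketched roughly as follows. This is an illustrative toy, not any real harness: the "local model" is a stub that emits a tool call instead of answering from memory, the tool registry and `FAKE_WIKI` stand-in are invented for the example, and a real setup would call an actual model and the real Wikipedia API.

```python
import json

# Stand-in for an external knowledge source; a real harness would
# query the Wikipedia API instead of this dict.
FAKE_WIKI = {"President of Zambia 1968": "Kenneth Kaunda"}

# Tool registry the harness exposes to the model (names are illustrative).
TOOLS = {
    "wikipedia_lookup": lambda query: FAKE_WIKI.get(query, "no article found"),
}

def local_model(prompt: str) -> str:
    """Stub for a small local model: rather than relying on memorized
    facts, it replies with a structured tool call for the harness."""
    return json.dumps({
        "tool": "wikipedia_lookup",
        "args": {"query": "President of Zambia 1968"},
    })

def run_harness(prompt: str) -> str:
    # The harness, not the model, resolves the fact: parse the model's
    # tool call, dispatch to the registered tool, return its result.
    call = json.loads(local_model(prompt))
    tool = TOOLS[call["tool"]]
    return tool(**call["args"])

print(run_harness("Who was the president of Zambia in 1968?"))
# → Kenneth Kaunda
```

The point of the pattern is that the model only needs to produce a well-formed tool call; freshness and factual coverage come from the tools, so a small local model can punch above its weight.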

selcuka · today at 5:58 AM

Any citations? Because that was my impression, too. I want frontier model performance for my coding assistant, but "most users" could do with smaller/faster models.

ChatGPT free falls back to GPT-5.2 Mini after a few interactions.

helsinkiandrew · today at 7:47 AM

> unfortunately, this is not the case

Most users are fixing grammar/spelling, summarising/converting/rewriting text, creating funny icons, and looking up simple facts; none of that is anywhere near needing frontier model performance.

I've a feeling that if/when Apple releases their on-device LLM/Siri improvements that can call out to bigger models if needed, the vast majority of people will be happy with what they get for free running on their phone.

blitzar · today at 8:45 AM

"Hey dingus, set timer for 30 minutes"

cyanydeez · today at 10:27 AM

Eh, it's weird how the tech world wants to build trillions of dollars' worth of data centers for... what, escaping the permanent underclass?

I think what "need" youspeak of is a bit of a colored statement.

AugSun · today at 5:24 AM

[flagged]
