Hacker News

jorvi · today at 8:24 AM · 0 replies

I mean... LLMs hit a pretty hard wall a while ago, and the only solution has been throwing monstrous compute at eking out the remaining few percent of improvement (real-world, not benchmarks). That's not to mention hallucinations / false paths being a foundational problem.

LLMs will continue to get slightly better over the next few years, but mainly a lot more efficient. Which will also mean better and better local models. And grounding might improve, but that just means fewer wrong answers, not better right answers.

So there's no need for doomerism. The people saying LLMs are a few years away from eating the world are either in on the con or unaware.