Hacker News

wrxd · yesterday at 9:43 PM · 0 replies · view on HN

The example in the post confirms my theory that for local models to succeed they need to be "good enough", not so big that they can compete with frontier models.

They need to do a small task well, and they need to run reasonably on consumer-class devices. Even better if they can run on mobile phones.

In my experiments with local LLMs I noticed that while increasing the size of the model is nice, the real thing that turns a barely useful model into something useful is the ability to use tools. Giving my models the ability to search the web and fetch web pages did far more to curb hallucinations than getting a bigger model, and the web has no training cutoff. Sure, the bigger model is probably better at using tools, but I often find the smaller models to be good enough.
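The tool-use setup described above can be sketched roughly like this: expose a couple of tool functions to the model and dispatch whatever tool calls it emits. This is a minimal, hypothetical sketch — the tool names (`search_web`, `fetch_page`), the JSON call format, and the stubbed bodies are my own illustration, not the commenter's actual setup or any specific framework's API.

```python
import json

def search_web(query: str) -> str:
    # Stub: a real version would call a search API and return snippets.
    return f"[top results for: {query}]"

def fetch_page(url: str) -> str:
    # Stub: a real version would fetch the URL and return cleaned text.
    return f"[text of {url}]"

# Registry of tools the local model is told about (via its system prompt
# or a tools schema, depending on the runtime).
TOOLS = {"search_web": search_web, "fetch_page": fetch_page}

def dispatch(tool_call_json: str) -> str:
    """Execute one tool call the model emitted as JSON, e.g.
    {"name": "search_web", "arguments": {"query": "local llms"}}.
    The result is fed back to the model as a tool message."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

result = dispatch('{"name": "search_web", "arguments": {"query": "local llms"}}')
print(result)
```

Even a small model that is mediocre at recalling facts can do well in this loop, because the dispatch step grounds its answers in fetched text rather than in its weights.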