Hacker News

gjm11 · 01/21/2025 · 2 replies

What's your sense of how useful local LLMs are for things other than ... writing blog posts about experimenting with local LLMs? :-)

(This is a serious question, not poking fun; I am actually curious about this.)


Replies

simonw · 01/21/2025

Six months ago I had almost given up on local LLMs - they were fun to try but they were so much less useful than Sonnet 3.5 / GPT-4o that it was hard to justify using them.

That's changed in the past two months. Llama 3 70B, Qwen 32B and now these R1 models are really impressive, to the point that I'm considering trying to get real work done with them.

The catch is RAM: I have 64GB, but loading a current GPT-4-class model uses around 40GB of that — which doesn't leave much headroom for running Firefox and VS Code alongside it.
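That ~40GB figure is roughly what the arithmetic predicts. As a sketch (the 20% overhead factor for KV cache and activations is my assumption, not a measured number), the weights of a quantized model take parameter-count × bits-per-weight ÷ 8 bytes:

```python
def model_memory_gb(n_params: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
    """Rough RAM estimate for running a quantized model locally.

    Weights alone are n_params * bits_per_weight / 8 bytes; the
    overhead factor (~20%, an assumption) covers KV cache and
    activations, which grow with context length in practice.
    """
    weight_bytes = n_params * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 70B-parameter model at 4-bit quantization:
print(round(model_memory_gb(70e9, 4), 1))  # ~42 GB — close to the 40GB above
```

The same formula shows why a 32B model at 4-bit (~19GB) fits comfortably on a 64GB machine while a 70B model squeezes out everything else.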

So I'm still not likely to use them on a daily basis - but it does make me wonder if I should keep this laptop around as a dedicated server next time I upgrade.

jhonof · 01/21/2025

If you are worried about security or IP at all, it's preferable to run the model locally, or to spin up your own box running one of these models that you can query.
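Querying a self-hosted model looks much like calling a hosted API, except nothing leaves your network. A minimal sketch, assuming a local server that exposes an OpenAI-compatible chat endpoint (llama.cpp's `llama-server` and Ollama both do; the port and model name here are placeholders for whatever your setup uses):

```python
import json
import urllib.request

# Assumed local endpoint — adjust host/port to your own server.
URL = "http://localhost:8080/v1/chat/completions"

def build_request(prompt: str, model: str = "local") -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# Sending it requires a running local server:
# with urllib.request.urlopen(build_request("Summarize this document.")) as r:
#     print(json.load(r)["choices"][0]["message"]["content"])
```

Because the endpoint is on localhost (or your own box), prompts containing proprietary code or contracts never transit a third-party service.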
