Hacker News

simonw · 01/21/2025

Six months ago I had almost given up on local LLMs - they were fun to try but they were so much less useful than Sonnet 3.5 / GPT-4o that it was hard to justify using them.

That's changed in the past two months. Llama 3 70B, Qwen 32B and now these R1 models are really impressive, to the point that I'm considering trying to get real work done with them.

The catch is RAM: I have 64GB, but loading up a current GPT-4 class model uses up around 40GB of that - which doesn't leave much for me to run Firefox and VS Code.
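(The ~40GB figure checks out as a back-of-the-envelope calculation: quantized weight memory is roughly parameter count times bits per weight. This is a rough sketch with a made-up helper name; real usage adds a few GB more for the KV cache and runtime overhead.)

```python
def model_ram_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB for a quantized model.

    Assumes weights dominate memory use; KV cache and runtime
    overhead are not included.
    """
    bytes_per_weight = bits_per_weight / 8
    # params_billion * 1e9 weights, converted back to GB
    return params_billion * 1e9 * bytes_per_weight / 1e9

# A 70B model at a typical ~4.5 effective bits per weight
# (common 4-bit quantization formats carry some overhead):
print(round(model_ram_gb(70, 4.5), 1))  # ~39.4 GB, most of a 64GB machine
```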

So I'm still not likely to use them on a daily basis - but it does make me wonder if I should keep this laptop around as a dedicated server next time I upgrade.


Replies

gjm11 · 01/21/2025

Thanks!

One reason why I'm asking is that I'm in the market for a new laptop and am wondering whether it's worth spending more for the possible benefits of being able to run ~30-40GB local LLMs.

Unfortunately it doesn't look as if the answer is either "ha ha, obviously not" or "yes, obviously". (If the question were only about models available right now I think the answer would be no, but it seems like they're close enough to being useful that I'm reluctant to bet on them not being clearly genuinely useful a year from now.)
