Hacker News

simonw · 01/21/2025 · 2 replies · view on HN

Yeah, it's not an obvious answer at all. Spending ~$3,000+ on a laptop to run local models is only economically sensible if you are VERY paranoid about using APIs (there are plenty of API providers that I personally trust not to train on my data) - otherwise that $3,000 will buy you many years worth of access to the best available models via API.
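The "many years" claim is easy to sanity-check. A minimal sketch, assuming a hypothetical $20/month of personal API usage (that figure is my assumption, not from the comment):

```python
# Back-of-envelope break-even: laptop premium vs. ongoing API spend.
# Both figures below are assumptions for illustration only.
LAPTOP_PREMIUM_USD = 3000      # assumed extra cost of the high-spec laptop
MONTHLY_API_SPEND_USD = 20     # assumed typical personal API bill

months_to_break_even = LAPTOP_PREMIUM_USD / MONTHLY_API_SPEND_USD
years_to_break_even = months_to_break_even / 12
print(f"Break-even: {months_to_break_even:.0f} months (~{years_to_break_even:.1f} years)")
# → Break-even: 150 months (~12.5 years)
```

At those assumed rates the laptop takes over a decade to pay for itself, which is the arithmetic behind "many years worth of access."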


Replies

gjm11 · 01/21/2025

Well, I unfortunately have expensive tastes in laptops anyway, so the delta is substantially less than $3k. It's also possible that from time to time I'll run across other things that benefit from the fancier machine, and if I don't get a 64GB Mac, one of the other possibilities is a 48GB Mac, which would still be able to run some local LLMs. But, all that said, it's still potentially a sizable chunk of money for a dubious benefit.

I've been assuming that privacy isn't the only benefit of going local; it seems like a local model would offer more flexibility for fine-tuning, RAG, etc., though I am completely ignorant of e.g. what size of model it's actually feasible to do any useful fine-tuning on, given the hardware.
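The 48GB-vs-64GB question can be roughed out with a common rule of thumb (an assumption here, not a benchmark): weight memory ≈ parameter count × bits per parameter ÷ 8, before KV cache and activation overhead. The model sizes below are illustrative examples, not recommendations:

```python
# Rough weight-memory estimate for local LLM inference.
# Rule of thumb only: ignores KV cache, activations, and runtime overhead,
# so real headroom requirements are somewhat higher.
def weight_memory_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate GB needed just to hold the weights."""
    return params_billion * bits_per_param / 8  # 1e9 params and 1e9 bytes cancel

for params in (8, 34, 70):                     # example model sizes in billions
    for bits in (16, 8, 4):                    # fp16, int8, 4-bit quantization
        gb = weight_memory_gb(params, bits)
        print(f"{params:>2}B @ {bits:>2}-bit: {gb:6.1f} GB  "
              f"fits 48GB={gb < 48}  fits 64GB={gb < 64}")
```

Under these assumptions a 70B model at 4-bit quantization needs roughly 35 GB for weights, so it's plausible on either machine for inference, while fine-tuning (which also needs optimizer state and gradients) is a much tighter squeeze.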

KarraAI · 01/28/2025

[dead]