Hacker News

rTX5CMRXIfFG today at 4:53 AM

Affordability of hardware that can run local LLMs is a real factor, too. Not sure when RAM prices are going down, but with everything that's happening and could happen in the world right now, it doesn't look like they'll drop in the near or medium term.


Replies

pjerem today at 8:12 AM

Open-weight models don't mean you can run them on your laptop (except for the small ones). They mean that someone independent (a cloud provider, another company...) can build big computers capable of running those models and offer you metered usage.

At the end of the day, as a consumer, you still pay per token (or per something) to your provider, except you can choose from multiple providers based on your own criteria. If you want to use DeepSeek v4 hosted in Europe, it's possible.

wahnfrieden today at 4:56 AM

No one is going to run models comparable to the frontier locally without spending enormous sums, except at scale or in large orgs. Even with cheap RAM, you will still need a very large budget for frontier-level capability.

Open models that are competitive with the frontier will be used on shared hosts.
