Hacker News

wmf · today at 12:27 AM · 1 reply

> just want to run a 7-8B model locally

This is already solved by running LM Studio on a normal computer.


Replies

zozbot234 · today at 12:30 AM

Ollama and llama.cpp are also common alternatives. But an 8B model isn't going to have much real-world knowledge or be highly reliable for agentic workloads, so it makes sense that people will want more than that.
