Hacker News

canpan · today at 12:12 AM

Llama.cpp with automatic offload to main memory. You can also use Ollama; it is easier, but slower.
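As a rough sketch of what that offload looks like in practice (model path and layer count here are illustrative, not from the comment): llama.cpp's `-ngl` / `--n-gpu-layers` flag puts that many layers on the GPU and keeps the rest in system RAM.

```shell
# Illustrative: offload 20 layers to the GPU; the remaining
# layers stay in main memory and run on the CPU.
./llama-server -m ./models/model.gguf -ngl 20 -c 4096

# The Ollama route (easier, but slower, as the comment notes):
# ollama run llama3
```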


Replies

reverius42 · today at 5:21 AM

For those who want a GUI, LM Studio does this too (with llama.cpp as the backend, I think). I'm getting great (albeit slow) results with Qwen3.6-35B MoE on 8 GB of GPU RAM and 40 GB of system RAM.