
cientifico · today at 6:57 AM

For most users who wanted to run LLMs locally, Ollama solved the UX problem.

One command, and you are running models, even pulling in the ROCm drivers without knowing it.
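For context, the one-command flow being described looks roughly like this (the model name is just an illustrative example; Ollama supports many):

```shell
# Ollama: download and start chatting with a model in a single command.
# GPU acceleration (CUDA/ROCm) is detected and used automatically.
ollama run llama3.2
```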

If llama.cpp provides that UX, they failed terribly at communicating it. Starting with the name. Llama.cpp: that sounds like a C++ library! Ollama is the wrapper. That's the mental model. I don't want to build my own program! I just want to have fun :-P


Replies

anakaine · today at 7:10 AM

Llama.cpp now ships a GUI by default. It previously lacked this. Times have changed.

FrozenSynapse · today at 7:30 AM

But if Ollama is much slower, that's cutting into your fun, and you'd have more fun with a faster GUI.
