
misiti3780 · today at 8:17 PM

What hardware are you running them on? Are you using Ollama?


Replies

vunderba · today at 8:37 PM

I'm using the default llama-server that's part of Gerganov's llama.cpp inference system, running on a headless machine with a 16 GB NVIDIA GPU. Ollama is a bit easier to ease into, though, since it has a preset model library.

https://github.com/ggml-org/llama.cpp
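
For context: llama-server exposes an OpenAI-compatible HTTP API, so you can talk to it from any language. Below is a minimal Python sketch of a chat request against a local instance; the host/port (llama-server's default is 127.0.0.1:8080), the prompt, and the max_tokens value are all placeholders to adjust for your setup:

    # Minimal sketch: query a local llama-server via its
    # OpenAI-compatible chat endpoint. Assumes the default
    # host/port (127.0.0.1:8080); adjust to match your setup.
    import requests

    resp = requests.post(
        "http://127.0.0.1:8080/v1/chat/completions",
        json={
            "messages": [{"role": "user", "content": "Hello!"}],
            "max_tokens": 64,  # placeholder; tune as needed
        },
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])

The server answers with whatever model it was launched with, so no "model" field should be needed in the request.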