Hacker News

bityard · yesterday at 4:27 PM

I have a Framework Desktop too and 20-25 t/s is a lot better than I was expecting for such a large dense model. I'll have to try it out tonight. Are you using llama.cpp?


Replies

UncleOxidant · yesterday at 4:53 PM

LM Studio, but it uses llama.cpp to run inference, so yeah. This is with the Vulkan backend, not ROCm.
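
If you want to reproduce it with llama.cpp directly instead of LM Studio, something like this should work. Rough sketch only: the GGUF filename is a placeholder, and the flags are taken from llama.cpp's build docs, so double-check against your checkout.

    # build llama.cpp with the Vulkan backend (instead of ROCm)
    cmake -B build -DGGML_VULKAN=ON
    cmake --build build --config Release -j

    # measure tokens/s; model.gguf is a placeholder path,
    # -ngl 99 offloads all layers to the GPU
    ./build/bin/llama-bench -m model.gguf -ngl 99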