Hacker News

Tostino · yesterday at 5:19 PM

Yup, even with 2x 24GB GPUs, it's impossible to get anywhere close to the big models in quality or speed, even though the hardware is a fraction of the cost.


Replies

mirekrusin · yesterday at 9:05 PM

I'm running unsloth/GLM-4.7-Flash-GGUF:UD-Q8_K_XL via llama.cpp on 2x 24GB 4090s; it fits perfectly with 198k context at 120 tokens/s, and the model itself is really good.
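A setup like this can be sketched with llama.cpp's server binary; the exact flag values below are assumptions (not the commenter's actual command), but the flags themselves are standard llama.cpp options:

```shell
# Sketch of a llama.cpp invocation for this setup. Values are illustrative;
# tune context size and tensor split for your own hardware.
llama-server \
  -hf unsloth/GLM-4.7-Flash-GGUF:UD-Q8_K_XL \
  --ctx-size 198000 \
  --n-gpu-layers 99 \
  --tensor-split 1,1
```

`--tensor-split 1,1` spreads the weights evenly across the two 24GB cards, and `--n-gpu-layers 99` offloads all layers to GPU so nothing runs on CPU.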
