Hacker News

johndough (yesterday at 6:18 PM)

I've been running it with llama-server from llama.cpp (compiled for the CUDA backend, but there are also prebuilt binaries and instructions for other backends in the README), using the Q4_K_M quant from ngxson, on Lubuntu with an RTX 3090:

https://github.com/ggml-org/llama.cpp/releases

https://huggingface.co/ngxson/GLM-4.7-Flash-GGUF/blob/main/G...

https://github.com/ggml-org/llama.cpp?tab=readme-ov-file#sup...
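
If you'd rather build from source than use the prebuilt binaries, the CUDA build is roughly the following as of recent llama.cpp versions (check the README linked above for the exact flags for your backend):

    cmake -B build -DGGML_CUDA=ON
    cmake --build build --config Release -j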

    llama-server -ngl 999 --ctx-size 32768 -m GLM-4.7-Flash-Q4_K_M.gguf
You can then chat with it at http://127.0.0.1:8080 or use the OpenAI-compatible API at http://127.0.0.1:8080/v1/chat/completions
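
For a quick sanity check of the API, a minimal request like the following should work (llama-server serves the single model it was started with, so the usual "model" field can normally be left out):

    curl http://127.0.0.1:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"messages": [{"role": "user", "content": "Hello"}]}'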

Seems to work okay, but there are usually subtle bugs in the implementation or the chat template when a new model is released, so it might be worthwhile to update both the model and the server in a few days.


Replies

mistercheph (yesterday at 6:36 PM)

I think the recently introduced -fit option, which is on by default, means -ngl is no longer necessary. You can probably also drop -c, which defaults to "0" and reads the model's advertised context size from the GGUF metadata.
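
If that's the case, the command from the parent should shrink to just pointing at the model file, something like (untested, assuming those newer defaults):

    llama-server -m GLM-4.7-Flash-Q4_K_M.gguf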
