Hacker News

baranmelik | yesterday at 4:59 PM

For anyone who’s already running this locally: what’s the simplest setup right now (tooling + quant format)? If you have a working command, would love to see it.


Replies

johndough | yesterday at 6:18 PM

I've been running it with llama-server from llama.cpp (compiled with the CUDA backend, though there are also prebuilt binaries and instructions for other backends in the README), using the Q4_K_M quant from ngxson, on Lubuntu with an RTX 3090:

https://github.com/ggml-org/llama.cpp/releases

https://huggingface.co/ngxson/GLM-4.7-Flash-GGUF/blob/main/G...

https://github.com/ggml-org/llama.cpp?tab=readme-ov-file#sup...

    llama-server -ngl 999 --ctx-size 32768 -m GLM-4.7-Flash-Q4_K_M.gguf
You can then chat with it at http://127.0.0.1:8080 or use the OpenAI-compatible API at http://127.0.0.1:8080/v1/chat/completions
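
In case a concrete call helps, here's a minimal request against that endpoint (the "model" field is just a label here, since llama-server serves whichever GGUF it was started with):

    # POST a chat completion to the local llama-server (OpenAI-compatible endpoint)
    curl http://127.0.0.1:8080/v1/chat/completions \
        -H "Content-Type: application/json" \
        -d '{
              "model": "GLM-4.7-Flash",
              "messages": [{"role": "user", "content": "Hello, who are you?"}],
              "temperature": 0.7
            }'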

Seems to work okay, but there are usually subtle bugs in the implementation or chat template when a new model is released, so it might be worthwhile to update both the model and the server in a few days.
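
If the chat template turns out to be the weak spot, newer llama.cpp builds can also render the model's own Jinja template instead of the built-in conversion. The flags below are from memory, so double-check them against llama-server --help, and chat_template.jinja is just a hypothetical local file you'd save the repo's template into:

    # Same server command, but render a locally saved Jinja chat template
    llama-server -ngl 999 --ctx-size 32768 -m GLM-4.7-Flash-Q4_K_M.gguf \
        --jinja --chat-template-file chat_template.jinja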

ljouhet | yesterday at 6:59 PM

Something like

    ollama run hf.co/ngxson/GLM-4.7-Flash-GGUF:Q4_K_M
It's really fast! But for now it outputs garbage because there is no (good) template, so I'll wait for a model/template on ollama.com
zackify | yesterday at 7:25 PM

LM Studio: search for 4.7-flash and install it from the mlx-community.

pixelmelt | yesterday at 5:25 PM

I would look into running a 4-bit quant using llama.cpp (or any of its wrappers).
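
For what it's worth, recent llama.cpp builds can also fetch a quant straight from Hugging Face, which is about the simplest one-liner I know of (the -hf flag and the repo:quant tag syntax assume a fairly new build, so treat this as a sketch):

    # Download the Q4_K_M GGUF from Hugging Face and serve it locally
    llama-server -hf ngxson/GLM-4.7-Flash-GGUF:Q4_K_M -ngl 999 --ctx-size 32768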