Yes, I usually run Unsloth models; however, you're linking to the big model (355B-A32B), which I can't run on my consumer hardware.
The flash model in this thread is more than 10x smaller (30B).
There are a bunch of 4-bit quants in the GGUF link, and 0xSero has some smaller ones too. They might still be too big, in which case you'll need to un-GPU-poor yourself.
When the Unsloth quant of the Flash model does appear, it should show up as unsloth/... on this page:
https://huggingface.co/models?other=base_model:quantized:zai...
Probably as:
https://huggingface.co/unsloth/GLM-4.7-Flash-GGUF