We also made some dynamic MLX quants if they help - they might be faster on Macs, but llama-server is definitely improving at a fast pace.
https://huggingface.co/unsloth/Qwen3.6-27B-UD-MLX-4bit
What exactly does the .sh file install? How does it compare to running the same model in, say, omlx?