Unsloth quants available:
128 GB (112 GB available) AMD Ryzen AI Max+ 395 (Strix Halo), Radeon 8060S (gfx1151)
llama.cpp build 8889 with ROCm support; nightly ROCm
llama.cpp/build/bin/llama-batched-bench -hf unsloth/Qwen3.6-27B-GGUF:UD-Q8_K_XL -npp 1000,2000,4000,8000,16000,32000 -ntg 128 -npl 1 -c 34000
| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
|-------|--------|------|--------|----------|----------|----------|----------|----------|----------|
| 1000 | 128 | 1 | 1128 | 2.776 | 360.22 | 20.192 | 6.34 | 22.968 | 49.11 |
| 2000 | 128 | 1 | 2128 | 5.778 | 346.12 | 20.211 | 6.33 | 25.990 | 81.88 |
| 4000 | 128 | 1 | 4128 | 11.723 | 341.22 | 20.291 | 6.31 | 32.013 | 128.95 |
| 8000 | 128 | 1 | 8128 | 24.223 | 330.26 | 20.399 | 6.27 | 44.622 | 182.15 |
| 16000 | 128 | 1 | 16128 | 52.521 | 304.64 | 20.669 | 6.19 | 73.190 | 220.36 |
| 32000 | 128 | 1 | 32128 | 120.333 | 265.93 | 21.244 | 6.03 | 141.577 | 226.93 |
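To compare runs like the tables above, it can help to pull out a single column. A minimal sketch (not from the post, and `results.md` is a hypothetical filename) that extracts the "S_TG t/s" column, the 9th pipe-delimited field, from a saved llama-batched-bench markdown table:

```shell
# Extract the token-generation throughput ("S_TG t/s", field 9 between the
# pipes) from each data row of a llama-batched-bench markdown table.
# Assumes the table was saved to results.md (hypothetical filename).
awk -F'|' '/^\| *[0-9]/ { gsub(/ /, "", $9); print $9 }' results.md
```

The `/^\| *[0-9]/` pattern skips the header and separator rows, so only the numeric data rows are printed.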
More directly comparable to the results posted by genpfault (IQ4_XS):

llama.cpp/build/bin/llama-batched-bench -hf unsloth/Qwen3.6-27B-GGUF:IQ4_XS -npp 1000,2000,4000,8000,16000,32000 -ntg 128 -npl 1 -c 34000
| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
|-------|--------|------|--------|----------|----------|----------|----------|----------|----------|
| 1000 | 128 | 1 | 1128 | 2.543 | 393.23 | 9.829 | 13.02 | 12.372 | 91.17 |
| 2000 | 128 | 1 | 2128 | 5.400 | 370.36 | 9.891 | 12.94 | 15.291 | 139.17 |
| 4000 | 128 | 1 | 4128 | 10.950 | 365.30 | 9.972 | 12.84 | 20.922 | 197.31 |
| 8000 | 128 | 1 | 8128 | 22.762 | 351.46 | 10.118 | 12.65 | 32.880 | 247.20 |
| 16000 | 128 | 1 | 16128 | 49.386 | 323.98 | 10.387 | 12.32 | 59.773 | 269.82 |
| 32000 | 128 | 1 | 32128 | 114.218 | 280.16 | 10.950 | 11.69 | 125.169 | 256.68 |

At this trajectory, Unsloth are going to release the quants BEFORE the model even drops, within the next few weeks...
Getting ~36-33 tok/s (declining as context grows; see the "S_TG t/s" column) on a 24 GB Radeon RX 7900 XTX using llama.cpp's Vulkan backend: