Hacker News

simjnd · today at 6:19 PM · 1 reply

Probably a testament to how good Qwen3.6 is: Qwen3.6-35B-A3B is not only ahead of XS.2 in its own weight class, but also ahead of M.1 (close to 10x bigger at 225B-A23B).

Interestingly, Gemma 4 26B-A4B and Qwen3.6 27B (dense) have been left out of the comparison.

The smaller models are becoming very good, and quantization techniques like importance weighting and TurboQuant on model weights let you run aggressively quantized versions (IQ2, TQ3_4S) on consumer hardware with very acceptable perplexity and quality loss.
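The core idea behind importance-weighted quantization can be sketched in a few lines. This is a toy illustration, not the actual IQ2/TQ3_4S implementation: real schemes use calibration-derived importance matrices, non-uniform codebooks, and per-block superscales. The helper names and the brute-force scale search here are my own assumptions for the sketch.

```python
# Toy sketch of importance-weighted quantization (hypothetical helper,
# not the real llama.cpp/TurboQuant code): pick the per-block scale
# that minimizes *importance-weighted* reconstruction error, so that
# weights the model is most sensitive to are rounded more faithfully.

def quantize_block(weights, importance, bits=3):
    qmax = 2 ** (bits - 1) - 1          # symmetric range, e.g. [-4, 3] for 3-bit
    base = max(abs(w) for w in weights) / qmax  # plain absmax scale
    best = None
    # Brute-force search over scales around the absmax baseline.
    for f in [0.70 + 0.01 * i for i in range(61)]:   # 0.70 .. 1.30
        scale = base * f or 1e-12
        q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
        err = sum(imp * (w - qi * scale) ** 2
                  for w, qi, imp in zip(weights, q, importance))
        if best is None or err < best[0]:
            best = (err, scale, q)
    return best[1], best[2]              # (scale, integer codes)

# Dequantization is just w_hat = q_i * scale.
scale, codes = quantize_block([0.9, -0.4, 0.05, 1.2], [4.0, 1.0, 0.5, 8.0])
```

With uniform importance this degenerates to ordinary round-to-nearest absmax quantization; the weighting only changes *which* rounding errors the scale search is willing to tolerate.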

Very exciting times for local LLMs.