The fact that they're shipping the actual agent harness alongside the weights is the part that matters. Most labs dump the model and make you figure out the agent layer yourself. If it's the same runtime they use for RL training, it's actually been exercised in production rather than being some demo wrapper.
Been testing these via their "pool" agent. It's fast, and the agent adheres to the ACP spec pretty well (better than codex, opencode etc.) so it's a good experience in Zed.
For similarly sized models, not looking very good on the slightly-less-benchmaxxed Terminal-Bench 2.0:
Laguna XS.2 33B-A3B: 30.6
Qwen 3.6 35B-A3B: 51.5
Devstral 2 123B: 31.2
Quite a huge lead for Qwen... well, at least it's catching up to other smaller Western labs. Has anyone tried these models?
I like their honesty in the benchmarks; it looks like Qwen3.6 35B is outperforming their own Laguna M.1 225B model.
Please update the charts. Consider using textures or fill patterns.
I usually score pretty well in colour perception tests but distinguishing between those two purples made me doubt myself.
Very cool to see more small open models being worked on!
One nit: I've seen on this homepage, and many others, this notion that the people behind the models are "working towards AGI".
I get that this is marketing speak, but transformers are not AGI, and they will never be AGI, so it'd be great if people stopped saying that as it sort of wears out the meaning of "working towards AGI".
Felt like they would never come out of stealth mode, but it's very nice to see it materialize into something competitive.
The color coding makes those benchmark charts impossible to understand. Very pretty, though.
They're not winning any popular benchmark. Is there some niche where it excels?
Probably a testament to how good Qwen3.6 is considering Qwen3.6-35B-A3B is not only ahead of their similar weight class XS.2 but also their M.1 (close to 10x bigger at 225B-A23B).
Interestingly, Gemma 4 26B-A4B and Qwen3.6 27B (dense) have been left out of the comparison.
The smaller models are becoming very good, and quantization techniques like importance weighting and TurboQuant on model weights let you run aggressively quantized versions (IQ2, TQ3_4S) on consumer hardware with entirely acceptable perplexity and quality loss.
Very exciting times for local LLMs.
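For anyone curious what "importance weighting" buys you here: the idea (as I understand it, similar in spirit to llama.cpp's imatrix quants) is that instead of rounding every weight to the nearest grid point, you pick the quantization scale that minimizes an importance-weighted error, so weights that matter more to the activations get rounded more faithfully. A toy numpy sketch of that scale search — all function names and the search range are my own illustration, not any library's actual API:

```python
import numpy as np

def quantize_block(w, importance, bits=4):
    """Symmetric quantization of a weight block with an
    importance-weighted scale search.

    Returns (q, scale) where q * scale approximates w.
    """
    qmax = 2 ** (bits - 1) - 1
    base = np.max(np.abs(w)) / qmax  # naive round-to-nearest scale
    if base == 0.0:
        return np.zeros_like(w, dtype=np.int8), 0.0

    best_err, best = np.inf, None
    # Try a handful of candidate scales; f = 1.0 reproduces the
    # naive scale, so the search can never do worse than naive.
    for f in np.linspace(0.7, 1.0, 16):
        s = base * f
        q = np.clip(np.round(w / s), -qmax - 1, qmax)
        err = np.sum(importance * (w - q * s) ** 2)  # weighted MSE
        if err < best_err:
            best_err, best = err, (q.astype(np.int8), s)
    return best

rng = np.random.default_rng(0)
w = rng.normal(size=256).astype(np.float32)
# Hypothetical per-weight importances (in practice derived from
# activation statistics over a calibration set).
imp = rng.uniform(0.1, 10.0, size=256).astype(np.float32)

q, s = quantize_block(w, imp, bits=4)
naive_s = np.max(np.abs(w)) / 7
naive_q = np.clip(np.round(w / naive_s), -8, 7)

weighted = np.sum(imp * (w - q * s) ** 2)
naive = np.sum(imp * (w - naive_q * naive_s) ** 2)
print(weighted <= naive)  # the search never loses to naive rounding
```

Real quantizers do this per block with non-uniform grids and much cleverer searches, but this is the core trade: spend a little search time at quantization to shave weighted error, which is where most of the "IQ2 is surprisingly usable" quality comes from.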