Hacker News

Laguna XS.2 and M.1

68 points by tosh today at 4:17 PM | 27 comments

Comments

simjnd today at 6:19 PM

Probably a testament to how good Qwen3.6 is, considering Qwen3.6-35B-A3B is ahead of not only their similar-weight-class XS.2 but also their M.1 (close to 10x bigger at 225B-A23B).

Interestingly, Gemma 4 26B-A4B and Qwen3.6 27B (dense) have been left out of the comparison.

The smaller models are becoming very good, and quantization techniques like importance weighting and TurboQuant on model weights let you run aggressively quantized versions (IQ2, TQ3_4S) on consumer hardware with very acceptable perplexity and quality loss.
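Not the actual IQ2/TurboQuant recipes, but a toy numpy sketch of the core idea behind importance-weighted quantization (the function names and the exponential stand-in for importance statistics are my own illustration): pick the quantization scale that minimizes error weighted by how much each weight matters, instead of plain round-to-nearest.

```python
import numpy as np

def weighted_err(w, q, imp):
    """Importance-weighted squared reconstruction error."""
    return float((imp * (w - q) ** 2).sum())

def quantize_rtn(w, bits=2):
    """Plain symmetric round-to-nearest quantization."""
    levels = 2 ** (bits - 1) - 1              # e.g. grid {-s, 0, s} for 2-bit
    scale = np.abs(w).max() / levels
    q = np.clip(np.round(w / scale), -levels, levels)
    return q * scale, scale

def quantize_importance(w, imp, bits=2, n_candidates=64):
    """Search for the scale minimizing *importance-weighted* error,
    rather than just covering the largest weight."""
    levels = 2 ** (bits - 1) - 1
    _, s_rtn = quantize_rtn(w, bits)
    # include the RTN scale among candidates so this can never do worse
    candidates = np.append(np.linspace(0.2 * s_rtn, 1.2 * s_rtn, n_candidates), s_rtn)
    best_q, best_e = None, np.inf
    for s in candidates:
        q = np.clip(np.round(w / s), -levels, levels) * s
        e = weighted_err(w, q, imp)
        if e < best_e:
            best_q, best_e = q, e
    return best_q

rng = np.random.default_rng(0)
w = rng.normal(size=4096).astype(np.float32)
imp = rng.exponential(size=4096).astype(np.float32)  # stand-in for activation stats

q_rtn, _ = quantize_rtn(w)
q_imp = quantize_importance(w, imp)
print(weighted_err(w, q_rtn, imp), weighted_err(w, q_imp, imp))
```

The real formats fold this kind of per-block scale search (plus non-uniform codebooks) into the file format itself; the sketch only shows why weighting the error by importance beats naive rounding at the same bit width.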

Very exciting times for local LLMs.

vijgaurav today at 7:08 PM

The fact they're shipping the actual agent harness alongside the weights is the part that matters. Most labs dump the model and make you figure out the agent layer yourself. If it's the same runtime they use for RL training, it's actually been exercised in production rather than being some demo wrapper.

rohitpaulk today at 4:25 PM

Been testing these via their "pool" agent. It's fast, and the agent adheres to the ACP spec pretty well (better than codex, opencode etc.) so it's a good experience in Zed.

orliesaurus today at 6:32 PM

The colors used in the charts are borderline criminal

jaen today at 5:07 PM

For similarly sized models, not looking very good on the slightly-less-benchmaxxed Terminal-Bench 2.0:

  Laguna XS.2   33B-A3B : 30.6
  Qwen 3.6      35B-A3B : 51.5
  Devstral 2    123B    : 31.2
Quite a huge lead for Qwen... well, at least it's catching up to other smaller Western labs.
throwaw12 today at 4:44 PM

Has anyone tried these models?

I like their honesty in the benchmarks; it looks like Qwen3.6 35B is outperforming their Laguna M.1 225B model.

speedgoose today at 5:07 PM

Please update the charts. Consider using textures or fill patterns.

I usually score pretty well in colour perception tests but distinguishing between those two purples made me doubt myself.

gslepak today at 5:48 PM

Very cool to see more small open models being worked on!

One nit: I've seen on this homepage, and many others, this notion that the people behind the models are "working towards AGI".

I get that this is marketing speak, but transformers are not AGI and never will be, so it'd be great if people stopped saying that; it wears out the meaning of "working towards AGI".

franksiem today at 4:56 PM

Felt like they would never come out of stealth mode, but very nice to see it materialize into something competitive.

kingjimmy today at 4:54 PM

the color coding makes those benchmark charts impossible to understand. very pretty though.

esafak today at 5:39 PM

They're not winning any popular benchmark. Is there some niche where it excels?
