Hacker News

2ndorderthought · today at 10:54 AM

I test drove it yesterday. It's pretty impressive for an 8B model, and it runs quickly on commodity hardware.

Qwen3.6 35b a3b is still my local champion, but I may use this for autocomplete and small tasks. Granite has recent training data, which is nice. If the other small models were fine-tuned on recent data I don't know if I would use this at all, but that alone makes it pretty decent.

The 4B they released was not good for my needs, but it could probably handle tool calls or something similar.
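Even a 4B model can be workable for tool use if you keep the contract simple: have the model emit a small JSON object and do the validation and dispatch on the host side. A minimal sketch of that host-side dispatcher — the tool names and JSON schema here are made up for illustration, not any particular model's format:

```python
import json

# Hypothetical tool registry: maps a tool name the model may emit
# to a plain Python function that implements it.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "add": lambda a, b: a + b,
}

def dispatch_tool_call(raw: str):
    """Parse a model-emitted tool call like
    {"name": "add", "arguments": {"a": 2, "b": 3}}
    and invoke the matching registered function."""
    call = json.loads(raw)
    fn = TOOLS.get(call["name"])
    if fn is None:
        raise ValueError(f"unknown tool: {call['name']}")
    return fn(**call.get("arguments", {}))

# The result would normally be fed back to the model as a tool response.
print(dispatch_tool_call('{"name": "add", "arguments": {"a": 2, "b": 3}}'))  # 5
```

Keeping the schema this small is the point: a weak model only has to produce one flat JSON object, and everything fragile (validation, error handling, actual execution) stays in ordinary code.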


Replies

vessenes · today at 11:52 AM

Have you tried the Gemma 4 series, out of curiosity? I haven’t run a local model in a while, but the benchmarks look good. I’d take a free local tool-use model if it was relatively consistent.

cyanydeez · today at 12:34 PM

Qwen3-Coder-Next seems to be the perfect size for coding. I tried the new one and just found the verbosity not really useful for coding, but it's probably fine for more analytical tasks or writing docs.

steveharing1 · today at 10:57 AM

Yeah, no doubt the Qwen 3.6 open weights are far stronger.
