
ein0p · 01/21/2025

This is from a small model. The 32B and 70B answer this correctly, and get "Arrowroot" right too. Interestingly, the 32B's "thinking" is a lot shorter and it seems more "sure". That could be because it's based on Qwen rather than LLaMA.


Replies

cbo100 · 01/21/2025

I get the right answer on the 8B model too.

Could it be the quantized version failing?
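One way to check that hypothesis is to run the same prompt through a full-precision and a quantized copy of the 8B model and compare the answers. A minimal sketch using transformers with bitsandbytes 4-bit quantization (which may differ from whatever quant the original poster ran); the model ID and the prompt are guesses at what the thread is discussing, not confirmed by it:

    # Sketch: compare full-precision vs. 4-bit quantized output on one prompt.
    # Assumes a CUDA GPU with transformers, accelerate, and bitsandbytes installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Llama-8B"  # assumed 8B distill
    PROMPT = "How many times does the letter 'r' appear in 'arrowroot'?"  # stand-in

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

    def generate(model):
        # Greedy decoding so any difference comes from the weights, not sampling.
        inputs = tokenizer.apply_chat_template(
            [{"role": "user", "content": PROMPT}],
            add_generation_prompt=True, return_tensors="pt",
        ).to(model.device)
        out = model.generate(inputs, max_new_tokens=1024, do_sample=False)
        return tokenizer.decode(out[0, inputs.shape[-1]:], skip_special_tokens=True)

    # Full-precision (bf16) baseline.
    full = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto")
    print("bf16:", generate(full))
    del full
    torch.cuda.empty_cache()

    # 4-bit quantized run; if this one flips the answer, quantization is the culprit.
    quant = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        quantization_config=BitsAndBytesConfig(load_in_4bit=True),
        device_map="auto")
    print("4-bit:", generate(quant))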
