Hacker News

codemog · yesterday at 7:54 AM · 5 replies

Can someone explain how a 27B model (quantized, no less) can ever be comparable to a model like Sonnet 4.0, which likely has somewhere in the mid to high hundreds of billions of parameters?

Is it really just more training data? I doubt it’s architecture improvements, or at the very least, I imagine any architecture improvements are marginal.


Replies

alecco · yesterday at 3:15 PM

AFAIK post-training and distillation techniques have advanced a lot in the past couple of years. Big SOTA models reach a new frontier, and within 6 months it trickles down to open models with 10x fewer parameters.
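For reference, distillation in the classic sense trains the small model to match the teacher's softened output distribution rather than just its top-1 labels. A minimal sketch in plain Python (toy logits only, no real models involved):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, optionally softened by a temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    Minimizing this pushes the student toward the teacher's whole
    output distribution, which carries more signal than hard labels.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A perfectly matched student gives ~0 loss; a mismatched one is penalized.
loss_same = distillation_loss([2.0, 1.0, 0.0], [2.0, 1.0, 0.0])
loss_diff = distillation_loss([2.0, 1.0, 0.0], [0.0, 1.0, 2.0])
```

In practice this KL term is mixed with the ordinary next-token loss, but the core idea is just matching distributions.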

And mind that the source pre-training data was not written for training LLMs; it's just random stuff from the Internet, books, etc. So there's a LOT of completely useless and contradictory information. Carefully curated training texts work far better, and you can just generate and curate them from those huge frontier LLMs. This was shown in the TinyStories paper, where GPT-4-generated children's stories let models 3 orders of magnitude smaller achieve quite a lot.
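The curation half of "generate and curate" can be as simple as scoring candidate texts and keeping the best. A toy sketch; the heuristic here is entirely hypothetical, since real pipelines use LLM judges, deduplication, and classifier filters:

```python
def quality_score(text):
    """Toy curation heuristic (hypothetical): reward longer texts with
    a more diverse vocabulary, penalizing repetitive junk."""
    words = text.lower().split()
    if not words:
        return 0.0
    diversity = len(set(words)) / len(words)
    return diversity * min(len(words), 50)

def curate(candidates, keep=2):
    """Keep only the top-scoring generated texts for the training set."""
    return sorted(candidates, key=quality_score, reverse=True)[:keep]

samples = [
    "the the the the the",                       # repetitive junk
    "A curious fox explored the quiet forest.",  # clean, varied
    "Rain fell softly on the old tin roof.",     # clean, varied
]
kept = curate(samples, keep=2)  # drops the repetitive sample
```

The point is the shape of the loop, generate cheaply then filter aggressively, not the specific scoring function.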

This is why the big US labs complain that China is "stealing" their work by distilling their models: Chinese labs save many billions in training costs with just a bunch of accounts. (I'm just stating what they say, not giving my opinion.)

otabdeveloper4 · yesterday at 8:10 AM

There are diminishing returns (bigly) when you increase parameter count.

The sweet spot isn't in the "hundreds of billions" range, it's much lower than that.
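The "sweet spot" claim can be illustrated with a Chinchilla-style scaling law: at a fixed training compute budget, loss is minimized at an intermediate parameter count rather than at the biggest model. A sketch using the fitted constants reported by Hoffmann et al. (2022); the numbers are purely illustrative, not a claim about any specific model:

```python
# Chinchilla-style loss fit: L(N, D) = E + A/N^alpha + B/D^beta,
# with N = parameters, D = training tokens, constants from the
# Hoffmann et al. (2022) fit.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params, n_tokens):
    return E + A / n_params**alpha + B / n_tokens**beta

# Fix a compute budget C ~ 6*N*D (roughly Chinchilla's own budget)
# and trade parameters against tokens.
compute = 6 * 70e9 * 1.4e12

best = min(
    (loss(n, compute / (6 * n)), n)
    for n in [1e9, 7e9, 27e9, 70e9, 200e9, 500e9]
)
# With these constants the optimum lands in the tens of billions,
# not the hundreds of billions.
print(f"lowest loss at ~{best[1] / 1e9:.0f}B params")
```

Scaling the budget up shifts the optimum, but the point stands: for a fixed budget, piling on parameters past the sweet spot makes the model worse, not better.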

Anyways, your perception of a model's "quality" is determined by careful post-training.

spwa4 · yesterday at 9:24 AM

The short answer is that more things matter than parameter count, and we are probably nowhere near the most efficient way to build these models. Also: the big AI labs have shown a few times that internally they have far more capable models.

girvo · yesterday at 9:39 AM

Considering the full-fat Qwen3.5-plus is good, but only barely Sonnet 4 good in my testing (though incredibly cheap!), I doubt the quantised versions are somehow as good, if not better, in practice.

revolvingthrow · yesterday at 9:23 AM

It doesn't. I'm not sure it outperforms ChatGPT 3.
