Hacker News

otabdeveloper4 · yesterday at 10:07 PM

> are materially ahead of every open source model out there at this time

They aren't. Any difference comes down to sampling parameters and post-training flavor choices. Those aren't things that put a model "materially ahead"; they're basically just LLM themes.


Replies

achompas · yesterday at 10:25 PM

I’m sorry, but you’re demonstrably incorrect.

Listen, I want more open-weight models in the world. They create entrepreneurial opportunities and serve use cases that the foundation labs don’t want to support.

But open-weight models consistently trail closed models by three to six months on performance, as confirmed by both benchmarks and personal use. They’re closer on coding tasks and much further behind on non-coding ones.

There are theories as to why these models lag, which I won’t get into. But anyone claiming open-weight models are close to closed-weight models is ignoring significant evidence to the contrary.
