Hacker News

throwaw12 · yesterday at 3:50 PM

Aghhh, in my earlier comments I wished they would release a model that outperforms Opus 4.5 in agentic coding. Seems I should wait longer. But I am hopeful.


Replies

wyldfire · yesterday at 3:55 PM

By the time they release something that outperforms Opus 4.5, Opus 5.2 will have been released which will probably be the new state-of-the-art.

But these open weight models are tremendously valuable contributions regardless.

frankc · yesterday at 4:47 PM

One of the ways the Chinese companies are keeping up is by training their models on the outputs of the American frontier models. I'm not saying they don't innovate in other ways, but this is part of how they caught up quickly. However, it pretty much means they are always going to lag.

WarmWash · yesterday at 5:44 PM

The Chinese just distill western SOTA models to level up their models, because they are badly compute constrained.

If you were pulling someone much weaker than you behind yourself in a race, they would be right on your heels, but also not really a threat. Unless they can figure out a more efficient way to run before you do.

OGEnthusiast · yesterday at 3:58 PM

Check out the GLM models; they are excellent.

auspiv · yesterday at 4:57 PM

There have been a couple of "studies" comparing various frontier-tier AIs that have concluded Chinese models are somewhere around 7-9 months behind US models. Another comment says Opus will be at 5.2 by the time Qwen matches Opus 4.5. That's accurate, and there is some data showing by how much.

lofaszvanitt · yesterday at 4:36 PM

Like these benchmarks mean anything.