Hacker News

ninjagoo · yesterday at 9:34 PM · 2 replies

> We're not reading the same numbers I think.

We must not be.

That's why I listed the ones where it is barely competitive, taken from @babelfish's table, which itself is extracted from pp. 186–187 of the System Card comparing it with Opus 4.6, GPT-5.4, and Gemini 3.1 Pro.

Sure, it may be better than Opus 4.6 on some of those, but it achieves only a small increase over GPT-5.4 on the ones I called out.


Replies

nl · today at 1:11 AM

> barely competitive

It's higher than all other models except Gemini 3.1 Pro on MMMLU.

MMMLU is generally thought to be maxed out, as it may not be possible to score higher than the current top scores.

> Overall, they estimated that 6.5% of questions in MMLU contained an error, suggesting the maximum attainable score was significantly below 100% [1]

Other models get close on GPQA Diamond, but it wouldn't surprise anyone if the maximum possible score there were around the ~95% the top models are scoring.
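As a rough back-of-envelope check (my assumptions, not from the system card): if 6.5% of questions are erroneous and a model can only get those right by chance on a 4-choice question, the ceiling lands right around that 95% mark:

```python
# Hypothetical ceiling on a 4-choice benchmark where 6.5% of
# questions contain errors and erroneous questions are answered
# correctly only by chance (1 in 4).
error_rate = 0.065
chance_correct = 0.25  # 4 answer choices

max_score = (1 - error_rate) + error_rate * chance_correct
print(f"{max_score:.2%}")  # prints 95.12%
```

Under those assumptions, scores in the mid-90s are effectively saturated.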

[1] https://en.wikipedia.org/wiki/MMLU

nimchimpsky · yesterday at 9:56 PM

Barely competitive? The Mythos column is the first column.

You are the only person on Hacker News with this take; everyone else is saying "this is a massive jump". FWIW, the data you list shows the biggest jump I can remember for Mythos.
