Hacker News

throwaw12 · today at 4:43 PM · 7 replies

How is it that Meta spent so much money on talent and hardware, but the model barely matches Opus 4.6?

Especially looking at these numbers after Claude Mythos, it feels like either Anthropic has some secret sauce, or everyone else is just not as smart as the talent Anthropic has.


Replies

strulovich · today at 4:50 PM

Meta made a bunch of mistakes, and it looks like Zuckerberg spent a lot of money on talent and made big swings to change that (this happened about a year ago).

I think it’s unrealistic to expect them to come back from that pit to the top in one year, but I wouldn’t rule them out getting there with more time. That’s a possible future. They have the money and Zuckerberg’s drive at the helm. It can go a long way.

impulser_ · today at 4:51 PM

It's not even on par with Sonnet. It's on par with open-source models, yet it's not even open source itself and sits behind a private preview API.

Might as well not release anything.

coffeebeqn · today at 5:02 PM

Matching Opus 4.6 would be pretty good? It's the SOTA among models that are actually available.

solenoid0937 · today at 4:50 PM

It's benchmaxxed.

If they actually matched Opus 4.6 on such a short timeline, it would have been mighty impressive. (Keep in mind this is a new lab and they are prohibited from doing distills.)

username223 · today at 4:59 PM

Facebook is working with the talent that can't find a job at any other company. It doesn't surprise me that they ship mediocrity.

zozbot234 · today at 4:48 PM

> has some secret sauce

Yup, it's called test-time compute. Mythos is described as considerably slower than Opus, enough to seriously annoy users trying to use it for quick-feedback-loop agentic work. It is most properly compared with GPT Pro, Gemini DeepThink, or this latest model's "Contemplating" mode. Otherwise you're just not comparing like for like.
