Hacker News

Too dangerous or just too expensive? The real reason Anthropic is hiding Mythos

114 points by chbint today at 12:49 PM | 125 comments

Comments

saithound today at 1:37 PM

It's pretty clear at this point that Mythos' capability to discover and exploit zero-day vulnerabilities at scale is but an incremental improvement over existing models like the ones available to OpenAI's Plus/Pro subscribers.

Anthropic tries to create marketing hype around Mythos using two psychological tricks.

1. Put large numbers in the headlines.

"Mythos discovered 271 vulnerabilities in Firefox" makes the model seem extremely capable to the uninitiated.

But it's actually meaningless as a measure of capability _improvement_.

Anthropic gave away $100mil specifically as Mythos credits to these projects and companies (that's $2.5mil per project). Spending the same exorbitant amount of compute analyzing the same codebases with an older model like GPT 5.x Pro would have turned up 260 of these vulnerabilities, or possibly even more than 271.

No need to speculate, since this is exactly what we saw in the few codebases where we have such comparisons (like the curl codebase). Supposedly weaker models, working with a much lower budget, turned up dozens of vulnerabilities. Mythos turned up only one, which ended up as a low-severity CVE.

2. Do the whole "too dangerous to release" shtick. This is one of Dario Amodei's favorite moves. When he was vice president of research at OpenAI, he declared GPT-3 (which wasn't able to produce coherent text beyond 3-4 sentences at the time) too dangerous [1] as well.

Long story short, it's the ChatGPT 4.5 situation again: a company trained a model that's too slow and expensive, but not much more capable than what came before. It therefore requires these marketing stunts.

[1] https://www.itpro.com/technology/artificial-intelligence-ai/...

show 9 replies
goldenarm today at 1:22 PM

When your logo is AI, your illustrations are AI, and your profile pic is AI, I'm going to assume the text is AI too and won't read it.

show 6 replies
djvu97 today at 1:06 PM

> Resource Limit Is Reached The website is temporarily unable to service your request as it exceeded resource limit. Please try again later.

I guess it was too dangerous to even read the article

show 4 replies
wood_spirit today at 1:11 PM

My thinking is that if it really was super duper then Anthropic could charge eye watering amounts and have willing customers and set up expectations going forward that SOTA costs a lot to use.

That they don’t suggests that really it is only incrementally better than Opus 4.7 and that the market won’t bear a price increase that makes it economical to serve let alone profit from serving.

So the cynical me imagines execs sitting around the table and worrying that releasing it at anywhere close to break even would risk actually hurting the brand instead of setting them up as a premium company, and this at a time just before ipo when they can ill afford that rumour.

So they wonder what to do, and decide playing the national security card is the obvious way out. It's incrementally better enough to find bugs that previous SOTA missed, it doesn't get used widely so it's cheap to serve, and they get the good publicity without the economic scrutiny?

Making a loss selling to a small number of users using it in a limited way is entirely affordable. Making a loss selling it at scale is correspondingly unaffordable?

show 1 reply
smca today at 1:31 PM

(I work at Anthropic) We have publicly stated[1] that our goal is to deploy Mythos-class models at scale when we have the requisite safeguards for offensive cyber risks in place. Mythos is a general frontier model, not a cyber-specific model, so there are many reasons why we think our users will benefit from access (with the aforementioned safeguards in place) in due course. Compute has also not factored into our decision[2] to roll out the model in a limited fashion to defenders. We'll be sharing more soon on the first month or so of the project and rollout.

[1] https://www.anthropic.com/glasswing#:~:text=deploy%20Mythos%...

[2] https://x.com/logangraham/status/2054613618168082935

show 5 replies
thadk today at 2:03 PM

The article does not mention the other reason: in the interview with Dwarkesh, Amodei remarked on how other organizations are copying, or training off, Opus for their models.

By delaying others' ability to train off Mythos, they hold their SWE-Bench Pro head start longer, so that, among other things, the USG can't help but notice Anthropic's lead when deliberating on whether to further substantiate the "supply chain risk".

show 1 reply
Salgat today at 3:01 PM

Reminds me of the paper launches NVidia/Intel/AMD sometimes do where they announce some amazing tech (such as the old Titan GPUs) that placed their hardware at the top of the benchmarks, but with basically zero actual stock available.

irthomasthomas today at 2:29 PM

I don't believe anything out of these startups anymore unless it's backed by evidence.

Too expensive? Why would Anthropic train a model too expensive to run? I doubt they would. Let's look at the evidence: Opus 4.5 came in at double the speed and half the price of the old Opus. Its speed matched older Sonnet models. Higher speed + lower price = smaller model. So they rebranded Sonnet-sized models as Opus. Where is the og Opus-sized model?

sherr today at 2:23 PM

Whatever the reason for "hiding" Mythos, it seems clear that these systems are getting very good at finding software security exploits. Mythos has made more people, even the US government, sit up and pay more attention. Regarding who should control the release of powerful systems like this, as Bruce Schneier and David Lie write in "Mythos and Cybersecurity":

"Until that changes, each Mythos-class release will put the world at the edge of another precipice, without any visibility into whether there is a landing out of view just below, or whether this time the drop will be fatal. That is not a choice a for-profit corporation should be allowed to make in a democratic society. Nor should such a company be able to restrict the ability of society to make choices about its own security."

https://www.schneier.com/blog/archives/2026/04/mythos-and-cy...

It is reasonable to be concerned.

jstummbillig today at 2:43 PM

This makes it sound like some kind of open question or even mystery.

Amodei himself stated quite clearly in recent interviews that they simply can't satisfy all demand, compute-wise. Of course, Mythos could get more of the already too-small pie, but clearly it's a more resource-intensive model and would further increase strain.

waynecochran today at 1:31 PM

Conclusion: both are true, which makes sense. The KV-cache scaling both yields the emergent power and requires the enormous capacity.
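The capacity claim can be made concrete with a back-of-envelope KV-cache sizing sketch. All hyperparameters below are hypothetical (not Mythos's actual architecture, which is unpublished), chosen only to show how the cache grows linearly with context length and layer count:

```python
# Back-of-envelope KV-cache memory estimate for one sequence.
# Hyperparameters are illustrative assumptions, not real model specs.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Bytes needed to store keys + values across all layers (factor of 2)."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical frontier model: 80 layers, 8 KV heads (GQA),
# head_dim 128, fp16 weights (2 bytes), serving a 200k-token context.
mem = kv_cache_bytes(n_layers=80, n_kv_heads=8, head_dim=128, seq_len=200_000)
print(f"{mem / 2**30:.1f} GiB per sequence")  # ~61.0 GiB
```

Under these assumptions a single long-context request ties up tens of GiB of accelerator memory, which is one way a model can be "not much smarter but much more expensive to serve".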

show 1 reply
yanis_t today at 1:22 PM

My posts* got to the first spot on Hacker News a couple of times. Not once did the site break down like that. And why would it? It's just a bunch of HTML and CSS files served through (free) Vercel (don't think it matters). I wonder what people run their blogs on these days, that they fail under the pressure so easily.

* https://news.ycombinator.com/from?site=yanist.com

show 2 replies
ed_elliott_asc today at 1:12 PM

It all sounds a bit too marketing-ey to me “we have this amazing model that is too good to release” but the goal is still AGI? Ok right.

show 3 replies
holysoles today at 1:17 PM

The thought of this didn't even cross my mind until yesterday. I previously figured the hype was primarily around marketing, but after watching this Primagen video, I have the same suspicion.

https://www.youtube.com/watch?v=zaGOKd4jqEk

show 1 reply
whyenot today at 1:18 PM

It's probably a little of both: dangerous and expensive. This article makes a good case that the cost is at least part of the reason.

I wish the article could have been a lot tighter and shorter. This is not earth shattering information that requires a New Yorker length piece of investigative journalism.

show 2 replies
jorisw today at 1:55 PM

I found this an illuminating piece, though I don't think percentages needed to be assigned between "is it about cost" and "is it about security".

einszwei today at 1:25 PM

Opus Fast Mode costs $30/$150 per million input/output tokens. Mythos's pricing (from the model card) is $25/$125 per million input/output tokens.

Based on this, I doubt that Mythos Pro is too dangerous to release or provides significantly more value.
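Taking those quoted per-million-token rates at face value, a rough per-request cost comparison looks like this (the token counts are illustrative assumptions, not measurements):

```python
# Per-request cost at the rates quoted above.
# Prices are dollars per million tokens; token counts are made-up examples.

def request_cost(in_tokens, out_tokens, in_price, out_price):
    """Dollar cost of one request given per-million-token prices."""
    return (in_tokens * in_price + out_tokens * out_price) / 1_000_000

# Hypothetical request: 50k input tokens, 5k output tokens.
opus_fast = request_cost(50_000, 5_000, in_price=30, out_price=150)  # $30/$150 per Mtok
mythos = request_cost(50_000, 5_000, in_price=25, out_price=125)     # $25/$125 per Mtok
print(f"Opus Fast: ${opus_fast:.2f}, Mythos: ${mythos:.2f}")
```

Under these assumed token counts Mythos actually comes out slightly cheaper per request, which supports the commenter's point that the published pricing doesn't suggest a dramatically more expensive or more capable model.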

show 2 replies
daft_pink today at 1:55 PM

It’s obvious that this is a campaign to pump their pending ipo. It may be too expensive, but it’s all about the ipo in my opinion.

tomaytotomato today at 1:12 PM

AI has always been dangerous, but not existentially dangerous.

Mythos is dangerous but it's not going to Skynet us.

Just the same as the military drone using some sort of OpenCV library and target prioritisation loop isn't going to turn evil on us.

show 1 reply
crudgen today at 1:12 PM

For marketing purposes it is always "too dangerous"; I'm not saying it is safe.

22spaj today at 1:13 PM

This lengthy article by a self-described "AI enthusiast" muddies the waters. Yes, Anthropic has capacity constraints, which is why they rented Colossus from Musk despite the danger of being distilled.

The real reason is that the hype around Mythos has already gone quiet, because it does not find more than other models do. That is, nothing at all in most open-source projects. If you hide the model, embarrassing statistics will not be posted.

vrganj today at 1:23 PM

The real Mythos was the friends we made along the way.

WarmWash today at 1:36 PM

You don't have to look much further than marketing...

scihuber today at 2:43 PM

I've always wondered: what if China were deliberately using AI to search for vulnerabilities in critical government servers, for example in the EU?

lgcmo today at 1:09 PM

Mythos had to silence you apparently

dwa3592 today at 1:10 PM

Silenced immediately.

marginalia_nu today at 1:08 PM

Jesus has microwaved a burrito so hot he cannot eat it, refuses to show the world, citing dangerous omnipotence paradox.

lenerdenator today at 1:25 PM

I'd be tempted to offer this as a consultant service were I at Anthropic.

It feels like an AI tool that needs professionals to interface with it. Get some of those professionals and have them work with clients in a targeted way. That reduces the tool's exposure to bad actors and the resource usage it incurs, because it's being used only by trained individuals.

Use what you learn from the experience to further refine its operation and make it less expensive to operate.

micromacrofoot today at 1:22 PM

It's probably not much more dangerous than all the AI security patching being done without it; the CVE rate is approaching a straight line up.

miroljub today at 1:19 PM

My guess is they are still in the "fake it till you make it" phase. There's no Mythos; it's just a hype machine fueled by hot air.

show 1 reply
hiroto_lemon today at 1:53 PM

[flagged]

manincharge today at 1:46 PM

[dead]

paol_taja today at 1:01 PM

The "too dangerous to release" line was definitely a marketing stunt.

OpenAI already used the same playbook with GPT-2 in 2019, and some of the same people involved back then are now doing it again at Anthropic with Mythos.

Same safety-branding DNA, different company, and people are falling for it again.

show 4 replies
hsuduebc2 today at 1:34 PM

[flagged]