That's a lot of waffle to try and say 'we've got a really scary next model coming real soon, promise!'
I love that, in the era of having LLMs summarize everything, all of these companies have opted for what I call the “YouTube streamer apology video” tone and length for these announcements.
This feels more or less like a way to get in the news after Anthropic's Mythos announcement by removing some guardrails. I’m still signing up though.
It seems like local LLMs will get popular for cybersecurity if this trend of locking access to models continues.
I completed the "Trusted Access" verification, but it seems to have unlocked nothing in the OpenAI API or Codex models.
Just FYI for others.
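If anyone else wants to check for themselves, here's a quick sketch (assuming the standard openai Python client; "gpt-5.4-cyber" is a hypothetical model ID, not one I've confirmed) that lists which models your key can actually see:

    # Sketch: list the model IDs this API key can access.
    # Assumes the standard openai Python client (pip install openai).
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment
    available = {m.id for m in client.models.list()}
    # "gpt-5.4-cyber" is a guess at the gated ID; swap in whatever OpenAI actually names it
    print("gpt-5.4-cyber in available:", "gpt-5.4-cyber" in available)

Nothing cyber-ish showed up for me after verification.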
"trusted" + openai just simply doesn't compute for me any more
>democratized access
>partner with a limited set of organizations for more cyber-permissive models.
I get where they're going with this, but it's still rather hilarious how they had to get a corporate-speak expert to pull off the mental gymnastics needed for the announcement.
Wonder if Cyber would’ve caught the Claude Code source map leak?
All of that reminds me of how gpt2 was almost too dangerous to be released to the world...
Make cyber not cyber.
Requiring verified access is a good idea to mitigate risks from hacking while still giving people access to the latest models. Take notes, Anthropic.
I mean Anthropic clearly wins with the name (Mythos vs 'GPT-5.4-Cyber')
This approach means only a tiny portion of the population will ever qualify. Doesn't that make everyone else beholden to those few, who are in turn beholden to OpenAI?
Another solution is to make software makers responsible and liable for the output of their products. It's long been a problem that there is little legal responsibility, but we shouldn't just accept it. If Ford makes exploding cars, they are liable. If OpenAI makes software that endangers people, it should be the same.
> Democratized access: Our goal is to make these tools as widely available as possible while preventing misuse. We design mechanisms which avoid arbitrarily deciding who gets access for legitimate use and who doesn’t. That means using clear, objective criteria and methods – such as strong KYC and identity verification – to guide who can access more advanced capabilities and automating these processes over time.
KYC isn't democratic and doesn't prevent arbitrary favoritism; it's the opposite: it's used to control people, favor friends, and exclude enemies.
> Ultimately, we aim to make advanced defensive capabilities available to legitimate actors large and small, including those responsible for protecting critical infrastructure, public services, and the digital systems people depend on every day.
Translation: we aim to make defensive capabilities available to the US and its vassals so they can protect critical infrastructure, while ensuring that independent countries can't protect against the US attacking theirs.
Fortunately, this plan will backfire: the model's capabilities are exaggerated and these "safeguards" don't work reliably.
Sounds totally reasonable to trust OpenAI and the sociopath sama.
Too little, too late. OpenAI's shit has been nearly worthless for cybersec for, what, a year already?
ChatGPT 5.x just tries to deny everything remotely cybersecurity-related - to the point that it would at times rather deny that vulnerabilities exist than go poke at them. Unless you get real creative with prompting and basically jailbreak it. And it was this bad BEFORE they started messing around with 5.4 access specifically.
And that was ChatGPT 5.4. A model that, by all metrics and all vibes, doesn't even have a decisive advantage over Opus 4.6 - which just does whatever the fuck you want out of the box.
What I'm most afraid of is that Anthropic is going to snort whatever it is that OpenAI is high on, and lock down Mythos the way OpenAI is locking down everything.
I don't think they've added enough cyber. My cyber workflow demands more trusted access for cyber so that I can use these cyber-permissive models for my cybersecurity.