OpenAI To Limit New Model Release On Cybersecurity Fears (axios.com)
- Reference: 0181499156
- News link: https://it.slashdot.org/story/26/04/09/194221/openai-to-limit-new-model-release-on-cybersecurity-fears
- Source link: https://www.axios.com/2026/04/09/openai-new-model-cyber-mythos-anthopic
> OpenAI introduced its "Trusted Access for Cyber" pilot program in February after rolling out GPT-5.3-Codex, the company's most cyber-capable reasoning model. Organizations in the invite-only program are given access to "even more cyber capable or permissive models to accelerate legitimate defensive work," according to a blog post. At the time, OpenAI committed $10 million in API credits to participants. [...]
>
> Restricting the rollout of a new frontier model makes "more sense" if companies are concerned about models' ability to write new exploits -- rather than about their ability to find bugs in the first place, Stanislav Fort, CEO of security firm Aisle, told Axios. Staggering the release of new AI models looks a lot like how cybersecurity vendors currently handle the disclosure of security flaws in software, Lee added. "It's the same debate we've had for decades around responsible vulnerability disclosure," Lee said.
Us too (Score:1)
So Anthropic demos Mythos and OpenAI has to put out a press release.
Altman's house of cards is collapsing...
Re: (Score:3)
I do suspect that OpenAI will be the 'Netscape' of this bubble pop: an early mover that in many ways sparked something significant, only to be left behind by others who did it better.
I am so eager for a bubble pop to recalibrate expectations, so we can leverage LLMs where appropriate instead of the current madness. It will be an adjustment, but without the craze it won't be nearly so obnoxious.
Re: Us too (Score:2)
Who do you have in mind who did better than Netscape back then? The one thing IE did better was insert itself everywhere.
Re: (Score:2)
> So Anthropic demos Mythos and OpenAI has to put out a press release.
> Altman's house of cards is collapsing...
Dude, the next model is gonna be so scary that they won't even let THEMSELVES use it.
Prolossus: The Beforbin Preject (Score:2)
Our spendthrift would-be oligarchs are now fighting over which one of them gets to push the Blow Up Everything button, which may be entirely imaginary.
Firesign Theatre Got It Right (Score:3)
As the Firesign Theatre said so many years ago, "A power so great, it can only be used for good, or evil."
This isn't a mirage (Score:3)
Tech companies are running scared on this. Exploits are getting way too easy and there are few clear mitigations. Right now limiting the release might work, but what happens in a year when the open models have caught up?
Re: (Score:2)
I mean, in theory you can have the AI identify and fix the exploits. Yeah, it's an arms race, but at some point the defense will probably win.
Re: (Score:2)
> there are few clear mitigations.
Other than fixing your bugs.
Re: (Score:2)
The same argument could be made about automated fuzzing. A new class of security misbehavior may be identified automatically, and it turns out you can use such tools to identify things to fix as well.
Of course, it could be a problem if the tools have a high false positive rate: attackers can hit false positives and barely be impacted, while those same false positives drive an impossible churn to keep up with on the defense side... Which frankly could be a thing, based on my experience with LLM code review.
Supposedly limited-release (Score:2)
Nation states will have sleeper agents who will grab a copy, send it home, and go back to sleep waiting for the next big thing. How dangerous it really is, time will tell.
Hey what a coincidence... (Score:2)
Anthropic announces that they have a super awesome AI product that's just too awesome for anyone to see.
And then immediately OpenAI has the exact same thing.
FOMO on "my technology is too scary to exist" is a fun twist.
I know, it's not the first time, someone even linked an article where OpenAI said the same sort of thing about GPT-2 back in 2019...
Re: (Score:2)
OpenAI: "This is just for a few select companies to have..."
Wait, doesn't that go against OpenAI's mission statement of OPEN ACCESS???
Re: Hey what a coincidence... (Score:2)
I recently used the latest Codex model with my agentic augmented vuln research method. I found 6 high-severity security bugs and produced PoCs for them in a very short period of time. It's a set of open source networking utils I'm sure you've heard of… This is a real threat. It let me find bugs in a couple of hours that would have taken weeks or months to find otherwise…
Re: (Score:3)
> Wait, doesn't that go against OpenAI's mission statement of OPEN ACCESS???
I'm pretty sure the "open" part now refers to opening your wallet.
Re: (Score:2)
The difference is that we can see what OpenAI puts out, see what Anthropic puts out, and see pretty clearly which one is miles ahead of the other. I believe Anthropic; not so sure about OpenAI.
Re: (Score:3)
While Anthropic is generally more credible, they have indulged in performative bullshit for the sake of the hype train.
Frankly, if they didn't, they would have been screwed over no matter how well they actually made a product.
I'm not crazy about the Anthropic habit of doing things to open source projects while obfuscating the fact that it's LLM-originated, either.