New OpenAI Models Likely Pose 'High' Cybersecurity Risk, Company Says (axios.com)
- Reference: 0180358485
- News link: https://it.slashdot.org/story/25/12/11/0040221/new-openai-models-likely-pose-high-cybersecurity-risk-company-says
- Source link: https://www.axios.com/2025/12/10/openai-new-models-cybersecurity-risks
> OpenAI says the cyber capabilities of its frontier AI models are accelerating and warns Wednesday that [1]upcoming models are likely to pose a "high" risk, according to a report shared first with Axios. The models' growing capabilities could significantly expand the number of people able to carry out cyberattacks. OpenAI said it has already seen a significant increase in capabilities in recent releases, particularly as models are able to operate longer autonomously, paving the way for brute force attacks.
>
> The company notes that GPT-5 scored 27% on a capture-the-flag exercise in August, while GPT-5.1-Codex-Max scored 76% last month. "We expect that upcoming AI models will continue on this trajectory," the company says in the report. "In preparation, we are planning and evaluating as though each new model could reach 'high' levels of cybersecurity capability as measured by our Preparedness Framework." "High" is the second-highest level, below the "critical" level at which models are unsafe to be released publicly.
"What I would explicitly call out as the forcing function for this is the model's ability to work for extended periods of time," said OpenAI's Fouad Matin.
[1] https://www.axios.com/2025/12/10/openai-new-models-cybersecurity-risks
shameless platforming (Score:3)
This is nothing more than platforming an advertisement disguised as news of a threat. Warning, our product is really good!
Re: (Score:2)
Exactly. This press release is just a convoluted way to advertise how badass their product is.
This is disgusting gatekeeping (Score:1)
Already the models refuse to assist at professional levels on the basis that it would somehow be dangerous to enable novices to act with professional capacity. There is nothing magical about having the resources to train these models, or to gain professional-level skills in any given field, that confers ethical or moral responsibility.
It's gun control all over again and the answer is NOT to withhold capability from people, it's to empower good actors to defend against the bad ones and distribute power widely to keep central authorities in check.
Re: (Score:2)
"It's gun control all over again and the answer is NOT to withhold capability from people, it's to empower good actors to defend against the bad ones and distribute power widely to keep central authorities in check."
From [1]https://en.wikipedia.org/wiki/... [wikipedia.org] (as a tag on one of their graphs): U.S. gun homicide rates exceed total homicide rates in high-income OECD countries.
So how's arming everyone and their uncle's dog working out for you, eh?
[1] https://en.wikipedia.org/wiki/Gun_violence_in_the_United_States
Re: (Score:2)
The only logical conclusion is that the OP wants themself or someone they care about to be shot.
Re: (Score:1)
Last I checked, people killed by guns are no more or less dead than those killed by other tools, and overall homicide rates tend to go up when guns are banned.
In contrast, gun-related self-defense estimates show even the worst accounts tallying more defense incidents than deaths, with typical estimates between 1.5 million and 3 million self-defense instances per year.
Let's compare citizens killed by foreign invaders and mass murder of heavily armed civilian populations by the state vs mass murder/subjugation of
Asymmetry problem (Score:2)
As the saying goes, the defender has to get it right every time, but the attacker only has to get it right once. AI is good at getting something impressive sometimes, not so good at always getting it right.
No reason not to make things worse for everyone (Score:2)
I read this as, "We have a product that will most likely make the world less safe for everyone, but will continue to make us a lot of money. Why wouldn't we release it?"
AI is the new cigarette, but the producers aren't hiding the fact that it's dangerous, and we don't seem to care. I guess we deserve whatever happens next.
CVE process must step up (Score:2)
There are some efforts to automate vulnerability tracking, like incorporating [1]SBOM tools [gitlab.com] into the QA process, but largely this is still done manually. Which means that AI's throughput will simply overwhelm all existing manual systems until everyone catches up on automation. I expect we will see 100-link exploit chains of trivial vulnerabilities, I expect we will see AI getting integrated with fuzzing, and I expect we will see longstanding low-level protocols exploited in novel ways.
[1] https://about.gitlab.com/blog/the-ultimate-guide-to-sboms/
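Roughly, the kind of automation I mean would look like this. A minimal sketch, assuming a CycloneDX-style JSON SBOM and the public OSV query API; the script name and target file are just illustrative, and real scanners (syft, trivy, GitLab's own, etc.) do far more than this:

```python
#!/usr/bin/env python3
"""Sketch: read a CycloneDX JSON SBOM and ask the public OSV API which
known vulnerabilities affect each listed component. Illustrative only."""
import json
import sys
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"  # public OSV query endpoint


def osv_vulns_for_purl(purl: str) -> list:
    """Return the OSV advisories that affect the given package URL (purl)."""
    body = json.dumps({"package": {"purl": purl}}).encode()
    req = urllib.request.Request(
        OSV_QUERY_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp).get("vulns", [])


def main(sbom_path: str) -> None:
    with open(sbom_path) as fh:
        sbom = json.load(fh)
    # CycloneDX JSON lists dependencies under "components"; each entry that
    # carries a purl (package URL) can be checked against OSV directly.
    for component in sbom.get("components", []):
        purl = component.get("purl")
        if not purl:
            continue
        for vuln in osv_vulns_for_purl(purl):
            print(f"{purl}: {vuln['id']} -- {vuln.get('summary', '(no summary)')}")


if __name__ == "__main__":
    main(sys.argv[1])
```

Point it at an SBOM (e.g. `python check_sbom.py sbom.json`) and every line it prints is a component with a known advisory. Generating that list is easy to automate; triaging it is the part that is still mostly manual.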
Re: (Score:2)
> because anyone who updates their software to fix even one of the 100 links in the vulnerability chain renders the attack useless
Sure. The attacker is now out of some CPU time and bandwidth. Unlike previously, where the attacker's time would've been wasted.
Re:CVE process must step up (Score:4, Insightful)
Sure, the solution is to fully automate everything, because we've seen how automation of software development has resulted in zero bugs. Let's not talk about code quality, let's talk about not having to do any work.
Re: (Score:2)
This is further automation of QA (testing), not of development (coding). QA automation is nothing new; AI is just a new necessary tool.
To put it differently, the "compile and run it once" bar has been raised.
Re: (Score:2)
Such a shame that CVE quality is generally crap, as it's flooded with dubious 'findings' from people trying to build a resume as a security researcher. I'm not sure why you assert this is largely still done manually; reconciling with SBOM tools in my neck of the woods is pretty much automated for detecting and flagging issues, because *no one* has time to deal with the gigantic volume of CVEs. Of course, another problem with those SBOM tools is that they have a terrible false positive rate. Trying to follow their
Re: (Score:2)
> There are some efforts to automate vulnerability tracking, like incorporating SBOM tools into the QA process, but largely this is still done manually. Which means that AI's throughput will simply overwhelm all existing manual systems until everyone catches up on automation. I expect we will see 100-link exploit chains of trivial vulnerabilities, I expect we will see AI getting integrated with fuzzing, and I expect we will see longstanding low-level protocols exploited in novel ways.
AI sucks for bug hunting, producing mostly noise. I "expect" people to get tired of this nonsense. Automated fuzzers like syzbot have yielded way better results.
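For contrast, this is roughly the shape of a coverage-guided fuzz harness. syzbot itself targets the kernel, so the sketch below uses Google's atheris Python fuzzer and an arbitrary json.loads target purely as an illustration:

```python
# Minimal coverage-guided fuzz harness sketch using atheris (pip install atheris).
# The target (json.loads) is an arbitrary stand-in; a real harness would point
# at whatever parser or protocol handler you actually want to shake bugs out of.
import sys
import atheris

with atheris.instrument_imports():
    import json  # instrumented so the fuzzer can observe its code paths


def TestOneInput(data: bytes) -> None:
    try:
        json.loads(data)
    except (json.JSONDecodeError, UnicodeDecodeError):
        pass  # malformed input is expected; crashes and hangs are the findings


if __name__ == "__main__":
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```

The fuzzer mutates inputs and keeps the ones that reach new code paths, grinding through millions of cases unattended; no LLM required.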
Re: (Score:2)
Right, the impact here really could be quite substantive. Take a look at SOAPwn as an example. It maybe wasn't found with AI, but it's the kind of bug fuzzing could have found, and LLMs would actually be great at generating exploits for/against it.
We are not talking about an issue in some random GitHub project that got a little too popular too fast here; we're talking about a vulnerability that has existed in the .NET distribution for a very long time. The recent experiences with OpenSSL are again instructive, maybe it