AI Can Find Hundreds of Software Bugs -- Fixing Them Is Another Story (theregister.com)
- Reference: 0180859610
- News link: https://it.slashdot.org/story/26/02/25/1743213/ai-can-find-hundreds-of-software-bugs----fixing-them-is-another-story
- Source link: https://www.theregister.com/2026/02/24/ai_finding_bugs/
Guy Azari, a former security researcher at Microsoft and Palo Alto Networks, told The Register that only two to three of those 500 vulnerabilities have been fixed and none have received CVE assignments. The National Vulnerability Database already carried a backlog of roughly 30,000 CVE entries awaiting analysis in 2025, and nearly two-thirds of reported open-source vulnerabilities lacked an NVD severity score.
The curl project closed its bug bounty program because maintainers could no longer handle the flood of poorly crafted reports from AI tools and humans alike. Feross Aboukhadijeh, CEO of security firm Socket, said discovery is becoming dramatically cheaper but validating findings, coordinating with maintainers, and developing architecture-aligned patches remains slow, human-intensive work.
just code to pass automation checks even if the UI (Score:3)
Just code to pass the automation checks, even if the UI shows a clear error!
Ergh (Score:3)
This is a result of shitty humans wasting money on a subject they don't understand in the hopes of making a few dollars. AI cannot solve this in its current form, nor probably in any future one in the LLM vein. It's a huge shame curl had to do this to combat the shite. I wonder how long until a critical flaw is discovered in an important system but goes unfixed because maintainers have to wade through all the vibing going on first.
Just shows how much technical debt there is (Score:3)
We've had decades to write poorly-tested, poorly-reviewed code. But that was OK; as long as it kinda worked, we insisted on shipping it.
AI is now good enough to show what I told my managers for years: That technical debt builds up, and at some point the bill will come due.
The bill is now due.
Thanks to AI, it's now easy to find bugs, and relatively easy to confirm they're exploitable. But thanks to all the rest of the technical debt, it's much harder to fix them. AI isn't good enough to fix the bugs yet, either, at least not without creating new ones just as fast. So it's a target-rich environment for hackers.
I'd say "I told you so", but I got out of that rat race a few years ago.
Who cares about CVEs anymore? (Score:2)
If there is already a backlog of 30k vulnerabilities, I don't see how AI is even relevant here. Linus Torvalds himself could have entered a fugue state and churned out 500 reports over the weekend, and surely his reports would still be stuck in the queue.
The real concern is that while blue team is stuck on its trusty human-in-the-loop evaluation system, red team is 1000Xing exploits. (I don't think they will mind too much if 90% are false positives or implemented with crummy code.) In that c
The "AI" can hallucinate anything at all (Score:3)
Checking everything you get from it appropriately is, and will remain, more work than actually doing it yourself. As it gets "smarter", it will only take more work to figure out where it fails. At the expense of your environment, your quality of life, your future, and the future of your kids.