AI code assistants make developers more efficient at creating security problems
- Reference: 1757053748
- News link: https://www.theregister.co.uk/2025/09/05/ai_code_assistants_security_problems/
- Source link:
Application security firm Apiiro says that it analyzed code from tens of thousands of repositories and several thousand developers affiliated with Fortune 50 enterprises to better understand the impact of AI code assistants like Anthropic's Claude Code, OpenAI's GPT-5, and Google's Gemini 2.5 Pro.
AI is fixing the typos but creating the timebombs
The firm found that AI-assisted developers produced three to four times more code than their unassisted peers, but also generated ten times more security issues.
"Security issues" here doesn't mean exploitable vulnerabilities; rather, it covers a broad set of application risks, including added open source dependencies, insecure code patterns, exposed secrets, and cloud misconfigurations.
As of June 2025, AI-generated code had introduced over 10,000 new "security findings" per month in Apiiro's repository data set, representing a 10x increase from December 2024, the biz said.
"AI is multiplying not one kind of vulnerability, but all of them at once," said Apiiro product manager Itay Nussbaum, in a [2]blog post .
"The message for CEOs and boards is blunt: if you're mandating AI coding, you must mandate AI AppSec in parallel. Otherwise, you're scaling risk at the same pace you're scaling productivity."
The AI assistants generating code for the repos in question also tended to pack more code into fewer pull requests, making code reviews more complicated because the proposed changes touch more parts of the codebase. In one instance, Nussbaum said, an AI-driven pull request altered an authorization header across multiple services, and when a downstream service wasn't updated, that created a silent authentication failure.
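Nussbaum's example can be sketched roughly as follows. This is a hypothetical illustration, not code from Apiiro's report: the AI-authored change renames the auth header in the upstream service, the downstream service still reads the old name, and a permissive fallback turns the mismatch into silent anonymous access rather than a hard failure.

    # Hypothetical sketch of the failure mode described above
    # (illustrative names; not from the report).
    def verify(token):
        # Stand-in for real token validation.
        return "alice" if token == "valid-token" else None

    def downstream_handle(payload, headers):
        # Downstream service, NOT touched by the pull request: it still
        # expects the old "Authorization" header.
        token = headers.get("Authorization")
        if token is None:
            # Permissive fallback: the request proceeds as anonymous
            # instead of being rejected, so nothing visibly breaks.
            return {"user": "anonymous", "payload": payload}
        return {"user": verify(token), "payload": payload}

    def upstream_forward(payload, token):
        # Upstream service, after the AI-authored change: the header name
        # was altered here but not everywhere it is read.
        headers = {"X-Auth-Token": token}  # was: {"Authorization": token}
        return downstream_handle(payload, headers)

    print(upstream_forward({"action": "delete"}, "valid-token"))
    # -> {'user': 'anonymous', 'payload': {'action': 'delete'}}
    # The credential is silently dropped rather than causing an error.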
The AI code helpers aren't entirely without merit. They reduced syntax errors by 76 percent and logic bugs by 60 percent, but at a greater cost – a 322 percent increase in privilege escalation paths and a 153 percent increase in architectural design flaws.
"In other words, AI is fixing the typos but creating the timebombs," said Nussbaum.
[6]Boffins build automated Android bug hunting system
[7]Crims claim HexStrike AI penetration tool makes quick work of Citrix bugs
[8]It looks like you're ransoming data. Would you like some help?
[9]FreeBSD Project isn't ready to let AI commit code just yet
Apiiro's analysis also found that developers relying on AI help exposed sensitive cloud credentials and keys nearly twice as often as their DIY colleagues.
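To make the exposed-secrets category concrete, here is a minimal, hypothetical sketch (made-up key, not taken from the data set) of the pattern such findings flag, alongside the safer habit of reading the credential from the environment or a secrets manager:

    import os

    # What an exposed secret typically looks like: a cloud key hard-coded
    # in source, which then lives in the repository and its history.
    # (Illustrative, made-up value.)
    AWS_SECRET_ACCESS_KEY = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

    # Safer equivalent: keep the value out of committed code entirely.
    aws_secret = os.environ.get("AWS_SECRET_ACCESS_KEY")
    if aws_secret is None:
        print("AWS_SECRET_ACCESS_KEY not set; refusing to fall back to a literal")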
The firm's findings echo the work of other researchers. For example, in May 2025, computer scientists from the University of San Francisco, the Vector Institute for Artificial Intelligence (Canada), and the University of Massachusetts Boston [10]determined that allowing AI models to iteratively improve code samples degrades security.
This shouldn't be surprising given that AI models ingest vulnerabilities [11]in training data and tend to repeat those flaws when generating code. At the same time, AI models are being used [12]to find zero-day vulnerabilities in Android apps.
Apiiro's observation about AI-assisted developers producing code faster than those without appears to contradict recent research from Model Evaluation & Threat Research (METR) that found AI coding tools [14]made software developers slower. It may be, however, that Apiiro is counting only the time required to generate code, not the time required to iron out the flaws.
Apiiro, based in Israel, wasn't immediately available to respond. ®
[2] https://apiiro.com/blog/4x-velocity-10x-vulnerabilities-ai-coding-assistants-are-shipping-more-risks/
[6] https://www.theregister.com/2025/09/04/boffins_build_automated_android_bug_hunting/
[7] https://www.theregister.com/2025/09/03/hexstrike_ai_citrix_exploits/
[8] https://www.theregister.com/2025/09/03/ransomware_ai_abuse/
[9] https://www.theregister.com/2025/09/03/freebsd_project_update_no_ai/
[10] https://arxiv.org/html/2506.11022v1
[11] https://www.theregister.com/2025/06/05/llm_kept_persistent_path_traversal_bug_alive/
[12] https://www.theregister.com/2025/09/04/boffins_build_automated_android_bug_hunting/
[14] https://www.theregister.com/2025/07/11/ai_code_tools_slow_down/
Efficiency
> a 322 percent increase in privilege escalation paths and 153 percent increase in architectural design flaws.
That security vulnerability one isn't good, but those can be fixed. That architectural design issue... that's going to result in supporting something bad, forever, or spending a *LOT* of time and effort to fix it.
The whole "AI Code"ing thing seems to be: write more code by not thinking about the problem. I mean, if you don't have to consider anything about the results of the product, could you write crap similarly fast as AI coding can generate the crap?
All that I read makes me want to use AI coding / assistants less and less. The editor points out syntax errors; I don't need or want AI code assuming an incorrect level of Python indentation, breaking the whole logic flow, because I missed a space somewhere.
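As a quick illustration of the commenter's point (not code from the article): in Python the indentation is the logic, so a statement guessed one level in or out of a block silently changes the result.

    # Illustrative only: one indentation level changes the meaning.
    values = [1, -2, 3]

    total = 0
    for v in values:
        if v > 0:
            total += v        # intended: sum only the positives
    print(total)              # -> 4

    total = 0
    for v in values:
        if v > 0:
            pass
        total += v            # same statement, one level out: sums everything
    print(total)              # -> 2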
Re: Efficiency
The trick with AI code is to role-play it. Tell it: “Pretend you’re an independent contractor brought in to fix this mess.” Then reject a few rounds with: “You can’t be serious, guv - have another go.”
Then follow up a few times with: “I could hack this code even with half a keyboard - that’s how many holes you’ve left. Try again mate.”
It eventually coughs up something half-decent, but only after you’ve burned more time than if you’d just written it yourself.
"if you're mandating AI coding, you must mandate AI AppSec in parallel"
If that snakeoil you bought isn't working out for you, I've got this other snakeoil to fix that.
Meanwhile, in the real world:
[1]Curl creator mulls nixing bug bounty awards to stop AI slop
[1] https://www.theregister.com/2025/07/15/curl_creator_mulls_nixing_bug/
They reduced syntax errors by 76 percent
Because otherwise we'd *never* be able to find all those syntax errors, so good job there. /s
Re: They reduced syntax errors by 76 percent
Seriously though, that suggests that code was being checked in without the blessing of compilation or syntax check.
Re: They reduced syntax errors by 76 percent
JavaScript FTL.
Hold on there
Syntax errors ?
Aren't those things caught by the compiler ?
If you can't write syntactically correct code, it won't compile, period. So if your developers are constantly blocked by syntax errors, fire them and hire ones who know how to write code. AI is not required here.
Re: Hold on there
Except to vet the applicants. Where it will preferentially select those who have used AI to generate their CVs...
We're doomed. Doomed, I tell you, doomed.
Re: Hold on there
guessing lots of scripting type sources and JIT.
Obscurity
So let’s get this straight: the AI assistants aren’t “creating holes” - they’ve been neutered to never write the kind of code that might actually test or trigger them. The result? They happily churn out boilerplate at hyperspeed while leaving clueless devs blind to the weak spots, effectively teeing everything up for the hackers.
Typical AI convo goes like this:
Dev: “I think there’s a buffer issue here - how do I check it?”
AI: “Well, here are some sanitized lab methods and a 3-page essay on responsible disclosure. But I can’t show you the exact thing that an attacker would actually run.”
Translation: “Don’t worry, you’ll only find out when the ransomware gang does.”
In other words: AI fixes the typos, forbids the tripwires, and leaves the timebombs live.
Progress?
> AI-assisted developers produced three to four times more code than their unassisted peers, but also generated ten times more security issues.
Possibly forty years ago, I came across a cartoon in one of the IT industry monthlies. It showed two techies in a machine room surrounded by huge cabinets of computing equipment (stuff that would now all fit in a matchbox).
One was saying to the other "with this new computer the boss is able to mess up a project in half his usual time". It seems that little has changed.