Infosec community panics as Anthropic rolls out Claude Code security checker
- Reference: 1771876201
- News link: https://www.theregister.co.uk/2026/02/23/claude_code_security_panic/
- Source link:
The new security capability is currently available as a [1]limited research preview for enterprise and team customers to test in their environments, and open-source maintainers can apply for free, expedited access.
The announcement sent some cybersecurity stocks into a downward spiral and prompted much pontificating about the [2]end of security as we know it - along with a [3]dissenting opinion from CrowdStrike co-founder and CEO George Kurtz. His firm's shares were among those hit on Friday, closing the day [4]down nearly 8 percent from the previous close, and Kurtz asked Claude if its new security tool could replace what CrowdStrike does (tl;dr: Claude said no).
The reality, however, isn't nearly as gloomy for the security industry - nor as exciting and sexy as AI evangelists make it out to be. Yes, large language models have shown an ability to flag some pattern-based vulnerabilities at scale. Earlier this month, Anthropic [6]claimed that Claude Opus 4.6 "found and validated more than 500 high-severity vulnerabilities" in open source code.
But Claude's security feature is simply the latest and buzziest AI-enabled bug-fixing tool, meaning Anthropic is now doing what other companies at the forefront of agentic AI are also doing. When it comes to securing code, it's a move in the right direction. But it's not sufficient - humans are still required.
[9]Amazon also uses AI agents to find security flaws and suggest fixes internally. [10]Microsoft has its own swarm of security agents that, among other tasks, prioritize vulnerability remediation, [11]automate the identification of impacted devices, and then initiate fixes.
Google, back in November 2024, said its [13]LLM-based bug-hunting tool Big Sleep was the "first" AI to spot a memory safety vulnerability in the wild and then fix it before the buggy code's official release. More recently, it rolled out an [14]AI agent called CodeMender that it said "automates patch creation, can identify the root cause of a vulnerability, then generate and review a working patch."
Last October, OpenAI said it's [15]privately testing Aardvark, an agentic security system based on GPT‑5 that it promises will "help developers and security teams discover and fix security vulnerabilities at scale."
As is the case with Claude's code-scanning and patching tool, all of these still need a human to sign off on the fix. "Nothing is applied without human approval: Claude Code Security identifies problems and suggests solutions, but developers always make the call," [16]Anthropic said in announcing the new feature.
According to the AI developer, Claude Code Security is context-aware - as opposed to simply doing static code analysis. It "reads and reasons about your code the way a human security researcher would: understanding how components interact, tracing how data moves through your application, and catching complex vulnerabilities that rule-based tools miss," the company said.
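To make that distinction concrete, here is a small illustrative sketch (our example, not from Anthropic's announcement) of the kind of cross-function data flow a context-aware scanner has to trace. No single line looks unsafe to a simple pattern rule, because the user input and the vulnerable SQL call site sit in different functions; the injection only appears once you follow the value through `build_query`.

```python
# Hypothetical example: user input flowing through a helper into a SQL
# string, a flaw a context-aware tool catches by tracing data movement.
import sqlite3

def build_query(table: str, user_value: str) -> str:
    # Taint source: user_value is interpolated directly into SQL text.
    return f"SELECT * FROM {table} WHERE name = '{user_value}'"

def lookup(conn: sqlite3.Connection, user_value: str):
    # Vulnerable call site: two hops away from the raw user input.
    return conn.execute(build_query("users", user_value)).fetchall()

def lookup_safe(conn: sqlite3.Connection, user_value: str):
    # The suggested fix a human reviewer would approve: a
    # parameterized query, so input can never alter the SQL structure.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_value,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
conn.commit()
```

Feeding a classic payload such as `"x' OR '1'='1"` into `lookup` returns every row in the table, while `lookup_safe` treats the same string as an ordinary (non-matching) name.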
This will likely prove to be a useful tool for developers and security analysts, as researchers have repeatedly shown that AI is [18]very good at detecting vulnerabilities . (It's also good at writing buggy code and [19]opening up new attack vectors for criminals.)
[20]Google DeepMind minds the patch with AI flaw-fixing scheme
[21]AI blew open software security, now OpenAI wants to fix it with an agent called Aardvark
[22]Google claims Big Sleep 'first' AI to spot freshly committed security bug that fuzzing missed
[23]AWS joins Microsoft, Google in the security AI agent race
"Anything that helps developers write better, safer code is a good thing," Glenn Weinstein, CEO of supply-chain security shop Cloudsmith, told The Register . "Claude Code Security is one of many safeguards in a wide range of defenses."
Isaac Evans, CEO of developer-focused security firm Semgrep, told The Register he's "very excited for Claude Code Security, even though we haven't tried it yet."
"LLMs are fantastic for security and have a great opportunity to actually make a dent in the coming wave of software vulnerabilities," he said.
However, the real test of these types of bug-hunting AI agents will be how well they perform at scale, according to Evans.
"So far none of the foundation model companies - Big Sleep, Aardvark, OpenAI - have published detailed statistics on how many false positives they experienced to get the results they had, or the cost to do so," Evans said. "That matters: Was this a $1 million investment? $10 million? This is some level of marketing-first, science-second. We are also hearing reports from security researcher friends that of the 500 vulnerabilities, not all of them are truly 'high-severity' as described." ®
Get our [24]Tech Resources
[1] https://claude.com/solutions/claude-code-security
[2] https://www.linkedin.com/pulse/anthropics-claude-code-security-warning-shot-cyber-saas-shaam-farooq-qcx1c/?trackingId=d8FM%2FSvgR1ebMreiz0NnIA%3D%3D
[3] https://www.linkedin.com/posts/georgekurtz_theres-been-a-lot-of-noise-lately-about-activity-7431417202505064448-6x1H/?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAX3rawBPm6RIM1LZlSs7tFoRQis8-XnEUo
[4] https://finance.yahoo.com/news/why-crowdstrike-crwd-stock-falling-210616274.html
[6] https://red.anthropic.com/2026/zero-days/
[9] https://www.theregister.com/2025/12/02/aws_security_agent_ai/
[10] https://www.theregister.com/2025/03/24/microsoft_security_copilot_agents/
[11] https://www.microsoft.com/insidetrack/blog/vuln-ai-our-ai-powered-leap-into-vulnerability-management-at-microsoft/
[13] https://www.theregister.com/2024/11/05/google_ai_vulnerability_hunting/
[14] https://www.theregister.com/2025/10/07/google_deepmind_patches_holes/
[15] https://www.theregister.com/2025/10/31/openai_aardvark_agentic_security/
[16] https://www.anthropic.com/news/claude-code-security
[18] https://red.anthropic.com/2026/zero-days/
[19] https://www.theregister.com/2026/02/12/google_china_apt31_gemini/
[20] https://www.theregister.com/2025/10/07/google_deepmind_patches_holes/
[21] https://www.theregister.com/2025/10/31/openai_aardvark_agentic_security/
[22] https://www.theregister.com/2024/11/05/google_ai_vulnerability_hunting/
[23] https://www.theregister.com/2025/12/02/aws_security_agent_ai/
[24] https://whitepapers.theregister.com/
Re: Timely reminder ... to do with *ANYTHING* 'AI' !!!
I wouldn't mind running code through it to see if it spotted any security holes. But definitely not take its response at face value, nor ask it to write a fix. If it occasionally identifies an actual security issue, it's useful. Just another tool in the toolbox.
"very excited for Claude Code Security, even though we haven't tried it yet."
Religious folk are very excited about the afterlife stuff, even though they haven't tried it yet.
You'd think the infosec companies would be licking their chops at the thought of charging 10 times as much to fix the mess "AI" leaves behind?
Oh wait, they only think in quarterly reports, so waiting it out to then pounce in isn't in their DNA.
To quote a famous movie
"This business will get out of control. It will get out of control and we'll be lucky to live through it." — Admiral Josh Painter.
aardvark?
Doesn't McMaster University have a trademark on an application named aardvark?
Timely reminder ... to do with *ANYTHING* 'AI' !!!
'AI' guesses the answer based on what it was trained on BUT is NOT 100% correct at all times.
If you base your 'world' on the 'AI' answers ... then your 'world' will eventually end when the 'AI' lies !!!
This is a given ... 100%
Why do people keep expecting things to be different !!!???
The 'presentation' is being polished over & over & over again BUT the answer is ALWAYS a guess !!!
Do you want to 'Bet' the company on a 'guess' !!!
:)