
Cyber Stocks Slide As Anthropic Unveils 'Claude Code Security' (bloomberg.com)

(Friday February 20, 2026 @10:30PM (BeauHD) from the concerns-over-competition dept.)


An anonymous reader quotes a report from Bloomberg:

> Shares of cybersecurity software companies [1]tumbled Friday after Anthropic PBC [2]introduced a new security feature into its Claude AI model. Crowdstrike Holdings was among the biggest decliners, falling as much as 6.5%, while Cloudflare slumped more than 6%. Meanwhile, Zscaler dropped 3.5%, SailPoint shed 6.8%, and Okta declined 5.7%. The Global X Cybersecurity ETF fell as much as 3.8%, extending its losses on the year to 14%.

>

> Anthropic said the new tool "scans codebases for security vulnerabilities and suggests targeted software patches for human review." The firm said the update is available in a limited research preview for now.



[1] https://www.bloomberg.com/news/articles/2026-02-20/cyber-stocks-slide-as-anthropic-unveils-claude-code-security

[2] https://www.anthropic.com/news/claude-code-security



Come on, we've been through this... (Score:5, Informative)

by devslash0 ( 4203435 )

Anyone who's ever looked at the output of any code-scanning security tool knows that 50% of findings are about inadequate logging, 25% are completely irrelevant to the context of your app because of the highly pedantic nature of such tools (and will be reported back to the vendor as false positives), 10% are about not adhering to the least-privilege principle, another 10% are low-severity low-hanging fruit, 4.9999% are something potentially interesting of which most turn out to be completely insignificant, and 0.0001% are actual findings.

If you're really unlucky, you'll have a non-tech manager who'll require you to spend weeks fixing everything because he wants the findings count at absolute zero for his bonus next month.

Real findings require a real pentest.

Re: Come on, we've been through this... (Score:4, Insightful)

by Midnight_Falcon ( 2432802 )

Everything you said is true, except I'd argue 50 percent are about vulnerable libraries whose bugs don't affect the codebase they're used in. Also, a "pentest" is now being defined as an AI vulnerability scan; several companies are selling those for just under the price of a human pentest. A real, non-enshittified pentest is harder to come by these days.

Re: (Score:2)

by HiThere ( 15173 )

You're being silly. A method can find lots of errors that should be corrected without requiring a pentest. I'll admit that there are lots of errors it probably won't find, but at least you can reduce the attack surface.

And the benefit of using an AI here SHOULD be that it de-emphasizes problems without any real effect. (Whether it actually does, I couldn't say, but if it doesn't, then why use an AI?)

Re: (Score:2)

by phantomfive ( 622387 )

In addition, many security companies don't do pentesting and are of questionable value. Adding an AI agent to the mix doesn't make things worse.

Correction or Overreaction (Score:4, Informative)

by silentbozo ( 542534 )

Thesis 1:

Cybersecurity companies are bloated and carry a stock valuation premium created by insurance mandates (thou shalt contract with a cybersecurity company to keep your insurance premiums low), and that premium will be going away.

Thesis 2:

People are freaking out, without basis, that #1 is true, when in fact the opposite is true: even with AI making code more secure, you will still need cybersecurity insurance, and the insurer is still going to mandate that you contract with an established cybersecurity company to keep your premiums low, due to reinsurance rules. In fact, because of dumbshits using vibecoding, AND the use of automated tools to identify and chain vulnerabilities, domain-specific expertise provided by a deep bench will be needed in the future.

Thesis 3:

Cybersecurity companies will be trimming headcount and employing more AI tools internally.

Thesis 4:

Instead of hiring a cybersecurity company, companies will staff their own cybersecurity departments.

Of all of these, I think #4 (companies growing their own cybersecurity departments) is the least likely. #3 is highly likely (there will be some reorganizing and continued adoption of automated tooling). And while #1 (companies will no longer be able to command a large premium) may be true in some cases, I think #2 (this is a giant overreaction, and the use of automated exploit chaining means you need more expertise in defense) is probably the most likely outcome. Building a system to ensure your code is foolproof just breeds bigger fools.

Given the low bar to compete against... (Score:2)

by ffkom ( 3519199 )

... I would say it should be easy for Anthropic's tool to be less shit than what those other snake-oil security companies have on offer. I mean, the bar for it to be better is as low as "not introducing additional security vulnerabilities by running the 'security' tool."

Introducing Claude stocks (Score:2)

by liqu1d ( 4349325 )

Stocks slide as Claude stocks

Good (Score:1)

by quonset ( 4839537 )

We converted to Zscaler and just implemented Okta. Both are nothing but steaming piles.

Fun fact: one of Okta's authentication methods is an email with a security code. If you can't get into Outlook, how are you supposed to receive your code? If you happen to have a company phone, then you should be okay. But if not, are you supposed to use your personal phone?

Zscaler is similar. If you select to receive a phone call, how are you supposed to receive that call if you haven't authenticated Teams?

This is

Re: Good (Score:2)

by LindleyF ( 9395567 )

Security as in credentials isn't the goal here. It's security as in memory safety and input validation.
