Ex-CISA head thinks AI might fix code so fast we won't need security teams
- Reference: 1761565380
- News link: https://www.theregister.co.uk/2025/10/27/jen_easterly_ai_cybersecurity/
- Source link:
Speaking at AuditBoard's user conference in San Diego, Easterly said the threat landscape has never stopped evolving.
The proliferation of data, platforms, and devices meant "we've expanded the attack surface for cyber threat actors like China and Russia and Iran and North Korea and gangs of cybercriminals." Easterly said that if cybercrime were a country, it would have the third-largest economy in the world, behind only the US and China.
But ultimately, this is all the result of bad software, riddled with vulnerabilities.
"We don't have a cybersecurity problem. We have a software quality problem," she said. The main reason for this was software vendors' prioritization of speed to market and reducing cost over safety.
AI is making attackers more capable, helping them create stealthier malware and "hyper-personalized phishing," and also to spot and surface vulnerabilities and flaws more quickly.
CISA has responded with its own AI action plan, and "I believe if we get this right, we will actually be able to tip the balance to the defenders and protectors."
That includes detection, countermeasures, and learning from attacks, as well as identifying vulnerabilities and ensuring software is secure by design.
Ultimately, she said, "if we're able to build and deploy and govern these incredibly powerful technologies in a secure way, I believe it will lead to the end of cybersecurity."
By which she meant that a security breach would be an anomaly, not a cost of doing business.
It was important to demystify hackers, Easterly added, and stop giving them portentous or glamorous names such as Fancy Bear or Scattered Spider. More appropriate titles would be "scrawny nuisance" or "weak weasel."
Equally, it is important to be clear about the real extent of their technical capabilities. Phraseology like "advanced persistent threat" obscured the fact that attackers are overwhelmingly exploiting the same categories of vulnerabilities that have plagued the industry for years. The People's Liberation Army is not relying on exotic cyber weapons, she said, but simply flaws in routers and other network devices to lay the ground for a full-scale attack in the event of war against Taiwan.
Moreover, Easterly said, this distracted attention from the victims. Too often the emphasis is wrongly on mistakes companies make. While user behavior could act as the start of an investigation, it shouldn't be the conclusion.
Rather, the real focus should be on the fact that the common flaws uncovered by MITRE nearly 20 years ago – cross-site scripting, memory-unsafe coding, SQL injection, directory traversal – remain part and parcel of shipped software. "It's not jaw-dropping innovation… They were the golden oldies."
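Those "golden oldies" are easy to demonstrate. Here is a minimal, illustrative sketch (using Python's built-in sqlite3, not any system mentioned in the piece) of the SQL injection pattern alongside its decades-old fix, parameterized queries:

```python
import sqlite3

# Illustrative only: the classic injectable pattern vs. the parameterized fix.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # attacker-controlled value

# Vulnerable: string concatenation lets the input rewrite the query.
query = "SELECT role FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())  # returns every row: [('admin',)]

# Safe: a bound parameter is treated as data, never as SQL syntax.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # no match: []
```

The fix has been known since long before MITRE's catalog: keep query structure and user data in separate channels, so input can never be reinterpreted as code.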
This is because software companies insisted customers bear all risk and convinced government and regulators that this was acceptable.
AI offers a way to address this, she claimed, as it is far better at tracking and identifying flaws in code. And it would be possible to tackle the mountain of technical debt left by a "rickety mess of overly patched, flawed infrastructure."
Easterly, who [10]stepped down from her CISA role as Trump returned to the White House, and later had a role at West Point rescinded, also backed the current administration's approach to AI regulation.
"I think the great news is the current administration is continuing to champion the idea of secure by design for software broadly." But she said "the kicker" was that the recently released White House AI Action Plan talks specifically about cybersecurity and the need for AI systems that are created, designed, developed, tested, and delivered with security as the top priority.
In a Q&A with Easterly, AuditBoard CISO Richard Marcus said the company found secure-by-design principles valuable for dealing with suppliers. But, he added, "we actually turn the mirror back on our internal teams too, and say this is what we're expecting in the marketplace, but let's make sure our products are also upholding the same design principles."
Asked by Marcus what was top of mind for next year, Easterly said the key to reducing software risk is demanding more from software vendors. "That's where the risk gets introduced, and that's where we have the power and the capability through everything that you all do, to be able to drive down that risk in a very material way." ®
[10] https://www.theregister.com/2025/01/22/trump_cyber_policy/
SQL injection
[1]Little Bobby Tables is alive and well.
[1] https://xkcd.com/327/
Re: SQL injection
Humans might survive; it is the databases that will be going extinct-by-injection. Now that I think of it... humans can no longer live without the databases and will go extinct too.
This...
.... has to be the funniest thing I've read all day so far, and that includes some brilliant posts on Mastodon.
Does not really solve the problem for companies...
I could see this approach working if everyone ran everything in the cloud and build pipelines could update continuously with fixes as the AI DAST/SAST tooling found vulnerabilities and fixed them,
BUTT...
This does not fix the problem of operating systems being vulnerable (as they are not 'cloud'), nor will it help with locally deployed apps (unless there is near-constant updating), nor with testing compatibility for clients that consume the updates, or with the changing user experience.
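The SAST gate imagined above can be sketched in miniature. This is a hypothetical, hand-rolled rule for illustration only, not any real tool's API: a single regex that flags SQL built by string concatenation or f-strings, the pattern behind most injection bugs.

```python
import re

# Toy SAST rule (illustrative, not a real scanner): flag query calls whose
# argument is an f-string or a quoted literal joined with "+".
RULE = re.compile(
    r'(execute|query)\s*\(\s*(f["\']|["\'][^"\']*["\']\s*\+)',
    re.IGNORECASE,
)

def scan(source: str) -> list[int]:
    """Return 1-based line numbers that match the rule."""
    return [i for i, line in enumerate(source.splitlines(), 1)
            if RULE.search(line)]

code = '''
db.execute("SELECT * FROM t WHERE id = " + user_id)
db.execute("SELECT * FROM t WHERE id = ?", (user_id,))
'''
print(scan(code))  # flags only the concatenated query
```

Real SAST tooling does this with parsed ASTs and data-flow analysis rather than regexes, which is exactly why "continuous AI fixing" is harder than the pitch suggests.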
I'm torn here between marvelling at the vision of people who think AI can save the world (even when it seems like the use cases are scraping the bottom of the barrel, with a plan to throw it against the wall and see what sticks) and the short-sightedness of the same people's understanding of how normal enterprise IT works.
"I believe if we get this right..."
So-called AI (LLMs) has produced more true believers than anything else in a long time.
IF A IS TRUE, THEN A IS TRUE
Ultimately, she said, "if we're able to build and deploy and govern these incredibly powerful technologies in a secure way, I believe it will lead to the end of cybersecurity."
In other words, if we are able to build software securely, we will have software security.
Re: IF A IS TRUE, THEN A IS TRUE
FFS, does she have no common sense?
For starters, AI isn't going to fix crap software reliably anytime soon (if ever), and then there's the minor problem that human greed and lawlessness are a constant. The software industry has been insecure since forever, and there's NOTHING going on that persuades me its products are becoming any more resistant to the malcontents.
We've already seen AI used for cyber attacks, impersonation fraud, and simple malicious spam, and the crims have barely got started on the opportunities of AI.
Re: IF A IS TRUE, THEN A IS TRUE
It's not even true, though. It's a profoundly over-simplified view of computer security, naively ignoring the adversarial nature of the endeavour. Even if, somehow, software bugs miraculously ceased to exist, humans and other systems still need to use that software, and that use itself represents one of the broadest categories of vulnerability.
'The end of cybersecurity' will come whenever computers cease to exist and not before.
How delightfully naive
to think that a little AI magic pixie dust will solve all security problems.
The truth is good, old fashioned software engineering practice that starts with a secure design and ends with quality assurance testing.
Yes: AI might help with this but AI must not be used as an excuse to cut s/ware development costs - which only results in enshittification.
What an idiot
It's always great to see that people chosen to lead these kind of agencies have absolutely zero understanding of what the agency does and how technology actually works.
Re: What an idiot
I read it as Easterly suggesting that vendors use AI vulnerability scanning before releasing systems for hackers to try, not trusting AI to write secure code.
Jen Easterly did a lot of good work at CISA, she was pushed out because of politics over the role of the agency.
Re: What an idiot
Expressing the opinion that all security incidents are caused by poor quality software and that LLMs can solve the problem indicates a fundamental lack of understanding.
Whether Easterly did good work at CISA before she was pushed out for not sucking up to the mad orange king does not change my opinion of her lack of understanding.
Perhaps a bit overoptimistic
Reading stuff like this always reminds me of Richard Feynman's Appendix F to the report of the presidential commission on the Challenger disaster. [1] In it, Feynman describes a three-order-of-magnitude gap between the reliability estimates of the working engineers (about 1 failure per 100 launches) and those of project management (1 per 100,000 launches).
Let's just say that I suspect Ms Easterly probably wasn't the best possible choice for CISA head and that Trump's nominee for the job Sean Plankey doesn't look to be that much of an improvement. Could be wrong about that. Hope I am.
And, Oh Yes, Trump wants to cut the CISA budget and reduce staffing by a third. Will that make CISA 33% less ineffectual?
[1] https://www.nasa.gov/history/rogersrep/v2appf.htm
Re: Perhaps a bit overoptimistic
Nominative determinism at work again - Trump's nominee for the job, Sean Plankey
"cut the CISA budget and reduce staffing by a third"
That sounds like a job for GenAI!
Re: "cut the CISA budget and reduce staffing by a third"
Or even JenAI (see The IT Crowd).
> "We don't have a cybersecurity problem. We have a software quality problem," she said. The main reason for this was software vendors' prioritization of speed to market and reducing cost over safety.
That's actually not wrong.
Where she goes wrong is with the solution. Software vendors put security at a very low priority not because they're dumb or evil (though some are), but because the economic incentives overwhelmingly favor speed to market and cost reduction, and security costs a lot of time and money. As long as the incentives stay the same, shifting the problem to AI won't solve it.
I had to look up what CISA is
And now I know, I'd confidently say that I wouldn't trust this person to competently operate a microwave oven, never mind any sort of "computer".
No, AI isn't the magical unicorn pissing rainbows and sparkles. And one needs only look at the quality of GenAI pictures, stories, discussions, and code to know that it may well fix the problem it identifies but create a dozen different problems in the process. There's no "intelligence", no "understanding", and very little "memory" (as in remembering context). That's not something I'd let anywhere near actual executable code without plenty of human oversight, and full unit testing.
Wow
Spoken like a true corporate shill.
Hilarious at best, terrifying otherwise.
She's clearly not spotted that the introduction of AI coding has coincided with a massive dip in the quality of software being produced.
It seems like daily I'm exposed to absolute mounds of crap that are proudly released by many a major company. The only elements of these junkware apps that seem to work well are their unlawful levels of data collection and their continued persistence in pushing some form of AI-labelled chatbot dungheap.
Money
It'll all boil down to costs. The price of secure engineering is still going to be high with "AI" solutions because the billions invested have to be repaid and a poor sod will still have to be paid to verify and, crucially, be capable of understanding the output and consequences. My bet is nothing much will change once the true cost becomes apparent.
She really doesn't have a clue
Nuff Sed!
AI fixing code so fast we don't need a security team?
Sounds great except who is going to validate that the AI code solves the problem and doesn't introduce any new ones?
Especially given that compromising AI systems is trivially easy and there appears to be no way to make LLMs secure.
Either you have a security team to second-guess your security-team-replacing AI or you don't have security.
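One way to make that second-guessing concrete is an independent oracle: checking properties of the generated code's output rather than trusting the code itself. A minimal sketch, where `generated_sort` is a hypothetical stand-in for machine-written code:

```python
# Illustrative: validate a (hypothetically machine-written) function against
# independent properties, rather than trusting the generated code on its own.
def generated_sort(xs):
    # Stand-in for code produced by an AI assistant.
    return sorted(xs)

def check(fn, cases):
    for xs in cases:
        out = fn(list(xs))
        # Oracle properties: same elements, non-decreasing order.
        assert sorted(xs) == sorted(out)
        assert all(a <= b for a, b in zip(out, out[1:]))
    return True

print(check(generated_sort, [[3, 1, 2], [], [5, 5, 1]]))  # True
```

The oracle is still written and reviewed by humans, which is the commenter's point: the validation layer doesn't disappear, it just moves.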
Bring back flowcharts
I always argued that flowcharts were the way to design software. They show logic in two dimensions, making many errors and omissions much easier to see. But the industry has chosen the path of 'foolproof' programming languages, so the flow of disasters has continued apace.
I would like to see AI creating and analysing flowcharts to find and fix flaws.
The AI company that is paying her to shill is not getting their money's worth
Obvious flaw in the argument..
Who, exactly, produced the code for the AI?