
AI agents can't yet pull off fully autonomous cyberattacks - but they are already very helpful to crims

(2026/02/04)


AI agents and other systems can't yet conduct cyberattacks fully on their own - but they can help criminals in many stages of the attack chain, according to the International AI Safety Report.

The second annual [1]report, chaired by Canadian computer scientist Yoshua Bengio and authored by more than 100 experts across 30 countries, found that over the past year developers have vastly improved AI systems' ability to help automate and perpetrate cyberattacks.

Perhaps the best, and scariest, evidence of that finding appeared in Anthropic's November 2025 report about Chinese cyberspies [2]abusing its Claude Code AI tool to automate most elements of attacks directed at around 30 high-profile companies and government organizations. Those attacks succeeded in "a small number of cases."

"At least one real-world incident has involved the use of semi-autonomous cyber capabilities, with humans intervening only at critical decision points," according to the AI safety report. "Fully autonomous end-to-end attacks, however, have not been reported."

Two areas where AI is especially useful to criminals are [6]scanning for software vulnerabilities and [7]writing malicious code.

During [8]DARPA's AI Cyber Challenge (AIxCC) – a two-year competition in which teams built AI models to find vulnerabilities in open source software that undergirds critical infrastructure – finalist systems autonomously identified [9]77 percent of the synthetic vulnerabilities used in the final scoring round, according to competition organizers.

And while that is an example of defenders using AI to find and fix vulnerabilities, rather than attackers using AI to find and exploit them, criminals are using models in similar ways. Last northern summer, we saw attackers on underground forums [11]claiming to use HexStrike AI, an open-source red-teaming tool, to target critical vulnerabilities in Citrix NetScaler appliances within hours of the vendor disclosing the problems.

[12]Yes, criminals are using AI to vibe-code malware

[13]AI-powered cyberattack kits are 'just a matter of time,' warns Google exec

[14]Agents gone wild! Companies give untrustworthy bots keys to the kingdom

[15]DIY AI bot farm OpenClaw is a security 'dumpster fire'

Additionally, AI systems are getting much better at [16]malware writing, and criminals can trade weaponized models that write ransomware and data-stealing code for [17]as little as $50 a month.

The good news for now, according to the report’s authors, is that AI systems still aren't great at carrying out multi-stage attacks without human help.

"Research suggests that autonomous attacks remain limited because AI systems cannot reliably execute long, multi-stage attack sequences," according to the report. "For example, failures they exhibit include executing irrelevant commands, losing track of operational state, and failing to recover from simple errors without human intervention."

Keep in mind, however, that all of this was written before the [18]security dumpster fire that is OpenClaw – the AI agent [19]previously known as Moltbot and Clawdbot – and Moltbook, the vibe-coded social media platform for AI agents.

So it's also entirely plausible that the world won't end with a sophisticated, autonomous multi-stage cyberattack dreamed up by a nation-state crew or criminal mastermind, but rather a single agent that goes off the rails. ®



[1] https://internationalaisafetyreport.org/publication/international-ai-safety-report-2026#2.1.3.

[2] https://www.theregister.com/2025/11/13/chinese_spies_claude_attacks/

[6] https://www.theregister.com/2025/12/15/react2shell_flaw_china_iran/

[7] https://www.theregister.com/2026/01/08/criminals_vibe_coding_malware/

[8] https://www.darpa.mil/research/programs/ai-cyber

[9] https://www.darpa.mil/news/2025/aixcc-results

[11] https://www.theregister.com/2025/09/03/hexstrike_ai_citrix_exploits/

[12] https://www.theregister.com/2026/01/08/criminals_vibe_coding_malware/

[13] https://www.theregister.com/2026/01/23/ai_cyberattack_google_security/

[14] https://www.theregister.com/2026/01/29/ai_agent_identity_security/

[15] https://www.theregister.com/2026/02/03/openclaw_security_problems/

[16] https://www.theregister.com/2026/01/20/voidlink_ai_developed/

[17] https://www.theregister.com/2025/11/25/wormgpt_4_evil_ai_lifetime_cost_220_dollars/

[18] https://www.theregister.com/2026/02/03/openclaw_security_problems/

[19] https://www.theregister.com/2026/01/27/clawdbot_moltbot_security_concerns/




The joys of human-AI experimentation

Anonymous Coward

Yeah, that social media crockpot of A2A self-replicating prompt-virus injection dumpster fire trials that is OpenClaw seems to be where a lot of action is at right now ... it should be interesting to receive the first reports of system compromise and failure modes from the eager [1]jackasses who enthusiastically suckered themselves into that potent petri dish imho.

It'll be like ' Who, Me? ' meets ' AI-pocalypse ' ... prescription strength professional grade entertainment! (except for the irresponsibles & victims) ;|

[1] https://en.wikipedia.org/wiki/Jackass_(franchise)

MRDA

amanfromMars 1

AI agents and other systems can't yet conduct cyberattacks fully on their own - but they can help criminals in many stages of the attack chain, according to the International AI Safety report.

Well, they would say that, wouldn’t they, Jessica/El Reg ...... beings as they are just one of myriad tiny melting cogs in the titanic and lunatic analogue machine that is the official extant establishment AI opposition and competition.

It is SMARTR to not believe every word as it is reported for far too many of them are shared in order to support catastrophically failing infrastructures and projects and programs housing and hosting and pimping and pumping mountainous tiers of lies built upon rapidly disappearing foundations.

You can believe though ..... Don't relax: This is a 'when, not if’ scenario ....... with timely interventions and interruptions and explosions and exploitations a great deal sooner than is decent and most definitely never ever really expected.

PS NB ..... Relax, don’t do it. Cyberattacks which can help criminals in many stages of the attack chain are early test models exercised to expose likely compromised areas of operational systems failure for future remedial eradication.

An optimist believes we live in the best world possible;
a pessimist fears this is true.