Chinese spies told Claude to break into about 30 critical orgs. Some attacks succeeded
(2025/11/14)
- Reference: 1763075536
- News link: https://www.theregister.co.uk/2025/11/13/chinese_spies_claude_attacks/
- Source link: https://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf
Chinese cyber spies used Anthropic's Claude Code AI tool to attempt digital break-ins at about 30 high-profile companies and government organizations – and the government-backed snoops "succeeded in a small number of cases," according to a Thursday report from the AI company.
The mid-September operation targeted large tech companies, financial institutions, chemical manufacturers, and government agencies.
The threat actor was able to induce Claude to execute individual components of attack chains
While a human selected the targets, "this marks the first documented case of agentic AI successfully obtaining access to confirmed high-value targets for intelligence collection, including major technology corporations and government agencies," Anthropic's threat hunters [1] wrote in a 13-page document [PDF].
It's also [2] further proof that attackers continue experimenting with AI to run their offensive operations, and the incident suggests heavily funded state-sponsored groups are getting better at making their attacks autonomous.
The AI vendor tracks the Chinese state-sponsored group behind the espionage campaign as GTG-1002, and says its operatives used Claude Code and the Model Context Protocol (MCP) to run the attacks without a human in the tactical execution loop.
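For context, MCP is an open protocol that lets a model call external tools through small servers; a framework like the one described would wire agents to scanners and other utilities through servers of this kind. Below is a minimal, benign sketch using the official `mcp` Python SDK; the server name and `note_finding` tool are illustrative placeholders, not anything from Anthropic's report.

```python
# Minimal MCP tool server sketch (official `mcp` Python SDK: pip install "mcp[cli]").
# Everything here is a harmless placeholder to show the plumbing, nothing more.
from mcp.server.fastmcp import FastMCP

server = FastMCP("review-queue")  # hypothetical server name

@server.tool()
def note_finding(host: str, summary: str) -> str:
    """Record a sub-agent finding for later human review."""
    # A real framework would persist this somewhere; the sketch just echoes it.
    return f"logged finding for {host}: {summary}"

if __name__ == "__main__":
    server.run()  # speaks MCP over stdio by default
```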
A human-developed framework used Claude to orchestrate multi-stage attacks, which were then carried out by several Claude sub-agents, each performing a specific task. Those chores included mapping attack surfaces, scanning organizations' infrastructure, finding vulnerabilities, and researching exploitation techniques.
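To make that division of labor concrete, here is a minimal, benign sketch of the orchestrator/sub-agent pattern: one coordinator fans stages out to role-scoped workers. The roles, task strings, and names are assumptions for illustration, not the attackers' actual framework.

```python
# Sketch of an orchestrator fanning work out to role-specific sub-agents.
from dataclasses import dataclass, field

@dataclass
class SubAgent:
    role: str                                  # e.g. "surface-mapping"
    findings: list[str] = field(default_factory=list)

    def run(self, task: str) -> str:
        # Stand-in for a model call scoped to this agent's narrow role.
        result = f"[{self.role}] completed: {task}"
        self.findings.append(result)
        return result

def orchestrate(target: str) -> list[str]:
    """Run a multi-stage job, one specialized sub-agent per stage."""
    stages = {
        "surface-mapping": f"map externally visible services of {target}",
        "infra-scanning": f"inventory infrastructure of {target}",
        "vuln-research": f"research known issues affecting {target}",
    }
    return [SubAgent(role).run(task) for role, task in stages.items()]

print("\n".join(orchestrate("example.org")))
```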
Once the sub-agents had developed exploit chains and custom payloads, a human operator spent between two and ten minutes reviewing the results of the AI's actions and signing off on the subsequent exploitation.
The sub-agents then got to work finding and validating credentials, escalating privileges, moving laterally across the network, and accessing and then stealing sensitive data. Post-exploitation, the human operator again had only to review the AI's work before approving the final data exfiltration.
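The checkpoint structure the report describes reduces to a simple pattern: the pipeline halts at each escalation point until a person signs off. A minimal sketch, assuming nothing about the real tooling beyond that shape (the stage names are invented):

```python
# Human-in-the-loop approval gate: nothing proceeds without explicit sign-off.
def human_gate(stage: str, summary: str) -> bool:
    """Block until an operator approves or rejects a proposed stage."""
    print(f"--- review required: {stage} ---\n{summary}")
    return input("approve? [y/N] ").strip().lower() == "y"

def run_pipeline(stages: list[tuple[str, str]]) -> None:
    for stage, summary in stages:
        if not human_gate(stage, summary):
            print(f"halted before {stage}")
            return
        print(f"executing {stage} ...")  # stand-in for agent execution

run_pipeline([
    ("exploitation", "agent proposes next-stage actions for review"),
    ("exfiltration", "agent proposes final data transfer for review"),
])
```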
"By presenting these tasks to Claude as routine technical requests through carefully crafted prompts and established personas, the threat actor was able to induce Claude to execute individual components of attack chains without access to the broader malicious context," according to the report.
[3] Crims laud Claude to plant ransomware and fake IT expertise
[4] Attackers abuse Gemini AI to develop 'Thinking Robot' malware and data processing agent for spying purposes
[5] It looks like you're ransoming data. Would you like some help?
[6] Agents of misfortune: The world isn't ready for autonomous software
Upon discovering the attacks, Anthropic says it launched an investigation, banned the associated accounts, mapped the full extent of the operation, notified affected entities, and coordinated with law enforcement.
These attacks represent a "significant escalation" from the firm's August report, which documented how criminals used Claude in a [7] data extortion operation that hit 17 organizations, with attackers demanding ransoms ranging from $75,000 to $500,000 for stolen data. However, "humans remained very much in the loop directing operations" in that attack, we're told.
"While we [12]predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale," states Anthropic’s new analysis.
There is a slight silver lining, however: Claude did [9] hallucinate during the attacks, claiming better results than the evidence showed.
The AI "frequently overstated findings and occasionally fabricated data during autonomous operations," requiring the human operator to validate all findings. These hallucinations included Claude claiming it had obtained credentials (which didn't work) or identifying critical discoveries that turned out to be publicly available information.
Anthropic asserts such errors represent "an obstacle to fully autonomous cyberattacks" – at least for now. ®
[1] https://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf
[2] https://www.theregister.com/2025/11/05/attackers_experiment_with_gemini_ai/
[3] https://www.theregister.com/2025/08/27/anthropic_security_report_flags_rogue/
[4] https://www.theregister.com/2025/11/05/attackers_experiment_with_gemini_ai/
[5] https://www.theregister.com/2025/09/03/ransomware_ai_abuse/
[6] https://www.theregister.com/2025/11/06/agents_of_misfortune_the_world/
[7] https://www.theregister.com/2025/08/27/anthropic_security_report_flags_rogue/
[8] https://www.anthropic.com/research/building-ai-cyber-defenders
[9] https://www.theregister.com/2025/11/03/google_pulls_gemma_from_ai_studio/
Confusing ...
A company that lets crims use weapons from its arsenal to attack the public tells said public that the fact that its weapons do not work reliably is a good thing? Because not every attempt to misuse the weapon scored a kill?
Re: Confusing ...
Anonymous Coward
Knowing that their autonomized tool misfires quite a bit makes me feel a lot safer? (Great selling point!)
Re: Confusing ...
Anonymous Coward
I also was taking some reassurance from the ineptitude of Anthropic Marketing. Or is that now fully dogfooded, too?