Anthropic to Pentagon: Autonomous weapons could hurt US troops and civilians
(2026/02/27)
- Reference: 1772159628
- News link: https://www.theregister.co.uk/2026/02/27/anthropic_pentagon_response/
Anthropic has fired back at the US Department of War, arguing that it can’t agree to Uncle Sam’s contract demand to remove guardrails on its AI in part because the tech can’t be trusted not to harm American civilians and warfighters.
As The Register [1]reported earlier this week, the US Department of War wants to compel Anthropic to allow unrestricted military use of its Claude tech, and has threatened to cancel the AI upstart’s Pentagon contracts and penalize the company if it does not comply.
On Thursday, Anthropic issued a [2]statement in which CEO Dario Amodei said the company won’t change its stance.
“Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” he wrote, before adding “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”
Amodei said two items in Anthropic’s contract with the Department of War are “simply outside the bounds of what today’s technology can safely and reliably do.”
One of those use cases is mass domestic surveillance, which Amodei said can now create “a comprehensive picture of any person’s life—automatically and at massive scale” with the help of AI. The CEO thinks that’s only legal “because the law has not yet caught up with the rapidly growing capabilities of AI.”
[6]Anthropic launches new marketing blog, pretends it's being 'written' by 'retired' LLM
[7]Claude collaboration tools left the door wide open to remote code execution
[8]Anthropic accuses China's AI labs of ripping off content – just like it did
[9]Infosec community panics as Anthropic rolls out Claude code security checker
The second use case is powering fully autonomous weapons, which Amodei says are too dangerous to deploy in their current form.
“Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons,” he wrote. “We will not knowingly provide a product that puts America’s warfighters and civilians at risk.”
The CEO said Anthropic has “offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer.” He also suggested fully autonomous weapons “cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don’t exist today.”
Amodei also highlighted what he sees as inconsistencies in the Pentagon’s approach: one of its threatened sanctions labels Anthropic a threat to national security for refusing to do as asked, while another seeks to compel the company to remove guardrails on its AI in the name of national security.
“Regardless, these threats do not change our position: we cannot in good conscience accede to their request,” Amodei wrote.
The CEO wrapped up his post by expressing his desire for Anthropic to continue supplying the Pentagon, without having to remove its guardrails.
The statement sets the scene for a showdown with Secretary of War Pete Hegseth, who gave Anthropic a Friday deadline to acquiesce to the Pentagon’s terms and conditions. Hegseth has argued that the USA’s military must focus on warfighting and become more lethal. ®
[1] https://www.theregister.com/2026/02/25/pentagon_threatens_anthropic/
[2] https://www.anthropic.com/news/statement-department-of-war
[6] https://www.theregister.com/2026/02/26/anthropic_claude_opus_3_blog/
[7] https://www.theregister.com/2026/02/26/clade_code_cves/
[8] https://www.theregister.com/2026/02/24/anthropic_misanthropic_chinese_ai_labs/
[9] https://www.theregister.com/2026/02/23/claude_code_security_panic/
Baximelter
Anthropic knows, as do the other AI developers - with the possible exception of Musk - that allowing their product to participate in mass surveillance of the American public, plus the development of autonomous killer robots, would make their devices radioactive. They depend heavily on borrowed money to build out their operations. They are not in a position to make themselves feared and hated by us. But this is exactly what Hegseth proposes to do.
that one in the corner
Look on the bright side:
If Hegseth manages to make enough people stop using these models (with luck, aversion to Anthropic will spread to the others) then he will encourage the bubble to burst earlier. Which will be a blessing in the long run.
Aaaand.... the total lunacy from the current American government seems to have no bounds.
Hey Donald, the bots just accidentally nuked Nebraska!
Ah don't worry about it. We'll blame the Democrats.