All your bots are belong to US if you don't play ball, DoD tells Anthropic
- Reference: 1772045021
- News link: https://www.theregister.co.uk/2026/02/25/pentagon_threatens_anthropic/
The Pentagon's unhappiness with Anthropic has been in the news since the end of last month, when Reuters [1]reported that the two were clashing over safeguards that would prevent the DoD from using Anthropic's AI to autonomously target weapons without human intervention and to conduct domestic surveillance within the United States.
The Register has confirmed with individuals on both sides of the discussion that a meeting between Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth on Tuesday has done little to change Anthropic's mind on the matter, with the Pentagon now trotting out threats to get what it wants.
A senior Pentagon official told us that, if Anthropic refuses to grant the Defense Department unrestricted use of its AI by the end of the day on Friday, the department may compel the company's cooperation through the Defense Production Act.
The [5]DPA gives the President, and any executive branch officials to whom he delegates that power, such as the Defense Secretary, broad authority to require businesses to accept contracts deemed necessary to promote the national defense. That authority, the official told us, would give the Pentagon the right to use Anthropic's AI regardless of what the company wants.
The DoD is also reserving the right to declare Anthropic a supply chain risk, essentially forcing any company that contracts with the US government to eliminate Anthropic software anywhere it's used in their dealings with the federal government. Such a move could be a major financial blow to the AI provider.
Additionally, sources familiar with the meeting told us that the Pentagon was ready and willing to terminate the [7]contract worth up to $200 million that the agency signed with Anthropic (alongside agreements with Google, OpenAI, and xAI) if the company doesn't agree to its terms.
We're told that Anthropic has maintained its red line for use of its AI by the US military, which includes autonomous weapons that use AI to make final targeting decisions, and domestic surveillance of American citizens, even if lawful.
The Pentagon told us that it has always followed the law, has only issued lawful orders, and that its intended use of Anthropic's AI has nothing to do with mass surveillance or autonomous weapons.
Legal usage of Anthropic's AI, the Pentagon official said, is the department's responsibility as the end user - not Anthropic's.
Safety not guaranteed?
Coincidentally or not, Anthropic also [9]released the third iteration of its Responsible Scaling Policy on Tuesday, the same day Amodei met with Hegseth in Washington, DC. The new version lacks a key safety pledge that Anthropic had maintained for years.
Prior editions of the RSP included a clause that stated Anthropic would cease training AI models that it couldn't guarantee were safe, and wouldn't release any model without proper risk mitigations in place. Those guarantees are gone, with the company citing the need to remain competitive in the AI space as the reason for their removal.
[10]Flanked by Palantir and AWS, Anthropic's Claude marches into US defense intelligence
[11]US Army seeks human AI officers to manage its battle bots
[12]It begins: Pentagon to give AI agents a role in decision making, ops planning
[13]US military pulls the trigger, uses AI to target air strikes
"We felt that it wouldn't actually help anyone for us to stop training AI models," Anthropic's science chief Jared Kaplan [14]told Time in an interview ahead of the RSP update's release. "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead."
According to a blog post outlining changes in the new version of the RSP, AI competitiveness and economic growth have become the driving forces in the current policy environment, with Anthropic lamenting that safety discussions have fallen by the wayside.
"We remain convinced that effective government engagement on AI safety is both necessary and achievable," Anthropic explained. "But this is proving to be a long-term project—not something that is happening organically as AI becomes more capable or crosses certain thresholds."
Anthropic's admission that its priorities have shifted from safety first to competitiveness raises the question of whether it may yet comply with the Pentagon's demands rather than lose a massive contract, risk being blacklisted across the defense industry, and still be pressed into service against its wishes.
We reached out to Anthropic to find that out, but didn't hear back before publication. We'll update this story if we do. ®
[1] https://www.reuters.com/business/pentagon-clashes-with-anthropic-over-military-ai-use-2026-01-29/
[5] https://www.congress.gov/crs-product/R43767
[7] https://www.theregister.com/2025/07/14/pentagon_ai/
[9] https://www.anthropic.com/news/responsible-scaling-policy-v3
[10] https://www.theregister.com/2024/11/07/anthropic_palantir_aws_claude/
[11] https://www.theregister.com/2025/12/31/us_army_seeking_officers_willing/
[12] https://www.theregister.com/2025/03/05/dod_taps_scale_to_bring/
[13] https://www.theregister.com/2024/02/27/us_military_maven_ai_used/
[14] https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/
You decide...
"Mafia", "Dictatorship", "Tyranny", "Rogue Nation", "Terrorist State"
All of the above?
Re: You decide...
"Noncist Oblast"
Hegseth the clown hasn't thought this one through
The Pentagon can make a company sell them something, but they can't make people make things work properly.
AI already has enough QC issues. Really want to force an unwilling seller's hand here?
Whoops. Must be another AI glitch!
Re: Hegseth the clown hasn't thought this one through
Just this?
Contract doesn't just mean get smaller
> The DPA gives the President ... broad authority to require businesses to accept contracts.
That's a contradiction in terms. A contract is a voluntary agreement. What the text means is enforced work of the unwilling, i.e. slavery.
> the Pentagon was ready and willing to terminate the up to $200 million contract the agency signed with Anthropic
The Pentagon should stick to the agreed terms of their contract. That's what contract means, an agreement. Otherwise, I trust Anthropic would sue them to high heaven for breach.
Why Anthropic?
Why is the US gov picking on Anthropic? Have all the other LLM slop vendors capitulated?
Of course, if the Anthropic bods had any morals at all, they would destroy their work before Friday. But they don’t. So they won’t
Some Restrictions?
From what I heard yesterday it was "All Restrictions". The Pentagon wants a free hand to do whatever it wants with the technology.
This scenario seems to be the nightmare scenario feared by leading AI researchers. The situation is analogous to the development of nuclear weapons, which, after the international collaboration of the Manhattan Project in WW2, was rapidly monopolized by the US, and any dissent (e.g. Oppenheimer) was rapidly excluded. (Put another way -- there's always a Dr. Strangelove ready to advance the cause.) The result was well known -- the UK practically bankrupted itself working to duplicate the work and the USSR went into overdrive to develop its own nuclear weapons. The AI the Pentagon is after isn't anything like as spectacular as a nuclear bomb but it's every bit as dangerous, especially as the Pentagon wants to be free of all restrictions on connecting this technology to other systems. That is, they want to be able to deploy systems that autonomously identify and destroy targets -- in other words, Skynet.
The only problem with the Pentagon's reasoning is that their hubris means they're incapable of understanding proliferation. Everybody else's technology is obviously inferior because it's not ours. What they're triggering will not be Full Spectrum Dominance but yet another global Arms Race, one where there won't be MAD to balance things -- when it comes, the destruction will be swift and total (unless the AI figures it out, decides that we humans are the problem, and takes appropriate steps, of course).
"The Pentagon told us that it has always followed the law"
I did not realize that the Pentagon had appointed a comedian as spokesman, or maybe he is just deluded.
What about blowing up small boats off the coast of Venezuela without any real evidence that those on board are drug couriers ? That is not legal under international law, maybe the law of the jungle which is all that the thugs in the White House understand.
Legal usage of Anthropic's AI, the Pentagon official said, is the department's responsibility as the end user - not Anthropic's.
So if a gun shop sold a weapon to a known hit man could they claim that people being shot is not their responsibility but that of the hit man ?
Re: "The Pentagon told us that it has always followed the law"
>> blowing up small boats off the coast of Venezuela
An act of piracy followed by an illegal invasion.
"autonomous weapons that use AI to make final targeting decisions"
Do you want Skynet? Because that's how you get Skynet.
Re: "autonomous weapons that use AI to make final targeting decisions"
Just train them with the 3 basic rules:
#1 Russia is a known enemy
#2 Anyone supporting Russia's military goals should be targeted
#3 Any lickspittle appointed in support of Rule #2 should also be targeted
Re: "autonomous weapons that use AI to make final targeting decisions"
Plus flood the training data with Buddhist and Jain philosophy along with non-violent resistance manuals.
Give us what we want or else....
...and Don't Forget To Say Thank You
Governments and militaries do what they want.
You don't get to say 'no'. The whole democracy thing is just window dressing.
Anthropic's best solution is to allow the USG and military to use their AI as they see fit, without paying for it, as an unregistered user. They would not be responsible for such rogue use, and would not be being paid for it, but would simply not block it. The military and government would use it as they saw fit, with no limitations or restrictions, and (as they say) be liable for the consequences.
Interesting that here, the end users are responsible for use. In Europe, tech firms are regularly hit with tax bills and fines for how users ab/use their tech.
Re: Governments and militaries do what they want.
That's really brand hazardous, not just in the obvious political context (and an even bigger problem overseas), but it also directly plays into public fears of Skynet. AI companies are already facing public pushback and associated policy problems. They don't need to lose more goodwill by their product being fit for that purpose. That's exactly how they find themselves on the receiving end of more regulation and opposition to datacenter construction.
What a bunch of liars
>> The Pentagon told us that it has always followed the law
The illegal bombing of Cambodia is just one in a long list of crimes carried out by these scum.
How about the rest of the world? Should they now regard Anthropic's products as a risk and ban their use?
You need to understand the American way: another country, let's take China, has an 'AI' company linked to the Chinese military. USA: Ooooh, that's bad. It must be banned immediately for 'national security' reasons.
An American company does the same. America: It's all within the (American) law.
The utter hypocrisy from the American regime is vomit inducing.
You have no chance to survive make your time.
Because
Tech bros are eeeeeevvvvvvvvvviiiiiiiiilllllllll.....
There's no way the Defense Production Act allows that
Could they require Apple to sell them iPhones that are backdoored? Could they require Google to give them a Google Search that shows no results for any searches where "trump" and "epstein" are mentioned together?
The DPA, as far as I'm aware, only allows the DoD to ensure critical supplies, so they could make sure Apple manufactured MORE iPhones (obviously they aren't useful for defense, but if they were bombs or fighter jets they would be) but not control how they are made. Pretty sure the administration would lose in court, again, if they tried to force Anthropic into giving them unrestricted models that they don't offer to anyone else.
Anyway this is only a $200 MILLION contract? That's chickenfeed in the AI world. Anthropic would be better off getting kicked out of the DoD for their models being too moral to kill people. They'll gain a heck of a lot more than $200 million in sales from people who would want to support a company with at least a few scruples rather than one that would happily give the Pentagon Skynet if it made the CEO richer.
I mean obviously the Pentagon will find someone amoral like Musk or Zuck or Altman willing to give them what they want so supporting Anthropic instead of them isn't going to stop the Pentagon. But if I was spending money on AI I damn sure wouldn't do it with one who is helping the Pentagon develop autonomous killer drones/robots, potentially helping them make that happen more quickly!
DPA
The DPA gives Krasnov
So Claude is going under Moscow's control. Amazing.