Anthropic's $200M Pentagon Contract at Risk Over Objections to Domestic Surveillance, Autonomous Deployments (reuters.com)
- Reference: 0180717634
- News link: https://tech.slashdot.org/story/26/02/01/2353221/anthropics-200m-pentagon-contract-at-risk-over-objections-to-domestic-surveillance-autonomous-deployments
- Source link: https://www.reuters.com/business/pentagon-clashes-with-anthropic-over-military-ai-use-2026-01-29/
- Using AI to surveil Americans
- Safeguards against deploying AI autonomously
> The company's position on how its AI tools can be used has intensified disagreements between it and the Trump administration, the details of which have not been previously reported... Anthropic said its AI is "extensively used for national security missions by the U.S. government and we are in productive discussions with the Department of War about ways to continue that work..."
>
> In [2]an essay on his personal blog, Anthropic CEO Dario Amodei warned this week that AI should support national defense "in all ways except those which would make us more like our autocratic adversaries."
A person "familiar with the matter" told the Wall Street Journal this [3]could lead to the cancellation of Anthropic's contract:
> Tensions with the administration began almost immediately after it was awarded, in part because Anthropic's terms and conditions dictate that Claude can't be used for any actions related to domestic surveillance. That limits how many law-enforcement agencies such as Immigration and Customs Enforcement and the Federal Bureau of Investigation could deploy it, people familiar with the matter said. Anthropic's focus on safe applications of AI — and its objection to having its technology used in autonomous lethal operations — have continued to cause problems, they said.
[4]Amodei's essay calls for "courage, for enough people to buck the prevailing trends and stand on principle, even in the face of threats to their economic interests and personal safety..."
[1] https://www.reuters.com/business/pentagon-clashes-with-anthropic-over-military-ai-use-2026-01-29/
[2] https://www.darioamodei.com/essay/the-adolescence-of-technology#humanity-s-test
[3] https://www.msn.com/en-us/technology/artificial-intelligence/anthropic-pentagon-clash-over-limits-on-ai-imperils-200-million-contract/ar-AA1Vh8oy
[4] https://www.darioamodei.com/essay/the-adolescence-of-technology#humanity-s-test
You've got to be fucking kidding me (Score:2)
LLMs in surveillance? Someone is going to get royally screwed over by this and they won't be able to afford to get themselves out. The tech is woefully inadequate.
Re: (Score:2)
Oh don’t worry, politicians and rich folks are excluded by default.
I wonder how nuanced it is, or isn't (Score:2)
"No surveillance" of ... anyone? Ever? Or what?
Re-defining nuance. (Score:3)
> "No surveillance" of ... anyone? Ever? Or what?
Please. The way they get around such verbiage, is to simply re-define it.
We're not surveilling. We're merely observing. No? OK, then we're studying. Or monitoring. You know, think of the children and shit.
Hell, my gut feeling is the reason it's being pushed back is because Anthropic's surveillance is taking money out of the pockets of others doing the same damn surveillance. Not like we're going to pretend anyone is running low on that shit these days.
Dancing with Mr D (Score:2)
A large company contracting with the US military must surely be aware that they run the risk of being connected with something untoward.
That it comes as a surprise is somewhat surprising.
Making the Conscientious Decision. (Score:2)
> A large company contracting with the US military must surely be aware that they run the risk of being connected with something untoward.
> That it comes as a surprise is somewhat surprising.
When I was first hired on at a defense contractor (100% civilians), I was made to sign a document stating that I was not a conscientious objector.
Several years later, I asked a new hire about it. They looked at me like I asked them about their vacation on Venus. Guessing the policy went by the wayside long ago. Probably should bring it back for those in the business of defense contracting.
Particularly with a (self-identified) War Department.
Re: (Score:2)
Not domestic surveillance or policing. That is restricted by the [1]Posse Comitatus Act [wikipedia.org]. That this is most probably a contract to operate the system means that Anthropic anticipated seeing the data during the course of business and wanted nothing to do with possible violations of the law. Or having to confront a customer about it after the fact. So they put a clause in the contract.
> That limits how many law-enforcement agencies such as Immigration and Customs Enforcement and the Federal Bureau of Investigation could deploy it
True for the Defense Department. However, ICE and the FBI are not restricted by the Posse Comitatus Act. So a separate contract wi
[1] https://en.wikipedia.org/wiki/Posse_Comitatus_Act
Rules for Thee, don't apply? Odd. (Score:2)
> Anthropic's terms and conditions dictate that Claude can't be used for any actions related to domestic surveillance. That limits how many law-enforcement agencies..
The US Government rather enjoys the self-waiving fact that diesel emissions rules do NOT apply to any of the same-same vehicles us commoners also drive. We The People get to enjoy prematurely destroying our engines with Exhaust Gas Recirculation (EGR) nonsense, while the same vehicle gets the Emissions-Bullshit-Delete option for US Gov. A perfect example of Rules for Thee.
Am I shocked to find Anthropic has some form of anti-Skynet verbiage in the Ts&Cs? No, not really. Great to see actually. We've
So it goes... (Score:1)
> Anthropic's terms and conditions dictate that Claude can't be used for any actions related to domestic surveillance.
" We're not doing surveillance, we're...um...protecting our Greatest Nation Ever. Or something. "
So (flipping coin) kakistocracy or tinpot dictator. What is it today?
OpenAI : hold my beer (Score:2)
Sam would not mind breaking a few rules that others don't.
But Anthropic was happy to partner with Palantir (Score:2)
[1]https://techcrunch.com/2024/11... [techcrunch.com]
[1] https://techcrunch.com/2024/11/07/anthropic-teams-up-with-palantir-and-aws-to-sell-its-ai-to-defense-customers/
Anthropic was on my eval list (Score:2)
I was evaluating AI coding assistants for my company a while ago, and of course Claude Code was on my shortlist. But after discussing Anthropic's collaboration with the fascist US regime, our company decided to practice economic withdrawal and take our money to a European supplier.
Dark times (Score:2)
When having some (probably not too strong) morals disqualifies you as a business partner.
Billionaire turns down $millions on principle (Score:2)
WTF, I've somehow fallen into some weird parallel universe. Where am I?
Also reluctantly impressed (Score:2)
It will probably turn them into an also-ran, however, as the Epstein class congeals around Vance.
Re: (Score:2)
Even some rich people have a red line they will not want to cross. Admittedly, many do not. This guy does not seem to have been born rich, so he may actually be a much better person than those that have been and he may just think that he does not need more money.