Pentagon Formally Designates Anthropic a Supply-Chain Risk
- Reference: 0180915048
- News link: https://slashdot.org/story/26/03/05/2233247/pentagon-formally-designates-anthropic-a-supply-chain-risk
- Source link:
> The designation, historically reserved for foreign firms with ties to U.S. adversaries, will likely require companies that do business with the U.S. military -- or even the federal government in general -- to cut ties with Anthropic.
>
> "From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes," the Pentagon said in the statement. "The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk."
>
> A spokesperson for Anthropic did not immediately respond to a request for comment. But the company said last week it would fight a supply-chain risk label in court.
[1] https://www.politico.com/news/2026/03/05/pentagon-tells-anthropic-it-has-designated-the-company-a-supply-chain-risk-00814758
[2] https://tech.slashdot.org/story/26/02/26/2352217/anthropic-ceo-says-ai-company-cannot-in-good-conscience-accede-to-pentagon
That's a start. (Score:1, Troll)
Now we need to declare them a national security risk and a mental health risk to our citizens. Ban these chatbots already.
Re:That's a start. (Score:5, Informative)
Absolutely not. That you don't like LLM AI is not justification for this. Heck, you could be completely correct, and it would still be the case that this is a terrible idea. Picking a specific company and trying to functionally damage or destroy it because it won't modify a contract the Pentagon made is terrible. Remember when Republicans complained about the government picking "winners and losers" among companies? Well, this is that, but ten times as bad, with a side order of using the government to go after a company because the President and the Secretary of Defense don't like the CEO's politics. That should be alarming. Of course, because this is Trump, this is just alarming activity number 552.
Re: That's a start. (Score:2)
No. Now can the Supreme Court reverse this ridiculous decision? *This* is the kind of dumb shit that genuinely does get your famed rights and freedoms taken away.
Re: (Score:2)
They have grounds to sue, but I don't think it will matter, since Anthropic has been making bank ever since this was first announced.
This is Incredibly Frightening (Score:5, Insightful)
Trump has done a lot of stuff that is awful, but this takes it to a completely different level. It's like we've gone back in time 70 years to the Red Scare, when enemies of the government (i.e. enemies of specific powerful people in government) exploited their power to end the careers of individuals, and the very existence of entire companies.
I don't care if you are a Democrat or a Republican (or anything else): this is very, very wrong. You should be afraid, and you should be contacting your elected representatives and demanding they take action!
Re: (Score:3)
Hold on, the news channel that runs 24x7 on my tv is telling me this is the fault of illegal immigrants and liberals.
Re: This is Incredibly Frightening (Score:2)
You still watch cable tv? How retro...
Re: (Score:2)
"this" is taking it to a completely different level?
The US is so far gone that I've abandoned feeling anxiety about it. I'm just enjoying watching them dig the hole deeper and deeper.
Sounds like securities fraud to me (Score:5, Interesting)
> "The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk."
With that much spin, I'm surprised the Pentagon hasn't started emitting radio signals.
There's a word for this: extortion. The military has decided that if it cannot use Anthropic's technology any way it pleases, it will just ban all government use, in an attempt to force the company to violate its principles. Here's hoping Anthropic shows them that the real world doesn't work that way by spanking them with a volley of lawsuits that will keep government lawyers employed for the next decade.
> From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes
That's not a principle. That's a functional requirement for the software that you're buying. If it was spelled out in the requirements document when the company bid on the contract, then they are obligated to comply. If you failed to specify the requirements in such a way that you could do whatever you want, tough s**t. That's your mistake. Do better next time. It's no different from any other situation where the government buys something that isn't suitable for what they are trying to use it for. They should have understood what they were buying before they bought it. Period.
I think it is safe to say that if those requirements were in the contract, the military would have sued for breach by now. So what we have are a bunch of embarrassed generals who failed to do due diligence in their procurement contracts, are realizing how much money they wasted, and are acting in ways that likely violate any number of federal securities laws, among others, to try to force the company into accepting an amended contract or else watch their dreams of an IPO (rumored to happen soon) turn into a nightmare.
I believe that the maximum penalty for securities fraud is 25 years. Just saying.
Re: (Score:2)
I'm wondering if perhaps another country would be interested in acquiring Anthropic.
I'm assuming that would be disallowed, but perhaps an offer could be made to Anthropic's employees for a safe life in a free country, where they can re-establish an AI company.
"You can beat the rap, but you can't beat the ride" (Score:4, Insightful)
The government will lose in court... eventually. But the punishment will have already taken place.
Capitulate or suffer the consequences. Resistance is painful.
Re:"You can beat the rap, but you can't beat the r (Score:4)
It's the Trump MO for the last year: do whatever he and his cronies want, using tortured legal reasoning, and then when the courts eventually tell him no, the damage is done and it becomes part of the message. "The activist judges won't let me do what you asked me to do."
Crazy to see the US becoming more like Putin's Russia every day. I'm not sure anyone is falling out of balconies yet, but there have definitely been suicides.
Re: (Score:2)
> I'm not sure anyone is falling out of balconies yet
Maybe that's the reason he's making the White House taller and hardscaping the ground next to it.
Re: (Score:3)
> The government will lose in court... eventually. But the punishment will have already taken place.
> Capitulate or suffer the consequences. Resistance is painful.
Painful, but how painful, really? The publicity has boosted Anthropic's subscriptions significantly, and the summary overstates the impact of the label. It is not true that all companies that do business with the DoD will have to cut ties with Anthropic; the label just means that companies that do business with the DoD can't use Anthropic's AI on their DoD contracts. They're still free to use it for any non-DoD work they do, or to run their own business operations.
So, yes, it'll inflict some pain, but
Right, right... (Score:2)
This is the same military that just sits and frets because it has all kinds of widgets that need a vendor tech called out to service them; but suddenly they've never heard of being restricted from a lawful use before. I don't expect honesty from these guys, but the transparency of their lies is pretty pathetic.
Re: (Score:2)
Apparently they used Palantir's Maven to collect intel data, which Claude then used to produce target lists with coordinates, ordnance recommendations, and legal justifications. This process would usually take weeks, but it recently allowed them to list and strike over 1,000 targets in Iran in a single day. Supposedly the targets are verified by a human official, although verifying 1,000 targets with any rigor in 24 hours seems a bit much.
Incidentally, it has now transpired that the strike on a school that killed over 100 gi
Surveillance resistance? (Score:5, Interesting)
Since no government contractors can do business with Anthropic, that means Anthropic can't sell access to chat content to them or directly to the federal LEOs.
So Anthropic should be less surveilled by the US government than OAI and the other weasels sucking on Kegseth's nipples.
Re: (Score:1)
> weasels sucking on Kegseth's nipples.
The next time I need to pass a polygraph, this phrase will come in handy. Thanks, Slashdot!
Us government OR (Score:2)
The Mafia, you decide.
Re: (Score:2)
False dichotomy. You can pick both.
"All lawful purposes" is a lie (Score:3)
> From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes.
There is no such principle, this is completely made up. To make it worse, there are basically no laws restricting AI use, which means the Pentagon is asserting that they must be able to use it for literally anything they want to.
Re: (Score:2)
You have some valid points, but they don't make the original statement about "all lawful purposes" true.
Re: "All lawful purposes" is a lie (Score:2)
You mean like the jets the US has been selling to their allies, the ones with the kill switch, yeah?
Apparently it is a thing.
Re: (Score:2)
> We'll sell you this software, but we get to decide how you use it, not you.
Are you too stupid to have ever read an EULA?
That is exactly how *all* software is sold.
Re: "All lawful purposes" is a lie (Score:1)
Government and military software is not consumer software. The ToS are part of the negotiated contract, as with any procurement of bespoke anything.
The closest I've seen from my little corner of it is a vendor insisting on "best effort" language to let them off the hook if the part they delivered turned out to have been specified a little too ambitiously for their manufacturing process.
This was for an optical assembly. And it's still different from the vendor insisting that their optics only be used to l
Re: (Score:2)
If Anthropic violated the terms of any contract that has already been agreed to, the gov't would have said that.
The government is obviously trying to void the terms of a contract they've already agreed to, however. Otherwise, they wouldn't have their panties in a wad about Anthropic not "letting them do stuff".
Consumer or not, there is a contract. The biggest difference is that the consumer doesn't get to negotiate and has to scroll it in a tiny text box. The EULA for the software in a system I just bought
Re: "All lawful purposes" is a lie (Score:1)
Pretty sure that EULA said the vendor is not responsible for damage if you use it to control nuclear power plants. Just like the GPL and BSD licenses say the software is provided "as is" and the authors are not liable for property damage and loss of life.
Re: (Score:2)
They might be responsible for the user using their tool to cause a meltdown under some legal theories, so they explicitly put that in the contract to make sure.
Likewise, I'm sure Anthropic doesn't want to get hauled in front of a Nuremberg-style tribunal one day for facilitating crimes against humanity. So they avoid going into that zone. This has never been a secret, so I'm not sure why the Pentagon is having a panic attack about it all of a sudden.
Re: (Score:2)
>> From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes.
> There is no such principle, this is completely made up. To make it worse, there are basically no laws restricting AI use, which means the Pentagon is asserting that they must be able to use it for literally anything they want to.
Well, to be fair, there are legal restrictions on what the DoD can do. For example, they can't blow up random boats off of the South American coast, they can't occupy American cities, they can't unilaterally invade a foreign country and kidnap its head of state, and they can't just start bombing shit without any congressional authorization.
So, you know, they're restricted to using the AI only for things that they are legally allowed to do.
Blackmail (Score:3, Insightful)
By using this designation, not only is the Pentagon saying they won't use Anthropic's products, they are saying that no other military contractors can use Anthropic's products. This is much, much more serious than just the loss of the contract, they are basically shutting out Anthropic from a very wide swath of the market. Microsoft wants to sell Office to the Pentagon? It can't use Anthropic's products. Just about all large companies will be affected. This is nothing short of blackmail.
I asked Claude (Score:1)
I asked Claude what to make of this. Claude responded by auditing Pete Hegseth's finances and recommending an extended stay at the Betty Ford Clinic.
They said NO to us! (Score:2)
How dare they? Waaah!
This makes them our enemies. If you play with them, we won't play with you.
Morals (Score:2)
Yeah, people with morals are a supply chain risk when certain people are in power.
someone is full of crap (Score:2)
> "From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes," the Pentagon said in the statement.
and
> OpenAI CEO Sam Altman wrote in a memo to staff that he will draw the same red lines that sparked a high-stakes fight between rival Anthropic and the Pentagon: no AI for mass surveillance or autonomous lethal weapons.
[1] https://slashdot.org/story/26/02/27/1530218/sam-altman-says-openai-shares-anthropics-red-lines-in-pentagon-fight
Standard Trump contract negotiation tactics (Score:2)
Threaten flaming nuclear death and settle for a 15% reduction in contract price which he can loudly brag about. Not mentioned is the reduction in service depth or the verbal agreement to expand and relocate Anthropic's offices in NYC & San Fran into properties owned or managed by Trump or Kushner. Everybody wins! Making Amerika great.
Hot take: there really is a supply chain risk. (Score:1)
Full disclaimer: my preferred AI agent is Claude 4.6 Opus running in Claude Code. I'm a paid subscriber and have no plans to go anywhere.
However, I have never once heard of a weapons company stipulating in a contract that the Pentagon shall not bomb a school (which just happened in Iran, intentionally or otherwise). Fact of the matter is, anyone and everyone who does business with the Pentagon shares just as much culpability as Anthropic or anyone else. PRISM was run on silicon supplied by private companies
So will Kegseth have to train a new model? (Score:2)
I mean it's obvious that Whiskey Pete has been letting Claude write his speeches and make his decisions this whole time, right?
It's even obvious that Claude is sourcing Colin Jost's "Weekend Update" performances for some of it.
It's probably why he wanted Anthropic to take off all the guard rails so that he can use it to commit more and better war crimes.
Good (Score:1)
Now declare the rest a supply chain risk.
Re: Good (Score:3, Insightful)
The designation is not based on some objective feature or lack thereof; it is just revenge by your convicted felon president and war criminal in chief, and his warfighters, who want control over what they see as a useful tool to beat the rest of the world, including y'all, into submission.
Beating this one supplier, whose only sin here is not relinquishing control, is not an upside.
Re: (Score:1, Insightful)
Rslivergun, is that you? Go take your meds before your head explodes.
Re: (Score:2)
Much comeback, very arguments, bigly thinking.
Re: (Score:2)
Clearly you're the one that needs to take their meds.
Re: (Score:2)
Their subscriptions have increased something like 15% since the announcement, so I'd say it's 100% a positive so far.