US Threatens Anthropic with 'Supply-Chain Risk' Designation. OpenAI Signs New War Department Deal (anthropic.com)
- News link: https://tech.slashdot.org/story/26/02/28/2028232/us-threatens-anthropic-with-supply-chain-risk-designation-openai-signs-new-war-department-deal
- Source link: https://www.anthropic.com/news/statement-comments-secretary-war
In a post [2] to his 1.1 million followers on X.com, U.S. Secretary of War Pete Hegseth criticized Anthropic for what he called "a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon."
> Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic... Cloaked in the sanctimonious rhetoric of "effective altruism," [Anthropic and CEO Dario Amodei] have attempted to strong-arm the United States military into submission — a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic's defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield. Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable...
>
> In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic... America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.
Meanwhile, Anthropic said on Friday that "no amount of intimidation or punishment from the Department of War will change our position." [3] (And "We will challenge any supply chain risk designation in court.")
> Designating Anthropic as a supply chain risk would be an unprecedented action — one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments. As the first frontier AI company to deploy models in the US government's classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so. We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government... Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement.
Anthropic also defended the two exceptions they'd requested that had stalled contract negotiations. "[W]e do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights."
Also Friday, OpenAI announced that "we reached an agreement with the Department of War to deploy our models in their classified network." [4] OpenAI CEO Sam Altman emphasized that the agreement retains and confirms OpenAI's own prohibitions [5] against using their products for domestic mass surveillance — and requires "human responsibility" for the use of force including for autonomous weapon systems. "The Department of War agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the Department of War also wanted."
> We are asking the Department of War to offer these same terms to all AI companies, which we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
[1] https://tech.slashdot.org/story/26/02/27/2138211/trump-orders-federal-agencies-to-stop-using-anthropic-ai-tech-immediately
[2] https://x.com/SecWar/status/2027507717469049070?s=20
[3] https://www.anthropic.com/news/statement-comments-secretary-war
[4] https://x.com/sama/status/2027578508042723599
[5] https://slashdot.org/story/26/02/27/1530218/sam-altman-says-openai-shares-anthropics-red-lines-in-pentagon-fight
The only risk (Score:4, Insightful)
Is Trump and his cronies.
The Art of the Steal - I mean, Deal. (Score:2)
> ... after contract negotiations stalled when Anthropic requested ...
The people in this Administration apparently like wielding sticks, not carrots, and anyone should be worried about them negotiating in good faith, especially given how they behaved during active negotiations with Venezuela and Iran.
What am I missing here? (Score:5, Interesting)
So this article makes it sound like OpenAI offered the exact same terms as Anthropic, yet the latter is deemed a "supply-chain risk" whereas the former is fine?
Re:What am I missing here? (Score:5, Insightful)
These are people who treat laughably childish assertions of dominance as the point; so odds are it was largely just about dick-waving vs. the 'woke' and attempting to normalize the DoD's ability to directly punish elements of the civilian economy that don't fall in line with el presidente, not about some capability Anthropic wasn't selling them, if they even have it (which they potentially do for domestic surveillance and propaganda operations; LLMs allegedly have some value for doing text attribution by style and speech-to-text, and they certainly have utility for more sophisticated sockpuppeting; it's much less clear that the big-name LLM guys have anything super interesting in machine vision of the sort you'd want for geospatial analysis or terminal guidance).
There's also the possibility that Hegseth and friends have roughly the same understanding of 'AI' as your average dangerously clueless optimist taking medical advice from ChatGPT, and genuinely believe that the techbros are holding out on them when it comes to developing Skynet or the assorted near-miracles that the so-called "Genesis Mission" is allegedly going to deliver; in which case they might believe that they are actually being denied a capability they will want in the more or less near future. But my money would mostly be on it being an attempt to demonstrate dominance rather than a meaningful dispute.
The idea that it's a dominance play seems especially likely given that they are throwing around the threat of 'supply chain risk' designation, rather than going with the much more banal 'RFP says we need AI that can be used for killbots and agentic Stasi; if your product doesn't do that, it's not in the running.' It's not like the DoD doesn't buy tons of nonlethal products and services of various sorts all the time, mostly without incident, or normally makes any fuss about just not-buying products that don't meet their requirements, without threatening to blacklist the vendor. A 'power move' from people with the crudest and most puerile understanding of power.
Re: What am I missing here? (Score:3)
I rather wonder if OpenAI's statement about how it'll be used is a lie. If Anthropic got admonished in public, it would seem that if OpenAI really had the same stipulations on use, they would also get told off. The other option is that Hegseth had another reason to deny Anthropic.
Re: (Score:2)
I’m guessing that Anthropic wanted the power to verify their product wasn’t being used to create autonomous killing weapons, which would mean full audit access to basically the entire US military R&D ecosystem. This would not happen in ANY universe under ANY president. OpenAI was happy with a pinky-promise and a giggle.
Re: (Score:2)
I believe the core difference is that Anthropic was enforcing the restrictions in the model itself. So for the Pentagon to get what it wanted would require a rebuild of the model, which Anthropic refused. OpenAI, however, seems to have gotten these assurances only on paper in the contract, and nothing now stops the Pentagon from actually using the models as it desires.
Didn't anyone watch Terminator? (Score:2)
Unleashing killbots hell-bent on destroying humanity would totally own the libs, obviously.
Unwarranted Outrage (Score:2)
It's the military. What did you expect? They aren't playing games. If you want to contract with them, it's on their terms. No different than any other company wanting to contract with them. Good for Anthropic for saying no. It's their call. I'm not sure why this all had to be public theater. Anthropic didn't provide what the military wanted. Contract cancelled. Not much else to see here.
Re: (Score:2)
That would be the case; except that they are also threatening the 'supply chain risk' designation.
Just not-buying something that doesn't suit your purposes would be normal; saying that none of the people you do business with can do business with the guy you have chosen not to do business with is both extreme and clearly intended to be punitive. "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic"
Re: (Score:2)
It may feel like punishment, given the temperament of the current administration. But the logic is pretty sound. The military wanted Anthropic tech as part of their supply chain. Anthropic said no, we won't provide it unless you accept our terms. Military said no thank you, we don't agree to those terms. Military now has a supply chain issue they need to solve. If a company disrupts your supply chain, what's to prevent them from disrupting their other clients who have contracts with the military? Again, this isn'
Re: (Score:2)
i sort of called this yesterday and the day before. it's not just the military and it's not just contract cancelled, it's throwing anthropic into a fair bit of trouble they might not be able to cope with. why? the modus operandi is pretty consistent: accept my outrageous and unacceptable demands or get crushed. this is not how you negotiate or void a contract, this is how you bully or go for the kill. it actually looks more like the alliances were solidified beforehand and this is a deliberate move to elimi
"lawful purpose" (Score:5, Insightful)
Trump and his supporters have been very clear from the beginning that the law is whatever Donald Trump says it is. So "lawful purpose" is a totally pointless fig leaf. OpenAI's models will be used for whatever purpose the government wants.
Re: (Score:2)
Against the people for sure.
I actually have some (Score:2)
agreement with the administration on this one. Military R&D takes place in secret behind layers of locked doors. The only way Anthropic could actually be sure their product isn’t weaponized would be if Uncle Sam gave Anthropic the power to poke into literally every corner of our government’s secrets and audit their work. No. Just no. Not. Gonna. Happen. Not under *any* president. Regardless of how you feel about the military, war, this administration or AI in general. No way the US governme
Re: (Score:1)
"Military R&D takes place in secret behind layers of locked doors."
It's my understanding that Anthropic is the entity that will be doing the R&D and the military is the entity that will be using the results of said R&D.
Just like it's Lockheed (or whoever) that builds the plane and Air Force pilots who fly it.
Re: (Score:2)
Unsure why someone downmodded this. Pretty sure the poster is correct. They’re gonna put the software in all sorts of systems. No, Anthropic would definitely NOT be the only one to develop.
Please don't call it "the dept of war" (Score:3)
We all know that is yet another vanity title to please the orange king, which will get reverted when he dies/goes more insane/loses power.
Sam Altman, American Hero! (Score:4, Interesting)
He's going to save us all by watching us all very, very carefully. Especially the young guys. Then his AI will kill us all.