Sam Altman Answers Questions on X.com About Pentagon Deal, Threats to Anthropic (x.com)
- Reference: 0180876130
- News link: https://news.slashdot.org/story/26/03/01/0233230/sam-altman-answers-questions-on-xcom-about-pentagon-deal-threats-to-anthropic
- Source link: https://x.com/sama/status/2027900042720498089
Altman said Saturday that enforcing that "Supply-Chain Risk" designation on Anthropic "would be very bad for our industry and our country, and obviously their company. We said [that] to the Department of War before and after. We said that part of the reason we were willing to do this quickly was in the hopes of de-escalation.... We should all care very much about the precedent... To say it very clearly: I think this is a very bad decision from the Department of War and I hope they reverse it. If we take heat for strongly criticizing it, so be it."
Altman also said that for a long time, OpenAI was planning to do "non-classified work only," but this week found the Department of War "flexible on what we needed..."
> Sam Altman: The reason for rushing is an attempt to de-escalate the situation. I think the current path things are on is dangerous for Anthropic, healthy competition, and the U.S. We negotiated to make sure similar terms would be offered to all other AI labs.
>
> I know what it's like to feel backed into a corner, and I think it's worth some empathy to the Department of War. They are... a very dedicated group of people with, as I mentioned, an extremely important mission. I cannot imagine doing their work. Our industry tells them "The technology we are building is going to be the high order bit in geopolitical conflict. China is rushing ahead. You are very behind." And then we say "But we won't help you, and we think you are kind of evil." I don't think I'd react great in that situation. I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them.
>
> Question: Are you worried at all about the potential for things to go really south during a possible dispute over what's legal or not later on and be deemed a supply chain risk...?
>
> Sam Altman: Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk. I am still very hopeful this is going to get resolved, and part of why we wanted to act fast was to help increase the chances of that...
>
> Question: Why the rush to sign the deal? Obviously the optics don't look great.
>
> Sam Altman: It was definitely rushed, and the optics don't look good. We really wanted to de-escalate things, and we thought the deal on offer was good.
>
> If we are right and this does lead to a de-escalation between the Department of War and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as rushed and uncareful. I don't know where it's going to land, but I have already seen promising signs. I think a good relationship between the government and the companies developing this technology is critical over the next couple of years...
>
> Question: What was the core difference why you think the Department of War accepted OpenAI but not Anthropic?
>
> Sam Altman: [...] We believe in a layered approach to safety--building a safety stack, deploying FDEs [embedded Forward Deployed Engineers] and having our safety and alignment researchers involved, deploying via cloud, working directly with the Department of War. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. We feel that it's very important to build safe systems, and although documents are also important, I'd clearly rather rely on technical safeguards if I only had to pick one...
>
> I think Anthropic may have wanted more operational control than we did...
>
> Question: Were the terms that you accepted the same ones Anthropic rejected?
>
> Sam Altman: No, we had some different ones. But our terms would now be available to them (and others) if they wanted.
>
> Question: Will you turn off the tool if they violate the rules?
>
> Sam Altman: Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won't do is turn it off because we disagree with a particular (legal military) decision. We trust their authority.
Questions were also answered by OpenAI's head of National Security Partnerships (who at one point [5]posted that they'd managed the White House response to the Snowden disclosures and helped write the post-Snowden policies constraining surveillance during the Obama years). And they stressed that with OpenAI's deal with the Department of War, "We control how we train the models and what types of requests the models refuse."
> Question: Are employees allowed to opt out of working on Department of War-related projects?
>
> Answer: We won't ask employees to support Department of War-related projects if they don't want to.
>
> Question: How much is the deal worth?
>
> Answer: It's a few million $, completely inconsequential compared to our $20B+ in revenue, and definitely not worth the cost of a PR blowup. We're doing it because it's the right thing to do for the country, at great cost to ourselves, not because of revenue impact...
>
> Question: Can you explicitly state which specific technical safeguard OpenAI has that allowed you to sign what Anthropic called a 'threat to democratic values'?
>
> Answer: We think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. Other AI labs (including Anthropic) have reduced or removed their safety guardrails and relied on usage policies as their primary safeguards in national security deployments. Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. These are the terms we negotiated in our contract.
They also [6]detailed OpenAI's position on LinkedIn:
> Deployment architecture matters more than contract language. Our contract limits our deployment to cloud API. Autonomous systems require inference at the edge. By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware...
>
> Instead of hoping contract language will be enough, our contract allows us to embed forward deployed engineers, commits to giving us visibility into how models are being used, and gives us the ability to iterate on safeguards over time. If our team sees that our models aren't refusing queries they should refuse, or there's more operational risk than we expected, our contract allows us to make modifications at our discretion. This gives us far more influence over outcomes (and insight into possible abuse) than a static contract provision ever could.
>
> U.S. law already constrains the worst outcomes. We accepted the "all lawful uses" language proposed by the Department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract. And because laws can change, having this codified in the contract protects against changes in law or policy that we can't anticipate.
[1] https://x.com/sama/status/2027900042720498089
[2] https://tech.slashdot.org/story/26/02/27/2138211/trump-orders-federal-agencies-to-stop-using-anthropic-ai-tech-immediately
[3] https://tech.slashdot.org/story/26/02/28/2028232/us-threatens-anthropic-with-supply-chain-risk-designation-openai-signs-new-war-department-deal
[4] https://slashdot.org/story/26/02/27/1530218/sam-altman-says-openai-shares-anthropics-red-lines-in-pentagon-fight
[5] https://x.com/natseckatrina/status/2027931400775627188
[6] https://www.linkedin.com/posts/katrinaemmons_our-agreement-with-the-department-of-war-activity-7433627924815163392-hMRw/?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAIw4lsBWfASot8C1f8PgnZIljzqjXrZkXE
Deployment proved this tech is hard to control (Score:5, Insightful)
It's being used for crime, child porn, scams, disinformation.
Its strength is being general purpose, and that's also its weakness.
In that respect it has the same devilish appeal of Social Media and Crypto. It's a good looking snake.
You could say Altman's naive if he believes he can take on Uncle Sam. Mendacious is more likely. He's burning venture capital and making lots of unfounded promises that just aren't bearing out in reality, while producing low-quality, industrialized content. We see everywhere that hand-made items are valued more than mass-produced ones, and Gen AI is content mass production.
LLMs are a really useful tool for augmenting work, but they're unreliable. The flaw in the kernel is that they depend on data from unreliable, inconsistent humans. We frequently lie, exaggerate, make mistakes, and call opinions and beliefs the truth. LLM-generated content compounds our errors.
"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." Dr. Ian Malcolm (Jeff Goldblum), Jurassic Park:
Re: (Score:2)
> "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." Dr. Ian Malcolm (Jeff Goldblum), Jurassic Park:
Spend billions of dollars of venture capital money to build the world's biggest chatbot? That is a definite yes. It's almost as great as using Libyan uranium to make a bomb case filled with used pinball machine parts.
hmmm, (Score:3)
> Sam Altman: ... we believe the U.S. government is an institution that does its best to follow law and policy.
Santa Claus is real too.
Re: (Score:2)
I laughed out loud at that one too.
On the verge of bankruptcy (Score:2)
ChatGPT is on the verge of bankruptcy. They must still secure literally billions of dollars to keep from going under, and that is unlikely to happen. Google is going to eat their lunch with its AI offering, and Google is definitely financially secure. Altman mentioned that eventually they'll "ask ChatGPT how to turn a profit." It's going to be interesting.
For normal people (Score:5, Informative)
1. The "Department of War" given in the summary is actually referring to the Department of Defense. The term "Department of War" is a nickname given by the Trump administration, it has no legal status.
2. One way Musk dealt with the fact that his thinned-down Twitter development group was no longer able to maintain a scalable system was to ban people who don't have accounts with X from viewing threads. This reduces the load on X's servers slightly. Fortunately, third parties have filled the gap. You can read the thread [1]here [xcancel.com].
You're welcome.
[1] https://xcancel.com/sama/status/2027900042720498089
Somebody has to be the villain in this story... (Score:1)
...and Sam Altman and OpenAI just strode confidently through that door and onto the stage.
Re: (Score:2)
Musk has something to say about that.
Maybe he hasn't been keeping up with the news? (Score:2)
"...but we believe the U.S. government is an institution that does its best to follow law and policy. "
News bulletin: this just in (Score:2)
Camp follower follows camp. Film at 11.
tl;dr (Score:2, Flamebait)
Sam Altman is a weasel. A very rich weasel.
Re: (Score:2)
Truth. And no surprise he's spewing his bullshit on X, the village commons that a rich weasel handed over to the Nazis.