Anthropic CEO Says He's 'Deeply Uncomfortable' With Unelected Tech Elites Shaping AI (businessinsider.com)
- Reference: 0180107515
- News link: https://tech.slashdot.org/story/25/11/17/1518202/anthropic-ceo-says-hes-deeply-uncomfortable-with-unelected-tech-elites-shaping-ai
- Source link: https://www.businessinsider.com/anthropic-ceo-dario-amodei-unelected-tech-leaders-shaping-ai-concerned-2025-11
> "I think I'm deeply uncomfortable with these decisions being made by a few companies, by a few people," Amodei told Anderson Cooper in a "60 Minutes" episode that aired Sunday. "Like who elected you and Sam Altman?" asked Anderson. "No one. Honestly, no one," Amodei replied.
Flip side (Score:3, Interesting)
Would he actually be more comfortable with our elected non-tech elites making the big decisions?
I just don't see our legislative process or administrative state as terribly equipped to deal with shaping AI technology.
I think their job is to:
1) Ensure society's existing guardrails are uniformly and fairly applied to all, independent of whether AI has anything to do with the activity or not.
2) Respond reactively. If we identify a specific activity that, when coupled with AI, is in some way corrosive to the society we generally want to have, then enact legislation to curb it in that area. While anticipating problems and trying to avoid them is generally good practice, with something evolving this rapidly I believe you usually create more issues if you go trying to solve problems you don't yet know you have.
A good example is workforce reduction. A lot of people are convinced there is going to be a huge wave of job losses directly attributable to AI, but we don't really have any evidence of that yet. There are plenty of equally plausible explanations for the current rise in the unemployment rate. So if you go legislating a bunch of 'things' companies are not allowed to use ML/AI tech for, and it turns out the unemployment uptick isn't AI related, all you have done is limit productivity gains and create more economic drag.
It is important to keep in mind that this is mostly just computers filling out paperwork, taking down orders, and churning out questionable-quality music and video clips. Hardly things we can't 'shut off' if need be. It is nowhere near as destructive and irreversible as all kinds of development projects we often give the private sector a long leash to run with.
Elected (Score:2)
It wouldn't be the elected elites. It would be the bureaucrats making the rules who, after tailoring regulations to favor some companies over others, would then go work for those companies.
Re: (Score:2)
I mean, the elected elites don't have time to make legislation; they are too busy going on talk shows and podcasts and being wined and dined by unelected elites.
The evidence is that, often enough, their patrons write the legislation and regulations and send them in for a rubber stamp.
Re: (Score:2)
Actually, the delegation of powers to various government entities and the people is arguably just as important. That's why we have democratically elected representatives instead of a pure democracy. Our founding fathers feared the resulting mob rule.
I am totally comfortable with Corps making AI. (Score:2)
Because AI is not very important. Large Language Models are morons, not artificially intelligent.
No intelligent human lets an LLM do anything important beyond suggesting stuff.
LLMs do a lot of minor tasks.
Yes, corps could use LLMs to feed people propaganda. Guess what: they did that BEFORE LLMs, and if LLMs vanished, they would still be doing it.
You've missed the elephant (Score:4, Insightful)
LLMs make a lot of mistakes, but the tech bros don't care; they're using them for all sorts of things, including supposed self-driving cars. If the AI fucks up and causes issues, well, in appendix section 16, subsection A, paragraph 21 there'll be a clause explicitly exempting the AI company from any responsibility, and in jurisdictions where that disclaimer is void, then what the hell, they've made billions anyway and they'll just settle out of court.
Re: (Score:2)
> No intelligent human lets a LLM do anything important beyond suggesting stuff.
And if intelligent humans were the only ones holding political power, managing infrastructure, litigating court cases, writing computer programs, etc., then we'd be fine. Obviously, we're not fine.
> Yes corps could use LLMs to feed people propaganda. Guess what, they did that BEFORE LLMs and if LLMs vanish, they would still be doing it.
LLMs can do it faster and more effectively. Even now, in many cases they can do it more convincingly. Saying that LLMs don't increase the scope and effectiveness of propaganda is like saying that nuclear warheads don't increase the scope and effectiveness of military actions. The latter of which, by the way, are
business insider with the scoop! (Score:2)
Slashdot posting an article about an interview on "60 Minutes."
It's like the human centipede.
Translation (Score:2)
It pains me a lot, but I find solace lying on my pile of cash.
Oh no (Score:2)
"I'm so uncomfortable with myself".
Elites posturing about their victimhood taken to yet another level.
Who elected Toru Iwatani to make Pac-Man? (Score:3)
People do stuff. WTF, are we supposed to have a world-wide committee meeting every time some hacker starts a random project?
Sam Altman can have his own "AI," with blackjack and hookers. If you don't want yours to have that, then write it differently. If his project is affecting yours, it's because he's on the sharp end, running into scaling issues and regulators first. Let him bear the brunt of that, so you don't have to.
The only thing that can really go wrong is if he uses his financial influence to get a government-granted monopoly. (And you'll have my support in opposing that.) Until then, though, how much is he shaping things? You can do something other than what he is doing right now. He isn't in charge of your project, is he?
Re: (Score:2)
Equating AI with Pac-Man isn't really the intellectual flex you probably think it is.
His Whole Pitch is Safety (Score:3)
Anthropic's entire pitch has always been safety. Innovation like this tends to favor a very few companies, and it leaves behind a whole pile of losers that also had to spend ridiculous amounts of capital in the hopes of catching the next wave. If you bet on the winning company, you make a pile of money; if you pick one of the losers, the capital you invested evaporates. Anthropic has positioned itself as OpenAI, except with safeguards, and that could very well be the formula that wins the jackpot. Historically, litigation and government sponsorship have been instrumental in picking winners.
However, as things currently stand, Anthropic is unlikely to win on technical merits over its competition. So Dario's entire job as CEO is basically to get the government involved. If he can create enough doubt about the people currently making decisions in AI circles that the government gets involved, either directly through government investment or indirectly through legislation, then his firm has a chance at grabbing the brass ring. That's not to say that he is wrong; he might even be sincere. It is just not surprising that his pitch is that AI has the potential to be wildly dangerous and we need to think about safety. That's essentially the only path that makes his firm a viable long-term player.
Anthropic CEO Dario Amodei (Score:3)
His surname is one transposition away from "AI Mode".
The answer is easy (Score:1)
Release your models open source and open weight.
This tech should not be controlled by monopolists or governments.
It should be available to all.
It's called Capitalism (Score:1)
> "I think I'm deeply uncomfortable with these decisions being made by a few companies, by a few people," Amodei told Anderson Cooper in a "60 Minutes" episode that aired Sunday. "Like who elected you and Sam Altman?" asked Anderson. "No one. Honestly, no one," Amodei replied.
When you get control of the money, you get control of the means of production. That's literally what capitalism is for.
Re: (Score:1)
What you are describing is called Plutocracy, not capitalism.
Plutocracy is rule by the rich. Nobody wants to admit that, so they often lie and claim to be capitalists.
Capitalism is about the Free Market (Free as in choice) not ruling.
Re: (Score:2)
There is no ruling going on here.
There is no rule that makes you use it. You are free to choose competitors or not use any AI tool at all. They offer a service, and it is entirely up to you whether you want to use it; you are not being ruled.
Re: It's called Capitalism (Score:4, Insightful)
Yes but the free market naturally trends towards consolidation, and thus plutocracy.
Re: It's called Capitalism (Score:2)
Most nations operate a mixed economy, not a free market.
So some regulation of markets exists almost everywhere. In cases where the political organs answer to the largest and most influential donors, you get a plutocracy. In cases where they answer to the people, you have a representative democracy.
Re: (Score:2)
> Plutocracy is rule by the rich.
The largest (wealthiest) bloc of shareholders in this country is the pension funds. Don't like the way business is being run? Complain to your union rep.