Anthropic Safety Researcher Quits, Warning 'World is in Peril' (semafor.com)
- Reference: 0180772054
- News link: https://slashdot.org/story/26/02/11/1849224/anthropic-safety-researcher-quits-warning-world-is-in-peril
- Source link: https://www.semafor.com/article/02/11/2026/anthropic-safety-researcher-quits-warning-world-is-in-peril
> An Anthropic safety researcher quit, saying [1] the "world is in peril" in part over AI advances. Mrinank Sharma said the safety team "constantly [faces] pressures to set aside what matters most," citing concerns about bioterrorism and other risks.
>
> Anthropic was founded with the explicit goal of creating safe AI; its CEO Dario Amodei said at Davos that AI progress is going too fast and called for regulation to force industry leaders to slow down. Other AI safety researchers have left leading firms, citing concerns about catastrophic risks.
[1] https://www.semafor.com/article/02/11/2026/anthropic-safety-researcher-quits-warning-world-is-in-peril
We have lost our ability to debate and decide (Score:4, Insightful)
I don't have a good idea of what caused it, but watch any TV debate from before roughly the 1980s and you are likely to find a logical discussion, with little tolerance for lies, exaggeration, or logical error. That seems to have gone now.
Scientists might show that we are in trouble, and the result is that the most powerful deride the scientists, and most voters are incapable of the thought required to arrive at an informed opinion.
Politics and social debate need to fundamentally change.
Re: (Score:2)
We know what happened: in the '80s everyone got their news from the same sources. Today we get news from our own little silos and accuse the people in other silos of being uninformed. I don't know what you expect could change that, since any attempt to hold the other side accountable would just look like an act of aggression and make things worse.
Science: the god that failed (Score:1)
What happened was that the predictions of science didn't live up to the hype, and people realized that many scientists were actually idiots, grifters or sociopaths.
It's pretty much a trope that coffee has been bad for us one week and then good for us again the next week, ad infinitum.
There's a limit to the number of times people will listen to failed prophets.
You might also want to read Eisenhower's Farewell Address where he warns of the risks of government funding of science. The mass production of 'scient
Re: (Score:3)
> the predictions of science didn't live up to the hype
1/10 really weak trolling. fuck off.
Re: (Score:2)
Yeah, because Ronnie Raygun didn't make up a bunch of BS about welfare queens that years later turned out to be exactly that. Never mind the back-door comms to Iran to keep the hostages held until he secured the Presidency ... etc.
The '80s were not a bastion of good behavior, and one can't have an honest conversation when folks lie and hide things.
It's all BS and lies by folks in power.
Ah yes (Score:2)
Quitters, the true heroes
Re: (Score:2)
Actually it is a smart move, and the guy is spot on about security: there is none of it in the "AI".
The "architecture" of Claude and the whole "AI" crap is beyond ridiculous.
You have a stateless "AI" produce unsafe code that nobody checks, then the descriptors of that code get fed back to the "AI" on every request so that it can re-learn what it did and call some other process to run that very code. Or some similar code. Or some code that has that descriptor. Or something. (Roughly the loop sketched below.)
And 99.9% of the people who do tha
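For readers who haven't seen the pattern this comment is criticizing, here is a minimal sketch of a stateless tool-calling agent loop. All of the names (call_model, run_unreviewed, TOOL_DESCRIPTORS, the message format) are hypothetical stand-ins, not Anthropic's or anyone else's actual API; the point is only that the entire transcript plus the tool descriptors are re-sent on every request, and whatever code comes back runs with no review step.

```python
import json

# Hypothetical tool descriptor re-sent to the model on every request.
TOOL_DESCRIPTORS = [{
    "name": "run_python",
    "description": "Execute arbitrary Python code",
    "parameters": {"code": "string"},
}]

def call_model(messages, tools):
    """Stand-in for a real LLM API call; returns either text or a tool call."""
    raise NotImplementedError("replace with an actual model endpoint")

def run_unreviewed(tool_call):
    """Runs whatever code the model asked for -- note the absence of any review."""
    exec(json.loads(tool_call["arguments"])["code"])  # the part the comment calls unsafe
    return "ok"

def agent_loop(user_request, max_turns=10):
    # The model itself is stateless: the whole conversation plus the tool
    # descriptors are re-sent with every single request so it can "re-learn"
    # what it already did on previous turns.
    messages = [{"role": "user", "content": user_request}]
    for _ in range(max_turns):
        reply = call_model(messages, TOOL_DESCRIPTORS)
        if reply.get("tool_call"):
            messages.append({"role": "assistant", "content": json.dumps(reply["tool_call"])})
            messages.append({"role": "tool", "content": run_unreviewed(reply["tool_call"])})
        else:
            return reply["content"]
    return None
```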
Yeah no (Score:2)
AI safety people, much like "safety" people in customer-facing web companies, were taken on in the second half of the 2010s as a concession to various left-aligned activist groups, in what may be accurately described less as being down with the revolution and more as "please don't hurt me, here's a no-work job for your cousin."
That is to say, these people were political officers whose grasp of the tech or the implications of the tech was always secondary to their political loyalties.
Now that many of them have b
Crying wolf...yet again. (Score:2)
Over a billion dollars was spent on a persistent global scare-mongering / PR blitz of utter bullshit that ultimately amounted to absolutely nothing. Since then, enough people have used and/or been subjected to AI. According to public polling, the vast majority now find it mildly useful while being largely annoyed by the nonsense and zero-effort slop.
Bio-terrorism risks were a concern long before the rise of generative AI, fueled by the reduced cost and improved capability of enabling technology. Despite nons
Last year it was openai (Score:3)
The company we could not do without.
This year it is arthritic, the outfit that admitted that "AI" is a dead end, but then slapped a never-ending loop of re-parsing long JSON bits onto that shit and called it a business model.
It is almost like all knowledge of architecture design is gone and the whole of "AI" is vibe-coded by stoned schoolchildren.
Wait, exactly as it was in the year 2000.