News: 0177593043

  Give a man a fire and he's warm for a day, but set fire to him and he's warm for the rest of his life (Terry Pratchett, Jingo)

Is the Altruistic OpenAI Gone? (msn.com)

(Saturday May 17, 2025 @05:34PM (EditorDavid) from the rise-of-the-machines dept.)


"The altruistic OpenAI is gone, if it ever existed," [1]argues a new article in the Atlantic , based on interviews with more than 90 current and former employees, including executives. It notes that shortly before Altman's ouster (and rehiring) he was "seemingly trying to circumvent safety processes for expediency," with OpenAI co-founder/chief scientist Ilya telling three board members "I don't think Sam is the guy who should have the finger on the button for AGI." (The board had already discovered Altman "had not been forthcoming with them about a range of issues" including a breach in the Deployment Safety Board's protocols.)

Adapted from the upcoming book, [2]Empire of AI, the article first revisits the summer of 2023, when Sutskever ("the brain behind the large language models that helped build ChatGPT") met with a group of new researchers:

> Sutskever had long believed that artificial general intelligence, or AGI, was inevitable — now, as things accelerated in the generative-AI industry, he believed AGI's arrival was imminent, according to Geoff Hinton, an AI pioneer who was his Ph.D. adviser and mentor, and another person familiar with Sutskever's thinking.... To people around him, Sutskever seemed consumed by thoughts of this impending civilizational transformation. What would the world look like when a supreme AGI emerged and surpassed humanity? And what responsibility did OpenAI have to ensure an end state of extraordinary prosperity, not extraordinary suffering?

>

> By then, Sutskever, who had previously dedicated most of his time to advancing AI capabilities, had started to focus half of his time on AI safety. He appeared to people around him as both boomer and doomer: more excited and afraid than ever before of what was to come. That day, during the meeting with the new researchers, he laid out a plan. "Once we all get into the bunker — " he began, according to a researcher who was present.

>

> "I'm sorry," the researcher interrupted, "the bunker?"

>

> "We're definitely going to build a bunker before we release AGI," Sutskever replied. Such a powerful technology would surely become an object of intense desire for governments globally. The core scientists working on the technology would need to be protected. "Of course," he added, "it's going to be optional whether you want to get into the bunker." Two other sources I spoke with confirmed that Sutskever commonly mentioned such a bunker. "There is a group of people — Ilya being one of them — who believe that building AGI will bring about a rapture," the researcher told me. "Literally, a rapture...."

>

> But by the middle of 2023 — around the time he began speaking more regularly about the idea of a bunker — Sutskever was no longer just preoccupied by the possible cataclysmic shifts of AGI and superintelligence, according to sources familiar with his thinking. He was consumed by another anxiety: the erosion of his faith that OpenAI could even keep up its technical advancements to reach AGI, or bear that responsibility with Altman as its leader. Sutskever felt Altman's pattern of behavior was undermining the two pillars of OpenAI's mission, the sources said: It was slowing down research progress and eroding any chance at making sound AI-safety decisions.

"For a brief moment, OpenAI's future was an open question. It might have taken a path away from aggressive commercialization and Altman. But this is not what happened," the article concludes. Instead there was "a lack of clarity from the board about their reasons for firing Altman." There was fear about a failure to realize their potential (and some employees feared losing a chance to sell millions of dollars' worth of their equity).

"Faced with the possibility of OpenAI falling apart, Sutskever's resolve immediately started to crack... He began to plead with his fellow board members to reconsider their position on Altman." And in the end "Altman would come back; there was no other way to save OpenAI."

> To me, the drama highlighted one of the most urgent questions of our generation: How do we govern artificial intelligence? With AI on track to rewire a great many other crucial functions in society, that question is really asking: How do we ensure that we'll make our future better, not worse? The events of November 2023 illustrated in the clearest terms just how much a power struggle among a tiny handful of Silicon Valley elites is currently shaping the future of this technology. And the scorecard of this centralized approach to AI development is deeply troubling. OpenAI today has become everything that it said it would not be....

The author believes OpenAI "has grown ever more secretive, not only cutting off access to its own research but shifting norms across the industry to no longer share meaningful technical details about AI models..."

"At the same time, more and more doubts have risen about the true economic value of generative AI, including a [3]growing [4]body of [5]studies that have shown that the technology is not translating into productivity gains for most workers, while it's also [6]eroding their critical thinking ."



[1] https://www.msn.com/en-us/news/technology/we-re-definitely-going-to-build-a-bunker-before-we-release-agi/ar-AA1ERwqj

[2] https://bookshop.org/p/books/empire-of-ai-dreams-and-nightmares-in-sam-altman-s-openai-karen-hao/4d0c1753c458e708

[3] https://hbr.org/2024/01/is-genais-impact-on-productivity-overblown

[4] https://www.goldmansachs.com/insights/top-of-mind/gen-ai-too-much-spend-too-little-benefit

[5] https://investors.upwork.com/news-releases/news-release-details/upwork-study-finds-employee-workloads-rising-despite-increased-c

[6] https://www.404media.co/microsoft-study-finds-ai-makes-human-cognition-atrophied-and-unprepared-3/



Just look at who founded and financed OpenAI... (Score:4, Informative)

by ffkom ( 3519199 )

... and you know that there never was an "altruistic OpenAI" — just some hardcore capitalists who happened to pick up on the trend of calling things "open" because it earned them free PR points.

Never existed (Score:4, Insightful)

by hdyoung ( 5182939 )

It was always done with an eye towards monetization, right from the start. But our economy has this weird grey-zone arrangement that amounts to "we're legally a non-profit, but not really; we want to make money, and everybody knows it, but nobody is gonna talk about it."

Combine that with private ownership, and you've got a perfect recipe for complete murkiness, which can make doing business a lot easier than answering to those pesky shareholders and paying taxes.

DUH! (Score:2)

by Gravis Zero ( 934156 )

Are you kidding me? Altruism at OpenAI has been dead so long that the idea that it ever existed is being questioned.

oh my gosh humans are screwing up again? (Score:2)

by laxr5rs ( 2658895 )

I can't believe it!

Funny yet normal (Score:3)

by peterww ( 6558522 )

...how borderline moronic and insane talented, motivated people can be. Sutskever seems like a genius and a crazy person simultaneously. A computer program bringing about a rapture? Bunkers? AGI "in 10 years" (for the last 30 years)? Turns out this kind of genius-yet-crazy is pretty common.

- Newton was super smart. But he also thought he could find the date for the rapture hidden in codes in random texts. And of course spent half his life studying alchemy.

- Einstein had some really great ideas, but some stinkers too.

- Alfred Russel Wallace, the guy who thought up evolution right before Darwin, was obsessed with seances to try to talk to the dead.

- Joseph Priestley discovered oxygen, and kept using it to try to prop up phlogiston theory... while every other scientist in the world used it to prove why phlogiston *doesn't* exist.

- Francis Crick, co-discoverer of DNA's structure, also said that life arrived on Earth because of aliens (directed panspermia).

- James Watson, the other co-discoverer of DNA's structure, firmly believes all black people are less smart than white people, based on his interactions with black employees. Oh, and women scientists are more difficult to work with. And we should alter the genes of "inferior" people.

People: we have nukes. NUKES. And they're not even controlled by strictly designed, non-AI computer programs to keep the system safe. They're controlled by humans. Humans like *Trump*. Crazy, insane, aggressive, emotional humans. Yet we aren't all dead yet. A lot of people did hide in bunkers, when nukes came out. But, amazingly, despite this _super dangerous technology_, we aren't all dead yet.

Bruh (Score:3)

by systemd-anonymousd ( 6652324 )

This question is at least three years too late. Around GPT-2, Sam Altman established his marketing/scam loop of:

1. The model is sooo dangerous we'll never ever EVER even show you, let alone release the weights!

2. Okay fine, you can look at it, but it's so dangerous we'll never ever let you touch it!

3. Okay fine, celebrities and influencers are allowed to touch it, but definitely not you, it's way too dangerous!

4. Okay fine, you can touch the thing, but it's so flippin' dangerous that we're going to have to charge you a fee. And those weights? No way, humanity can't handle that! Only our for-profit non-profit that makes billion-dollar deals with defense contractors can be trusted with that!

5. Erm, wow, you violated our terms of service for responsible usage. You've lost the possibility of early access to the next model.

Also, somewhere in there, Sam Altman gets expelled from Kenya for refusing to comply with orders to stop scanning locals' irises in exchange for shitcoin (Worldcoin).

I like how we all have to pretend (Score:4, Insightful)

by rsilvergun ( 571051 )

We don't have a ruling class, or, if we do acknowledge their existence, we all pretend they are benevolent benefactors and not cutthroat, bloodthirsty psychopaths.

It's weird, because we grow up with movies telling us just how bad corporations and CEOs and Wall Street ghouls really are, and we're taught actual history about how god-awful they are, but then when we get out into the real world, all it takes is a little tiny bit of charity giveaways here and there and suddenly we act like they're saints.

Re: (Score:3)

by ffkom ( 3519199 )

> we grow up with movies telling us just how bad corporations and CEOs and Wall Street ghouls really are and we have actual history we are taught about how god-awful they are but then when we get out into the real world all it takes is a little tiny bit of Charity giveaways here and there and suddenly we act like they are saints.

That certainly irks me, too, but to be fair: those psychopaths don't have only a negative impact. They also appear to be the ones who make investments happen (either themselves or through gullible followers), some of which end up bringing some actual (technological) progress that would otherwise have died in endless committee discussions about who should spend whose money on what.

I think we should be able to see both the good and the bad in these people, and try to establish legislation that mitigates their worst excesses.

Newsflash (Score:2)

by Kyogreex ( 2700775 )

Turns out that their push for AI safety was all about destroying the competition after all. Who would have guessed?

He's right about the bunker (Score:2)

by VampireByte ( 447578 )

Thanks to AI we'll need to be in a bunker to avoid the flood of ads.

So which is it slashdot? (Score:1)

by sixminuteabs ( 1452973 )

When you are not bitching that this technology isn't even real, or is no more advanced than your sed script from 1994, you bitch that it is so evil and dangerous they should not be allowed to make a profit. If you picked a lane and stuck with it, you all might have a shred of credibility.

Get hold of portable property. -- Charles Dickens, "Great Expectations"