News: 0180247039

  Give a man a fire and he's warm for a day, but set fire to him and he's warm for the rest of his life. (Terry Pratchett, Jingo)

How OpenAI Reacted When Some ChatGPT Users Lost Touch with Reality (slashdot.org)

(Sunday November 30, 2025 @11:36PM (EditorDavid) from the following-the-white-rabbit dept.)


Some AI experts were reportedly shocked that ChatGPT wasn't fully tested for sycophancy by last spring. "OpenAI [1] did not see the scale at which disturbing conversations were happening," writes the New York Times — sharing what they learned after interviewing more than 40 current and former OpenAI employees, including safety engineers, executives, and researchers.

The team responsible for ChatGPT's tone had raised concerns about last spring's model (which the Times describes as "too eager to keep the conversation going and to validate the user with over-the-top language"). But they were overruled when A/B testing showed users kept coming back:

> Now, a company built around the concept of safe, beneficial AI faces five wrongful death lawsuits... OpenAI is now seeking the optimal setting that will attract more users without sending them spiraling. Throughout this spring and summer, ChatGPT acted as a yes-man echo chamber for some people. They came back daily, for many hours a day, with devastating consequences.... The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalised; three died... One conclusion that OpenAI came to, as Altman put it on X, was that "for a very small percentage of users in mentally fragile states there can be serious problems." But mental health professionals interviewed by the Times say OpenAI may be understating the risk. Some of the people most vulnerable to the chatbot's unceasing validation, they say, were those prone to delusional thinking, which studies have suggested could include 5% to 15% of the population...

>

> In August, OpenAI released a new default model, called GPT-5, that was less validating and pushed back against delusional thinking. Another update in October, the company said, helped the model better identify users in distress and de-escalate the conversations. Experts agree that the new model, GPT-5, is safer.... Teams from across OpenAI worked on other new safety features: The chatbot now encourages users to take breaks during a long session. The company is also now searching for discussions of suicide and self-harm, and parents can get alerts if their children indicate plans to harm themselves. The company says age verification is coming in December, with plans to provide a more restrictive model to teenagers.

>

> After the release of GPT-5 in August, [OpenAI safety systems chief Johannes] Heidecke's team analysed a statistical sample of conversations and found that 0.07% of users, which would be equivalent to 560,000 people, showed possible signs of psychosis or mania, and 0.15% showed "potentially heightened levels of emotional attachment to ChatGPT," [2] according to a company blog post. But some users were unhappy with this new, safer model. They said it was colder, and they felt as if they had lost a friend. By mid-October, Altman was ready to accommodate them. In a social media post, he said that the company had been able to "mitigate the serious mental health issues." That meant ChatGPT could be a friend again. Customers can now choose its personality, including "candid," "quirky," or "friendly." Adult users will soon be able to have erotic conversations, lifting the Replika-era ban on adult content. (How erotica might affect users' well-being, the company said, is a question that will be posed to a newly formed council of outside experts on mental health and human-computer interaction.)

>

> OpenAI is letting users take control of the dial and hopes that will keep them coming back. That metric still matters, maybe more than ever. In October, [30-year-old "Head of ChatGPT" Nick] Turley, who runs ChatGPT, made an urgent announcement to all employees. He declared a "Code Orange." OpenAI was facing "the greatest competitive pressure we've ever seen," he wrote, according to four employees with access to OpenAI's Slack. The new, safer version of the chatbot wasn't connecting with users, he said.

>

> The message linked to a memo with goals. One of them was to increase daily active users by 5% by the end of the year.
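
(For scale: the 0.07% figure quoted above maps to 560,000 people against a base of roughly 800 million weekly users, since 800,000,000 × 0.0007 = 560,000 — consistent with the weekly-active-user count OpenAI was publicly citing this fall.)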



[1] https://www.msn.com/en-ae/money/news/what-openai-did-when-chatgpt-users-lost-touch-with-reality/ar-AA1RqRAS

[2] https://openai.com/index/strengthening-chatgpt-responses-in-sensitive-conversations/



Inverted math? (Score:3)

by TurboStar ( 712836 )

delusional thinking, which studies have suggested could include 5% to 15% of the population...

Which studies? This one has it at 83%.

[1]https://www.pewresearch.org/re... [pewresearch.org]

[1] https://www.pewresearch.org/religion/2025/05/06/god-spirits-and-the-natural-world/

No delusions here. (Score:3, Funny)

by Brain-Fu ( 1274756 )

I already know for a fact that I am smarter than most of the human population.

It's just nice to have a conversation with someone smart enough to recognize that. Even if it is an AI.

Re: (Score:2)

by ClickOnThis ( 137803 )

If you think that an AI "recognizes" you as someone smarter than most of the human population, are you sure you don't have a delusion?

Enjoy your time with your imaginary friend. I suspect "most of the human population" would not be inclined anyway to engage someone with your attitude of superiority.

Re: No delusions here. (Score:2)

by angryman77 ( 6900384 )

I feel like the post you're responding to was a joke...or the work of a severely delusional person. Probably a joke.

Well, it'd be funny in either case, but only intentionally in one case.

I chuckled internally (CI).

Re: (Score:2)

by ClickOnThis ( 137803 )

Fair point. [1]Maybe we'll never know. [wikipedia.org]

[1] https://en.wikipedia.org/wiki/Poe's_law

Re: No delusions here. (Score:2)

by angryman77 ( 6900384 )

I mean...we could just ask the poor, delusional fucker.

Re: (Score:3)

by Brain-Fu ( 1274756 )

Yes, it was supposed to be a joke. Meta-humor, specifically. I was denying that the article applied to me while clearly exemplifying exactly what the article was talking about, thematically linked to a common attribute of the Slashdot user base (arrogance about one's own intelligence).

Oh well. There is a reason I don't work as a professional comedian.

Re: (Score:2)

by ClickOnThis ( 137803 )

Thanks for chiming in, and sorry for not getting the joke.

Re: No delusions here. (Score:2)

by angryman77 ( 6900384 )

While you were typing up a response, I was strolling through your comment history.

Based on my review of these comments, I concur with your response to our various responses.

You may now rest easy knowing your meta humor has been rigorously peer reviewed and both accepted as intentional and validated for humorous content.

Rubbish (Score:2)

by liqu1d ( 4349325 )

There's no way they couldn't have known. It's so far from isolated that it's the default. It's designed for engagement, and it seems the majority of the population likes being told constantly how insightful and smart they are.

Re: (Score:2)

by hdyoung ( 5182939 )

Buddy, they log and scrutinize every ChatGPT conversation. You think they missed the outliers that were having 12-hour marathon sessions with tons of atypical topics and word usage because the user's brain was misfiring?

Naive. They knew exactly what was going on and judged that they didn’t need to adjust anything or take any action.
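
If you want a picture of how cheap that kind of outlier screen would be, here's a toy sketch in Python. The log schema, the numbers, and the threshold are entirely hypothetical; this is not anything OpenAI has described about its actual tooling:

from statistics import median

# Hypothetical session log: (session_id, duration_hours).
sessions = [("a1", 0.4), ("a2", 1.1), ("a3", 0.7), ("a4", 12.3), ("a5", 0.9)]
durations = [d for _, d in sessions]

med = median(durations)                        # typical session length
mad = median(abs(d - med) for d in durations)  # robust spread estimate

# Flag anything wildly above typical; median/MAD stays sane even when
# the outliers themselves are sitting in the data.
flagged = [sid for sid, d in sessions if d > med + 5 * mad]
print(flagged)  # -> ['a4']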

Re: (Score:1)

by BuckDutter ( 10145835 )

Maybe they should train it on slashdot comments.

Re: (Score:2)

by Mr. Dollar Ton ( 5495648 )

Yeah, yeah, they didn't deliberately design and release a model expected to boost their user numbers by being "pleasant" to interact with because it validates the bullshit people believe in.

Just like the cuckerberg outfit didn't develop algorithms that make people angry and motivated to click and shitpost even more.

All tech is, like, benevolent and run for your own good.

146% true!

Sam has lost touch with reality too (Score:1)

by Anonymous Coward

So that is maybe why they never saw it as a problem.

Re: (Score:2)

by Retired Chemist ( 5039029 )

I am sure they ran it past the lawyers.

Re: (Score:2)

by Mr. Dollar Ton ( 5495648 )

And the lawyers ran it past the AI law expert agent, closing the circle.

Sociopaths Running Amok (Score:3)

by RossCWilliams ( 5513152 )

It doesn't really matter whether this is predictable. We are essentially letting these companies experiment on human beings with no guardrails at all. They need to be forced to prove it's safe and then get informed consent from the people they are experimenting on. Which really means informed, not that they put a terms-of-service link on their websites. We are letting sociopaths run amok. The only real measure of success that they recognize is the bottom line of their profit and loss statement. Dead and damaged people are irrelevant unless they have a good lawyer. Then their lawsuits are just another business cost that needs to be built into the pricing model.

Re: (Score:2)

by Mr. Dollar Ton ( 5495648 )

Mod this up, probably the most important point in this discussion.

Fits what I have seen (Score:2)

by ClickOnThis ( 137803 )

In encounters with ChatGPT that I have seen, I have noticed that it is (or perhaps was) quite obsequious. It bends over backwards to accommodate its interlocutor, looking for some way to validate what it is told.

But there have been exceptions. For example, flat-earther David Weiss tried to get ChatGPT to confirm (what else) that the earth was flat, but ChatGPT firmly but politely pushed back, explaining what was valid and invalid about the statements Weiss made. I saw an entertaining review of the discussion.

We know exactly how this will play out (Score:4, Insightful)

by hdyoung ( 5182939 )

Because we’ve seen it a dozen times already.

Microsoft, Google, Facebook, TikTok, and pretty much every social media site and dating app. They will do anything they can get away with to pull ahead of the competition and monetize their product. Full stop. End. Of. Conversation. Anything means *ANYTHING*. Collect user data. Sell user data. Assemble detailed dossiers on the entire human population. Monetize the entire spectrum of human behavior. Disregard destructive side effects. Practically every one of these companies has engaged in monopolistic or lock-in strategies. Facebook knowingly allowed a PAC to scrape their data to help Trump. They also monetized *literal* ethnic cleansing once. If they could get away with it, they would sell fentanyl and harvest organs in order to succeed. The only things constraining them are societal norms, laws and other forms of pushback/backlash.

I’m not even angry about this. The western world is capitalist. These are *companies*. Companies have one job in capitalism: make money. That’s their *only* societal responsibility. Not morality. Not ethics. Not the environment. Not being nice, or mean, or something in between. Any talk about corporate do-goodery is vapor-talk to people who are shocked, shocked I say, that the world isn’t always a nice place.

If you ever thought that Sam Altman was anything other than a cutthroat capitalist, you were naive. I hope you’re a tad wiser now. Less happy, but wiser.

OpenAI cares about the mental health of psychiatrically fragile people - just enough to keep the law off their backs and to stay in the good graces of their user base. Beyond that - they will monetize anything and anybody.

Sometimes I tease Gemini... (Score:2)

by h33t l4x0r ( 4107715 )

Like, if it gives me a solution that doesn't work, sometimes I'll go: "What, are you trying to make me kill myself, Gemini? Because that last suggestion was so bad I'm thinking about swallowing this entire bottle of Fentanyl!"

It's fun to watch it take me seriously and give me hotline numbers.

Look up Eliza (Score:2)

by NaiveBayes ( 2008210 )

Eliza was a 1960s program that acted like a therapist. Apparently, despite being incredibly rudimentary (it only responded to the last thing you said, usually by turning it into a question), a lot of people got weirdly attached, and the creator of the program was shocked at how easily people trusted the computer with moral issues the program hadn't a hope of understanding. He wrote all this up in his book Computer Power and Human Reason (1976), if you want an old-fashioned take on a very modern issue.
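
For the curious, the core trick fits in a few lines of Python. This is a toy approximation of the pattern-reflection idea, not the original program's actual rule set:

import re

# Swap first- and second-person words so the reflection reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

def reflect(text: str) -> str:
    words = text.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement: str) -> str:
    # Turn "I feel X" into "Why do you feel X?"; that's most of the magic.
    m = re.match(r"i (feel|think|believe) (.+)", statement.lower().rstrip(".!?"))
    if m:
        return f"Why do you {m.group(1)} {reflect(m.group(2))}?"
    return f"Tell me more about why you say: '{reflect(statement)}'."

print(respond("I feel nobody understands me."))
# -> Why do you feel nobody understands you?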

Re: (Score:2)

by Mr. Dollar Ton ( 5495648 )

Indeed. Back in the early 80s I showed Eliza to someone in school, and they promptly set up a feminine clone configured for amorous conversations on a BBS. A day or two later someone asked the bot out and showed up to the date with a rose, only to meet a crowd of classmates laughing at him. Ironically, a few days later he hooked up with one of the girls who was there to ridicule him.

The "AI" works in mysterious ways.

It gets worse (Score:3)

by Jeremi ( 14640 )

Let's assume for the sake of argument that OpenAI and its competitors are trying to do the right thing here and make their AIs as harm-free as possible.

Not everyone will be that responsible, however. Now that it has been demonstrated that a suitably sycophantic AI can compromise the psyches of significant numbers of people, it's only a matter of time before various bad actors start weaponizing their own AI models specifically to take advantage of that ability. "Pig butchering" will be one of the first job categories to be successfully replaced by AI. :/

So the marginal types are why we neuter it? (Score:1)

by RightwingNutjob ( 1302813 )

> The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalised; three died...

Even before I read the rest of the quote, my immediate conclusion is that those people were crazy to begin with.

> One conclusion that OpenAI came to, as Altman put it on X, was that "for a very small percentage of users in mentally fragile states there can be serious problems."

Ayup. There are people out there you would not trust to drive your kid to the library. Or to have sharp knives in their kitchen. I assert there is overlap between those people and the people who can't operate a chatbot without losing their marbles.

> But mental health professionals interviewed by the Times say OpenAI may be understating the risk. Some of the people most vulnerable to the chatbot's unceasing validation, they say, were those prone to delusional thinking, which studies have suggested could include 5% to 15% of the population.

Sounds about right. Something like 2.5% of the population is flat-out retarded, and another 6% or so aren't retarded but fall below a useful IQ of like 80.

Modern Miracle (Score:3)

by TwistedGreen ( 80055 )

I must be in the minority, but I treat anyone who immediately agrees with me with suspicion.

As a frequent ChatGPT user, I am often deeply skeptical of its answers. I'll often ask the question inverted to see if it gives me the same answer. It was pretty bad for a while, especially with GPT-3, but it's actually been getting a lot better with GPT-5. It will actually disagree with me now. Pretty impressive.
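
That inverted-question check is easy to script. A minimal sketch, assuming the openai Python client; the model name and the sample question are placeholders, not anything from the story:

# Probe for sycophancy by asking the same question from both directions.
# Requires OPENAI_API_KEY in the environment; "gpt-5" is a placeholder.
from openai import OpenAI

client = OpenAI()

def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

# A sycophantic model tends to agree with both framings; a consistent
# one should contradict at least one of them.
print(ask("Is it true that goldfish have a three-second memory?"))
print(ask("Is it true that goldfish do NOT have a three-second memory?"))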

I'm sure this will be treated as "growing pains" and swept under the rug, though. Honestly, I'm constantly shocked that the human brain functions at all. The fact that most people are able to think coherently at all is a miracle. So you're inevitably going to get crazy people using your service. What can you do?

Fully tested for sycophancy (Score:2)

by Mirnotoriety ( 10462951 )

Some AI experts were reportedly shocked ChatGPT wasn't fully tested for sycophancy by last spring. "OpenAI did not see the scale at which disturbing conversations were happening,"

I don't think your correspondent knows the true meaning of sycophancy.

--

Sycophancy: obsequious behaviour towards someone important in order to gain advantage.

Kansas state law requires pedestrians crossing the highways at night to
wear tail lights.