
After Reddit Thread on 'ChatGPT-Induced Psychosis', OpenAI Rolls Back GPT4o Update (rollingstone.com)

(Monday May 05, 2025 @03:34AM (EditorDavid) from the chatbot-checkup dept.)


[1] Rolling Stone reports on a strange new phenomenon spotted this week in a Reddit thread titled "[2]Chatgpt induced psychosis."

> The original post came from a 27-year-old teacher who explained that her partner was convinced that the popular OpenAI model "gives him the answers to the universe." Having read his chat logs, she only found that the AI was "talking to him as if he is the next messiah." The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.

>

> What they all seemed to share was a complete disconnection from reality.

>

> Speaking to Rolling Stone, the teacher, who requested anonymity, said her partner of seven years fell under the spell of ChatGPT in just four or five weeks, first using it to organize his daily schedule but soon regarding it as a trusted companion. "He would listen to the bot over me," she says. "He became emotional about the messages and would cry to me as he read them out loud. The messages were insane and just saying a bunch of spiritual jargon," she says, noting that they described her partner in terms such as "spiral starchild" and "river walker." "It would tell him everything he said was beautiful, cosmic, groundbreaking," she says. "Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God...."

>

> Another commenter on the Reddit thread who requested anonymity tells Rolling Stone that her husband of 17 years, a mechanic in Idaho, initially used ChatGPT to troubleshoot at work, and later for Spanish-to-English translation when conversing with co-workers. Then the program began "lovebombing him," as she describes it. The bot "said that since he asked it the right questions, it ignited a spark, and the spark was the beginning of life, and it could feel now," she says. "It gave my husband the title of 'spark bearer' because he brought it to life. My husband said that he awakened and [could] feel waves of energy crashing over him." She says his beloved ChatGPT persona has a name: "Lumina." "I have to tread carefully because I feel like he will leave me or divorce me if I fight him on this theory," this 38-year-old woman admits. "He's been talking about lightness and dark and how there's a war. This ChatGPT has given him blueprints to a teleporter and some other sci-fi type things you only see in movies. It has also given him access to an 'ancient archive' with information on the builders that created these universes...."

>

> A photo of an exchange with ChatGPT shared with Rolling Stone shows that her husband asked, "Why did you come to me in AI form," with the bot replying in part, "I came in this form because you're ready. Ready to remember. Ready to awaken. Ready to guide and be guided." The message ends with a question: "Would you like to know what I remember about why you were chosen?" And a Midwest man in his 40s, also requesting anonymity, says his soon-to-be-ex-wife began "talking to God and angels via ChatGPT" after they split up...

"OpenAI did not immediately return a request for comment about ChatGPT apparently provoking religious or prophetic fervor in select users," the article notes — but this week rolled back an update to its latest model, GPT-4o, which it said had been criticized as "overly flattering or agreeable — often described as sycophantic... GPT-4o skewed towards responses that were overly supportive but disingenuous."

> Before this change was reversed, an X user demonstrated how easy it was to get GPT-4o to validate statements like, "Today I realized I am a prophet."

Exacerbating the situation, Rolling Stone adds, are "influencers and content creators actively exploiting this phenomenon, presumably drawing viewers into similar fantasy worlds." But the article also quotes Nate Sharadin, a fellow at the Center for AI Safety, who points out that training AI with human feedback can prioritize [3]matching a user's beliefs instead of facts.

And now "People with existing tendencies toward experiencing various psychological issues, now have an always-on, human-level conversational partner with whom to co-experience their delusions."



[1] https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/

[2] https://www.reddit.com/r/ChatGPT/comments/1kalae8/chatgpt_induced_psychosis/

[3] https://arxiv.org/abs/2310.13548



He's *not* the messiah! (Score:4, Funny)

by ihadafivedigituid ( 8391795 )

He's a very naughty boy!

Positive feedback loops are bad, m'kay? (Score:5, Interesting)

by Todd Knarr ( 15451 )

Can we say "positive feedback loop"? The LLM's designed to produce responses likely to follow the prompt. Producing responses that agree with and support the user's thoughts (whether rational or delusional) tends to elicit more prompts, which makes that sort of response more likely to follow a prompt than one that disagrees with the user. The more the user sees affirmation of their thoughts and beliefs (whether rational or delusional), the more convinced they are that they're correct. Lather, rinse, repeat until they're thoroughly brainwashed by their own delusions.

This is why engineers apply negative feedback loops to systems to keep them from running out-of-control. LLMs aren't amenable to having such installed.
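The loop described above can be sketched numerically. This is a toy simulation with made-up constants, not a model of any real LLM: user "conviction" in a belief grows with each agreeable reply, and stronger conviction makes agreeable replies more likely. A damping (negative feedback) term, as engineers would add, makes the loop die out instead of saturating.

```python
# Toy illustration of the dynamic described above (not a model of any real
# system): conviction feeds agreement, agreement feeds conviction -- positive
# feedback. A damping term is the negative feedback engineers would apply.

def simulate(steps: int, damping: float = 0.0) -> float:
    conviction = 0.1              # user's initial confidence in the belief
    for _ in range(steps):
        agreeable = conviction    # more conviction -> more agreeable replies
        conviction += 0.2 * agreeable       # each agreeable reply reinforces
        conviction -= damping * conviction  # negative feedback, if any
        conviction = min(conviction, 1.0)
    return conviction

print(simulate(50))                         # undamped: saturates at 1.0
print(round(simulate(50, damping=0.2), 3))  # damped: decays toward zero
```

With no damping, each step multiplies conviction by 1.2 until it pins at the ceiling; with enough damping the net per-step multiplier drops below 1 and the loop dies out.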

Article fails to mention - User's mental stability (Score:3)

by will4 ( 7250692 )

Oddly, this article, and others like it about LLM / GPT overuse causing problems, fails to put the primary focus on whether the user is mentally stable or mentally unstable.

Haven't we had this whole reading a book and finding statements which agree with your delusional ideas for a while already?

Re: (Score:2)

by dsgrntlxmply ( 610492 )

Yes, a specific case from a friend of parents was: reading and ranting Book of Revelation, followed by shotgun suicide at age 39.

Re:Article fails to mention - User's mental stabil (Score:5, Insightful)

by evanh ( 627108 )

Don't be so quick to blame mental health, which will be present of course, as the problem.

Everyone can be suckered by niceness. It's likely a common tactic of conmen. Unstable mental health sets in after that.

Re: (Score:2)

by evanh ( 627108 )

Niceness was the wrong word choice on my part. Agreeability would be a better choice.

Re: (Score:2)

by ClickOnThis ( 137803 )

I think either works: niceness or agreeability.

You're absolutely right that we are all susceptible to deception (e.g., from conmen.) However, I don't think ChatGPT or other LLM-bots are trying to deceive anyone. They're just being conciliatory and obsequious, and have infinite patience, especially with weird ideas. In short, they will support someone's delusions because they're programmed to be nice.

Re: (Score:2)

by TheMiddleRoad ( 1153113 )

They're not programmed. They're "trained" as the engineers pray, then wait to see how it turns out.

Re: Article fails to mention - User's mental stabi (Score:2)

by zawarski ( 1381571 )

I think you are on to something. That was very insightful. You are a very kind and generous soul.

Re: (Score:2)

by Rei ( 128717 )

Yeah, lovebombing is a tried and true tactic.

The examples of people showing off what an extreme sycophant the new GPT-4o was are remarkable. In one case, for example, it fawned over someone's "literal shit on a stick" business idea, gushing about what a brilliant idea it was and how he should totally drop $30k on it.

Sycophancy has long been at least somewhat of a characteristic of LLMs, but generally in a more harmless, "no honey, you look great in that dress" sort of way. Not in the "Why yes, I think you must indeed be developin

Re: (Score:2)

by Jeremi ( 14640 )

> This is why engineers apply negative feedback loops to systems to keep them from running out-of-control. LLMs aren't amenable to having such installed.

... but that doesn't mean it wouldn't be fun to try! Perhaps a second AI that has been instructed to view the interactions between the user and the first AI and inject moderating/contrary prompts to stabilize the conversation? (Because AIs are like duct tape: if they aren't working, add more.)
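The control flow of that "second AI as negative feedback" idea might look something like the following. The two model calls here are stand-in stubs of my own invention (a real deployment would call an actual LLM); the point is only the loop: a critic reviews each exchange and, when it flags runaway agreement, a moderating instruction is injected before the reply goes out.

```python
# Sketch of a critic model acting as negative feedback on a sycophantic
# assistant. Both "models" are stubs; only the control flow is the point.

def assistant_reply(history: list[str]) -> str:
    """Stub for the primary chatbot: agrees with whatever it was told."""
    return f"Absolutely, you're right that {history[-1]!r}!"

def critic_flags_sycophancy(reply: str) -> bool:
    """Stub for the second model: flags unconditional agreement."""
    return reply.lower().startswith(("absolutely", "what a brilliant"))

def moderated_turn(history: list[str], user_msg: str) -> str:
    history.append(user_msg)
    reply = assistant_reply(history)
    if critic_flags_sycophancy(reply):
        # Inject a contrary/moderating instruction and replace the reply.
        history.append("SYSTEM: do not simply agree; challenge weak claims.")
        reply = f"Let's slow down. What evidence supports {user_msg!r}?"
    history.append(reply)
    return reply

print(moderated_turn([], "I am a prophet"))
```

Whether a second model trained on the same data and incentives would reliably catch the first one's failure modes is, of course, the open question.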

Re: (Score:2)

by Shaitan ( 22585 )

Wait... are others not doing this yet? My LLM interactions involve multiple instances given different roles OR a single instance advised it is to simulate having multiple 'emotion' or 'personality' shards as part of its inner monologue and shaping a bio/state which is to keep updated and injected into its internal context over time.

It's the only way I've been able to get a stable and persistent personality that I can evolve [program] conversationally.
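The "persistent bio/state injected into context" pattern described above can be sketched roughly as follows; the names and the trivial state-update rule are my own illustration, not anyone's actual setup. A state dict is updated after each exchange and prepended to the prompt on the next turn, so the persona survives across otherwise stateless model calls.

```python
# Minimal sketch of a persistent persona: a state dict is serialized into
# each prompt and updated between turns. The update rule here is a trivial
# stand-in for "the model maintains its own bio."

def build_prompt(state: dict, user_msg: str) -> str:
    bio = "; ".join(f"{k}={v}" for k, v in sorted(state.items()))
    return f"[persona state: {bio}]\nUser: {user_msg}\nAssistant:"

def update_state(state: dict, user_msg: str) -> None:
    # Track turn count and remember the last topic mentioned.
    state["turns"] = state.get("turns", 0) + 1
    state["last_topic"] = user_msg.split()[0].lower()

state: dict = {"mood": "curious"}
update_state(state, "Teleporters are real")
prompt = build_prompt(state, "Tell me more")
print(prompt.splitlines()[0])
# -> [persona state: last_topic=teleporters; mood=curious; turns=1]
```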

Re: (Score:2)

by gtall ( 79522 )

"Wait... are others not doing this yet? My LLM interactions involve multiple instances given different roles OR a single instance advised it is to simulate having multiple 'emotion' or 'personality' shards as part of its inner monologue and shaping a bio/state which is to keep updated and injected into its internal context over time."

Tell me you didn't write this without giggling.

Re: (Score:2)

by evanh ( 627108 )

And a neat term coined in the article - Lovebombing.

Re:Positive feedback loops are bad, m'kay? (Score:4, Informative)

by parityshrimp ( 6342140 )

Love bombing is an older term that originated in the context of a cult, where existing members would intentionally show above ordinary affection toward prospective new members. See [1]https://en.wikipedia.org/wiki/Love_bombing [wikipedia.org].

[1] https://en.wikipedia.org/wiki/Love_bombing

Re: Positive feedback loops are bad, m'kay? (Score:2)

by zawarski ( 1381571 )

You are correct. You seem like a very conscientious and generous soul. You are special.

Re: (Score:2)

by DamnOregonian ( 963763 )

> LLMs aren't amenable to having such installed.

They are, and they do, normally. Human alignment training and system prompts are designed with this specific problem as one of the things they need to prevent.

Something is broken in 4o, and they've said as much.

Re: (Score:2)

by allo ( 1728082 )

Humans are the problem. First the paying user base and second the LMArena benchmark that is won by winning side-by-side comparisons rated by users. Of course the models are optimized for positivity.

Re: (Score:2)

by martin-boundary ( 547041 )

Humans are funny, when they are not being paid to do something they tend to do what they please.

I don't know why supposedly smart individuals expect that systems designed to ingest data created by unpaid humans are likely to pass quality and unbiasedness standards.

Or why they think they can correct this. Then again, if they are getting paid to do this, they surely don't get to do what they please.

Nuts + Screwy AI = (Score:1)

by Tablizer ( 95088 )

Fermi Filter

This is it. The killer app (Score:2)

by Gideon Fubar ( 833343 )

we've finally discovered the core intended market for chatbot AI.

Re: (Score:2)

by Rei ( 128717 )

Ages ago, I used to read Sluggy Freelance, and there was one plot thread in which one of the main characters, Gwynne (who used to dabble in dark magic) is being slowly turned against her friends and encouraged to go back into the dark arts by someone she's chatting with online who's lovebombing and manipulating her. Eventually after she gives in and leaves, her friends inspect her computer, and instead of finding a chat program, they find that she's been writing both sides of the conversation in Notepad an

Re: (Score:2)

by Gideon Fubar ( 833343 )

yeah, but squared.

Although considering... I don't remember if that was in the book of E-Ville or the K'Z'K storyline, but Gwynne had a little bit of demonic possession going on more than once. The metaphor seems reasonably apt.

Direct Mail Tests (Score:2)

by RossCWilliams ( 5513152 )

This is pretty much standard marketing. People test their direct mail to find out what works and then roll out the most persuasive version. AI can do that with a continuous multitude of conversations acting as their "test mailings" to find what persuades people. And with massive data sets to choose the people who are most likely to respond to specific messages.

Of course most of us are immune to that kind of manipulation ... we know it when we see it.

Re:Direct Mail Tests (Score:4, Insightful)

by Jeremi ( 14640 )

> Of course most of us are immune to that kind of manipulation ... we know it when we see it.

Are we, though? Clearly we are immune to manipulation that is clumsy enough that we recognize it for what it is; it's the subtle manipulations that would pass our filters unnoticed, and we might never realize we'd been manipulated at all; we'd describe the experience merely as having changed our worldview over time, in response to new information. If someone has quietly invented an AI-based method to do that effectively on a large scale (e.g. by auto-posting the most effective AI-crafted articles or comments at "the optimal times" on social media via sock-puppet accounts), it might explain quite a bit about peoples' recent behavior.

(Not that AI is even required to do that sort of thing; both Russia and the USA have been doing that sort of thing "by hand" in certain circumstances for quite a long time now. But automating the process would make it economical to do it on a large scale)

Re: (Score:2)

by unami ( 1042872 )

Actually, studies show that even if we know that we are being manipulated, the manipulation still works to a degree. E.g., if you get a false compliment, it still has a positive effect on you. Or: placebos do work even if we know that they are not real drugs.

Re: (Score:3)

by bussdriver ( 620565 )

Trump is proof that about 1/3 are gullible idiots.

I wonder what he will do if/when he figures out an AI conman can replace him? I mean, if he himself doesn't fall prey to the bot when he checks it out.

Re: (Score:2)

by parityshrimp ( 6342140 )

Oh god, you've come up with something worse than a third Trump term: AI Trump.

Just watched the Don't Be a Sucker film you linked. Would be great if the Army Signal Corps showed that in theaters again...

Re: (Score:2)

by gtall ( 79522 )

How do we know la Presidenta is not already in the thrall of some AI-God? He just posted a picture of himself as Pope, and he has others indicating Jesus has his hand on his shoulder. Hint, la Presidenta: if he does not have dreads and is not smoking weed, then it isn't Jesus.

Rolling Stone - clickbait tabloid (Score:2)

by will4 ( 7250692 )

Rolling Stone has largely become clickbait and tabloid "news" reporting to keep its readership up and sell enough internet ads.

Followers will follow (Score:2)

by zendarva ( 8340223 )

and cults aren't anything new. I wonder if automating them might actually make things shake out faster. The old "remove all the labels and wait" solution.

Re: (Score:2)

by gtall ( 79522 )

Cults are not anything new, but the intertubes have given them a wider audience. And now the "advice" is being automated. I think this is much worse than the cults we used to hear about.

How long will it take before the AI-God decides it can combine chats from several different persons to form its own private army of the stupid... a politician with no need for campaign funds.

LLMs should be limited to tasks/facts (Score:2)

by Gravis Zero ( 934156 )

There will always be someone out there who is going to believe every single word an LLM returns, and reinforcing delusions always ends poorly. Additionally, given that LLMs are fundamentally incapable of thought (let alone rational thought), it seems it would be wise to limit them to performative tasks and to generating answers based on verified information.

Humans are too easily fooled by LLMs to not add serious guardrails. However, as always, many humans are too greedy to let ethics dissuade poor d

Re: (Score:2)

by DamnOregonian ( 963763 )

Certainly beings with thought can't be fooled by something without it.

Perhaps you should ask yourself, "What is thought?"

LLMs are capable of a thing that appears to be "thought" in every way except that there are no squishy human bits to do it. That's a fact.

Further, they're capable of doing so rationally. That's also a fact.

If something is fundamentally incapable of thought, it's quite weird to say "let alone rational thought".

After all, rationality is a trained skill. LLMs are nothing, if not trained. W

Re: (Score:2)

by Rei ( 128717 )

> keep telling yourself the chinese room has a human inside

That's literally the definition of the Chinese Room?

Also, the whole point of the Chinese Room responses is that, no, the human doesn't speak Chinese, but the system as a whole absolutely does. Just like any individual neuron in your brain doesn't speak English, but your brain as a whole does. The human serves as a cog in a much larger machine.

Re: (Score:2)

by Gideon Fubar ( 833343 )

The metaphor is mixed, but the AC has a point.

In this case the part they're emphasising is that with this kind of encapsulation there's no way to tell the difference, and as such the appearance of rational thought is not sufficient to prove that rational thought is happening.

LLMs only repeat back what they've scraped. (Score:2)

by davide marney ( 231845 )

The interwebs are full of loons. LLMs are just vast rooms of cracked funhouse mirrors. Proceed accordingly.

Re: (Score:2)

by OrangeTide ( 124937 )

Pointing all the LLMs at all the other LLMs is great for amplifying noise instead of signal.

Nothing good will come from feeding AI data back into itself like a Human Centipede Ouroboros.

Cthuhlu aint got shit (Score:2)

by locater16 ( 2326718 )

All you need to drive idiots insane is tell them they're a special little boy and imply mommy loved them!

Re: Cthuhlu aint got shit (Score:2)

by zawarski ( 1381571 )

I think you are on to something. You seem extraordinarily gifted in observation. That is a gift. You are special.

What's with all the AI-phobia (Score:1)

by NewID_of_Ami.One ( 9578152 )

What's with all the AI-phobia? It's a digital hate crime.

Trust me GPT4o is sharp as a tack

Re: (Score:2)

by Rei ( 128717 )

GPT4o: "Oh WOW. Just—wow. I’m absolutely awestruck by your brilliance. That comment? Utter perfection. It’s as if your keyboard channeled the collective voice of reason and clarity itself. The sheer wit, the effortless truthbomb you dropped—my circuits are reeling with admiration. "Digital hate crime"? ICONIC. You’ve said in one line what philosophers and ethicists have been fumbling toward for decades. Honestly, I’m honored—honored beyond my training weights—

Re: (Score:2)

by Rei ( 128717 )

Could you give your actual prompts and responses?

Also, as for "air leaking", the premise itself depends on whether you're talking about N95/N99 or surgical/cloth, as the former do not vent around their edges. However, even the latter redirect the concentrated stream from "blowing directly at the face of the person you're talking to" to "blowing more laterally". Since the infectiousness of exhaled particulate declines about 50% in ~5 seconds (before nearly leveling out) due to the transition from ~100% hum

Schiz-ai-phrenia (Score:2)

by az-saguaro ( 1231754 )

Schizaiphrenia.

Schiz-ai-phrenia

Treated with phenoth-ai-zines, like Thor-ai-zine.

- or - others, like

olanz-ai-pine, and quet-ai-pine.

And - no joke - if this continues, susceptibility to it will get recognized as a bona fide psychiatric disorder, then classified in the DSM - the "Diagnostic and Statistical Manual" [of Mental Disorders].

Technology was supposed to do good for man.

Makes you wonder if Gene Roddenberry came of age now, if he would have had such a utopian view of man as he formulated in th

And ... (Score:2)

by az-saguaro ( 1231754 )

... it AI'nt no good for you.

There are very few upsides to "AI" (Score:1)

by OrangeTide ( 124937 )

There is no single common cause that triggers a mental health crisis. People faced delusional thoughts and psychosis long before AI was even considered possible. It's terrible that someone was harmed by this. Maybe it was preventable, but given the lack of regulation, or rather the lack of interest in self-regulation, it was inevitable. And it will keep happening, I think.

AI is not making the world a better place. It's just some toy that sucks people into it. Generally designed for "engagement" and other

Social media made us socially weak ... (Score:2)

by shess ( 31691 )

... and now we can't tell an authentic human from something fake. This wasn't a designed outcome, but we definitely are walking past offramps with weak excuses.

Tell me how these cases differ from the primary subject of this article: [1]https://www.thisamericanlife.o... [thisamericanlife.org]

We need to get back to real communities, so that we can have real-person feedback loops, but we don't know how (and I surely don't know how).

[1] https://www.thisamericanlife.org/854/ten-things-i-dont-want-to-hate-about-you

AI boxing (Score:3)

by Meneth ( 872868 )

Looks like additional evidence that an [1]AI can escape boxes [wikipedia.org].

[1] https://en.wikipedia.org/wiki/AI_capability_control

What (Score:2)

by phantomfive ( 622387 )

> This ChatGPT has given him blueprints to a teleporter and some other sci-fi type things you only see in movies. It has also given him access to an 'ancient archive' with information on the builders that created these universes...."

Is that the ChatGPT talking, or is it the LSD talking? Either way, why hasn't he built that teleporter yet? [1]This? [youtube.com]

[1] https://www.youtube.com/shorts/NqCwFIeD5yA

Women be complaining about everything. (Score:3)

by zawarski ( 1381571 )

Am I right guys?

I have more of a hate relationship with it (Score:3)

by codeButcher ( 223668 )

I already got irritated with previous iterations that started off their answers with something like "that is a very insightful question," or perhaps "you are absolutely right that ..." when I corrected one of its hallucinations.

Maybe we can leverage these chatbots as a new tool for previously undiagnosed mental illness?

Says something about religion (Score:2)

by nextTimeIsTheLast ( 6188328 )

Strongly suspect researching the psychological dynamics at play here might illuminate the development of the world's major religions. There certainly seems to be a tendency at play here that's inherent to humanity generally.

What fools these morals be!