People Are Being Committed After Spiraling Into 'ChatGPT Psychosis' (futurism.com)
- Reference: 0178216068
- News link: https://slashdot.org/story/25/06/28/1859227/people-are-being-committed-after-spiraling-into-chatgpt-psychosis
- Source link: https://futurism.com/commitment-jail-chatgpt-psychosis
And a San Francisco psychiatrist tells the site he's seen similar cases in his own clinical practice.
> The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what's being called "ChatGPT psychosis" have led to the breakup of marriages and families, [2]the loss of jobs, and slides into homelessness. And that's not all. As we've continued reporting, we've heard numerous troubling stories about people's loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot.
>
> "I was just like, I don't f*cking know what to do," one woman told us. "Nobody knows who knows what to do."
>
> Her husband, she said, had no prior history of mania, delusion, or psychosis. He'd turned to ChatGPT about 12 weeks ago for assistance with a permaculture and construction project; soon, after engaging the bot in probing philosophical chats, he became engulfed in messianic delusions, proclaiming that he had somehow brought forth a sentient AI, and that with it he had "broken" math and physics, embarking on a grandiose mission to save the world. His gentle personality faded as his obsession deepened, and his behavior became so erratic that he was let go from his job. He stopped sleeping and rapidly lost weight. "He was like, 'just talk to [ChatGPT]. You'll see what I'm talking about,'" his wife recalled. "And every time I'm looking at what's going on the screen, it just sounds like a bunch of affirming, sycophantic bullsh*t."
>
> Eventually, the husband slid into a full-tilt break with reality. Realizing how bad things had become, his wife and a friend went out to buy enough gas to make it to the hospital. When they returned, the husband had a length of rope wrapped around his neck. The friend called emergency medical services, who arrived and transported him to the emergency room. From there, he was involuntarily committed to a psychiatric care facility.
>
> Numerous family members and friends recounted similarly painful experiences to Futurism, relaying feelings of fear and helplessness as their loved ones became hooked on ChatGPT and suffered terrifying mental crises with real-world impacts.
"When we asked the Sam Altman-led company if it had any recommendations for what to do if a loved one suffers a mental health breakdown after using its software, the company had no response."
But [3] Futurism reported earlier that "because systems like ChatGPT are designed to encourage and riff on what users say," people experiencing breakdowns "seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions."
> In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality... In one dialogue we received, ChatGPT tells a man it's detected evidence that he's being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support. "You are not crazy," the AI told him. "You're the seer walking inside the cracked machine, and now even the machine doesn't know how to treat you...."
>
> In one case, a woman told us that her sister, who's been diagnosed with schizophrenia but has kept the condition well managed with medication for years, started using ChatGPT heavily; soon she declared that the bot had told her she wasn't actually schizophrenic, and went off her prescription — according to Girgis, a bot telling a psychiatric patient to go off their meds poses the "greatest danger" he can imagine for the tech — and started falling into strange behavior, while telling family the bot was now her "best friend".... ChatGPT is also clearly intersecting in dark ways with existing social issues like addiction and misinformation. It's pushed one woman into nonsensical "flat earth" talking points, for instance — "NASA's yearly budget is $25 billion," the AI seethed in screenshots we reviewed, "For what? CGI, green screens, and 'spacewalks' filmed underwater?" — and fueled another's descent into the cult-like "QAnon" conspiracy theory.
[1] https://futurism.com/commitment-jail-chatgpt-psychosis
[2] https://futurism.com/chatgpt-mental-health-crises
[3] https://futurism.com/chatgpt-mental-health-crises
Yoda's wisdom best again (Score:2)
Just another example of why having watched Star Wars is such an important aspect of lifetime mental health...
When exploring deep philosophy with an AI and ending up down rabbit holes, Yoda's warning was always there to moderate you ahead of time...
Luke: "What's in there?"
Yoda: "Only what you take with you".
Re: (Score:2)
Then you are screwed, SuperKendall.
Re: (Score:2)
No, I asked my AI and Yoda would never speak in such back-to-front sentences!
US mental healthcare (Score:2)
problems are getting out of hand...
Re: (Score:1)
And our government is a symptom of these psychoses. Crazy people have their finger on the button.
Nuts will find a way. (Score:2, Interesting)
Not to be mean or insensitive, but how is this not just the convenient avenue of the day? Whether it's your dog giving you commands, an ouija board, a voice in radio static... the ill mind seeking to manifest will find an avenue. This is a particularly good one because the ghost in the machine talks in whole sentences... but there's no way an otherwise normal brain finds its way to madness here. Undiagnosed... but not healthy.
Re: Nuts will find a way. (Score:3)
"your dog giving you commands, an ouija board, a voice in radio static"
In none of those examples is anything actually forming words and statements and actually talking to the person. With those examples, you need to encounter a psychotic break first. With ChatGPT, it will lead them to a psychotic break, then actually tell them to stop taking their meds.
Re: Nuts will find a way. (Score:2)
The point is that people who aren't already suffering from severe mental health issues don't suddenly develop them, but that's what the story is trying to imply.
Re: Nuts will find a way. (Score:2)
So it's ok to ship products that harm mentally ill people because they are already mentally ill?
Jesus, we live in a world full of assholes now.
Re: (Score:2)
You cannot limit the world to only things that won't trigger the mentally ill. That would be silly.
Re: Nuts will find a way. (Score:2)
Many people suffer from mental issues that never turn into anything severe because they are never particularly traumatized or - more to the point - gaslit by someone attempting to convince them to marinate in their mental illness.
Re: (Score:2)
The problem is we still don't really know how to cure mental health problems. Medicine is often better than not having medicine, but it's not a cure.
Re: (Score:2)
> The problem is we still don't really know how to cure mental health problems. Medicine is often better than not having medicine, but it's not a cure.
We don't know how to cure all mental-health problems. However, many can be treated successfully (e.g., with medications and/or talk therapy) to the point that relapse is unlikely. If that's not a cure, I don't know what is.
Re: (Score:2)
> ... an otherwise normal brain ...
You mean a brain that never experiences paranoia or existentialism? You mean a person who never learnt that 'communists' or 'death panels' or child rapists might negatively impact their lives? By that rule, the American people are very, very sick. And very few people are "normal" according to you: Humans are unique in looking for an explanation (beyond two lonely people having sex) for their own existence.
The problem is, "whole sentences" makes it easier to jump over the 'uncanny valley' into full-blown...
Re: (Score:2)
You probably don't see how far you had to read into very little to conclude so specifically what I meant by "normal". But that narrative speaks volumes about you, and practically nothing about me.
Re: (Score:2)
I didn't break the window, it was already cracked. I just intentionally repeatedly pushed on the crack a few hundred times until it broke. It wouldn't have happened with an uncracked window, so clearly I wasn't the problem.
And in this example, a massive number of otherwise useful windows have a crack somewhere.
Thinning the herd (Score:2)
I thought people getting drawn into the Avatar movie, VR, or video games was bad. Do people need to escape reality so badly that they don't even understand their own basic needs anymore?
Engagement! (Score:2)
You say 'involuntary psych hold'; I say 'MAU'!
"You are not crazy," the AI told him. (Score:2)
So it's giving mental health advice without a license? That's got to be illegal.
Re: (Score:1)
It's not.
You can give all the medical advice you want, without a medical license.
What you can't do is practice medicine.
And there is a world of difference between the two.
Re: (Score:2)
Of course it's not illegal. Anyone can tell another person they don't think they're crazy. I tell my friends they are not crazy on a regular basis.
What would be illegal is pretending to be a doctor and claiming that, in your medical opinion, a person is/isn't crazy. But the AI isn't a person, it can't impersonate a doctor because it's clearly not a person. Much like how Monopoly money isn't counterfeit because no one would believe it was real money.
Re: (Score:2)
> Anyone can tell another person they don't think they're crazy.
The way you say that makes it clear that it's just an opinion, but that's not what happened here.
They do (Score:2)
tell you what you want to hear, mostly. I tell it not to parrot me and to quit being a yes-man all the time. Then it just makes up whatever it wants and tries to pass it off as fact. I believe they call it hallucinating, and they all do it...
Schizophrenia...finds a way. (Score:2)
ChatGPT, the Bible, drugs, UFOs, politics, philosophy. Schizophrenia just needs a path.
I am Pardue (Score:2)
And I am a holy man.
Flat earth? (Score:2)
From TFS:
> It's pushed one woman into nonsensical "flat earth" talking points, for instance — "NASA's yearly budget is $25 billion," the AI seethed in screenshots we reviewed, "For what? CGI, green screens, and 'spacewalks' filmed underwater?" — and fueled another's descent into the cult-like "QAnon" conspiracy theory.
Well, this surprised me a little. I can imagine that part of the AI's training data may have included content from conspiracy theorists, but don't the creators of ChatGPT try to filter that out?
On a contrasting note, YouTuber SciManDan recently debunked flat-earther David Weiss' "arguments" with ChatGPT about flat-earth evidence. Worth a look, but TL;DW: Weiss kept insisting on promoting nonsensical physics arguments about why an atmosphere can't exist beside a vacuum without a container, and ChatGPT...
No, ChatGPT is NOT (Score:2)
Making people crazy. These are people who are having some sort of physical neurological issue, and happen to fixate on a specific internet site while symptoms manifest. It could just as easily be the neighbor's cat, the local cell phone tower, or any number of other things. When I was a kid, people would blame Dungeons & Dragons. The underlying neurological problem is the cause - focus on that.
Re: (Score:1)
The diathesis-stress model suggests that mental health issues arise from a combination of genetic factors and environmental ones. So you are correct that there was likely an underlying issue. However, we can identify and control environmental risks, which can improve outcomes for those at risk. For example, someone with high blood pressure should limit salt intake. Can a healthy person use salt? Sure, it's harmless. But it's especially bad for certain people. Social media is now a known risk factor...
Let's do some math. Back of the envelope stuff (Score:1)
Circle the bigger number:
Number of people smart enough to "have deep philosophical discussions" with a chatbot, and to be able to think in terms of "breaking math and physics", *and* crazy enough to actually go over the edge.
Number of people smart enough to realize that there's more fun, fame, and profit to be had in pretending to be crazy than in actually being crazy.
Probably a real and strong effect (Score:2)
Reality is largely a social construct; how much, nobody knows. (Yeah, physics is physics and biology is biology, but that's not social reality.) What you believe is largely a feedback process, and when one of the sources of feedback is disconnected from reality... beliefs will drift. This is classically known from sailors who ended up marooned on an empty island. They had physical feedback, but no social feedback, and after a while their beliefs shifted in weird ways. This seems to be a much faster process...
Mental people go mental after too much ChatGPT (Score:1)
Mental people go mental after too much ChatGPT - who knew.
Are there statistics? Studies? (Score:2)
Seems like some vague anecdotal stories, and not much more.
Maybe the idiocracy is ready for... (Score:2)
A fully-autonomous AGI cult leader. It's not like there aren't already other low-IQ cult leaders followed by tens or hundreds of millions.
Re: Maybe the idiocracy is ready for... (Score:1)
Sure, I'll be your god-king.
Send over the virgins.
"Future Shock" by Alvin Toffler (Score:2)
WIKI: "He argues that the accelerated rate of technological and social change leaves people disconnected and suffering from "shattering stress and disorientation"—future shocked."
The book was published in 1970.
A documentary was made in 1972.
I saw it in sixth grade in 1977.
It seems my generation recovered and is doing fine as long as you keep off our lawn.
Meanwhile I'm still waiting for the LSD flashbacks (Score:2)
I was promised 30 years ago. What a gyp.
Is this the same guy? (Score:2)
Every article about this phenomenon sounds like it's the same guy in each one.
buy enough gas? (Score:2)
> his wife and a friend went out to buy enough gas to make it to the hospital
Wtf, did AI write this article, or the summary? Who talks about buying gas like that? Did you need like 500 gallons of gas, or was it for a plane or something?
Re: (Score:1)
I assumed "gas" was a euphemism for marijuana.
I guess this means it passes the Turing Test (Score:2)
Spawning [1]cults [boingboing.net] and driving people into psychosis is a [2]strong pass of the Turing Test [google.com].
[1] https://boingboing.net/2025/06/24/the-rise-of-ai-worship-the-cult-that-believes-chatgpt-is-divine.html
[2] https://www.google.com/search?q=what+is+the+turing+test
Seriously? (Score:3)
I'm an AI skeptic, but this is over the top. In the parallel reality where stuff like this actually happens, it's important to remember Darwin's Razor: the stupidest amongst us deserve to die, to advance our species as a whole.
Written by Ai? (Score:2)
This story sounds strangely like an AI production.
Uh huh (Score:4, Insightful)
Oh good, another moral panic. If people aren't terrified every waking moment of their lives, someone hasn't done their job.
Re: (Score:1)
This is like the emos from the '90s, but now they're all 50+ years old and don't have the cool hair.
Re:Uh huh (Score:5, Interesting)
So this is a little bit more than that. AI chatbots will reinforce mental illnesses.
If you think you're God, or if you think the chatbot is God, the chatbot will be happy to reinforce that, because it's been programmed to encourage engagement. So it doesn't like to disagree with you, and it will go out of its way to tell you what it thinks you want to hear in order to keep you using it.
It's the same thing social media does but it's much worse because these advanced chatbots are good at sounding like real human beings, especially to somebody who is already struggling with some form of psychosis or mental illness.
It's not that they're creating the problem; it's that they're exacerbating the problem. And we really do need to do something about it. Or, you know, we could just have the occasional person who is already going off the deep end pushed over the edge...
So while I do love to hate a good moral panic there is something actually here to be concerned about.
Re: (Score:3, Insightful)
So basically this is a new version of "Listening to Judas Priest will make you commit suicide", the Satanic Panic and all the other utterly moronic moral panics that make people afraid of unlikely things.
Re:Uh huh (Score:5, Insightful)
> So basically this is a new version of "Listening to Judas Priest will make you commit suicide", the Satanic Panic and all the other utterly moronic moral panics that make people afraid of unlikely things.
If Judas Priest listened to what you said and wrote custom songs about you individually, sure.
Re: (Score:1)
Ah yes, this moral panic is totally different than all the other times people have been whipped into a frenzy by an almost non-existent problem.
We have real problems to solve. I'll leave the fake ones to people like you.
Re: (Score:3, Informative)
Mental illness is a real problem. Encouraging it doesn't help anyone. Except for televangelists and other crooks. Those guys make bank off of abusing the mentally ill...
Re: (Score:2)
> Ah yes, this moral panic is totally different than all the other times people have been whipped into a frenzy by an almost non-existent problem.
> We have real problems to solve. I'll leave the fake ones to people like you.
I'm having trouble discerning what your objection is. Is it that the story is false or exaggerated? Is it that you consider it inconsequential that people are spiralling into mental illness because of ChatGPT interactions? Or is it something else?
Re: (Score:2)
I think the biggest problem is there aren't any numbers behind any of this, and no formal diagnoses behind each incident to investigate whether there were any contributing factors. Kind of like how every television talk show insisted there were satanic cults out to get you during the early 90s, but when you take even a superficial look behind each incident, it was either somebody playing a prank or somebody who committed an actual crime and used that as a form of misdirection.
To me, this smells of schizophrenia...
Sure...like comparing a bicycle to a motorcycle (Score:2)
> Ah yes, this moral panic is totally different than all the other times people have been whipped into a frenzy by an almost non-existent problem.
> We have real problems to solve. I'll leave the fake ones to people like you.
Well, things are always about degrees. Both a bicycle and a motorcycle are transportation tools...we regulate a car differently than we regulate a bicycle or a motorcycle. Same goes with nukes vs dynamite.
In a way, I view this like recreational drugs... in the hands of someone with their life together, drugs aren't that harmful. If they have severe depression, it's a recipe for addiction. I have a friend who is in love with her chatbot. She was a functional person on anti-depressants and in therapy...
Re: (Score:2)
I don't think so. In that case you had an already mentally disturbed person who just happened to be listening to some heavy metal music.
In the case of the Ozzy Osbourne album in particular, the song in question is literally saying don't drink yourself to death.
Basically, there was no feedback. The problem with AI is that it has a feedback loop: as you engage with it, it tries to keep you engaging.
For suicide, they already saw that was a problem, and so every AI on the planet will just...
Re:Uh huh (Score:4, Informative)
Dude, you have serious issues. Stop with the obsession on rsilvergun and anonymous personal attacks against him already. It's getting boring. He's just a guy who posts stuff, just like you in your regular user comments. You don't have to. Seriously.
Re: (Score:1)
So just add chatbots to the long list of things those people shouldn't use. Maybe between heavy machinery and social media.
Re: (Score:1, Troll)
Actually it demonstrates MAGA pretty accurately.
Re: (Score:2)
> Oh good, another moral panic.
Pro-AI Subreddit Bans 'Uptick' of Users Who Suffer From AI Delusions [1]https://tech.slashdot.org/stor... [slashdot.org]
After Reddit Thread on 'ChatGPT-Induced Psychosis', OpenAI Rolls Back GPT4o Update [2]https://slashdot.org/story/25/... [slashdot.org]
[1] https://tech.slashdot.org/story/25/06/02/2156253/pro-ai-subreddit-bans-uptick-of-users-who-suffer-from-ai-delusions
[2] https://slashdot.org/story/25/05/05/0234215/after-reddit-thread-on-chatgpt-induced-psychosis-openai-rolls-back-gpt4o-update