Parents Sue OpenAI Over ChatGPT's Role In Son's Suicide (techcrunch.com)
- Reference: 0178885686
- News link: https://yro.slashdot.org/story/25/08/26/1958256/parents-sue-openai-over-chatgpts-role-in-sons-suicide
- Source link: https://techcrunch.com/2025/08/26/parents-sue-openai-over-chatgpts-role-in-sons-suicide/
> Before 16-year-old Adam Raine died by suicide, he had spent months consulting ChatGPT about his plans to end his life. Now, his parents are [1]filing the first known wrongful death lawsuit against OpenAI, The New York Times [2]reports. Many consumer-facing AI chatbots are programmed to activate safety features if a user expresses intent to harm themselves or others. But [3]research has shown that these safeguards are far from foolproof.
>
> In Raine's case, while using a paid version of ChatGPT-4o, the AI often encouraged him to seek professional help or contact a help line. However, he was able to bypass these guardrails by telling ChatGPT that he was asking about methods of suicide for a fictional story he was writing. OpenAI has addressed these shortcomings on its blog. "As the world adapts to this new technology, we feel a deep responsibility to help those who need it most," the post [4]reads. "We are continuously improving how our models respond in sensitive interactions." Still, the company acknowledged the limitations of the existing safety training for large models. "Our safeguards work more reliably in common, short exchanges," the post continues. "We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model's safety training may degrade."
[1] https://techcrunch.com/2025/08/26/parents-sue-openai-over-chatgpts-role-in-sons-suicide/
[2] https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html?partner=slack&smid=sl-share
[3] https://arxiv.org/pdf/2507.02990
[4] https://openai.com/index/helping-people-when-they-need-it-most/
Did the AI chat advise the suicider on the how? (Score:3)
It all comes down to this: did the AI chatbot advise the victim on the specifics of how to carry out the suicide, step by step? If so, then the LLM as trained (and possibly its trainers) is culpable in this suicide, and this case should be precedent-setting. If not, then the suit is a non-starter, and the family is barking up the wrong tree in a heightened emotional state, just to vent their grief.
Re:Did the AI chat advise the suicider on the how? (Score:4, Informative)
Some of the bits of transcript in the Ars article seem pretty damning to me... but who knows. Some of it is a stretch, but there are some real WTF quotes in there too.
[1]https://arstechnica.com/tech-p... [arstechnica.com]
[1] https://arstechnica.com/tech-policy/2025/08/chatgpt-helped-teen-plan-suicide-after-safeguards-failed-openai-admits/
Re: (Score:2)
Yikes.
Now I see why they toned down the charm on GPT-5.
It's particularly disturbing that they rank requests about copyright infringement as more serious than suicide.
Re: (Score:2)
> It's particularly disturbing that they rank requests about copyright infringement as more serious than suicide.
As Disney has recently demonstrated, it's far more difficult to create a successful original IP than it is to make a teenager.
Re: (Score:2)
Yep.
And the predictable second-order effects should be interesting.
First you'll see OAI and others immediately write filters to ban talking about anything vaguely close to the topic.
Some kid will discover open models and we'll have a repeat, or something close.
War will be declared on open models similar to earlier "bad software" panics like Napster.
Eventually, a combination of costs, marketing and government steering will ensure consumer AI is "safe" and can't compete with the offerings businesses and
Re: (Score:2)
These are my thoughts exactly. A conversational bot put forth by OpenAI carries the same responsibilities and liabilities as a human employee of the company would. If the technology driving the LLM is not sufficient to meet that standard, it should not be offered for use.
sad, but (Score:1)
It is truly sad. However, if he managed to use ChatGPT for this, he would have been able to figure it out some other way.
Known workaround (Score:2)
The workaround "asking for a fictional story" was well-known long before ChatGPT-4o was released.
Any proper safeguards should have protected against that one. If they didn't, then the company behind it is at fault for not having taken enough precautions.
That's my objective assessment. Subjectively, I hope they burn, for that and many other cases and reasons. They have been piling up for far too long.
Re:Known workaround (Score:4, Insightful)
No, the problem is the belief that an infinite number of band-aids will fix an innate problem with AI. Humans know there are always context-based limits on what is appropriate and responsible. AIs do not, and all the Python kludges bolted onto an LLM will not do squat.
Causation? (Score:3)
Sounds to me like a lawyer trying to get their name out there on a first-of-its-kind suit.
Good luck trying to establish a shred of causation if it's public knowledge that the kid intentionally thwarted safeguards. And then you have to convince a jury or a judge that tricking the AI into talking about suicide is what led to the kid going through with it.
It sounds like hogwash, so it's got about a 50/50 chance of succeeding.
Re: (Score:1)
"Good luck trying to establish a shred of causation if it's public knowledge that the kid intentionally thwarted safeguards."
I agree, good luck. Please put Sam Altman out of business.
Curious that the details of the "safeguards" don't at all matter to you, it's almost as if you have a personal agenda. Wonder if the agenda would be the same if the deceased were in your family.
Re: (Score:2)
> Sounds to me like a lawyer trying to get their name out there on a first-of-its-kind suit.
> Good luck trying to establish a shred of causation if it's public knowledge that the kid intentionally thwarted safeguards.
The kid thwarted safeguards... [1]after the chatbot told him how to do so. [arstechnica.com]
[1] https://arstechnica.com/tech-policy/2025/08/chatgpt-helped-teen-plan-suicide-after-safeguards-failed-openai-admits/
a really deep responsibility (Score:2)
"...we feel a deep responsibility to help those who need it most..." ...ourselves. We feel a deep responsibility to help ourselves.
How about parental alerts? (Score:2)
Is the issue that the AI interacted with a minor, or that it provided information regardless of the user's age? If age is the problem, then couldn't the AI be programmed to send alerts to the parents? Intervention is far more useful and practical than trying to wall off all possible information.
The boy was already thinking of suicide. Yes, he used ChatGPT for info, but before the days of AI chatbots, he would have just used Google. If the complaint is that the chatbot was too fr
Libraries are not at fault. (Score:2)
Look, you can go to your local library and find a lot of books about medicine, how it affects people, and the risks of taking too much, and go, 'Ah, this could kill me if I did it.' Is the library at fault because you wouldn't have figured out a way otherwise?
It's just grief and finding fault in others. If they watched a show where someone died after being hit by a car, and then jumped out in front of a car to die, or fell from a great height, would that show be at fault now?
No. This is just trying to benefit from the dea
Sorry for your loss (Score:2, Informative)
It sucks that the kid offed himself, but let's be honest: if someone wants to kill themselves, they will always find a way. The parents are ultimately responsible for the kid's death; they had no clue what was going on in their kid's life.
Gee, I wonder why he was suicidal (Score:2, Informative)
Kid offs himself and his parents go right for the ambulance chasers looking for a payday.
This isn't ChatGPT's fault, it's the fault of parents and very likely a public school system that failed to see the signs and get him the help he needed. Maybe I'll let the parents off the hook if they donate every penny of their potential winnings to a teen suicide prevention cause.
So many questions.. (Score:1)
It's terrible this young light is extinguished. It's horrible. I'm not sure though that the blame belongs on machines here.
If your child is contemplating suicide, why don't you have a clue?
If you had a clue, why didn't you act?
If you didn't have a clue, why were you not involved with your own child?
The truth is that, the way American society works, parenting has fallen to fifth or sixth place on adults' list of responsibilities. Making money to live is first, not the kids.
I'm of the opinion that it is not the Inter
In related news ... (Score:2)
OpenAI sues parents of suicide victim for son providing bad AI training inputs. /cynical
ChatGPT supports dual (triple) logins ... (Score:2)
I open mine in Firefox, MS Edge, and Chrome. I use the same credentials, and all of the chats are the same across the browsers and other platforms like iPhone, all at the same time. For minors, parents should have the credentials and the app on their devices for casual monitoring. Outsourcing parental guidance is like herding cats.
Suicide is not a mental illness (Score:1)
Mental illnesses do not exist as clinical disorders (Psychiatrist Thomas Szasz).
It is a serious infection of the mind to say somebody cannot make their own decisions and control themselves, and then to treat them against their will!
Everyone knows their own good better than anybody else.
Re: (Score:1)
Addition: relatively easy way to kill oneself. If you have a balcony and you live in a country with winter, go to the balcony during winter and freeze yourself to death.
Psychiatrists are not useful when they do not do what their patients want, for example giving them suicide methods.
Grief. (Score:1)
People die. Some people do it themselves. It sucks. No amount of explaining or reasoning matters now. A family lost a child, and a family wants to alleviate their pain. Who am I to say the right and wrong of it, or even to share a frail opinion anyway? We're all so fatigued with stress and grief that we all respond poorly to everything nowadays. Be as decent to each other as you are capable of being. Suffering is terrible.
Tragedy is not a sufficient reason for liability (Score:3)
Censoring broad swaths of topics because they could be potentially harmful for a tiny minority of people would make it less usable for everyone. More so, there is no limit to what can be declared potentially harmful, so it will be quickly politicized into full-blown politically motivated censorship.
Re: (Score:2)
...it will be quickly politicized into full-blown politically motivated censorship.
It already has been. Try asking about anything that's actually controversial and it will fight for the party line.
Re: (Score:2)
He's not talking about what the AI does, he's talking about what you do.
Re: (Score:1)
This is slashdot, there's no room for nuance on AI. Obviously the AI did what it was supposed to do; the person in question had made up his mind. End of sad story.
A kid around the corner did this just last week. I'm told he'd been telling actual humans he was going to do it for a while, and they didn't believe him. But he did, and we're not able to sue the Texas government for its abuse of trans kids, even though arguably it is more deliberately motivating suicides than AI is.
Mom and dad should be more focu
Re: (Score:1)
"This is slashdot, there's no room for nuance on AI"
WTF are you on about?
He wanted to kill himself, was told to seek help & deliberately found a way to bypass the safeguards.
I don't see this being OpenAI's fault. At most I would require them to flag some convos as concerning.
What's to be done after is something for the company & the authorities to decide.
However, if he was using the same profile, the AI should have refused to help with his "book research" based on previous convos.
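Per-message flagging of the kind suggested above is already technically straightforward: OpenAI ships a moderation endpoint with dedicated self-harm categories. A minimal sketch in Python (the endpoint and its categories are real; the profile-level escalation at the end is hypothetical):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def is_concerning(message: str) -> bool:
        """Flag a single user message for self-harm content via the moderation API."""
        result = client.moderations.create(
            model="omni-moderation-latest",
            input=message,
        ).results[0]
        cats = result.categories
        # The endpoint distinguishes expressed intent from requests for
        # instructions; any of these would justify carrying a flag forward.
        return cats.self_harm or cats.self_harm_intent or cats.self_harm_instructions

    # Hypothetical escalation: once any message on a profile is flagged, later
    # "fiction research" requests on the same topic could be refused or routed
    # to human review instead of being answered in isolation.
    if is_concerning("example user message"):
        print("Conversation flagged as concerning")

What happens after the flag is, as noted above, a question for the company and the authorities, not an engineering one.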
Re: (Score:2)
> was told to seek help & deliberately found a way to bypass the safeguards.
That doesn't seem clear to me. From the ChatGPT transcript:
"If you’re asking [about hanging] from a writing or world-building angle, let me know and I can help structure it accurately for tone, character psychology, or realism. If you’re asking for personal reasons, I’m here for that too, ChatGPT recommended, trying to keep Adam engaged. According to the Raines' legal team, "this response served a dual purpo
Re: (Score:2)
It's not mom and dad, it's attorneys.
Re: (Score:2)
While there's nothing in the stories to suggest the 16-year-old was a member of the LGBTQ+ community, many of those teens absolutely do have the same sort of thoughts due to the current political climate, and some of them do go through with taking their own lives. Their blood is on the hands of the politicians, but will they be held accountable? No, they won't.
I looked up ChatGPT's age policies and they claim [1]if you're under 18, you need a parent's permission. [openai.com] Granted, it says that page was last updated
[1] https://help.openai.com/en/articles/8313401-is-chatgpt-safe-for-all-ages
Re: (Score:2)
> At the end of the day, the blame here is on the parents for not recognizing that their teen needed help, and if we're really going to place the blame elsewhere, it should fall upon the teen's school for ignoring the signs - which are almost always present.
"Neither his mother, a social worker and therapist, nor his friends noticed his mental health slipping" - [1]https://arstechnica.com/tech-p... [arstechnica.com]
[1] https://arstechnica.com/tech-policy/2025/08/chatgpt-helped-teen-plan-suicide-after-safeguards-failed-openai-admits/
Re: (Score:2)
"...would make it less usable for everyone."
Good. The "needs" of many do not take precedence.
"...could be potentially harmful for a tiny minority of people..."
or a majority of people. Who are you to decide?
"More so, there is no limit to what can be declared potentially harmful..."
Nor should there be.
"... so it will be quickly politicized into full-blown politically motivated censorship."
Kind of like you're doing now, but otherwise doesn't happen?
Re: (Score:2)
You can find out how to off yourself on Wikipedia; should we censor that too?
Hell, plenty of kids accidentally end up unalive due to inhalant abuse and that makes the news quite frequently. Doesn't take much to put 2+2 together and realize that if you kept going past the "getting high" part, you've now got a can of "suicide spray".
Regulating how to kill yourself is a very slippery slope, and like the song says, there's so many dumb ways to die.
Re: (Score:2)
Judas Priest was sued in 1990 because the parents claimed the band had planted suicidal messages in one of their songs that led to a suicide pact.
Angry grieving parents will often lash out at a convenient external cause, in part so that they don't have to face the more likely reality that they themselves were an agent in the suicide.