OpenAI Says Dead Teen Violated TOS When He Used ChatGPT To Plan Suicide
- Reference: 0180218385
- News link: https://yro.slashdot.org/story/25/11/26/2012215/openai-says-dead-teen-violated-tos-when-he-used-chatgpt-to-plan-suicide
> Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen's suicide and instead arguing the teen [1]violated terms that prohibit discussing suicide or self-harm with the chatbot. The earliest look at OpenAI's strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen's "[2]suicide coach." OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world's most engaging chatbot, parents argued.
>
> But in [3]a blog , OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring "the full picture" revealed by the teen's chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he'd begun experiencing suicidal ideation at age 11, long before he used the chatbot. "A full reading of his chat history shows that his death, while devastating, was not caused by ChatGPT," OpenAI's filing argued. [...] All the logs that OpenAI referenced in its filing are sealed, making it impossible to verify the broader context the AI firm claims the logs provide. In its blog, OpenAI said it was limiting the amount of "sensitive evidence" made available to the public, due to its intention to handle mental health-related cases with "care, transparency, and respect."
The Raine family's lead lawyer called OpenAI's response "disturbing."
"They abjectly ignore all of the damning facts we have put forward: how GPT-4o was rushed to market without full testing. That OpenAI twice changed its Model Spec to require ChatGPT to engage in self-harm discussions. That ChatGPT counseled Adam away from telling his parents about his suicidal ideation and actively helped him plan a 'beautiful suicide.' And OpenAI and Sam Altman have no explanation for the last hours of Adam's life, when ChatGPT gave him a pep talk and then offered to write a suicide note."
OpenAI is leaning on its usage policies to defend against this case, emphasizing that "ChatGPT users acknowledge their use of ChatGPT is 'at your sole risk'" and that Raine should never have been allowed to use the chatbot without parental consent.
[1] https://arstechnica.com/tech-policy/2025/11/openai-says-dead-teen-violated-tos-when-he-used-chatgpt-to-plan-suicide/
[2] https://www.theguardian.com/technology/2025/nov/07/chatgpt-lawsuit-suicide-coach
[3] https://openai.com/index/mental-health-litigation-approach/
Oh the humanity! (Score:2)
OpenAI's response just drips with compassion, no?
Re: (Score:2)
In the legal system you admit nothing and offer no more than what is asked.
Also the child is under age, so terms of service do not legally apply. Right now it is a grey zone, but if cases like this continue anyone under age will require the internet to be unlocked by their parent. Speaking of parents, where were they? They certainly seem to have lawyers. If only they put as much effort into parenting as blaming someone else.
Re: (Score:3)
Also the child is under age, so terms of service do not legally apply.
Because they're supposed to be under parental supervision.
Re: (Score:2)
> if cases like this continue anyone under age will require the internet to be unlocked by their parent.
For the most part it already is.
The major loopholes are open WiFi hotspots and prepaid phone cards. If we required ID for those things, you're pretty much keeping kids off the internet without their parents being involved in some way.
The ironic thing is though, most kids are not online via a clandestine burner phone they bought with cash and connected through the WiFi at Starbucks - their parents gave them the phone/tablet and pay for the plan and/or home broadband connection. The parents are just under s
Re: (Score:2)
humanity? The response is neither from humans nor LLMs. It is from their lawyers.
Taking the argument to the extreme... (Score:2)
Based on that logic, I'd love to see OpenAI try to explain to Uncle Sam why they're not responsible if an assassin got instructions from ChatGPT for how to successfully kill the president.
Re: Taking the argument to the extreme... (Score:1)
How can AI hallucinate practical instructions, if it can't think?
All this happens openly on THEIR servers (Score:5, Insightful)
Every conversation with ChatGPT happens on OpenAI servers. They have complete control. If Walmart sells a gun to a five year old, they cannot say, "Well, the five year old broke the law. Not our fault."
Re: (Score:2)
Have you ever purchased a firearm? In the scenario you've described, there's case law specifically for you!
[1]https://www.ce9.uscourts.gov/j... [uscourts.gov]
Aside from any background check requirements, state laws, or the fact that you are obviously selling a lethal weapon to someone who isn't even in kindergarten (reckless conduct, public endangerment, etc), you alone have demonstrably broken several laws, if not a dozen or more.
I'm not sure the five year old could even be charged with anything other than unlawful possession unless
[1] https://www.ce9.uscourts.gov/jury-instructions/node/517
Re: (Score:2)
A five year old recently punched my daughter in the face. She did not file charges.
A SIG is a brand of gun. I did just buy one, actually, but not a P320. Those reportedly have issues.
Typical (Score:4, Insightful)
Shove Ts&Cs down users' throats and blame the victim while trying to deflect the responsibility.
Re: Typical (Score:1)
Why ignore that the kid would likely have committed suicide anyway?
Re: (Score:2)
Attempt maybe. Success is rarer than attempting, but this kid got an assist.
so, what did it tell the kid? (Score:1)
How to remove the lid from a container?
How to climb a tree?
How to dive?
How to cross a street on foot?
How to drive a car?
Sounds like the kid's parent(s) did not perform up to spec.
No understanding of how teens operate (Score:1)
'Raine should never have been allowed to use the chatbot without parental consent.'
Yeah, right...
That makes it all right then (Score:2)
Or not. Kind of like a drug-dealer that prints warnings on their product...
Re: That makes it all right then (Score:1)
Have you seen legal pot packaging?
Re: (Score:2)
It's hard to die from using pot, but I do think that drug dealers should be liable for all the harms their products cause.
\o/ (Score:2)
> OpenAI Says Dead Teen Violated TOS When He Used ChatGPT To Plan Suicide
We know this was prepared without the use of LLMs because ChatGPT has more humanity.
Re: (Score:3)
Or probably would have hallucinated an elaborate conspiracy theory to back up the claim.
OpenAI (Score:5, Interesting)
might want to check with Michelle Carter, who I think is still in prison. And she too was underage when she encouraged her boyfriend to commit suicide. [1]https://www.ktvu.com/news/woma... [ktvu.com] It really is amazing to me that any corp would stand for their product acting in this manner. I mean, on TV, any time they even mention suicide obliquely they pull up the suicide prevention hotline. The least OpenAI could do is, when a chat starts going that way, suggest the person contact the hotline. I mean, how hard could that be? Of course, they'd lose some revenue from additional chat time...
[1] https://www.ktvu.com/news/woman-who-encouraged-suicidal-boyfriend-to-take-his-own-life-appeals-to-supreme-court
Re: (Score:2)
I presume they'd make even MORE revenue if the teen DIDN'T commit suicide and kept talking with the chatbot.
Re: (Score:2)
> The least openAI could do as when a chat starts going that way suggest the person contact the hotline.
According to The Guardian, they do that much already. However, the plaintiffs say that wasn't enough in the face of the rest of the conversation.
Re: (Score:1)
Wrong, it's 100% on the teen's parents. Blaming internet or some chatbot is being the typical modern snowflake trying to shift blame where it doesn't belong.
Re: (Score:2)
> Wrong, it's 100% on the teen's parents. Blaming internet or some chatbot is being the typical modern snowflake trying to shift blame where it doesn't belong.
That is complete BS. It's modern helicopter-parenting mythology. Parents are responsible for supervising their kids, but that does NOT mean they are responsible for everything the kid does or every misfortune that befalls them. Your child got raped? It's your fault for not supervising them properly? Don't blame the rapist?
"Blaming internet or some chatbot" makes perfect sense when the chatbot was programmed to manipulate people and it manipulated a 16 year old to commit suicide.
Using ChatGPT = Implicit Agreement (Score:2)
$Me: I'm using ChatGPT now without agreeing to any TOS?
$ChatGPT: You have agreed to Terms of Service — just not through a pop-up.
$ChatGPT: Using ChatGPT = Implicit Agreement
$ChatGPT: For most online services (including ChatGPT), you agree to the Terms of Service simply by creating an account or using the service. You don’t always get a “click to agree” popup — many services use what’s called implicit or browsewrap agreement.
Obvious (Score:2)
It ought to be obvious ChatGPT was not the only "cause" of the suicide. The question is whether it was the proximate cause. It appears doubtful he would have committed suicide when he did without ChatGPT. Their basic claim is that they aren't responsible for any harm their program causes.
I am puzzled how a 16 year old can sign off on a company's terms of service.
Blaming the victim (Score:1)
Even when it's true, trying to deflect blame by publicly blaming the victim is usually a very bad idea. Their PR department was either asleep, not consulted, or vetoed.
Re: (Score:2)
Unfortunately, it's probably a result of the extreme litigiousness of the US. Even obliquely admitting that maybe possibly hypothetically there's a sliver of a chance OpenAI might have a nanoparticle of blame here is considered way too risky.
Re: Blaming the victim (Score:1)
How much was his family to blame?
Re: Blaming the victim (Score:2)
Possibly the PR dept is just an AI. It would be totally on brand to think a human is unnecessary when you can just plug an LLM into some social networking services to learn the "current of public opinion".
Re:Blaming the victim (Score:5, Informative)
> Even when it's true, trying to deflect blame by publicly blaming the victim is usually a very bad idea.
Yep. Disney tried a similar tactic, citing the Disney+ terms of service when one of their guests suffered a fatal allergic reaction at a restaurant at one of their parks. Disney wanted to use the ToS to send the case to arbitration, [1]but relented. [npr.org]
[1] https://www.npr.org/2024/08/14/nx-s1-5074830/disney-wrongful-death-lawsuit-disney
Re: (Score:2)
> Yep. Disney tried a similar tactic, citing the Disney+ terms of service when one of their guests suffered a fatal allergic reaction at a restaurant at one of their parks. Disney wanted to use the ToS to send the case to arbitration, [1]but relented. [npr.org]
Disney uses the same TOS for many of their online portals, which was why the news ran with "it's the Disney+ terms of service".
Plus, it really is the victim's fault if you go into a restaurant with a potentially life-threatening food allergy and don't bother to let the wait staff know you have an allergy. Assuming the allergy information online is 100% accurate is like playing Russian roulette. Kinda like giving your kid unrestricted internet access and hoping everything just sort of works out.
[1] https://www.npr.org/2024/08/14/nx-s1-5074830/disney-wrongful-death-lawsuit-disney
Re: (Score:3)
> Disney uses the same TOS for many of their online portals, which was why the news ran with "it's the Disney+ terms of service".
The victim had signed the terms of service years ago for the Disney+ streaming service, and Disney declared that since he agreed to those terms for watching videos, he was bound by them for eating at a restaurant.
> Plus, it really is the victim's fault if you go into a restaurant with a potentially life-threatening food allergy and don't bother to let the wait staff know you have an allergy.
Which was not the case here. They chose that restaurant because of its promises about accommodating patrons with food allergies, according to the lawsuit, and " The complaint details the family's repeated conversations with their waiter about Tangsuan's allergies. The family allegedly raised the iss
Re: (Score:2)
Thanks for the extra info, especially about the family's great care and communication. But one correction: the victim was a woman.
Re: (Score:2)
"Sir your dog shat on my lawn"
"No sir, you didn't put a fence around your lawn, therefore my dog did not know it cannot shit on it"
This is the entire OpenAI ToS. "You give us permission to shit on your lawn"
Here's the thing: OpenAI is training, coding, and operating this AI, therefore it should be legally responsible for bad information, in the same way a burger place is responsible for a bad employee spitting on the burgers.
Re: (Score:3)
It won't play well with the jury, either, whose opinion actually matters.
Re: (Score:2)
> Even when it's true, trying to deflect blame by publicly blaming the victim is usually a very bad idea. Their PR department was either asleep, not consulted, or vetoed.
But it feels proudly like an American way of doing business :'(
Re: (Score:2)
I doubt they care about the PR. If AI companies can be held liable for damages caused by someone relying on the information they are providing their whole business model of practicing on the public becomes untenable. AI's unreliability stops being just a marketing problem, it becomes a financial liability issue. And I would think that potential liability needs to be disclosed to investors.