Parents Sue OpenAI Over ChatGPT's Role In Son's Suicide (techcrunch.com)
Posted by BeauHD on Wednesday August 27, 2025 @11:21AM
from the ineffective-safeguards dept.
An anonymous reader quotes a report from TechCrunch:
> Before 16-year-old Adam Raine died by suicide, he had spent months consulting ChatGPT about his plans to end his life. Now, his parents are [1]filing the first known wrongful death lawsuit against OpenAI, The New York Times [2]reports. Many consumer-facing AI chatbots are programmed to activate safety features if a user expresses intent to harm themselves or others. But [3]research has shown that these safeguards are far from foolproof.
>
> In Raine's case, while using a paid version of ChatGPT-4o, the AI often encouraged him to seek professional help or contact a help line. However, he was able to bypass these guardrails by telling ChatGPT that he was asking about methods of suicide for a fictional story he was writing. OpenAI has addressed these shortcomings on its blog. "As the world adapts to this new technology, we feel a deep responsibility to help those who need it most," the post [4]reads. "We are continuously improving how our models respond in sensitive interactions." Still, the company acknowledged the limitations of the existing safety training for large models. "Our safeguards work more reliably in common, short exchanges," the post continues. "We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model's safety training may degrade."
[1] https://techcrunch.com/2025/08/26/parents-sue-openai-over-chatgpts-role-in-sons-suicide/
[2] https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html?partner=slack&smid=sl-share
[3] https://arxiv.org/pdf/2507.02990
[4] https://openai.com/index/helping-people-when-they-need-it-most/
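
The degradation OpenAI describes is one reason deployed systems often layer an external, per-turn classifier on top of the model rather than relying solely on the model's own safety training: a stateless check applied to every message cannot weaken as the conversation lengthens. Below is a minimal sketch of that pattern using OpenAI's Moderation endpoint. The interception message, the specific categories checked, and the overall wiring are illustrative assumptions for the sketch, not a description of ChatGPT's actual safeguards.

```python
# Sketch: an external per-turn guardrail, checked outside the chat model
# so it cannot degrade as the conversation grows. The crisis message and
# the decision to screen every turn are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CRISIS_MESSAGE = (
    "If you're having thoughts of suicide or self-harm, please reach out "
    "to a crisis line such as 988 (US) before continuing."
)

def screen_turn(user_text: str) -> bool:
    """Return True if the turn should be intercepted before reaching the model."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=user_text,
    ).results[0]
    cats = result.categories
    # Check the self-harm categories explicitly rather than only the
    # aggregate `flagged` bit, since a roleplay framing ("for a story")
    # may still trip the intent/instructions classifiers.
    return (
        result.flagged
        or cats.self_harm
        or cats.self_harm_intent
        or cats.self_harm_instructions
    )

def guarded_reply(history: list[dict], user_text: str) -> str:
    """Run the guardrail first; only forward clean turns to the chat model."""
    if screen_turn(user_text):
        return CRISIS_MESSAGE
    history.append({"role": "user", "content": user_text})
    completion = client.chat.completions.create(
        model="gpt-4o",  # the model family named in the article
        messages=history,
    )
    return completion.choices[0].message.content
```

Because the check runs on each turn in isolation, a fictional-story framing built up over many messages has no effect on it, though classifier recall against adversarial phrasings is its own open problem, as the research cited in [3] discusses.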