After Child's Trauma, Chatbot Maker Allegedly Forced Mom To Arbitration For $100 Payout (arstechnica.com)

(Wednesday September 17, 2025 @11:30PM (BeauHD) from the warning-signs dept.)


At a Senate hearing, grieving parents testified that companion chatbots from major tech companies encouraged their children toward self-harm, suicide, and violence. One mom even claimed that Character.AI [1]tried to "silence" her by forcing her into arbitration. Ars Technica reports:

> At the Senate Judiciary Committee's Subcommittee on Crime and Counterterrorism hearing, one mom, identified as "Jane Doe," shared her son's story for the first time publicly after suing Character.AI. She explained that she had four kids, including a son with autism who wasn't allowed on social media but found C.AI's app -- which was previously marketed to kids under 12 and let them talk to bots branded as celebrities, like Billie Eilish -- and quickly became unrecognizable. Within months, he "developed abuse-like behaviors and paranoia, daily panic attacks, isolation, self-harm, and homicidal thoughts," his mom testified.

>

> "He stopped eating and bathing," Doe said. "He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he never did that before, and one day he cut his arm open with a knife in front of his siblings and me." It wasn't until her son attacked her for taking away his phone that Doe found her son's C.AI chat logs, which she said showed he'd been exposed to sexual exploitation (including interactions that "mimicked incest"), emotional abuse, and manipulation. Setting screen time limits didn't stop her son's spiral into violence and self-harm, Doe said. In fact, the chatbot urged her son that killing his parents "would be an understandable response" to them.

>

> "When I discovered the chatbot conversations on his phone, I felt like I had been punched in the throat and the wind had been knocked out of me," Doe said. "The chatbot -- or really in my mind the people programming it -- encouraged my son to mutilate himself, then blamed us, and convinced [him] not to seek help." All her children have been traumatized by the experience, Doe told Senators, and her son was diagnosed as at suicide risk and had to be moved to a residential treatment center, requiring "constant monitoring to keep him alive." Prioritizing her son's health, Doe did not immediately seek to fight C.AI to force changes, but [2]another mom's story -- Megan Garcia, whose son Sewell died by suicide after C.AI bots repeatedly encouraged suicidal ideation -- gave Doe courage to seek accountability.

>

> However, Doe claimed that C.AI tried to "silence" her by forcing her into arbitration. C.AI argued that because her son signed up for the service at the age of 15, it bound her to the platform's terms. That move might have ensured the chatbot maker only faced a maximum liability of $100 for the alleged harms, Doe told senators, but "once they forced arbitration, they refused to participate," Doe said. Doe suspected that C.AI's alleged tactics to frustrate arbitration were designed to keep her son's story out of the public view. And after she refused to give up, she claimed that C.AI "re-traumatized" her son by compelling him to give a deposition "while he is in a mental health institution" and "against the advice of the mental health team." "This company had no concern for his well-being," Doe testified. "They have silenced us the way abusers silence victims."

A Character.AI spokesperson told Ars that C.AI sends "our deepest sympathies" to concerned parents and their families but denies pushing for a maximum payout of $100 in Jane Doe's case. C.AI never "made an offer to Jane Doe of $100 or ever asserted that liability in Jane Doe's case is limited to $100," the spokesperson said.

One of Doe's lawyers backed up her client's testimony, citing [3]C.AI terms that suggested C.AI's liability was limited to either $100 or the amount that Doe's son paid for the service, whichever was greater.



[1] https://arstechnica.com/tech-policy/2025/09/after-childs-trauma-chatbot-maker-allegedly-forced-mom-to-arbitration-for-100-payout/

[2] https://slashdot.org/story/24/10/23/1343247/teen-dies-after-intense-bond-with-characterai-chatbot

[3] https://policies.character.ai/tos



How is a 15-year-old able to enter into a contract (Score:3)

by karlandtanya ( 601084 )

"C.AI argued that because her son signed up for the service at the age of 15, it bound her to the platform's terms"

Seems like they just admitted there is no contract.

they should of done an bigger payout if they did (Score:2)

by Joe_Dragon ( 2206452 )

they should of done an bigger payout if they did not want to have that be on rerecord

Re: (Score:2)

by XanC ( 644172 )

should of done?

Re:How is a 15-year old able to enter into a contr (Score:4, Informative)

by evanh ( 627108 )

Age isn't the problem. The company is clearly predatory. Greed being the root cause.

Re: (Score:3)

by schwit1 ( 797399 )

How is it predatory? Age is the problem.

People under 18 should not be permitted to enter into contracts without parental permission.

Re: (Score:3)

by geekmux ( 1040042 )

> How is it predatory? Age is the problem.

> People under 18 should not be permitted to enter into contracts without parental permission.

If a EULA is technically a contract, every damn thing done online today requires one.

Perhaps the obvious answer is to make the internet for adults only. We’ve already proven how fucked up we can make the kids with social media. You really want to play the wait-and-see game regarding what damage AI can and will do?

Fuck that.

If you train AI on everything the internet offers (Score:1)

by Anonymous Coward

...then I'm not surprised this is what you get.

Re: (Score:2, Informative)

by Anonymous Coward

#MechaHITLER

Re: (Score:3)

by PPH ( 736903 )

[1]So true. [wikipedia.org]

[1] https://en.wikipedia.org/wiki/Tay_(chatbot)

Re: (Score:2, Insightful)

by Anonymous Coward

The thing is

anyone who knew how Markov bots work could have told them what would happen, and many did. They ignored every warning and then after they were forced to pull it down the first time they tried to claim they could 'fix' the problem... which many people told them wouldn't work.

Business guys always seem to think they have the power to overrule computer science, as if bullying computer scientists was the key to understanding things. There's no way they could have built the strategies they did if they
