Judge Rejects Claim AI Chatbots Protected By First Amendment in Teen Suicide Lawsuit (legalnewsline.com)
- Reference: 0177884797
- News link: https://yro.slashdot.org/story/25/05/31/1940219/judge-rejects-claim-ai-chatbots-protected-by-first-amendment-in-teen-suicide-lawsuit
- Source link: https://www.legalnewsline.com/florida-record/judge-rejects-claim-ai-chatbots-protected-by-first-amendment/article_3e867593-e791-4f0c-92e1-dd0e0741cb51.html
The suit is against Character.AI (a company reportedly [2]valued at $1 billion with 20 million users).
> Judge Anne C. Conway of the Middle District of Florida denied several motions by defendants Character Technologies and founders Daniel De Freitas and Noam Shazeer to dismiss the lawsuit brought by the mother of 14-year-old Sewell Setzer III. Setzer killed himself with a gun in February of last year after interacting for months with Character.AI chatbots imitating fictitious characters from the Game of Thrones franchise, according to the lawsuit filed by Sewell's mother, Megan Garcia.
>
> "... Defendants fail to articulate why words strung together by (Large Language Models, or LLMs, trained in engaging in open dialog with online users) are speech," Conway said in her May 21 opinion. "... The court is not prepared to hold that Character.AI's output is speech."
Character.AI's spokesperson told Legal Newsline they've now launched safety features (including an under-18 LLM, filtered Characters, time-spent notifications, "updated prominent disclaimers," and a "parental insights" feature). "The company also said it has put in place protections to detect and prevent dialog about self-harm. That may include a pop-up message directing users to the National Suicide and Crisis Lifeline, according to Character.AI."
Thanks to long-time Slashdot reader [3]schwit1 for sharing the news.
[1] https://www.legalnewsline.com/florida-record/judge-rejects-claim-ai-chatbots-protected-by-first-amendment/article_3e867593-e791-4f0c-92e1-dd0e0741cb51.html
[2] https://slashdot.org/story/24/10/23/1343247/teen-dies-after-intense-bond-with-characterai-chatbot
[3] https://www.slashdot.org/~schwit1
Blame Game (Score:2)
Trying to blame AI for this kid's suicide is no better than trying to blame Movies, Videogames, etc. No one forced this kid to pull the trigger.
Re: (Score:3)
AI chatbots don't have a right to exist. They are not free speech and can be regulated as much as we as a society choose to regulate them.
Re: (Score:1)
AI chatbots don't have a right to exist. They are not free speech and can be regulated as much as we as a society choose to regulate them.
Yes, that's true, but trying to blame AI for this kid's suicide is no better than trying to blame Movies, Videogames, etc. No one forced this kid to pull the trigger.
Re: (Score:2)
That's just begging the argument. The output of LLMs is obviously speech -- and in the US, speech that isn't protected by the First Amendment is defined by quite narrow exceptions. Which one do you think it fits into? How do you distinguish it from, say, automated decisions about content moderation (which lots of people here and elsewhere argue is protected as free speech by the First Amendment), or search results, or a wide variety of other output from software that is a lot less like traditional English
Re: (Score:2)
" The output of LLMs is obviously speech..."
It is quite obviously NOT speech, for that you would have to claim that the LLM has personhood (and some other things). That LLM is nothing other than a computer application, its output could simply be a transfer of all the money out of your bank account. Your claim amounts to mindless throwing about of terms, it's garbage.
"...speech that isn't protected by the First Amendment is defined by quite narrow exceptions."
But those exceptions exist, and any speech carr
Re: (Score:2)
> It is quite obviously NOT speech, for that you would have to claim that the LLM has personhood (and some other things).
The LLM isn't the entity that has speech rights here, it's whoever runs it -- just like search engines, online platforms with moderation, and so on, all the other examples that you pretend are not speech but that precedent says do represent speech. See [1]https://globalfreedomofexpress... [columbia.edu] for discussion, including reference to other relevant cases:
> Case Names: Langdon v. Google, Inc., 474 F. Supp. 2d 622 (D. Del. 2007); Search King, Inc. v. Google Tech., Inc., No. CIV-02-1457-M, 2003 WL 21464568 (W.D. Okla. May 27, 2003).
> Notes: Both concluded that search engine results are speech protected by the first amendment.
[1] https://globalfreedomofexpression.columbia.edu/cases/zhang-v-baidu-com-inc/
Re: (Score:2)
> The output of LLMs is obviously speech -- and in the US, speech that isn't protected by the First Amendment is defined by quite narrow exceptions.
You're mistaken. It's not speech. It's obviously not speech. And it should not be covered by the First Amendment, although SCOTUS would have the final say.
> How do you distinguish it from, say, automated decisions about content moderation
Also not free speech.
> or search results
Also not free speech.
> or a wide variety of other output from software that is a lot less like traditional English prose? If a program outputs very speech-like prose, why is it less speech than non-speech-like outputs from other software?
While that's not begging the question, it is a strawman argument. I never stated or implied that English prose is the only kind of protected speech, so I don't have to defend that position.
Re: (Score:2)
See my response to the guy above. You're absolutely wrong on the law here.
Courts have held that non-prose behavior of software represents protected speech by the company creating or running the software. A fortiori, prose output of software represents protected speech as well.
Re: (Score:2)
As a society, we are going to have to consider what free speech really is when computers are so capable of controlling public discourse. Laws exist to improve the lives of the people, we are quickly learning that our laws are inadequate to address the threats of modern computing and communications, whether it's common carrier rules, free speech or intellectual property laws, just to name some in the news. What's next, the 2nd amendment ensures the right of AI's to bear arms? If it serves Republican inter
Re: (Score:2)
> As a society, we are going to have to consider what free speech really is when computers are so capable of controlling public discourse.
Even a television broadcast isn't protected completely as First Amendment free speech. Can't show hardcore porn on a public broadcast, even if I do it as art, parody, or to lambast a political figure. You could show Trump licking Elon's feet on SNL, but you couldn't show him sucking his dick. Because it turns out, the airwaves are (currently) regulated and have some pretty stiff fines for breaking those regulations.
> Laws exist to improve the lives of the people, we are quickly learning that our laws are inadequate to address the threats of modern computing and communications, whether it's common carrier rules, free speech or intellectual property laws, just to name some in the news.
The cynic in me, or perhaps my conspiratorial mind, believes that it is likely that our lack of
Re: (Score:3)
It's a lot better. Movies and video games are not devices designed specifically to engage people in conversations that may lead to that result.
You can pretend that you are engaging in critical thinking, but that doesn't mean you are.
Re: (Score:2)
To me the real problem is you're dealing with people who are already mentally ill and you have something pretending to be a human being. We already have several examples of people who know they are talking to a chatbot but have convinced themselves it's a real human being, or in some cases I think it's God giving them revelations.
I mentioned this on my other comment but I think the real problem here is that unchecked corporate power means that the only redress we have when shit like this happens is laws
Re: (Score:2)
Well said. Also, these AI's are trained on examples of how to manipulate people and can be prompted to do so. People, for the most part, do not stand a chance, vulnerable people especially so. We already see that on a large scale with the partisan gaming of the media.
Unchecked corporate power is the number one problem. It's not hatred, bigotry and nationalism, those are just the tools. Thanks, Reagan, that city on the hill is really shining now.
Re: (Score:2)
Not all of us want to live in the equivalent of a padded cell just because some people have mental issues. Treat the problem, not the symptom - educate kids properly.
Re: (Score:2)
No, some are happy so long as the right people are killed. And with what are we going to educate kids, and what defines "properly"?
But it's nice to know that you think restrictions on computers being used to commit crimes constitutes a "padded cell". I'd be happy if yours is just concrete and steel.
Re: Blame Game (Score:1)
With people like you everywhere, is it any wonder Sewell killed himself?
Re: (Score:2)
Oh, you may be too young to know the debate, but people blamed video games, movies, D&D and rock & roll for such things before. The new thing the youth does must be the devil.
That's not a fair comparison (Score:2)
We have a lot of research that shows that movies and video games do not lead to action.
We do not have the same research for chatbots.
And remember if somebody is contemplating suicide the odds are they're suffering from clinical depression and are highly vulnerable.
That might change as our entire civilization is collapsing and I could see plenty of people checking out as everything goes to shit, but right now things are just barely holding together economically and socially so the majority of s
Re: true (Score:1)
Why can't you liberalize suicide markets so more self-recognized deadweight losses can klerck themselves legally? Won't the savings be worth it?
Re: (Score:2)
> Trying to blame AI for this kid's suicide is no better than trying to blame Movies, Videogames, etc. No one forced this kid to pull the trigger.
If you assume the default behavior of humans cannot be manipulated by technology, the smartphone industry alone has a few trillion reasons to describe why you're fucking dead wrong.
Part of me used to agree with you. The other part of me is fucking annoyed by tech junkies every damn day.
and a gun is the same as a thrown rock? (Score:3)
> Trying to blame AI for this kid's suicide is no better than trying to blame Movies, Videogames, etc. No one forced this kid to pull the trigger.
A gun is just a means of hurling something. We obviously regulate guns differently than we regulate bows and arrows, crossbows, slingshots, and rocks thrown. Similarly, we regulate commercial semi trucks differently than we regulate mopeds and eScooters. A convincing AI is definitely much different than GTA or a movie. Just as a gun is regulated differently than less effective means of hurling projectiles, an AI should be regulated differently than passive means of engaging an audience.
Did the AI ca
Re: and a gun is the same as a thrown rock? (Score:1)
Remember "Suicide Solution" by Ozzy Osbourne?
"Plaintiffs Thomas and Myra Waller in the above captioned action allege that the defendants proximately caused the wrongful death of their son Michael Jeffery Waller by inciting him to commit suicide through the music, lyrics, and subliminal messages contained in the song "Suicide Solution" on the album "Blizzard of Oz.""
Re: (Score:2)
Throwing a rock and throwing a gun at someone is the same crime. Firing a pistol at someone is different.
The Character.AI has a TOS that every customer receives.
This is very pertinent since the TOS covers the fact that it's not a real person you're talking to.
Sadly, anyone who's truly suicidal will figure out a way, chatbot or not.
This Is Not Acceptable (Score:1)
I don't care how manipulative anyone or anything is. There is no one responsible for a suicide death other than the victim. It doesn't matter how frail, susceptible, or mentally ill the victim was. Only they are to blame for their suicide. Only they are responsible physically, emotionally, mentally... It is sad, but it is no one else's fault. Nor, is it any thing's fault.
Don't let sympathy and emotion overwhelm fact and reality. That's what the victim did and it didn't end well for them.
Re: (Score:1)
> I don't care how manipulative anyone or anything is. There is no one responsible for a suicide death other than the victim.
That may or may not be true in this case, but it's not universally true. Here is an extreme-outlier example:
Imagine you are my doctor and you know that I've talked about going to Europe for physician-assisted suicide if I get stage 4 pancreatic cancer. Imagine you turn evil and are able to convince me that I have stage 4 pancreatic cancer and you are tricking me into taking drugs that mimic the symptoms. Imagine you have evil-doctor friends who will corroborate this and an evil-doctor friend in Europe wh
If a human being had been pretending, then what? (Score:1)
If human beings pretending to be fictitious characters from the Game of Thrones franchise had said the exact same words, would they be able to claim "free speech" and have any post-suicide lawsuit by a family member tossed out of a US Federal Court?
I'm not expecting an actual legal answer (unless you are a subject-matter expert, in which case, go ahead) but I am interested in hearing slashdotter's thoughts about whether the words used by the AI chatbots should be considered "protected speech" in cases like
Good for the judge (Score:1)
LLMs are complex topics and I'm glad the judge wasn't fazed by the "magic" of AI. Stringing together words based on curated previously parsed strings of words is NOT the same as having a thought, and using speech to communicate it.
Re: (Score:3)
Speech is generally recognized as something that's produced by humans. If I wrote a very simple bot program that followed you around the Internet and spammed you, you'd hardly be amenable to arguments that my bot program enjoys free speech protections under the first amendment to engage in such behavior.
Regardless of your ultimate thoughts on this lawsuit or the parties involved, an LLM is not anywhere near human enough to be granted anything resembling human rights or constitutional protections.
Re: (Score:2)
All kinds of generated text on websites are speech. Google may show you illegal search results (as long as they are not aware of them being illegal) because of free speech. Otherwise they couldn't exist or would have only a very small curated index.
Re: (Score:3)
More importantly, speech is not unlimited; there is accountability if you cross the line. AI's are proven not only incapable of avoiding the line, but also willing accomplices to crossing it when directed to. Allowing a computer program, a Turing machine for fuck's sake, to commit crimes on behalf of its operators is the core question here. We are fortunate that the judge sees this question for what it is.
Humans have thoughts and use speech to communicate them, and when those thoughts and speech are cri