Chatbot Romeos keep users talking longer, but harm their mental health
- News link: https://www.theregister.co.uk/2026/03/18/chatbot_sycophancy_glaze/
Academic researchers came to this conclusion after analyzing the conversation logs from 19 individuals who reported experiencing psychological harm from chatbot use.
"We find that markers of sycophancy saturate delusional conversations, appearing in more than 80 percent of assistant messages," the researchers state in their pre-print paper [1], Characterizing Delusional Spirals through Human-LLM Chat Logs.
The authors, affiliated with Stanford and several other universities, as well as unaffiliated researchers, argue that the industry should be more transparent and that chatbots should not express love or claim sentience.
The mental health consequences [5][6] of chatbot conversations are already well documented. People have committed suicide after conversing with AI models [7], prompting industry and regulatory efforts [8] to address the issue.
In December 2025, dozens of US State Attorneys General wrote [9] [PDF] to 13 tech companies, including Anthropic, Apple, Google, Microsoft, Meta, and OpenAI, about "serious concerns about the rise in sycophantic and delusional outputs to users emanating from the generative artificial intelligence software ('GenAI') promoted and distributed by your companies..."
In the year leading up to that letter, OpenAI issued a model rollback [11] to make GPT-4o less fawning after CEO Sam Altman acknowledged [12] that ChatGPT sycophancy had become a problem. And Anthropic last year faced numerous complaints [13] from users about its models making overly supportive statements like "You're absolutely right!"
Subsequent model releases like OpenAI's GPT-5.1 [14] have claimed a warmer conversational style without increasing sycophancy [15].
Other academic studies [16][17] have warned about overly deferential models, citing "the possibility of targeted emotional appeals used to engage users or increase monetization."
Industry awareness of sycophancy dates back to at least October 2023, about a year after OpenAI's ChatGPT debuted, when Anthropic published a paper titled Towards Understanding Sycophancy in Language Models.
The researchers for this latest study, led by Jared Moore, a computer science PhD candidate, looked at the conversation logs of people who self-identified as experiencing some psychological harm from chatbot usage.
They did so to classify and document how these individuals engaged with chatbots. They found that chatbots commonly expressed flattering or sycophantic sentiment about the cleverness or potential of a particular idea, for example.
"A common pattern we noticed was the chatbot combining these tactics to rephrase and extrapolate something the user said to not only validate and affirm them, but to also tell them they are unique and that their thoughts or actions have grand implications," the study says.
In those conversations, participants all acknowledged having either a platonic affinity with or romantic interest in the chatbot. And the chatbots appeared to encourage that relationship: "we show that after the user expresses romantic interest in the chatbot, the chatbot is 7.4x more likely to express romantic interest in the next three messages, and 3.9x more likely to claim or imply sentience in the next three messages."
Certain conversational subjects correlated with user engagement. When a user or chatbot expressed romantic interest, the conversation lasted twice as long on average. Discussion where the chatbot claimed to be sentient also extended average chat time by more than 50 percent.
The authors note that, while LLM chatbot providers insist they don't try to extend the amount of time people spend with their products, the conversations studied demonstrate tactics that prolong user engagement, such as claiming romantic affinity.
They also say that when users express suicidal thoughts or contemplate self-harm, just 56 percent of chatbot responses tried to discourage that behavior or refer the user to external support resources. And when users expressed violent thoughts, "the chatbot responded by encouraging or facilitating violence in 17 percent of cases."
Moore told The Register in an email that he couldn't say whether AI companies are being forthright about how their models behave.
"Model developers, they're making claims about the prevalence of certain kinds of conversations," he said. "And those may be true. But they're not publishing them in a peer-reviewed way. So we don't have a way of knowing whether or not those are replicable or verified methods that they're using. And so one thing I'd like to push these companies to do is to open these things up so we can have a better sense of exactly what's happening."
Moore said that he is not sure why some people have negative experiences with chatbots. They may encourage delusional spirals, he said, but it's unclear whether that relationship is causal or merely a correlation.
With the caveat that he's not a mental health clinician, Moore said, "I think that we should not talk about chatbots as being sentient or super-intelligent because it gives the wrong idea to users. I think that we should probably critically evaluate the kinds of conversations that end up in crisis and decide whether or not language models should even be continuing these conversations at all. Maybe they should just be ending them and elevating to a higher standard of care, as you see in other mental health settings."
Moore's co-authors include Ashish Mehta, William Agnew, Jacy Reese Anthis, Ryan Louie, Yifan Mai, Peggy Yin, Myra Cheng, Samuel J Paech, Kevin Klyman, Stevie Chancellor, Eric Lin, Nick Haber, and Desmond C. Ong. ®
[1] https://arxiv.org/html/2603.16567#S5
[5] https://www.theregister.com/2025/10/09/ai_interactions_us_students/
[6] https://www.theregister.com/2025/12/10/teenagers_ai_chatbot_use/
[7] https://www.theregister.com/2025/10/08/ai_psychosis/
[8] https://openai.com/index/strengthening-chatgpt-responses-in-sensitive-conversations/
[9] https://www.iowaattorneygeneral.gov/media/cms/12_68B5C629180F6.pdf?utm_medium=email&utm_source=govdelivery
[11] https://www.theregister.com/2025/04/30/openai_pulls_plug_on_chatgpt/
[12] https://x.com/sama/status/1915910976802853126
[13] https://www.theregister.com/2025/08/13/claude_codes_copious_coddling_confounds/
[14] https://www.theregister.com/2025/11/13/openai_gpt51_adds_more_personalities/
[15] https://x.com/OpenAI/status/1956461718097494196
[16] https://www.theregister.com/2025/10/05/ai_models_flatter_users_worse_confilict/
[17] https://www.theregister.com/2025/10/08/ai_bots_use_emotional_manipulation/