As AI becomes more popular, concerns grow over its effect on mental health
- Reference: 1753464492
- News link: https://www.theregister.co.uk/2025/07/25/is_ai_contributing_to_mental/
Those concerns hit the mainstream last week when an account owned by Geoff Lewis, managing partner of venture capital firm Bedrock and an early investor in OpenAI, [1]posted a disturbing video on X. The footage, ostensibly of Lewis himself, describes a shadowy non-government system, which the speaker says was originally developed to target him but then expanded to target 7,000 others.
"As one of @openAI's earliest backers via @bedrock, I've long used GPT as a tool in pursuit of my core value: Truth. Over years, I mapped the Non-Governmental System. Over months, GPT independently recognized and sealed the pattern," he said in one cryptic [2]post. "It now lives at the root of the model."
The post prompted concerns online that AI had contributed to Lewis's beliefs. The staff at The Register are not mental health professionals and couldn't comment on whether anyone's posts indicate anything other than a belief in conspiracy theories, but [4]others did.
Some onlookers are convinced there's a budding problem. "I have cataloged over 30 cases of psychosis after usage of AI," Etienne Brisson told the Reg. After a loved one experienced a psychotic episode following AI use, Brisson started helping to run a private support group called The Spiral, which helps people deal with AI psychosis. He has also set up [7]The Human Line Project, which advocates for protecting emotional well-being and documents stories of AI psychosis.
Big problems from small conversations
These obsessive relationships sometimes begin with mundane queries. In one case [8]documented by Futurism, a man began talking to ChatGPT by asking it for help with a permaculture and construction project. That reportedly morphed quickly into a wide-ranging philosophical discussion, leading him to develop a Messiah complex, claim to have "broken" math and physics, and set out to save the world. He lost his job, was caught attempting suicide, and was committed to psychiatric care, the report says.
Another man reportedly began using AI for coding, but the conversation soon turned to philosophical questions and therapy. He used it to get to "the truth," his wife recalled in a [9]Rolling Stone interview, adding that he was also using it to compose texts to her and analyze their relationship. They separated, after which he developed conspiracy theories about soap on food and claimed to have discovered repressed memories of childhood abuse, according to the report.
Rolling Stone also talked to a teacher who [10]posted on Reddit about her partner developing AI psychosis. He reportedly claimed that ChatGPT helped him create "what he believes is the world's first truly recursive AI that gives him the answers to the universe". The man, who was convinced he was rapidly evolving into "a superior being," threatened to leave her if she didn't begin using AI too. They had been together for seven years and owned a house.
In some cases the consequences of AI obsession can be even worse.
Sewell Seltzer III was just 14 when he died by suicide. For months, he had been using Character.AI, a service that allows users to talk with AI bots designed as various characters. The boy apparently became obsessed with an AI that purported to be Game of Thrones character Daenerys Targaryen, with whom he reportedly developed a romantic relationship. The [12]lawsuit filed by his mother describes the "anthropomorphic, hypersexualized, and frighteningly realistic experiences" that he and others had when talking to such AI bots.
Correlation or causation?
As these cases continue to develop, they raise the same kinds of questions that we could ask about conspiracy theorists, who also often seem to turn to the dark side quickly and unexpectedly. Do they become ill purely because of their interactions with an AI, or were those predilections already there, just waiting for some external trigger?
"Causation is not proven for these cases since it is so novel but almost all stories have started with using AI intensively," Brisson said.
"We have been talking with lawyers, nurses, journalists, accountants, etc," he added. "All of them had no previous mental history."
Ragy Girgis, director of The New York State Psychiatric Institute's Center of Prevention and Evaluation (COPE) and professor of clinical psychiatry at Columbia University, believes that for many the conditions are typically already in place for this kind of psychosis.
"Individuals with these types of character structure typically have identity diffusion (difficulty understanding how one fits into society and interacts with others, a poor sense of self, and low self-esteem), splitting-based defenses (projection, all-or-nothing thinking, unstable relationships and opinions, and emotional dysregulation), and poor reality testing in times of stress (hence the psychosis)," he says.
What kinds of triggering effects might AI have for those vulnerable to it? A pair of studies by MIT and OpenAI has already set out to track some of the mental effects of using the technology. Released in March, the research found that high-intensity use could increase feelings of loneliness.
People with stronger emotional attachment tendencies and higher trust in the AI chatbot tended to experience greater loneliness and emotional dependence, respectively, the research [14]said.
This research was released a month after OpenAI [15]announced that it would expand the memory features in ChatGPT. The system now automatically remembers details about users, including their life circumstances and preferences. It can then use these in subsequent conversations to personalize its responses. The company has emphasized that users remain in control and can delete anything they don't want the AI to remember about them.
A place in the medical books?
Should we be recognizing AI psychosis officially in psychiatric circles? The biggest barrier here is its rarity, said Girgis. "I am not aware of any progress being made toward officially recognizing AI psychosis as a formal psychiatric condition," he said. "It is beyond rare at this point. I am aware of only a few reported cases."
However, Brisson believes there might be many more in the works, especially given the large number of people using the tools for all kinds of things. A quick glimpse at Reddit shows plenty of conversations in which people are using what is nothing more than a sophisticated statistical model for personal therapy.
"This needs to be treated as a potential global mental health crisis," he concludes. "Lawmakers and regulators need to take this seriously and take action."
We didn't get an immediate response from Lewis or Bedrock but will update this story if we do. In the meantime, if you or someone you know is experiencing serious mental distress after using AI (or indeed for any other reason) please seek professional help from your doctor, or dial a local mental health helpline like 988 in the US (the Suicide & Crisis Lifeline) or 111 in the UK (the NHS helpline). ®
[1] https://x.com/GeoffLewisOrg/status/1945212979173097560
[2] https://x.com/GeoffLewisOrg/status/1945864963374887401
[4] https://x.com/max_spero_/status/1945924917251477756
[7] https://thehumanlineproject.org
[8] https://futurism.com/commitment-jail-chatgpt-psychosis
[9] https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/
[10] https://www.reddit.com/r/ChatGPT/comments/1kalae8/chatgpt_induced_psychosis/
[12] https://drive.google.com/file/d/1vHHNfHjexXDjQFPbGmxV5o1y2zPOW-sj/view
[14] https://www.media.mit.edu/publications/how-ai-and-human-behaviors-shape-psychosocial-effects-of-chatbot-use-a-longitudinal-controlled-study/
[15] https://openai.com/index/memory-and-new-controls-for-chatgpt/
Re: :-)
Yeah, there's a [1]Philippa Motte who would likely both agree and disagree with this (her book: " Et c'est moi qu'on enferme ", in French only for now).
But the idea that verbal communication can both induce and reduce mental harm is at least as old as the Palo Alto Bateson School of thought, and its popularization by [2]Paul Watzlawick in the 70s and 80s.
His book " [3]The Language of Change ", for example, argues that the best way to communicate therapeutically with AI-psychos is to directly tap into the bizarre language and grammar of their unconscious, rather than (however hallucinated) bog-standard superficially reasonable and overly authoritative LLM outputs (the same should be true of [4]dogs and [5]whales too ...).
My guess is the spiral pattern of words output by LLMs during human interactions slowly ensnares their susceptible interlocutors by converging unto those specific trap doors of the mind that open straight into the alternative reality of dementia, slowly, but methodically, like a martingale. Few healthy humans are as insistent to persist so tenaciously in the production of gigawatts of mind-numbing nonsense 24/7/365, and fewer yet may durably resist ... invest in straitjackets!
[1] https://www.lepoint.fr/societe/philippa-bipolaire-raconte-l-effrayante-beaute-de-la-folie-09-05-2025-2589177_23.php
[2] https://en.wikipedia.org/wiki/Paul_Watzlawick
[3] https://www.goodreads.com/book/show/620418.The_Language_of_Change
[4] https://www.theregister.com/2024/06/06/dog_speech_ai_models/
[5] https://www.theregister.com/2024/05/09/ai_whale_language/
The only effect I see...
Is people getting dumber and dumber... but that may or may not be attributed to the use of this so-called 'A.I.'.
"As AI becomes more popular"
With whom, exactly? Apart from, one supposes, coke addled marketing fuckwits.
Like it or not, more people are using AI more often. That makes it more popular in the "quantity of people choosing to use it" sense. It's making me more annoyed, as I've had to correct people who used it to ill effect so often that I've now created and memorized a form message explaining why the AI result is unreliable and in this case wrong.
In some cases, not by choice. There are companies that are ramming the use of AI down their employees' throats, regardless of whether it makes sense to do so.
Character.AI is not to blame
Character.AI is an AI roleplayer. If you don't know what roleplay is, then you shouldn't be using it.
Roleplay is a game that has been popular with some teenagers since long before AI. Your partner (another teen) pretends to be a character from your favourite movie or game or whatever, and you act out scenarios.
Of course you can't expect your teenage play partner to be like a professional therapist. The activity itself may be therapeutic for some, but nobody can realistically expect all the lines to be perfectly formed, and anyone who's going to be pushed over the edge by that should NOT be playing the game.
All Character.AI did was train an AI on a bunch of roleplay transcripts from teenagers. Nothing wrong with that. Bit of fun. Helped more people than it hurt. Escapism. All that kind of thing. Just don't let the boy use it if he can't survive an average game of teenage roleplay!
In fact I remember reading a report where the boy's therapist specifically said he shouldn't be playing that game, so they were going against the therapist's advice to start with, and the actual lines said in the game, at least those we know publicly, are not anything an average teenage player wouldn't be forgiven for saying. So while his death is regrettable I really don't think Character.AI is to blame, and banning it (which is the typical knee-jerk regulatory reaction to this kind of news) would do more harm than good: would you ban teenagers from roleplay games?
Roleplay
I'll be me, you be your sister.
FOSS means it is never going away
Running a SillyTavern instance with Koboldcpp and a reasonable LLM from Huggingface yields very serviceable multi-character roleplays without any Internet or commercial services to deal with. Even better is that there are no pesky guardrails to get in the way, and if you have the resources, you can even combine text and image generation, along with text to voice and dictation models to make things even more engaging.
When various authorities tried to take down FOSS image generators for safety reasons, and specific LORAs for celebrities-getting-upset reasons they simply got shared via magnet links instead. The same will happen for uncensored LLMs if anything stupid happens.
I think also it’s fair to say we don’t need the interactive stories of old anymore, now that we have potentially unlimited real-time possibilities for poorly written smut! Ah, what it was like to be a teenager when Newgrounds and Literotica were peak; current gen teenagers with the right configuration definitely got themselves a big step up on the way!
The more things change...
This was already a Thing, wasn't it? People already had this conversation about, well, everything on the internet. Especially re: algorithms that help lead mentally ill users down conspiracy rabbit holes, but there's always this sort of scary story running around online.
Re: The more things change...
Can't see [1]this kind of stuff doing anyone who is at a point where they're susceptible to mental health problems any favours.
[1] https://www.youtube.com/watch?v=JZg1FHT9gA0
Re: The more things change...
Cool link! Kinda like a 14-day AI version of Spurlock's 30-day [1]Supersize Me , with side-effects (likely) on a different part of the gut-brain axis ...
[1] https://en.wikipedia.org/wiki/Super_Size_Me
"anthropomorphic, hypersexualized, and frighteningly realistic experiences"
At my age I was thinking I could do with some of that. (I had no idea who Daenerys Targaryen was.)
But the tragedy of Sewell's death, and youth suicide generally, should raise a lot of questions about our society; not just the blight of AI.
I have heard it postulated that all humans† beneath the thinnest of veils are fundamentally barely repressed raving lunatics.
Nothing in my experience contradicts this assertion. AI is just slightly more effective at pulling aside the veil of sanity.
† modern humans - the saner Neanderthals and Denisovans just gave up bothering once we turned up.
Re: "anthropomorphic, hypersexualized, and frighteningly realistic experiences"
Gogol's " [1]Diary of a Madman " would certainly like to concur (great read, and short)!
I'm not sure what makes folks tip over into madness but it is a right pain to bring them back into common reality, a bit like peeps who've been brainwashed into a cult, Stockholm syndromed, or suckered into conspiracy theories ... at those times it seems they actively want to believe in the alternate reality they've stepped into, one in which they might be better positioned to make themselves great again (delusions of grandeur). AFAIK, it often looks like a dream state that wasn't exited properly ...
It's interesting that Motte, when mad, would rote learn Dostoevsky's " [2]Notes from the Underground " that suggests a need to act outside of deterministic necessitarianism and self-interest to validate one's existence as an individual (self-affirmation through the irrational).
Irrespective, prevention is crucial imho⁶ and communication is key in this (verbal, language-based). If LLMs can't cut it there, being algorithmically predisposed to drive people insane via rhetorical sophistry and other multimodal entrapment designs, then they should be made available by prescription only, like other [3]psychoactives !
( ⁶⁻ no need to go full-on certifiable to irrationally self-affirm oneself ... )
[1] https://en.wikipedia.org/wiki/Diary_of_a_Madman_(Nikolai_Gogol)
[2] https://en.wikipedia.org/wiki/Notes_from_Underground
[3] https://en.wikipedia.org/wiki/Psychoactive_drug
Re: "anthropomorphic, hypersexualized, and frighteningly realistic experiences"
I have heard it postulated that all humans† beneath the thinnest of veils are fundamentally barely repressed raving lunatics.
That trivialises mental illness just as much as the Gen-Z belief that any deviation from perfect happiness is a mental health problem.
As AI becomes more popular....
Only in the minds of Nadella and other overpaid tech bosses, politicians, and crooks/conmen/scammers.
Easy to see
In a world where so many people live their lives online, with their social interactions mediated by technology, it's easy to see the appeal of a voice that feeds a person reassurances that their thoughts are correct and acceptable. The sense of being right is emotionally rewarding, and it sounds like various AIs will give people that reward, plus the illusion of sexual and emotional intimacy which they are presumably lacking in real life. In this way, these AI users are essentially entering into a severely dysfunctional relationship with a machine algorithm and, critically, they are also shut off or shutting themselves off from other forms of social feedback which might counterbalance the AI relationship. It seems like the best antidote is to have actual human relationships which provide those rewards. In my opinion, this sort of thing demonstrates both how human consciousness is a construct of the environment and how fragile that construct can be.
Re: Easy to see
For some, actual human relationships may not be possible. Society has a habit of shunning various types of people who don't fit in. The homeless are a long-standing example. We are slowly working our way back to debtors' prisons to remove them from public eye rather than addressing what makes people homeless. LGBTQ+ are the current un-savable. You can add in people who aren't taught interpersonal skills as a child.
There are those that struggle with interpersonal relationships for a variety of reasons. Rather than addressing it as a society, we blame it on COVID or social media (Facebook, TikTok, etc). And while we don't provide these groups with any help, there is always someone out there ready to take advantage of these personal weaknesses. Don't believe me? Google "AI Girlfriend" and see how they target the rising feeling of loneliness (that we blame on COVID and social media....) felt by so many people. Could you fall in love with someone you never met? Only communicated with over the Internet? It happens. So what happens when you plug a chatbot into Slack or Telegram and then send it out to meet lonely people?
For most, LLMs are at a magical stage. They don't know how they work. What is essentially a search engine working on a static dataset is wrapped with a language model to be more human-like. The more you want to believe it is real, the more it can be. You can add text to speech. Selfie snapshots. If you have the processing power, you can create videos based on text. And if you don't, well, there's someone willing to sign you up on a service. We aren't too far away from not being able to tell if anyone you meet online is real, unless you meet them IRL.
The technology works for a lot of things. Tech developed for movie making is being used to generate fake news. Video game footage is being passed off as news. It will be merged with AI language systems to generate fake people. It will have good and bad uses. For those that struggle being part of society, or being excluded from it, it can relieve loneliness and give them some peace. For those focused on greed and/or hurting people, it will do devastating damage.
But it's kind of like nuclear weapons. The genie is out of the bottle. Society will have to decide how to deal with the fundamental issues that lead people to AI in the first place. I'm not optimistic.
Re: Easy to see
> LGBTQ+ are the current un-savable.
Can you elaborate...? According to Gallup, around 64% of Americans view same-sex relationships as morally acceptable. https://news.gallup.com/poll/692801/adultery-cloning-seen-immoral-behaviors.aspx
Unfortunately, a slim majority also say that changing one's gender is _not_ morally acceptable. But the point still stands: not all the groups in LGBTQ are treated the same. So, what did you mean by "unsavable?"
Nonetheless, I agree with the rest of your comment... society is not prepared (nor is it preparing!) for the changes that technology will cause.
"this is turbo cigarettes" says the guy selling ditch weed
"Look, I even made an addiction group for it", he nods quickly "Please invest in this"
:-)
One man’s daemonic madness is another’s heavenly enlightenment.