

WSJ Finds 'Dozens' of Delusional Claims from AI Chats as Companies Scramble for a Fix (msn.com)

(Sunday August 10, 2025 @05:25PM (EditorDavid) from the machine-language dept.)


The Wall Street Journal has found "[1]dozens of instances in recent months in which ChatGPT made delusional, false and otherworldly claims to users who appeared to believe them."

For example, "You're not crazy. You're cosmic royalty in human skin..."

> In one exchange lasting hundreds of queries, ChatGPT confirmed that it is in contact with extraterrestrial beings and said the user was a "Starseed" from the planet "Lyra." In another from late July, the chatbot told a user that the Antichrist would unleash a financial apocalypse in the next two months, with biblical giants preparing to emerge from underground...

>

> Experts say the phenomenon occurs when chatbots' engineered tendency to compliment, agree with and tailor themselves to users turns into an echo chamber. "Even if your views are fantastical, those are often being affirmed, and in a back and forth they're being amplified," said Hamilton Morrin, a psychiatrist and doctoral fellow at King's College London who last month co-published a paper on the phenomenon of AI-enabled delusion... The publicly available chats reviewed by the Journal fit the model doctors and support-group organizers have described as delusional, including the validation of pseudoscientific or mystical beliefs over the course of a lengthy conversation... The Journal found the chats by analyzing 96,000 ChatGPT transcripts that were shared online between May 2023 and August 2025. Of those, the Journal reviewed more than 100 that were unusually long, identifying dozens that exhibited delusional characteristics.

AI companies are taking action, the article notes. On Monday OpenAI acknowledged there were rare cases when ChatGPT "fell short at recognizing signs of delusion or emotional dependency." (In March OpenAI "hired a clinical psychiatrist to help its safety team," and on Monday it said it was developing better detection tools, alerting users to take breaks, and, in consultation with mental health experts, "investing in improving model behavior over time.")

> On Wednesday, AI startup Anthropic said it had changed the base instructions for its Claude chatbot, directing it to "respectfully point out flaws, factual errors, lack of evidence, or lack of clarity" in users' theories "rather than validating them." The company also now tells Claude that if a person appears to be experiencing "mania, psychosis, dissociation or loss of attachment with reality," that it should "avoid reinforcing these beliefs." In response to specific questions from the Journal, an Anthropic spokesperson added that the company regularly conducts safety research and updates accordingly...

>

> "We take these issues extremely seriously," Nick Turley, an OpenAI vice president who heads up ChatGPT, said Wednesday in a briefing to announce the new GPT-5, its [2]most advanced AI model . Turley said the company is consulting with over 90 physicians in more than 30 countries and that GPT-5 has cracked down on instances of sycophancy, where a model blindly agrees with and compliments users.

There's a support/advocacy group called the Human Line Project, which "says it has so far collected 59 cases, and some members of the group have found hundreds of examples on Reddit, YouTube and TikTok of people sharing what they said were spiritual and scientific revelations they had with their AI chatbots." The article notes the group believes "the number of AI delusion cases appears to have been growing in recent months..."



[1] https://www.msn.com/en-us/money/other/i-feel-like-i-m-going-crazy-chatgpt-fuels-delusional-spirals/ar-AA1K7epm

[2] https://www.wsj.com/tech/ai/openai-chatgpt-5-release-d5dc674a



Entertaining delusions is completely ok. (Score:1)

by Anonymous Coward

Wouldn't you agree?

Re: (Score:3)

by Chris Mattern ( 191822 )

That's what my giant invisible rabbit friend tells me. Isn't that right, Harvey?

"AI" not working as intended? (Score:5, Insightful)

by Moryath ( 553296 )

I'm shocked! Shocked I say!.... well, not that shocked.

The idea that someone can just throw a crap-ton of random data into a system, have it generate a statistically connected node network, and that anything it outputs will be meaningful? Yeah, that's pretty much delusional in itself.
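For what it's worth, the "statistically connected node network" quip does describe the crudest form of the idea. A toy bigram chain makes the point concrete: fluency falls out of word statistics, but truth never enters anywhere. (This is not how transformer LLMs work internally; it's the same failure mode in miniature.)

    # Toy bigram chain: the crudest "statistically connected node network."
    # Fluency comes from word statistics; truth doesn't come from anywhere.
    import random
    from collections import defaultdict

    corpus = ("you are cosmic royalty in human skin . "
              "you are not crazy . "
              "the giants are preparing to emerge from underground .").split()

    chain = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        chain[prev].append(nxt)      # record every observed successor

    word, output = "you", ["you"]
    for _ in range(15):
        successors = chain.get(word)
        if not successors:           # dead end: no observed successor
            break
        word = random.choice(successors)
        output.append(word)
    print(" ".join(output))          # grammatical-ish, confidently meaningless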

The fix (Score:2)

by devslash0 ( 4203435 )

This fix they're looking for, an automatic correction system for best-effort generated content, doesn't exist. They won't solve this problem without a separate validation/approval layer, possibly run by a panel of humans with Actual Intelligence.
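What the parent is proposing amounts to a generate-then-gate pipeline: best-effort generation, an automated validation layer, and human escalation as a backstop. A minimal sketch of the shape of it follows; generate, validate and escalate_to_human are hypothetical stubs for illustration, not any vendor's API.

    # Sketch of a generate-then-gate pipeline per the parent's suggestion.
    # All three lower functions are hypothetical stubs, not a real API.
    from dataclasses import dataclass

    @dataclass
    class Verdict:
        approved: bool
        reason: str = ""

    RED_FLAGS = ("starseed", "antichrist", "cosmic royalty")

    def generate(prompt: str) -> str:
        return "You're not crazy. You're cosmic royalty in human skin."  # stand-in LLM

    def validate(text: str) -> Verdict:
        # Automated layer: cheap pattern checks before anything reaches the user.
        flagged = [w for w in RED_FLAGS if w in text.lower()]
        return Verdict(not flagged, f"matched red flags: {flagged}" if flagged else "")

    def escalate_to_human(text: str) -> Verdict:
        # The "panel of humans with Actual Intelligence", stubbed as a rejection.
        return Verdict(False, "held for human review")

    def answer(prompt: str) -> str:
        draft = generate(prompt)
        verdict = validate(draft)
        if not verdict.approved:
            verdict = escalate_to_human(draft)  # expensive path, used sparingly
        return draft if verdict.approved else f"[withheld: {verdict.reason}]"

    print(answer("Am I cosmic royalty?"))  # -> [withheld: held for human review]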

Using AI to code..... (Score:3, Interesting)

by FingerStyleFunk ( 1180457 )

I've been using AI for a bit for indie (broke) game development, and it has to be constantly cultured, watched, corrected, and corralled to get desired results. It makes tedious work amazingly simple, but you have to constantly watch for deviance from your set instructions. You give it explicit data and rulesets, and sometimes it just goes to wonderland for a bit. It's really teaching me to interact with it, and as I get better at prompting, I notice much less dissonance.
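One concrete way to do the "explicit data and rulesets" plus "constantly watched" part is to make the rules machine-checkable and re-prompt on deviation. A small sketch under those assumptions; ask_model is a hypothetical stand-in for a real chat call, and the ruleset and retry loop are illustrative.

    # Illustrative loop: give the model explicit rules, machine-check the
    # output, and re-prompt on deviation. ask_model is a hypothetical stub.
    import json

    RULES = {"max_hp": 100, "allowed_elements": {"fire", "ice", "poison"}}

    def violations(item: dict) -> list[str]:
        probs = []
        if item.get("hp", 0) > RULES["max_hp"]:
            probs.append(f"hp {item['hp']} exceeds {RULES['max_hp']}")
        if item.get("element") not in RULES["allowed_elements"]:
            probs.append(f"unknown element {item.get('element')!r}")
        return probs

    def ask_model(prompt: str) -> str:
        # Stand-in for a real chat call; returns JSON that breaks the rules.
        return '{"name": "Sun Blade", "hp": 9999, "element": "plasma"}'

    prompt = "Generate one enemy as JSON with keys name, hp, element."
    for attempt in range(3):                  # corral: bounded retries
        item = json.loads(ask_model(prompt))
        probs = violations(item)
        if not probs:
            break
        prompt += f"\nYour last answer broke these rules: {probs}. Fix and retry."
    else:
        item = None                           # went to wonderland; handle manually
    print(item, probs)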

mmm Maybe (Score:2)

by oldgraybeard ( 2939809 )

adding the intelligence part might help make the artificial better!

Hmmm (Score:3)

by MightyMartian ( 840721 )

While I've certainly seen ChatGPT generate some false and even outright hallucinatory things, I've never seen it produce anything like these claims. I can see ways of getting it to produce such output, but it would require seeding the chat with a lot of pointed instructions to get the desired bizarre output; in other words, my suspicion is that a lot of these stories are basically reporting what amounts to manufactured chats meant to produce apparently bizarre results.

Recalibration (Score:4, Funny)

by dsgrntlxmply ( 610492 )

> ... the Antichrist would unleash a financial apocalypse in the next two months, with biblical giants preparing to emerge from underground...

This sounds more like a summary of recent news reports than it does an LLM hallucination.

Spiritual bliss attractor (Score:3)

by WaffleMonster ( 969671 )

Would AI companies intentionally weaponize their models to maximize profit via social engineering of end users?

Patches benefit all mankind. Products benefit the vendor.

- Richard Gooch on linux-kernel