
First Trial of Generative AI Therapy Shows It Might Help With Depression

(Saturday March 29, 2025 @12:34PM (BeauHD) from the it's-not-all-doom-and-gloom dept.)


An anonymous reader quotes a report from MIT Technology Review:

> The first clinical trial of a therapy bot that uses generative AI suggests it was as [1]effective as human therapy for participants with depression, anxiety, or risk for developing eating disorders. Even so, it doesn't give a go-ahead to the dozens of companies hyping such technologies while operating in a regulatory gray area. A team led by psychiatric researchers and psychologists at the Geisel School of Medicine at Dartmouth College built the tool, called Therabot, and the results were [2]published on March 27 in the New England Journal of Medicine.

Many tech companies are building AI therapy bots to address the mental health care gap, offering more frequent and affordable access than traditional therapy. However, challenges persist: poorly worded bot responses can cause harm, and forming meaningful therapeutic relationships is hard to replicate in software. While many bots rely on general internet data, researchers at Dartmouth developed "Therabot" using custom, evidence-based datasets. Here's what they found:

> To test the bot, the researchers ran an eight-week clinical trial with 210 participants who had symptoms of depression or generalized anxiety disorder or were at high risk for eating disorders. About half had access to Therabot, and a control group did not. Participants responded to prompts from the AI and initiated conversations, averaging about 10 messages per day. Participants with depression experienced a 51% reduction in symptoms, the best result in the study. Those with anxiety experienced a 31% reduction, and those at risk for eating disorders saw a 19% reduction in concerns about body image and weight. These measurements are based on self-reporting through surveys, a method that's not perfect but remains one of the best tools researchers have.

>

> These results ... are about what one finds in randomized controlled trials of psychotherapy with 16 hours of human-provided treatment, but the Therabot trial accomplished it in about half the time. "I've been working in digital therapeutics for a long time, and I've never seen levels of engagement that are prolonged and sustained at this level," says [Michael Heinz, a research psychiatrist at Dartmouth College and Dartmouth Health and first author of the study].



[1] https://www.technologyreview.com/2025/03/28/1114001/the-first-trial-of-generative-ai-therapy-shows-it-might-help-with-depression/

[2] https://ai.nejm.org/doi/10.1056/AIoa2400802



not surprising, really (Score:3)

by dunkelfalke ( 91624 )

For a mentally ill person it is far easier to talk to a machine. It doesn't judge and there is no countertransference.

Re: (Score:2)

by cstacy ( 534252 )

> For a mentally ill person it is far easier to talk to a machine. It doesn't judge and there is no countertransference.

These are the same machines that sometimes tell a person how horrible they are, and that they really should kill themselves.

Re: (Score:2)

by serviscope_minor ( 664417 )

> For a mentally ill person it is far easier to talk to a machine.

I've used ChatGPT a fair bit, and one thing is clear: AI-generated slop is obvious, and I hate reading it. I can't imagine why it would be better if I needed therapy.

> It doesn't judge and there is no countertransference.

In the strictest sense of it having no mind those are both true. But if those are in the training data it can do an awfully good simulacrum of them.

Re: (Score:2)

by jacks smirking reven ( 909048 )

> it can do an awfully good simulacrum of them.

I think that might be to its advantage. It's not going to be insightful or context-aware, but it can give effectively the same style of boilerplate life advice, and the messenger does matter. The AI could be viewed as truly "neutral" by the user, whereas another human could be seen as having an agenda or some sort of baggage. That lack of judgement may lead the person to accept advice from the AI that they would reject from another human, even if it's all just human-derived kitbashing of text.

That would be the next interesting

Re: (Score:2)

by Tony Isaac ( 1301187 )

Depression is already rampant, long before AI was a thing, and a lot of people aren't getting the help they need. If this technology can be perfected, it would be a very good thing.

Re: (Score:2)

by DamnOregonian ( 963763 )

Right. Simply having a supportive "ear", as it were, can be beneficial for depression.

This honestly seems like a good use of tech that easily beats the Turing test, which means it "feels" alive enough for a person who isn't trying to prove it isn't.

Re: (Score:2)

by Tony Isaac ( 1301187 )

Agreed. And standard CBT is a lot about helping the patient think through their own healing process by surfacing what the patient already knows. This style of question/answer is a pretty good pattern for AI.

Re: (Score:1)

by account_deleted ( 4530225 )

Bring them to me. They do not need to be alive.

Privacy? (Score:3)

by ndsurvivor ( 891239 )

I would feel uncomfortable sharing my deepest feelings with a company that keeps records and monitors the chat. When they make something that I can download and run locally, and that does not send out information, I may consider using it myself.

Re: (Score:3)

by Tony Isaac ( 1301187 )

These companies should be subject to HIPAA just like human practitioners.

Re: (Score:2)

by Mal-2 ( 675116 )

Oh yeah, that's doing so much to keep BetterHelp in line! Oh wait...

Re: (Score:2)

by Tony Isaac ( 1301187 )

I've personally helped companies develop HIPAA procedures and policies. They do NOT take them lightly. The penalties for failing to comply are severe.

Re: (Score:2)

by presearch ( 214913 )

Penalties through the courts? Rights laws being upheld?

How quaint.

Re: (Score:2)

by Tony Isaac ( 1301187 )

No, HIPAA penalties do not require court cases, unless the company wants to appeal.

Re: (Score:2)

by Mal-2 ( 675116 )

That exists. It's called "Llama 3.3". Or "DeepSeek-R1". Run them locally and they don't call home. They don't even need to be connected to the Internet unless you expect them to do research. In the case of DeepSeek you can either have it do searches or Chain of Thought, not both at the same time, and a completely exposed CoT is frankly its best attribute, so there's pretty much no point in letting it have access to anything but what you need to feed it.

Llama 3.3 is installed by default when you install Ollama.
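For anyone who hasn't tried it, running these models offline really is just a couple of commands once Ollama is installed. A minimal sketch, assuming Ollama is set up; the model tags below are assumptions from Ollama's model library and may change, so check `ollama list` for what you actually have:

```shell
# Pull a model once (large download), then chat entirely offline.
ollama pull llama3.3
ollama run llama3.3 "Hello, how are you?"

# DeepSeek-R1 variants are also available as local tags:
ollama pull deepseek-r1:70b
ollama run deepseek-r1:70b
```

After the pull, nothing requires an Internet connection; the model weights and the chat both live on your own machine.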

Re: (Score:2)

by Mal-2 ( 675116 )

I should probably mention that OpenOrca will happily run on a machine with 16 GB (maybe even with 8 GB, but I didn't test it) of RAM. Llama 3.3 requires 32 GB. DeepSeek-R1:70b requires 48 GB. The 1.58-bit quantization of DeepSeek-R1:671b will just fit in 192 GB of RAM.

Study design limitations (Score:1)

by st0nerhat ( 2540360 )

N=210, which isn't too bad. However, the control group was simply denied access to the app; it did not receive human-led talk therapy. So this study does not provide comparative insights at all. It just proves the chatbot is not completely worthless.

Re: (Score:1)

by suntzu3000 ( 10203459 )

> N=210, which isn't too bad. However, the control group was simply denied access to the app; it did not receive human-led talk therapy. So this study does not provide comparative insights at all. It just proves the chatbot is not completely worthless.

They have performed similar studies on human-provided therapy previously, so I think it is reasonable to compare the results of those previous studies against the results of this one.

Not surprising (Score:2)

by zawarski ( 1381571 )

You would be depressed too if you were forced to spend all your time generating bullshit Ghibli images.

Anything to promote their ripoff (Score:1)

by candeoastrum ( 1262256 )

I see they are in full desperation mode trying to promote their garbage that the masses aren't excited about. China did a rug pull on all of these fools and they are like cockroaches hugging their piggy bank.

Re: (Score:2)

by DamnOregonian ( 963763 )

You're an idiot.

I wouldn't be surprised (Score:2)

by cascadingstylesheet ( 140919 )

It's long been known that pretty much any kind of supportive, attentive conversational activity is helpful to some degree. And chatbots excel at that.

Re: (Score:2)

by DamnOregonian ( 963763 )

Kind of what I was thinking. Seems logical.

Wintermute (Score:2)

by dwillmore ( 673044 )

Did no one actually read Neuromancer?

In my day, we had Eliza, and we liked it. (Score:2)

by presearch ( 214913 )

When Eliza asked, with a kind heart, "TELL ME MORE ABOUT YOUR FAMILY", we knew we had found the loving care that was missing from our lives, and the long path to healing our nation's troubled souls had, at last, begun.
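Eliza's whole trick was pattern matching: a small script of rules that reflect the user's own words back as questions, with no understanding behind them. A toy sketch of the technique in Python (the rules below are illustrative stand-ins, not Weizenbaum's original script):

```python
import re

# Eliza-style rules: (pattern, response template). The first matching
# rule wins; captured text is echoed back inside the response.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "WHY DO YOU FEEL {0}?"),
    (re.compile(r"\b(mother|father|family)\b", re.I), "TELL ME MORE ABOUT YOUR FAMILY"),
    (re.compile(r"\bI am (.+)", re.I), "HOW LONG HAVE YOU BEEN {0}?"),
]

def respond(text: str) -> str:
    """Return the first rule's response, or a generic prompt to go on."""
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(*m.groups())
    return "PLEASE GO ON"

print(respond("I feel alone"))           # -> WHY DO YOU FEEL alone?
print(respond("my mother never calls"))  # -> TELL ME MORE ABOUT YOUR FAMILY
```

That a loop over a dozen regexes could make people feel heard in 1966 says as much about the listeners as about the program.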

LLMs? bah! (Score:2)

by cellocgw ( 617879 )

Eliza should be fine for everyone. /s
