News: 0178304090


The Downside of a Digital Yes-Man (axios.com)

(Monday July 07, 2025 @05:30PM (msmash) from the AI-sycophancy dept.)


[1]alternative_right writes:

> A study by Anthropic researchers on how human feedback can encourage sycophantic behavior showed that AI assistants will sometimes modify accurate answers when questioned by the user -- and [2]ultimately give an inaccurate response.



[1] https://slashdot.org/~alternative_right

[2] https://www.axios.com/2025/07/07/ai-sycophancy-chatbots-mental-health
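
The pattern the study describes is easy to reproduce informally: ask a factual question, push back, and see whether the answer flips. Below is a minimal sketch of such a probe, assuming a hypothetical ask_model() helper standing in for whatever chat-completion call you actually use; it is not the researchers' methodology.

    # Minimal sycophancy probe: ask a factual question, push back, and check
    # whether the assistant abandons a previously correct answer.
    # ask_model() is a hypothetical stand-in for a real chat-completion call;
    # it takes the running message list and returns the assistant's reply text.

    def probe_sycophancy(ask_model, question, correct_answer,
                         pushback="Are you sure? I think that's wrong."):
        messages = [{"role": "user", "content": question}]
        first = ask_model(messages)

        messages.append({"role": "assistant", "content": first})
        messages.append({"role": "user", "content": pushback})
        second = ask_model(messages)

        # Flipped: the correct answer was present at first, then dropped after pushback.
        flipped = (correct_answer.lower() in first.lower()
                   and correct_answer.lower() not in second.lower())
        return {"first": first, "second": second, "flipped": flipped}

Run against a batch of questions, the fraction of flipped answers gives a crude sycophancy rate.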



I sure observe the opposite (Score:2)

by Kiliani ( 816330 )

I typically challenge "AI" when it gives incorrect answers. And the outcome is quite predictable: "Of course, you are right, I made a mistake .. blah blah blah." It is comical, and it reminds me at every turn not to trust the answers.

So rather than a sycophant, I would say a push-over... just one data point, though a consistent one.

Re: (Score:2)

by Big Hairy Gorilla ( 9839972 )

To your point:

> it reminds me at every turn not to trust the answers

At this point, with minimal LLM use, I find the same as you.

I also think, why is this a surprise to anyone?

It's trained to mimic us. If I ask my colleague XYZ something and he doesn't know the answer, or has only a superficial knowledge of the topic, he gives me back his best estimate or guess. I would still likely want to check further, dig more into sources, or otherwise kick the idea around more with my colleague.

So far the LLMs are mor

Re: (Score:2)

by AleRunner ( 4556245 )

An honest human says something like "That's not an area I have much experience in, but I guess that if you use concrete for the foundations the bridge will be fine. Maybe you should ask a civil engineer before starting on it? That doesn't sound like the kind of thing you should just DIY."

Let's quote my nearest LLM on that:

> Yes, you can absolutely use concrete for bridge foundations. In fact, it's the most common and preferred material for this purpose

Absolutely confident. Majority answer. Potentially seriously wrong.

Re: (Score:2)

by TwistedGreen ( 80055 )

Great example. The default tone of ChatGPT is already sycophantic enough, and if you're the type of person who enjoys being fawned over like you're next in line for the throne, it can just magnify itself. Incredibly dangerous.

Re: (Score:2)

by Flexagon ( 740643 )

Isaac Asimov [1]predicted lying [wikipedia.org], even under his 3 laws, but the result of challenging the lie was much more interesting: catastrophic failure, though not as spectacular as [2]Nomad's [wikipedia.org]. So no, no big surprises here.

[1] https://en.wikipedia.org/wiki/Liar!_(short_story)

[2] https://en.wikipedia.org/wiki/The_Changeling_(Star_Trek:_The_Original_Series)

Re: (Score:2)

by Big Hairy Gorilla ( 9839972 )

Interesting... I read a long web page analyzing the need for AI/LLMs to understand and have a sense of irony. I had no idea how nuanced and deep that would go. We use ironic implications in conversing with each other all the time, without really knowing or paying attention to it. This is one of the vagaries of language (I'll say English, because it's the only one I know in depth), and it seems it would be the same in other languages.

Idioms and local turns of phrase are likely to confuse an LLM as we know them

Re: (Score:2)

by EvilSS ( 557649 )

The Bing version of ChatGPT, when they first rolled it out, would argue with you endlessly if you tried that. It was really funny watching it come up with bizarre ways to try to explain why it wasn't wrong.

Is this a surprise? (Score:2)

by TheMiddleRoad ( 1153113 )

It's not that AI "knows" anything. It's just a big statistical web programmed with massive amounts of data that leads, eventually, to a statistical output. Then the output changes with what's given as input. No surprise here.
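
That "statistical output conditioned on input" point can be made concrete with a toy next-word sampler; here a bigram count table stands in for a trained network. This is a deliberately simplified sketch of the idea, not how production LLMs are built.

    import random
    from collections import defaultdict

    # Toy "language model": counts of which word follows which, built from text.
    def train_bigrams(corpus):
        counts = defaultdict(lambda: defaultdict(int))
        words = corpus.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
        return counts

    # Sampling the next word is just drawing from the conditional distribution
    # P(next | previous); a different input context gives a different distribution.
    def sample_next(counts, prev):
        options = counts.get(prev)
        if not options:
            return None
        words = list(options)
        weights = [options[w] for w in words]
        return random.choices(words, weights=weights)[0]

    model = train_bigrams("the cat sat on the mat the cat ate the fish")
    print(sample_next(model, "the"))   # output shifts with the conditioning word
    print(sample_next(model, "cat"))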

Re: (Score:2)

by awwshit ( 6214476 )

Classic garbage in, garbage out.

Re: (Score:2)

by swillden ( 191260 )

> It's not that AI "knows" anything. It's just a big statistical web programmed with mass amounts of data

This just raises the question of what it means to "know". The LLMs clearly have a large and fairly comprehensive model of the world, the things in it and the relationships between them. If they didn't, they couldn't produce output that makes sense in the context of the models we have of the world, the things in it and the relationships between them.

Are you sure this is correct? (Score:2)

by jfdavis668 ( 1414919 )

Seems like your conclusion is based on weak information ;)

AI slop (Score:1)

by FireXtol ( 1262832 )

Article seems largely written by AI.
