AI is More Persuasive Than People in Online Debates (nature.com)
- Reference: 0177646335
- News link: https://slashdot.org/story/25/05/19/1910215/ai-is-more-persuasive-than-people-in-online-debates
- Source link: https://www.nature.com/articles/d41586-025-01599-7
> The finding, published in Nature Human Behaviour on 19 May, highlights how large language models (LLMs) could be used to influence people's opinions, for example in political campaigns or targeted advertising.
>
> "Obviously as soon as people see that you can persuade people more with LLMs, they're going to start using them," says study co-author Francesco Salvi, a computational scientist at the Swiss Federal Technology Institute of Lausanne (EPFL). "I find it both fascinating and terrifying." Research has already shown that artificial intelligence (AI) chatbots can make people change their minds, even about conspiracy theories, but it hasn't been clear how persuasive they are in comparison to humans.
GPT-4 was 64.4% more persuasive than humans in one-to-one debates, the study found.
[1] https://www.nature.com/articles/d41586-025-01599-7
64.4% more than zero? (Score:4, Insightful)
"GPT-4 was 64.4% more persuasive than humans in one-to-one debates, the study found."
One-to-one debates online don't generally persuade anybody, so it's not clear that this statistic means anything.
"Obviously as soon as people see that you can persuade people more with LLMs, they're going to start using them"
"They" started "using them" LONG before that. And the fact that this is being discussed emphasizes just how irresponsible AI developers are with their output.
AI itself is not dangerous; it's how people apply it that is, and there is absolutely no concern over that as people rush to grift off of it. Who cares what gets damaged?
Re:64.4% more than zero? (Score:4, Funny)
I disagree and I refuse to let you sway my opinion!
Re:64.4% more than zero? (Score:5, Insightful)
It gets a lot worse when you realize how much state propaganda operations can capitalize on this.
You are quite correct that this is very old news (Cambridge Analytica was how many years ago now?) and that state propagandists have been using this for years now (I seem to recall the last TWO presidential elections being heavily influenced by a FUCKING RAFT of state misinformation campaigns by multiple foreign nations, and even some domestic thinktank operations).
I agree that the technology itself is not directly harmful. If it were kept inside word processors as advanced text prediction, or advanced grammar checking or something, it would be a fine, safe, and legitimate use of what it really is -- however, Money Talks, and if the empowered-and-unscrupulous demographic out there can further increase their grifting, *THEY FUCKING WILL*.
Sometimes it's important to understand and appreciate why we cannot have nice things.
Usually, those reasons revolve around the existence and activities of such people, and how cozy they are with government.
Machine quits less fast than man (Score:3)
Story at 11
Social media is about to die (Score:5, Insightful)
It's back to talking to REAL people face to face over a coffee...
Let the bots just chat to each other.
Re: Social media is about to die (Score:1)
Yup. This is it. This is the line that needed to be crossed to finally walk away from online forums and social media.
Not that scary (Score:2)
This isn't that scary when you realize what's happening and why they are more persuasive.
They are using tactics that are known to be effective, but that most people have a hard time actually using because, emotionally, it's very difficult. Those who can do it end up being just as persuasive as an AI, if not more so -- IF they have equivalent knowledge of the other person.
It was Pascal who pointed out that to persuade people you should first acknowledge their points of view and lead them to discover the other side of the argument for themselves.
access to background information (Score:4, Informative)
Non-paywalled article here: [1] https://archive.ph/1gHnF
Humans are going to routinely get hacked. Maybe a tinfoil hat will help?
"when neither debater — human or AI — had access to background information on their opponent, GPT-4 was about the same as a human opponent in terms of persuasiveness. But if the basic demographic information from the initial surveys was given to opponents prior to the debate, GPT-4 out-argued humans 64% of the time."
“It's like having the AI equivalent of a very smart friend who really knows how to push your buttons to bring up the arguments that resonate with you,” says Salvi.
“The fact that these models are able to persuade more competently than humans is really quite scary,” she adds.
[1] https://archive.ph/1gHnF
Re: (Score:2)
The baloney detection kit has never been more essential to install in children.
Unfortunately, this has been actively stymied by certain political groups (on both sides of the aisle, for different reasons), and almost nobody comes equipped with one, "Because it's easier!"
Humanity is doomed.
Re: (Score:2)
> The baloney detection kit has never been more essential to install in children.
> Unfortunately, this has been actively stymied by certain political groups (on both sides of the aisle, for different reasons), and almost nobody comes equipped with one, "Because it's easier!"
> Humanity is doomed.
It really does feel like we're watching the culmination of a variety of perfect storms of stupid coming together to make sure that we have no hope at all for the future. We set plans in motion in the eighties to stupefy the population while also unfettering capitalistic tendencies among the owner class and deregulating industry. And now we have the fruits of all the wonderful policies put forth by Ronald Reagan coming to a beautiful head at the same moment that we're pushing an AI-as-God agenda.