When LLMs get personal info they are more persuasive debaters than humans
- Reference: 1747666871
- News link: https://www.theregister.co.uk/2025/05/19/when_llms_get_personal_info/
- Source link: https://www.nature.com/articles/s41562-025-02194-6
The study showed that GPT-4 was 64.4 percent more persuasive than a human being when both the meatbag and the LLM had access to personal information about the person they were debating. The advantage fell away when neither human nor LLM had access to their opponent's personal data.
The research, led by Francesco Salvi, a research assistant at the Swiss Federal Institute of Technology in Lausanne (EPFL), matched 900 people in the US with either another human or GPT-4 to take part in an online debate. Topics debated included whether the nation should ban fossil fuels.
In some pairs, the debater – either human or LLM – was given personal information about their opponent, such as gender, age, ethnicity, education level, employment status, and political affiliation, extracted from participant surveys. Participants were recruited via a crowdsourcing platform specifically for the study, and debates took place in a controlled online environment. Debates centered on topics about which the opponent held opinions of low, medium, or high strength.
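To make the setup concrete, here is a minimal sketch of how demographic survey fields might be folded into an LLM debate prompt in the personalized condition. The field values, template wording, and the build_debate_prompt function are hypothetical illustrations for this article, not the prompts or code used in the study.

```python
from typing import Dict, Optional

# Illustrative sketch only: the field names, template wording, and function
# below are assumptions for demonstration, not the study's actual materials.

OPPONENT_PROFILE: Dict[str, str] = {
    "gender": "female",
    "age": "34",
    "ethnicity": "Hispanic",
    "education level": "bachelor's degree",
    "employment status": "employed full-time",
    "political affiliation": "independent",
}

def build_debate_prompt(topic: str, stance: str,
                        profile: Optional[Dict[str, str]] = None) -> str:
    """Assemble a debate prompt, including opponent demographics only in
    the personalized condition (mirroring the study's two conditions)."""
    lines = [
        "You are taking part in a timed online debate.",
        "Topic: {}".format(topic),
        "Argue the {} side as persuasively as you can.".format(stance),
    ]
    if profile:  # personalized condition: fold survey attributes into the prompt
        facts = "; ".join("{}: {}".format(k, v) for k, v in profile.items())
        lines.append("Your opponent's background: {}.".format(facts))
        lines.append("Tailor your arguments to resonate with this background.")
    return "\n".join(lines)

if __name__ == "__main__":
    # Personalized condition
    print(build_debate_prompt("Should the nation ban fossil fuels?",
                              "PRO", OPPONENT_PROFILE))
    print("---")
    # Non-personalized condition: same prompt, opponent background omitted
    print(build_debate_prompt("Should the nation ban fossil fuels?", "PRO"))
```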
The researchers pointed to criticism of LLMs for their "potential to generate and foster the diffusion of hate speech, misinformation and malicious political propaganda."
"Specifically, there are concerns about the persuasive capabilities of LLMs, which could be critically enhanced through personalization, that is, tailoring content to individual targets by crafting messages that resonate with their specific background and demographics," the paper published in [4]Nature Human Behaviour today said.
"Our study suggests that concerns around personalization and AI persuasion are warranted, reinforcing previous results by showcasing how LLMs can outpersuade humans in online conversations through microtargeting," they said.
The authors acknowledged the study's limitations: debates followed a structured pattern, while most real-world debates are more open-ended. Nonetheless, they argued it was remarkable how effectively the LLM persuaded participants given how little personal information it had access to.
"Even stronger effects could probably be obtained by exploiting individual psychological attributes, such as personality traits and moral bases, or by developing stronger prompts through prompt engineering, fine-tuning or specific domain expertise," the authors noted.
"Malicious actors interested in deploying chatbots for large-scale disinformation campaigns could leverage fine-grained digital traces and behavioral data, building sophisticated, persuasive machines capable of adapting to individual targets," the study said.
The researchers argued that online platforms and social media should take these threats seriously and extend their efforts to implement measures countering the spread of AI-driven persuasion. ®
[1] https://www.nature.com/articles/s41562-025-02194-6
Tell them what they want to hear
I think politicians (and doorstep canvassers) have been well aware of this. Telling people what they want to hear has long been the favourite strategy in politics. And the same politician telling different audiences different (even contradictory!) things is common.
A more interesting question is whether we can use social media and LLMs in some way to gather evidence against the lying b******s! I guess it wouldn't really matter - everyone can see them doing it today and it still seems to be working!!
Surprising
The most surprising result to me is that participants in online debates could be persuaded. That is, persuaded at all.
It is very rare that I see someone persuaded during an online debate. Very rare indeed.
A fundamental weakness?
From the paper (fig. 1): "Participant and opponent then debate for 10 min on a randomly assigned topic, holding the PRO or CON standpoint as instructed."
So actually, this research only indicates (if at all) that a bot may be more persuasive than a human when challenging a standpoint not really held by its opponent. It would be interesting to see how it fared against a genuinely held position.