AI skeptics zone out when chatbots get preachy

(2025/08/20)


Interview Large language models stumble when trying to sway buyers with moral arguments, according to research from the SGH Warsaw School of Economics and Sakarya Business School.

"Our research suggests that people respond to an AI-generated message based on their knowledge of AI's possibilities and limitations," first author Wojciech Trzebiński told The Register , in reference to a recently published study that found moral arguments pushing buyers toward Fair Trade products were less convincing to those who did not believe in the "superiority" of artificial intelligence over humans.

"Such knowledge may be useful and valid. For example, people are likely to believe that machines (such as AI-enabled chatbots) are not capable of judging what is moral and immoral."

Trzebiński and colleagues built the study around a real Fair Trade product, presenting participants with messages arguing why buyers should pick it over rival, less-fair goods. When the advice came from a human, it was generally well received; switch the human out for a chatbot, though, and buyers were less convinced, thanks to what the researchers called an incongruity in "message-source fit" – in other words, the sense that soulless stochastic parrots shouldn't concern themselves with matters of morality in the first place.

"When people are aware of the machine source of moral appeal, they are likely to activate their beliefs on AI, and such beliefs may diminish the persuasiveness of AI outputs that are considered as inappropriate to be produced by machines," Trzebiński told us. "Studies have revealed that pattern exists for various types of outputs, not only morally based advice, but also product recommendations related to pleasure, experience, or creative content.

"All those areas may be perceived as human-specific domains. Given that an AI agent may simulate humans, for example using informal tone and expressing empathy, people may be confused about the nature of the agent (AI or human) when it's not revealed. In such cases, people may be unsure how to react. AI systems are imperfect, they may hallucinate, and, without a human sense of morality, its moral advice may be misleading. So, I believe that people have the right to use their knowledge on AI and decide to what extent they should rely on AI."

That knowledge – or, rather, belief – surrounding artificial intelligence cuts both ways, though. The team found that a certain group was more likely to be swayed by the machine's moral arguments: the true believers in the AI revolution, who perceive AI as a fount of all knowledge and believe in the "superiority" of artificial over human intelligence.

Trzebiński still believes there's a place for AI in marketing, if used transparently and honestly. "I am optimistic about the value AI can provide," he told us. "Probably there will always be limitations, as people will remain the ones responsible for setting the task for an AI agent, and AI will use, directly or indirectly, human-generated content. However, taking the domain of morality as an example, AI may even outperform human sources, as it may be free from prejudices and easily embrace many different moral stances. So, I encourage marketers to use GenAI in their communication if they are transparent about the AI source of messaging.

[1]UK drafts AI to help Joe Public decipher its own baffling bureaucracy

[2]Little LLM on the RAM: Google's Gemma 270M hits the scene

[3]LLM chatbots trivial to weaponize for data theft, say boffins

[4]OpenAI's GPT-5 looks less like AI evolution and more like cost cutting

"On the other hand, consumers (and people in general) should be aware that their reactions to AI outputs may depend on their AI perceptions. For example, when an individual views an AI agent as humanlike, their concerns about the agent's ability to speak about morality may disappear. In that case, the individual should reconsider how to react to such AI output. Maybe they should be more skeptical about it, realizing that it was generated by a machine even though the machine tries to resemble a human."

Less scrupulous marketers, however, may take a different message from the study: the ability to target efforts toward the true believers who are likely to take the statistical text outputs an LLM generates at face value. "Marketers can focus on audiences more likely to perceive AI agents as humanlike and believe in AI's superiority over humans," the team wrote in the paper's section on practical implications of the research, "to make such communication more effective. Marketers can, therefore, attempt to predict AI anthropomorphising tendencies and AI superiority beliefs within their target groups, e.g. using social media to analyse user characteristics."

Those on the consuming side of the table, meanwhile, are advised to check their bias and "carefully consider whether an AI product recommender is capable of formulating moral judgments that are appropriate for the consumer's morality, and discount positive impressions about such capability which may result from perceiving the AI agent as humanlike or AI as generally superior to humans."

Rajesh Bhargave, associate professor of marketing at the Imperial Business School, said of the study: "There is of course some scepticism of AI systems, but recent research shows that this mistrust is rapidly declining. As with any technology, greater consumer exposure, continued improvements, and confidence that it has been properly tested all help to build trust.

"For now, mistrust of AI systems is not misplaced, given how recently the technology has appeared to spring up. Some residual scepticism of AI chats compared with human chats may well persist, but that is not the central issue."

Bhargave told us that in his opinion, the "real story is that AI chatbots are being rolled out quickly because they offer real value: reduced costs for firms, and greater convenience, personalisation, and speed for consumers. While many consumers may prefer dealing with a human, that is not the stopping point of the discussion.

"The real question is how much consumers are willing to pay - in time, money, or informational quality and relevance - for that preference. There is some value consumers may be willing to give up because of their taste for human interaction, but this value is certainly not infinite. There are trade-offs."

He mused: "Some companies may indeed be too quick to move away from human-centric sales, while others are likely to be too slow. I'm not sure which bias is more prevalent at this stage. It depends very much on the setting. In high-touch contexts, human interaction is essential, particularly where cost savings do not justify the loss. In other areas, excess hesitation to adopt chatbots could leave firms at a disadvantage, raising costs for both businesses and customers, and creating staffing challenges."

The team's paper has been published in the [5]Journal of Business Research under closed-access terms. ®

[1] https://www.theregister.com/2025/08/18/ai_form_fillers/

[2] https://www.theregister.com/2025/08/15/little_llm_on_the_ram/

[3] https://www.theregister.com/2025/08/15/llm_chatbots_trivial_to_weaponise/

[4] https://www.theregister.com/2025/08/13/gpt_5_cost_cutting/

[5] https://www.sciencedirect.com/science/article/abs/pii/S0148296325003108

Ace2

What in the actual fuck?

“… [P]eople are likely to believe that machines (such as AI-enabled chatbots) are not capable of judging what is moral and immoral.”

Belief don’t enter into it, supergenius.

MrAptronym

Well, the paper is only concerned with people's sentiment. This is a marketing paper, so facts don't matter.

I'll shorten this sentence for you

may_i

"Marketers can focus on audiences more likely to perceive AI agents as humanlike and believe in AI's superiority over humans,"

Marketers can continue to focus on the gullible as usual.

Duh!

Mike 137

" consumers (and people in general) should be aware that their reactions to AI outputs may depend on their AI perceptions "

In the human space, reactions to advice will always depend on perception of the reliability and trustworthiness of the provider. This is a general truism (but addressing the narrow specific case of "AI" does get another paper published).

Dammit...

NapTime ForTruth

...deus ex machina was supposed to be a theatrical stunt, not a literal action.
