News: 0177471609

  Give a man a fire and he's warm for a day, but set fire to him and he's warm for the rest of his life (Terry Pratchett, Jingo)

Asking Chatbots For Short Answers Can Increase Hallucinations, Study Finds (techcrunch.com)

(Monday May 12, 2025 @11:30PM (msmash) from the stranger-things dept.)


Requesting concise answers from AI chatbots [1]significantly increases their tendency to hallucinate, according to new research from Paris-based AI testing company Giskard. The study found that leading models -- including OpenAI's GPT-4o, Mistral Large, and Anthropic's Claude 3.7 Sonnet -- sacrifice factual accuracy when instructed to keep responses short.

"When forced to keep it short, models consistently choose brevity over accuracy," Giskard researchers noted, explaining that models lack sufficient "space" to acknowledge false premises and offer proper rebuttals. Even seemingly innocuous prompts like "be concise" can undermine a model's ability to debunk misinformation.



[1] https://techcrunch.com/2025/05/08/asking-chatbots-for-short-answers-can-increase-hallucinations-study-finds/



Attention spans are shortening (Score:2)

by burtosis ( 1124179 )

I don’t understand this complex issue. Explain in 12 words or less.

Re: (Score:2)

by Mr. Dollar Ton ( 5495648 )

When shortening an argument, important logic can vanish from the abridged version.

My chatbot disagrees. (Score:2)

by gurps_npc ( 621217 )

In fact, my chatbot promised it never hallucinates.

Re: (Score:2)

by Mr. Dollar Ton ( 5495648 )

In fact, it only daydreams.

hallucinations... (Score:2)

by wyHunter ( 4241347 )

...sounds so much better than 'making sh*t up and lying' doesn't it?

So ask for longer answers (Score:1)

by rogersc ( 622395 )

This is well known. It is why the AI chatbots have moved to longer "thinking" models.

Re: (Score:2)

by martin-boundary ( 547041 )

That's genius! ChatGPT from now on, please answer all my questions in 200 sentences or more, with supporting documentation and animated graphics. Also hide all those answers from me, and only give me a summary in 10 words or less.

There is a sweet spot to ChatGPT (Score:2)

by ironicsky ( 569792 )

ChatGPT is full of undocumented limits. I've found the other problem: lengthy chats, where you've provided a lot of context and iterated over dozens or hundreds of prompts, at some point start "retrograding" and providing answers to previous prompts instead of what you asked.

If you ask it to generate a file, don't expect the file to stick around for any predictable amount of time, because it deletes what it generates shortly after.

Re: (Score:2)

by DamnOregonian ( 963763 )

Ya, one major downside of operating with non-local LLMs is "magical"/undocumented/unknown context-management strategies.

Coming from local-only operation, I was pretty surprised by some of the problems people had, but after trying some of the SOTA online models, I figured out what was going on pretty quickly: it reminded me of how ollama silently caps you at an 8k context, regardless of what the model was trained for, unless you manually specify larger in the Modelfile.
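For anyone hitting the same wall: ollama's Modelfile does support overriding the default context window via the `PARAMETER num_ctx` directive. A minimal sketch (the base model name and the 32k value here are just examples; pick whatever your model and VRAM can actually handle):

```
# Modelfile -- extend the context window for a local model
FROM llama3
PARAMETER num_ctx 32768
```

Then build and run the variant with:

```
ollama create llama3-longctx -f Modelfile
ollama run llama3-longctx
```

Note that a larger num_ctx costs memory, and it doesn't extend what the model was trained for; it just stops ollama from silently truncating below it.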

It just starts getting weird and acting l

Kind of like humans on Twitter (Score:2)

by Powercntrl ( 458442 )

"Well crap, there's not enough space for my in-depth five paragraph essay pointing out precisely what was wrong with what someone said, so I'll just call their mom a hoe."

Hallucinations? (Score:2)

by Some Guy ( 21271 )

Software cannot hallucinate.

Enough with anthropomorphizing these things. It's not a hallucination - it's an error.

Re: Hallucinations? (Score:1)

by CustomBuild ( 2891601 )

I agree with you. By attaching a living behavior to an application, we confuse people into believing that movie plots which focus on thinking machines are now a reality. AI algorithms fill knowledge gaps with inaccurate data, nothing more.

Translation ... (Score:1)

by Oh really now ( 5490472 )

Tell me if I'm close. When asked to concisely offer information, you're preventing the LLM from faithfully spewing your preferred rhetoric.

"... ability to debunk misinformation"

For fucks sake, grow up.

"Life, loathe it or ignore it, you can't like it."
-- Marvin, "Hitchhiker's Guide to the Galaxy"