
'Cognitive Surrender' Leads AI Users To Abandon Logical Thinking, Research Finds (arstechnica.com)

(Saturday April 04, 2026 @10:00AM (BeauHD) from the reshaping-human-reasoning dept.)


An anonymous reader quotes a report from Ars Technica:

> When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine. [1]Recent research goes a long way to forming a new psychological framework for that second group, which [2]regularly engages in "cognitive surrender" to AI's seemingly authoritative answers. That research also provides some experimental examination of when and why people are willing to outsource their critical thinking to AI, and how factors like time pressure and external incentives can affect that decision.

>

> Overall, across 1,372 participants and over 9,500 individual trials, the researchers found subjects were willing to accept faulty AI reasoning a whopping 73.2 percent of the time, while only overruling it 19.7 percent of the time. The researchers say this "demonstrate[s] that people readily incorporate AI-generated outputs into their decision-making processes, often with minimal friction or skepticism." In general, "fluent, confident outputs [are treated] as epistemically authoritative, lowering the threshold for scrutiny and attenuating the meta-cognitive signals that would ordinarily route a response to deliberation," they write. These kinds of effects weren't uniform across all test subjects, though. Those who scored highly on [3]separate measures of so-called fluid IQ were less likely to rely on the AI for help and were more likely to overrule a faulty AI when it was consulted. Those predisposed to see AI as authoritative in a survey, on the other hand, were much more likely to be led astray by faulty AI-provided answers.

>

> Despite the results, though, the researchers point out that "cognitive surrender is not inherently irrational." While relying on an LLM that's wrong half the time (as in these experiments) has obvious downsides, a "statistically superior system" could plausibly give better-than-human results in domains such as "probabilistic settings, risk assessment, or extensive data," the researchers suggest. "As reliance increases, performance tracks AI quality," the researchers write, "rising when accurate and falling when faulty, illustrating the promises of superintelligence and exposing a structural vulnerability of cognitive surrender." In other words, letting an AI do your reasoning means your reasoning is only ever going to be as good as that AI system. As always, let the prompter beware.



[1] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646

[2] https://arstechnica.com/ai/2026/04/research-finds-ai-users-scarily-willing-to-surrender-their-cognition-to-llms/

[3] https://www.btbpsychology.com/blog/understanding-psychoeducational-testing-terms-fluid-reasoning



Oh Brave New World with such people in it (Score:5, Insightful)

by Ender_Wiggin ( 180793 )

"On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine"

I've run into these people; they're the worst. It was bad enough dealing with people whose mindset was 'If I can't find it on Google then it doesn't exist,' and it seems these people have moved on to AI and gotten dumber but think they're even smarter.

Re: (Score:2, Offtopic)

by TheMiddleRoad ( 1153113 )

Stupidity maps across the political spectrum on an inverse bell curve.

Re: (Score:2)

by UnknowingFool ( 672806 )

Or worse, because it is on the Internet, it must be true. I had a friend whose entire argument that some conspiracy theory was true was that multiple people posted things on websites. I countered that I could set up a website to detail how that friend murdered a homeless person one summer.

Another word for stupidity. (Score:5, Insightful)

by gurps_npc ( 621217 )

I think what is really going on is that it is not 'fluid IQ', but regular, normal "IQ".

That is, stupid people either do not realize the AI is wrong, or more likely, they are so used to being corrected by more intelligent people that they just assume the AI must be smarter than they are and do not challenge it.

I can also see a small number of submissive/shy/apathetic people just accepting the wrong information and thinking it is not worth fixing.

This kind of thing gets me so mad that I would never just accept that.

Fluid versus crystallized (Score:2)

by Okian Warrior ( 537106 )

> I think what is really going on is that is not 'fluid IQ', but regular, normal "IQ".

"Fluid" intelligence is the ability to think, reason, solve problems, and learn things. "Crystallized" intelligence is your amassed knowledge.

These are technical terms used in the literature.

Intelligence is nature's guess as to how complex your environment will be... but there's an out. People with low fluid intelligence have to work harder to understand things, but if they put in the work they can amass a body of knowledge that rivals that of people with high fluid intelligence.

And of course, lots of peopl

Re: (Score:2)

by gweihir ( 88907 )

"Fluid IQ" is just how much of your IQ you actually use. The term was probably invented to avoid having to tell high-IQ people who are not independent thinkers (quite a lot, probably a majority) that they are effectively pretty dumb and mentally incapable.

my AI posted this comment (Score:3, Informative)

by Ragnarok89 ( 1066010 )

... So I don't have to. I assume it's correct.

Re: (Score:1)

by Anonymous Coward

I let my AI check it. It is.

New religion (Score:2, Interesting)

by Calydor ( 739835 )

I would really like to see a study trying to correlate being religious to believing whatever the AI tells you. I suspect there's a strong overlap but that's just a gut feeling; I'd love to see it actually tested.

Re:New religion (Score:4, Informative)

by TheMiddleRoad ( 1153113 )

Bunch of nonsense. You're the dummy. 1. Religion is not conservative or liberal by nature. 2. AI models are not programmed. They are trained on data. 3. Grok is an example of conservatively programmed AI. Do us all a favor and shut the fuck up, forever.

Re: (Score:2)

by gweihir ( 88907 )

You just exposed yourself as a stupid person. Because all reliable research says exactly the opposite: The more conservative, the more mentally incapable. And that is not even me trying to insult you. That is just a solid fact.

Re:New religion (Score:4, Interesting)

by ClickOnThis ( 137803 )

That would indeed be an interesting study.

Religions generally accept wisdom from sacred texts. (Yes, I know there are exceptions.) So one would presume that those who are ready to accept information on the authority of sacred texts would accept it from an AI that is perceived as an authority.

On the other hand, those same religious people could recognize that AI is distinct from their religious texts, and apply a different standard to it.

Re: (Score:2)

by dfghjk ( 711126 )

"Religions generally accept wisdom from sacred texts."

This is false. Religions CREATE privileged texts, which they call "sacred texts" or scriptures, which contain stories that are fabricated. Religions do not "accept wisdom" from these created texts because religions create those texts.

Now, parishioners could be said to "generally accept wisdom from sacred texts." Perhaps that is what you meant. Religions are a mechanism to control people, scripture is a tool that is used.

Personally, I think the entire

Re: (Score:3)

by dvice ( 6309704 )

"Thinking about God increases acceptance of artificial intelligence in decision-making"

[1]https://pmc.ncbi.nlm.nih.gov/a... [nih.gov]

[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC10438833/

Re: (Score:2)

by gweihir ( 88907 )

There is one thing: About 10-15% of the population are independent thinkers and about 20-25% (including the former) can be convinced by rational argument. At the same time, about 80% of the human race is religious in one form or another. There will be some special cases and some overlap. For example, people who know their religious beliefs are irrational and are just using them to make themselves feel better. But overall, these are the two pools of people we have.

Now, add that fact-checking AI is typica

It’s always been this way (Score:1)

by Kingduck ( 894139 )

I said the same thing 30 years ago before AI was even in the movies. Seems like there has always been a section of our population that thinks “logical thinking” is a pop band from the 70’s. Each and every person on this site has known at least one of those types of people and probably sees a few every single day at their jobs.

Re: (Score:2)

by drinkypoo ( 153816 )

> I said the same thing 30 years ago before AI was even in the movies.

30 years ago was 1926?

Re: (Score:2)

by gweihir ( 88907 )

It seems to be a population stereotype with about 10-15% willing to fact check (and hence getting good at it due to experience) and around 25% in total that are open to rational argument. The rest just wants to feel good about themselves, and any lie or illusion that does that is just fine with them.

It does not seem to really be connected to IQ either. It seems to be a fundamental personality defect that transcends ability to handle complexity. I mean, as long as you are smart enough to read, you can

Re: (Score:2)

by dfghjk ( 711126 )

This is not only true, it is the most important takeaway. AI has not created these two kinds of users, they have always existed.

Normal (Score:4, Insightful)

by nospam007 ( 722110 ) *

50% of us have an IQ of under 100.

Re: Normal (Score:1)

by EldoranDark ( 10182303 )

Or is it 49%? If 100 is adjusted to be the average across a target population, there's going to be a lot of people right on the mark... And it's not a linear distribution, so there are more people scoring 100 than there are 148 or 71... I should ask ChatGPT... ChatGPT said that 50% are above 100, 50% are below 100. Yeah, we're screwed.

Re: (Score:2)

by ClickOnThis ( 137803 )

> Or is it 49%? If 100 is adjusted to be the average across a target population, there's going to be a lot of people right on the mark... And it's not a linear distribution, so there are more people scoring 100 than there are 148 or 71... I should ask ChatGPT... ChatGPT said that 50% are above 100, 50% are below 100. Yeah, we're screwed.

And ChatGPT is not necessarily correct here. Neither was George Carlin when he said:

Think of how stupid the average person is, and realize half of them are stupider than that.

ChatGPT and Carlin are confusing mean with median. The latter divides a population 50-50, the former not necessarily so.
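The mean/median distinction is easy to see with a quick sketch (the samples below are made up for illustration, not real IQ data):

```python
from statistics import mean, median

# A skewed sample: one high outlier drags the mean above most of the values.
scores = [90, 92, 95, 97, 100, 101, 103, 160]
print(mean(scores))    # 104.75 -- pulled upward by the 160
print(median(scores))  # 98.5   -- still splits the sample 50/50

# A symmetric sample (like the normal curve IQ tests are normed to):
# mean and median coincide, so "half below average" happens to hold.
symmetric = [85, 90, 95, 100, 100, 105, 110, 115]
print(mean(symmetric))    # 100
print(median(symmetric))  # 100
```

Only five of the eight skewed scores fall below the mean, so "half are below average" fails there; it holds for IQ only because the scale is normed to a symmetric distribution.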

Re: (Score:2)

by ClickOnThis ( 137803 )

You'll rightly be modded Troll soon enough, but for the record: research has shown no significant difference in the average intelligence of male and female human beings.

However, there is some evidence that genders are better on average at certain kinds of tasks: females on average are better at verbal and memory-intensive tasks, whereas males on average are stronger with spatial and mathematical reasoning. But of course, the two populations overlap significantly: we have plenty of outstanding female scienti

Re: (Score:2)

by gweihir ( 88907 )

Ability to fact-check seems to not be or only weakly connected to intelligence. What you need is to want to know. Most people do not want to know.

Re: (Score:2)

by dfghjk ( 711126 )

With the rise of MAGA, it seems there is more than that. SuperKendallism is pervasive.

What happens to the masses? (Score:3)

by dark.nebulae ( 3950923 )

The interesting question isn't that 73% of people accept faulty AI reasoning…

It's which 73%.

What happens to the segment of the population that already struggles with critical thinking? The folks who've historically bought into things like flat earth, QAnon, miracle cures, etc.

Those groups didn't suddenly appear because of AI, they existed long before it. They already demonstrate a tendency to accept authoritative-sounding information without much scrutiny.

So what changes now?

If anything, AI just becomes another "authority" to outsource thinking to. And per this study, those already predisposed to see AI as authoritative are the most likely to be led astray.

Sure, today if you ask Claude or ChatGPT about flat earth, you'll get a correct answer. But we all know these systems can be nudged, reframed, or persistence-prompted into saying almost anything.

And here's the real problem:

If someone didn't question YouTube videos, Facebook posts, or random blogs… why would they suddenly start questioning AI?

They won't.

So the outcome isn't that AI "fixes" bad thinking. It likely just amplifies whatever thinking was already there.

For people with strong critical thinking skills, AI is a tool.

For people without it, it's just a more convincing storyteller.

That seems like the real risk.

PT Barnum Reincarnate, at your service. (Score:2)

by geekmux ( 1040042 )

> The interesting question isn't that 73% of people accept faulty AI reasoning…

> It's which 73%.

> What happens to the segment of the population that already struggles with critical thinking? The folks who've historically bought into things like flat earth, QAnon, miracle cures, etc.

> Those groups didn't suddenly appear because of AI, they existed long before it. They already demonstrate a tendency to accept authoritative-sounding information without much scrutiny.

> So what changes now?

We name the new AI "PT Barnum" and turn it up to 11 via a Spinal Tap.

Then we fire up the industrial popcorn machine, and remember the good ol' days.

Good luck to anyone born after nineteen-hundred-the-fuck-off-my-lawn.

Critical Thinking (Score:5, Insightful)

by Tomahawk ( 1343 )

is something that just isn't taught properly, if at all, in schools. We see the lack of it everywhere. So it's understandable that many are offloading this to something else because they just don't know how to do it themselves. Laziness is also a factor, yes. But inability, I feel, is the biggest factor here.

Re: (Score:2)

by CommunityMember ( 6662188 )

> is something that just isn't taught properly, if at all, in schools.

Schools in the US generally stop emphasizing the teaching of critical thinking by about the eighth grade. There are a number of contributing reasons for that (some blame curricula that are focused more on compliance and passing standardized tests than learning how to think). As individuals generally are considered to still be learning how to think and reason until their early 20s, the lack of teaching critical thinking well into High School leaves a significant part of the population under prepared for under

Re: (Score:2)

by gweihir ( 88907 )

Critical thinking or independent thinking is something most people do not do and do not like doing, because they would learn things that frighten them, for example how little they understand the world. Reasonable estimates put independent thinkers at around 10-15% of the population (goes up to around 20% if you add those that can be convinced by rational argument). The rest prefers a convenient illusion or lie to actual insight.

I do not think this is connected to education or intelligence anymore. I thin

Re: (Score:2)

by dvice ( 6309704 )

Do you avoid random number generators just because they are not actually random? No, you just take that into account when using them. There are methods that allow you to get a million answers from the AI in a sequence without a single mistake. ( [1]https://www.youtube.com/watch?... [youtube.com] )

[1] https://www.youtube.com/watch?v=ZrNgmReVrkk

Re: (Score:2)

by geekmux ( 1040042 )

> There are methods that allow you to get a million answers from the AI in a sequence without a single mistake. ( [1]https://www.youtube.com/watch?... [youtube.com] )

There are probably many methods to ensure that AI can do something accurately and correctly.

Ever wonder how many methods there are to manipulate and convince AI that it's wrong?

Today's AI tends to remind me of Google hacking 20+ years ago. I fear the early days can present challenges we haven't even thought of trying to curtail or control yet.

[1] https://www.youtube.com/watch?v=ZrNgmReVrkk

Just had some of this (Score:2)

by greytree ( 7124971 )

I was working on a little Python/Django project for myself, fun to code it but also useful to me.

I used it to try out the free tier of an AI in the IDE.

It did a great job, although its solutions were overcomplicated.

But, worse, I didn't try to understand what it wrote, and now I have to read through all the project code to be able to work on it myself again.

And I don't feel like doing that.

Fun is writing my own code, *work* is reading someone else's.

I have surrendered my project to the AI.

Mentally ill people using human terms for AI (Score:3, Informative)

by BrendaEM ( 871664 )

It reads like, "I think my office chair is sentient."

Re: (Score:2)

by freeze128 ( 544774 )

The Computer is your friend. The Computer is crazy. The Computer wants to make you happy. This will drive you crazy.

Exercise (Score:2, Insightful)

by dskoll ( 99328 )

Imagine if a bunch of tech bros said: "Hey, you don't need exercise. It's totally fine if your muscles atrophy. After all, we have technology to move you around and it can do so much more quickly than your muscles ever could!" We'd laugh them out of town.

Well, guess what? If you don't exercise your brain, it atrophies. If you outsource your thinking, you eventually become unable to think.

Re: (Score:2)

by gweihir ( 88907 )

Your mistake here is assuming most people think without AI at their disposal. Usually they just repeat something they have heard and liked and then convince themselves they have had a great insight. This approach is widespread.

It's a new tool (Score:2)

by kencurry ( 471519 )

If you are good at using tools, you will be good at using AI when you need it. If you made stupid decisions before, you will still make stupid decisions using AI to back them up.

Dumb people remain dumb when using AI (Score:2)

by gweihir ( 88907 )

Kind of predictable, that result. I mean, something like 80% of all people routinely do not fact-check when trying (and usually failing) to think for themselves, so why would they suddenly start to fact-check when using AI? On top of that, fact-checking is a skill that needs to be practiced to get good at it. When you never do it, you suck at it and nothing can fix that except starting to fact-check things. But then inconvenient things start to intrude, like all your friends not being that smart either and yo
