

Is AI Cannibalizing Human Intelligence? A Neuroscientist's Way to Stop It

(Sunday April 26, 2026 @03:34AM (EditorDavid) from the ghosting-the-machine dept.)


The AI industry is largely failing to ask a key design question, argues theoretical neuroscientist and cognitive scientist Vivienne Ming: are its AI products building human capacity or consuming it?

In the Wall Street Journal, Ming shares her experiment testing [1] which group performed best at predicting real-world events (measured against forecasters on the prediction market Polymarket): AI, human, or human-AI hybrid teams.

> The human groups performed poorly, relying on instinct or whatever information had come across their feeds that morning. The large AI models — ChatGPT and Gemini, in this case — performed considerably better, though still short of the market itself. But when we combined AI with humans, things got more interesting. Most hybrid teams used AI for the answer and submitted it as their own, performing no better than the AI alone. Others fed their own predictions into AI and asked it to come up with supporting evidence. These "validators" had stumbled into a classic confirmation bias-loop: the sycophancy that leads chatbots to tell you what you want to hear, even if it isn't true. They ended up performing worse than an AI working solo.

>

> But in roughly 5% to 10% of teams, something different emerged. The AI became a sparring partner. The teams pushed back, demanding evidence and interrogating assumptions. When the AI expressed high confidence, the humans questioned it. When the humans felt strongly about an intuition, they asked the AI to come up with a counterargument... These teams reached insightful conclusions that neither a human nor a machine could have produced on its own. They were the only group to consistently rival the prediction market's accuracy. On certain questions, they even outperformed it...

>

> We are building AI systems specifically designed to give us the answer before we feel the discomfort of not having it. What my experiment suggests is that the human qualities most likely to matter are not the feel-good ones. They're the uncomfortable ones: the capacity to be wrong in public and stay curious; to sit with a question your phone could answer in three seconds and resist the urge to reach for it. To read a confident, fluent response from an AI and ask yourself, "What's missing?" rather than default to "Great, that's done." To disagree with something that sounds authoritative and to trust your instinct enough to follow it. We don't build these capacities by avoiding discomfort. We build them by choosing it, repeatedly, in small ways: the student who struggles through a problem before checking the answer; the person who asks a follow-up question in a conversation; the reader who sits with a difficult idea long enough for it to actually change one's mind. Most AI chatbots today default to easy answers, which is hurting our ability to think critically.

>

> I call this the Information-Exploration Paradox. As the cost of information approaches zero, human exploration collapses. We see it in students who perform better on AI-assisted tasks and worse on everything afterward. We see it in developers shipping more code and understanding it less. We are, in ways that feel like progress, slowly optimizing ourselves out of the loop.

The author has just published a book called "[2] Robot-Proof: When Machines Have All The Answers, Build Better People." She suggests using AI to "explore uncertainty... before you accept an AI's answer, ask it for the strongest argument against itself."

She is also urging new performance benchmarks for AI-human hybrid teams.



[1] https://www.wsj.com/tech/ai/is-ai-smarter-than-humans-cyborg-956e0f0e?mod=tech_lead_pos2

[2] https://amzn.to/4uuVJGH



Nothing surprising here! (Score:3)

by oldgraybeard ( 2939809 )

Using AI to (1) tell you the answer, vs. (2) confirm your answer, vs. (3) as a tool to assist. Most humans will go the route of 1 or 2 because they aren't doing the thinking needed to use 3 in the first place.

Today's thinking-less "AI" (no reasoning going on here, folks) will create less-able humans who can't function and don't know how to do much of anything.

Re: (Score:3)

by martin-boundary ( 547041 )

You are not The One. You may be The One some day. But not now.

First, you must realize that There Is No Answer.

Always check sources (Score:3)

by TheMiddleRoad ( 1153113 )

Modern search AI catalogues everything, then finds links/sources that it summarizes. From the summary you can follow those links and actually see what the pages say, some of them written by humans. Generally, when I search like this, I eventually find answers.

I learnt early (Score:2)

by NotEmmanuelGoldstein ( 6423622 )

Skill involves remembering the edge cases, not the default, almost incorrect answers.

There's more than just recall (Score:2)

by Lunati Senpai ( 10167723 )

These studies really irk me, because it all reminds me of studies on "does the internet make us dumber?" and junk like the "google effect" where people are less able to recall things, because they remember how to look it up, but not the information.

We have a giant collection of all of human knowledge that doubles every seven or so years, with the doubling interval itself shrinking, and it will probably get even faster as we get more efficient.

I'd love to see a double blind study that compares someone who r

Some issues... (Score:2)

by Junta ( 36770 )

So I suppose the real point they want to make is that human consideration with GenAI input is better than GenAI alone, but there's some issues with the first bit about comparing 'pure human' to 'pure AI'.

The first sign is that they treat Polymarket as a benchmark distinct from "human prediction," but Polymarket is itself just aggregated human prediction, mostly from humans who tend to be better informed about the specific topics they are betting on.

So we see "Th
