

Students using ChatGPT beware: Real learning takes legwork, study finds

(2025/11/03)


A study of how people use ChatGPT for research has confirmed something most of us learned the hard way in school: to be a subject matter expert, you've got to spend time swotting up.

More than 10,000 participants took part in a series of experiments designed to determine how people's understanding of a subject differed when using ready-made summaries from AI chatbots, versus piecing together online information found through traditional web searches.

It found that participants who used ChatGPT and similar tools developed a shallower grasp of the subject they were assigned to study, could provide fewer concrete facts, and tended to echo information similar to other participants who'd used AI tools.


The researchers concluded that, while large language models (LLMs) are exceptionally good at spitting out fluent answers at the press of a button, people who rely on synthesized AI summaries for research typically don't come away with materially deeper knowledge. Only by digging into sources and piecing information together themselves do people tend to build the kind of lasting understanding that sticks, the team found.


"In contrast to web search, when learning from LLM summaries users no longer need to exert the effort of gathering and distilling different informational sources on their own — the LLM does much of this for them," the researchers said in [4]a paper published in the October issue of PNAS Nexus.

"We predict that this lower effort in assembling knowledge from LLM syntheses (vs. web links) risks suppressing the depth of knowledge that users gain, which subsequently affects the nature of the advice they form on the topic for others."


In other words: when you outsource the work of research to generative AI, you bypass the mental effort that turns information-gathering into genuine understanding.

Don't believe the black box

The research adds weight to growing concerns about the reliability of AI-generated summaries.


A recent BBC-led investigation found that four of the most [7]popular chatbots misrepresented news content in almost half their responses, highlighting how the same tools that promise to make learning easier often blur the boundary between speedy synthesis and [8]confident-sounding fabrication.

In the PNAS Nexus study, researchers from the University of Pennsylvania's Wharton School and New Mexico State University conducted seven experiments in which participants were tasked with boning up on various topics, including how to plant a vegetable garden, how to lead a healthier lifestyle, and how to deal with financial scams.

Participants were randomly assigned to use either an LLM – first ChatGPT and later, [9]Google's AI Overviews – or traditional Google web search links. In some experiments, both groups saw exactly the same facts, except that one group was presented with a single AI summary while the other was presented with a list of articles to read.

After completing their searches, participants were asked to write advice for a friend based on what they'd learned. The results were consistent: participants who used AI summaries spent less time engaging with sources, reported learning less, and felt less personal investment in what they wrote. Their advice to friends was also shorter, cited fewer facts and was more similar to that of other AI users.


The researchers ran a follow-up test with 1,500 new participants, who were asked to evaluate the quality of the advice written by participants in the earlier experiments. Perhaps unsurprisingly, they deemed the AI-derived advice less informative and less trustworthy, and said they were less likely to follow it.

Support, not substitute

One of the more striking takeaways of the study was that young people's growing reliance on AI summaries for quick-hit facts could "deskill" their ability to engage in active learning. However, the researchers also noted that this only really applies if AI replaces independent study entirely — meaning LLMs are best used to support, rather than substitute for, critical thinking.

The authors concluded: "We thus believe that while LLMs can have substantial benefits as an aid for training and education in many contexts, users must be aware of the risks — which may often go unnoticed — of overreliance. Hence, one may be better off not letting ChatGPT, Google, or another LLM 'do the Googling.'"





[4] https://academic.oup.com/pnasnexus/article/4/10/pgaf316/8303888


[7] https://www.theregister.com/2025/10/24/bbc_probe_ai_news/

[8] https://www.theregister.com/2025/09/17/openai_hallucinations_incentives/

[9] https://www.theregister.com/2025/09/07/googles_ai_cites_written_by_ai/




Blimey

ICL1900-G3

Who would have thought it?

Re: Blimey

Pascal Monett

I'm guessing a lot of people did, but it takes one or more studies like this to carry the message to people.

In the end, cheaters will be cheaters. The advantage of LLMs is apparently that actually competent people are going to be able to sort out the chaff a bit easier.

Not just AI

MisterHappy

I have found this with many things that present you with the answer instead of requiring you to 'learn' the answer.

The immediate thing that comes to mind is how it seems to take me twice as long to learn a route using sat-nav than it did when I had to use maps & planning. Being given the answer instead of having to find the answer doesn't seem to embed the knowledge as quickly.

Or I could just be old...

Idiocracy - in real life

b0llchit

The film was supposed to be entertainment, not a peek into the future.

I can't do that Dave

Delbert

What could possibly go wrong allowing AI to do academic work ? writing nonsense that you might not realise to be nonsense because you failed to study and understand the work. On the plus side it should weed out a few howling idiots from the herd. Obviously that does not apply to political studies where they do not understand empirical evidence or think it has something to do with royalty.
