Doctors get dopey if they rely too much on AI, study suggests
- Reference: 1755126442
- News link: https://www.theregister.co.uk/2025/08/13/doctors_risk_being_deskilled_by_rely_on_ai/
- Source link:
One recent [1]study shows that using AI-assisted detection during colonoscopy results in a 12.5 percent increase in adenoma detection rates (ADR). That improvement is [2]expected to save lives.
But when doctors lose access to AI assistance, their ability to spot adenomas tends to drop below what it was before they started relying on AI, according to [3]a study published in The Lancet Gastroenterology & Hepatology.
"Continuous exposure to AI might reduce the ADR of standard non-AI assisted colonoscopy, suggesting a negative effect on endoscopist behaviour," the study concludes.
The analysis, based on data from four endoscopy centers in Poland between September 2021 and March 2022, compares the change in ADR of standard, non-AI assisted colonoscopy before and after endoscopists were exposed to AI in their clinics.
"The ADR of standard colonoscopy decreased significantly from 28.4 percent (226 of 795) before to 22.4 percent (145 of 648) after exposure to AI, corresponding with an absolute difference of minus 6.0 percent," the study says.
The 21 authors of the Lancet paper note that in 2019, the European Society of Gastrointestinal Endoscopy (ESGE) warned about the risk of "deskilling" in its [8]AI guidelines [PDF].
"Possible significant risks with implementation, specifically endoscopist deskilling and over-reliance on artificial intelligence, unrepresentative training datasets, and hacking, need to be considered," the ESGE said.
The authors say they believe their study is the first to look at the effect of continuous AI exposure on clinical outcomes and they hope the findings prompt further research into the impact of AI on healthcare.
AI, for all its purported benefits in efficiency, may impose a cost on the people who use it. In June, MIT researchers published a related [13]study that found the use of LLM chatbots was [14]associated with lower brain activity.
Concern about "deskilling" due to automation dates back decades. As noted in [15]a recent paper from Purdue researchers, psychologist Lisanne Bainbridge's 1983 work " [16]Ironies of Automation " explored how the automation of industrial processes may expand problems for human system operators rather than solve them.
The Purdue academics argue the situation is similar for designers who come to rely on AI.
"Our findings suggest that while AI-driven automation is perceived as a means of increasing efficiency, excessive delegation may unintentionally hinder skill development," they conclude.
Princeton University computer scientist Arvind Narayanan recently [18]argued that developer deskilling as a result of AI is a concern. In his view, it isn't like the old fear that compilers would eliminate people's ability to write machine code, a worry expressed years ago that never came to pass.
"On the other hand, if a junior developer relies too much on vibe coding and hence can't program at all by themselves, in any language, and doesn't understand the principles of programming, that definitely feels like a problem," he said. ®
[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC11866038/
[2] https://www.thelancet.com/journals/landig/article/PIIS2589-7500(22)00042-5/fulltext
[3] https://www.thelancet.com/journals/langas/article/PIIS2468-1253(25)00133-5/abstract
[8] https://www.esge.com/assets/downloads/pdfs/guidelines/2019_a_1031_7657.pdf
[13] https://www.brainonllm.com/
[14] https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/
[15] https://arxiv.org/abs/2503.03924
[16] https://ckrybus.com/static/papers/Bainbridge_1983_Automatica.pdf
[18] https://www.linkedin.com/posts/randomwalker_when-we-use-generative-ai-for-work-there-activity-7325125273492885504-O2FF/
Doctors?
It isn't only doctors...
This dwarfs other problems with current AI usage
We must not be Bashful or remain Sleepy; when a Doc gets Dopey we should be Grumpy. The AI peddlers want us to be Happy to use their wares all the time, but deskilling is a serious problem and not something to be Sneezy at.
We cannot rely on the random appearance of a Prince on a white charger to save us in the last act, and must be grateful to such researchers, and TFA, for giving us warnings so we can decide when to spit out this poison apple before it gets permanently stuck.
PS
Claude, Claude on the wall, who is the fairest of us all? And don't *immediately* say "you are", I read [1]the article about AI sycophancy.
[1] https://www.theregister.com/2025/08/13/claude_codes_copious_coddling_confounds/
Not sure they're asking all the right questions
I'm more curious about the qualitative analysis. What percentage of cancers do experienced humans see that AI does not? What percentage of cancers does AI see that experienced humans do not? If those numbers are not small, they aren't seeing the same set of cancers, so you would want to have both experienced doctors and AI looking - but if the AI makes the experienced doctors "dopey" then any such advantage will be fleeting.
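Something like this is what I have in mind, with entirely made-up numbers (a purely illustrative Python sketch; the study doesn't report per-lesion overlap like this):

    # Hypothetical counts only, for illustration.
    seen_by_human_only = 12    # caught by the endoscopist, missed by AI
    seen_by_ai_only = 20       # caught by AI, missed by the endoscopist
    seen_by_both = 150
    total = seen_by_human_only + seen_by_ai_only + seen_by_both

    print(f"Human-only share: {seen_by_human_only / total:.1%}")
    print(f"AI-only share:    {seen_by_ai_only / total:.1%}")
    # If either "only" share is non-trivial, the two aren't seeing the same lesions,
    # so pairing them beats either alone - unless deskilling erodes the human side.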
Re: Not sure they're asking all the right questions
Doctors have been relying on “expert system” diagnosis for many years now, in fact decades.
Lookit this “doctor” resorting to begging advice from a Commodore PET, for heaven’s sake!
https://youtu.be/xwBHXx2SllA&t=1005
Same thing with automobile mechanics
As soon as some dope invented the hydraulic jack, auto mechanics everywhere got real weak, real fast.
I wonder if the referenced study used 'AI' tools to reach that conclusion?
Probably.