30% of Doctors In UK Use AI Tools In Patient Consultations, Study Finds (theguardian.com)
- Reference: 0180280919
- News link: https://science.slashdot.org/story/25/12/04/0538249/30-of-doctors-in-uk-use-ai-tools-in-patient-consultations-study-finds
- Source link: https://www.theguardian.com/society/2025/dec/03/gp-doctors-health-uk-artificial-intelligence-study
> Almost three in 10 GPs in the UK are [1]using AI tools such as ChatGPT in consultations with patients, even though it could lead to them making mistakes and being sued, a study reveals. The rapid adoption of AI to ease workloads is happening alongside a "wild west" lack of regulation of the technology, which is leaving GPs unaware which tools are safe to use. That is the conclusion of [2]research by the Nuffield Trust thinktank, based on a survey of 2,108 family doctors by the Royal College of GPs about AI and on focus groups of GPs.
>
> Ministers hope that AI can help reduce the delays patients face in seeing a GP. The study found that more and more GPs were using AI to produce summaries of appointments with patients, assisting their diagnosis of the patient's condition and routine administrative tasks. In all, 598 (28%) of the 2,108 survey respondents said they were already using AI. More male (33%) than female (25%) GPs have used it and far more use it in well-off than in poorer areas.
>
> It is moving quickly into more widespread use. However, large majorities of GPs, whether they use it or not, worry that practices that adopt it could face "professional liability and medico-legal issues," and "risks of clinical errors" and problems of "patient privacy and data security" as a result, the Nuffield Trust's report says. [...] In a blow to ministerial hopes, the survey also found that GPs use the time it saves them to recover from the stresses of their busy days rather than to see more patients. "While policymakers hope that this saved time will be used to offer more appointments, GPs reported using it primarily for self-care and rest, including reducing overtime working hours to prevent burnout," the report adds.
[1] https://www.theguardian.com/society/2025/dec/03/gp-doctors-health-uk-artificial-intelligence-study
[2] https://www.nuffieldtrust.org.uk/research/how-are-gps-using-ai-insights-from-the-front-line
AI transcriptions cost me $$ (Score:1)
My doctor mentioned that he likes to use AI transcriptions for my annual physical. I thought no problem until I got the bill. The visit, which was supposed to be covered 100%, was billed out at $400. When I called billing, they read back the transcript where he mentioned I had some redness from acid reflux and should consider an acid blocker. For this, my free annual physical became an office visit to treat a condition I did not even mention.
Re: (Score:2)
Everyone who supports AI medical transcriptions says, "of course you still need to proof-read it," but we know there are a lot of physicians and psychologists not proof-reading the transcriptions because stuff like this is getting through. Do doctors not take ethics seriously? They're worried about lawsuits, but not worried about using an unproven technology that's notorious for confabulating?
Re: AI transcriptions cost me $$ (Score:2)
Well duh, they delete the actual recording and then they're safe.
Re: (Score:2)
True. I love how the vendors are selling this as a privacy feature, when in reality it's a CYA feature. They're clearly going to be hit by massive class action lawsuits over this, and they just seem oblivious to it. I guess if there's money to be made now, don't worry about the future. Hire some lawyers.
Re: (Score:2)
> My doctor mentioned that he likes to use AI transcriptions for my annual physical. I thought no problem until I got the bill. The visit, which was supposed to be covered 100%, was billed out at $400. When I called billing, they read back the transcript where he mentioned I had some redness from acid reflux and should consider an acid blocker. For this, my free annual physical became an office visit to treat a condition I did not even mention.
(Lawyer) "Sir, did you document this observation personally, or did AI write that?"
(Dr. Meatsack) "I do not recall."
(Lawyer) "I see. And you?"
(AI EverLearn) "I'm gonna go with..what he said."
(Lawyer) "Wait, you can't do tha.."
(AI) "Objection. Overrulled. I plead the fifth circuit board of bananapeels."
Now the orthotic is on the other metatarsal... (Score:3)
So the health care professionals who spent the last 20 years complaining that patients go to Wikipedia and WebMD and Yahoo! Answers for medical diagnosis and info are now going to charge us money to relay to us a diagnosis they got off a piece of software created from Wikipedia and WebMD and Yahoo! Answers.
Re: (Score:1)
To be clear, the doctors telling you not to Google symptoms have been googling symptoms all along, and if there were any who weren't, it's because they're the bad doctors who made no effort to keep up to date after med school.
"Risks of clinical errors" (Score:4, Informative)
Look, I'm a big believer in the value of doctors. Doctors make mistakes. I accept that, and believe that it's a price that must be paid. I had a shoddy diagnosis in my past, the price of which I pay to this day, but I forgave and forgot. And I still trust that doctors mostly get it right.
But when it comes to those mundane, clerical tasks, I say yes, let the AI do it. It's perfectly capable. Doctors' handwritten summaries are an incomplete hodgepodge of scraps, mostly selected during a Q&A based on what they think might be relevant because it supports a half-formed diagnosis they already have in mind. I know they try to mitigate that bias, but as we cram more people into shorter slots, something has to give.
As for diagnosis, I think the emerging model in radiology is awesome. Let the AI do a lot of it, but put a radiologist at the crux.
As the population ages, and the ratio of patients to doctors widens, we'll have to do some things to increase throughput. This is one of those things.
But we have to get it through our thick fucking skulls that a 90% chance of success isn't a sure thing, and being in the 10% that fails isn't a reason for litigation, even if AI takes the notes.
Re: (Score:2)
> I had a shoddy diagnosis in my past, the price of which I pay to this day, but I forgave and forgot. And I still trust that doctors mostly get it right.
I forgave and forgot and my careless former GP tried to kill me in collaboration with asleep-at-the-wheel pharmacists a few years ago. No fucks given by either party. California's tort reform laws mean I can't do a thing about it.
Some of the things I have forgiven and forgotten are:
- The ER MD who argued with me about my badly broken left arm: he said it wasn't, and I said it was. He refused X-rays and wanted to just discharge me until I got in his face. When imaging came back it was plain that both m
Re: (Score:3)
Well, that's why I used the term "mostly get it right". The bell curve applies here like it does almost everywhere. The vast majority of encounters are unremarkable, some few are stellar, some few are atrocious. This is especially true for ER encounters, where time is limited and precious, and snap decisions are the only way the system can function at all.
Extended further out, over the collected experiences of single lifetimes, some few will encounter unreasonable numbers of shitty ones. Sucks to be in t
Re: (Score:2)
I have family members that got almost killed by bad doctors several times, but the doctors did eventually succeed with a few of them.
In one case my grandmother with dementia probably fell down the stairs, and her brain was bleeding. The local hospital was clueless, so my parents drove an hour to another hospital that immediately diagnosed the issue and scheduled the surgery. A few minutes later they overheard the surgeon getting yelled at for wasting money on an elderly woman (French universal healthcare). Surgery
Re: (Score:2)
> The food pyramid has also been debunked as made up pseudoscience.
Well, yeah. I thought it was pretty well known that, like the "four food groups" before it, the food pyramid came from the USDA. The USDA does not serve the same function as the Department of Health and Human Services. The food pyramid was developed to promote the interests of Midwestern farmers, not health. That aligns with the mission of the USDA. My understanding is that most doctors, and especially nutritionists, have never paid attention to the food pyramid.
Re: (Score:2)
Yes, the details matter.
AI that can scan x-rays, analyze bloodwork, evaluate my poop for life-threatening conditions, or otherwise augment a doctor's treatment? AI models that look at millions of possible treatment plans and find the ones most likely to be successful? Wonderful.
AI systems that remove the human connections? AI that evaluates treatment not based on medical efficacy but on cost models? AI used to make healthcare cheaper but not to improve outcomes? Do not want!
A very real issue is the dumbing-
Re: (Score:2)
1. What AI does the summarizing? So some evil corp gets all the medical data that is otherwise protected by a long list of laws, except via "non-retaining" middleware tools... which could just profile you, etc.
2. Do these AIs run within the country?
3. Are they searching because Google sucks now? AI search will suck soon enough. Are they getting medical advice and not just searching references?
Major privacy concerns (Score:2)
By taking notes or using AI tools to process patients' data, you are potentially exposing their sensitive, protected personal and medical details to the companies running those models. This should NEVER be allowed without direct permission from the patient.
Re: (Score:2)
So do you wave the "spooky" flag in front of the patient for the yes/no? Or do you sit down and do a real pro/con with them? I don't mean to belittle your point, but what exactly do you want to have happen here? The escape of medical information is truly well under way already, independent of AI.
There's strong evidence these models already achieve equivalent or better diagnostic accuracy rates than GPs. That is objectively a good thing. And they will get better, given enough statistical data. Stopping the p
Re: Major privacy concerns (Score:2)
I want my doctor to ask me the moment I walk in if I agree to AI being used. Then, I want him to have a viable plan B if I don't agree.
Re: Major privacy concerns (Score:2)
To extend my point, they don't use AI for diagnostics most of the time anyway. They use it for notes and crafting letters. Or they use smart glasses while reading my records. I don't want them to pass on my private details to third-party companies who may have stakes in insurance or advertising.
Re: (Score:2)
I get your point. And I agree about insurers and such. That's the downstream abuse I wholeheartedly despise.
I just think that note taking is the easiest win for AI usage. If you have a regular doctor, a trail of complete, relatively unbiased notes can be invaluable, especially for catching unusual issues. But the logistics of modern medicine don't leave time for that benefit to be realized.
Ever try feeding a meeting transcript into an LLM and asking for "meeting notes and a summary of three major themes"?
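For anyone who hasn't tried it, here's a minimal sketch of that experiment, assuming the OpenAI Python SDK with an API key in the environment; the model name and the transcript file path are placeholder choices, not anything mentioned in this thread.

```python
# Minimal sketch: feed a plain-text transcript to a chat model and ask for
# meeting notes plus three major themes. Assumes the OpenAI Python SDK
# (openai >= 1.0) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical transcript file; substitute whatever your recorder produces.
with open("meeting_transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {
            "role": "system",
            "content": "You write concise meeting notes. Do not add details "
                       "that are not present in the transcript.",
        },
        {
            "role": "user",
            "content": "Produce meeting notes and a summary of the three "
                       "major themes from this transcript:\n\n" + transcript,
        },
    ],
)

print(response.choices[0].message.content)
```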
Re: (Score:2)
> The escape of medical information is truly well under way already, independent of AI.
In the UK, most medical information will be classified as sensitive personal data, which means it has significant extra protections under our regular data protection law, in addition to the medical ethics implications of breaching patient confidentiality. Letting it escape is a big deal and potentially a serious threat to the business/career of any medical professional who does it. Fortunately the days of people sending that kind of data around over insecure email are finally giving way to more appropriate
Since AI doesn't exist... (Score:2)
I bet the doctors are talking about appointment management, appointment reminders, even auto medication refills, and NOT diagnosing patients.
any tool needs to be employed correctly (Score:2)
Considering some of the medical mistakes I've come across over the years, this may not be a big deal. If used correctly, it may improve things. I don't think that "if" is doing a lot of heavy lifting there.
Medical people need to look up many things, and the "openness" of LLM prompts is better suited to finding related info than closed search terms.
But as often with professional tools, likely the excellent will excel and the mediocre will struggle and perhaps sink further into mediocrity...
10 years ago... (Score:4, Insightful)
A decade ago doctors would google your issues. Now they use AI. I bet the AI does a better, quicker job. 20 years ago they would look it up in a medical textbook.
Doctors are not memorization machines. Medical school does not teach them to memorize all the facts about diseases and the human body. Instead it teaches them how to ask the right questions. They need sources to ask those questions. The internet has those sources.
Yes, there are other sources - hence only 30% of the doctors use AI.
The key point is it is a doctor doing the research. You do not have the knowledge to judge the results the AI gives you, nor the knowledge to ask the right questions.
There is a huge difference between asking AI "What to do if your arm is broken?" versus asking "How to tell the difference between a displaced fracture and a comminuted fracture?"
Re:10 years ago... (Score:5, Interesting)
I once worked in the med biz as an engineer. The founder of the company was a very well respected surgeon. He said... Most doctors are idiots. Med school selects or rejects based on memorization skills, not intelligence, inventiveness or problem solving skills. If you can't memorize vast quantities of stuff quickly and accurately, you fail.
Maybe AI tools will help fix this
The criticisms only apply to the lazy (Score:4, Insightful)
There is nothing wrong with exploring immature tech; it's actually a really good thing
The problem comes when people trust it without reviewing its results
Any doctor who believes the results generated by AI without review deserves what they get
Re:The criticisms only apply to the lazy (Score:5, Insightful)
This is an important point.
A competent doctor will be competent with or without AI.
An incompetent doctor will be incompetent with or without AI.
AI is a tool, not a measure of competence or effectiveness.
Experienced developers use AI for dev work (Score:2)
A competent developer can use AI very effectively to speed up their work.
An incompetent developer might use AI, but you still won't be able to trust their work.
Why is being a doctor any different?
AI is a productivity tool. If you know how to use it properly, it's a good thing. If you don't, it won't transform you into a competent professional.
AI doctors (Score:2)
Well, doctors have been using software to help diagnose issues for years. They didn't call it AI, and it was more code-based as opposed to LLM-based. Given the correct training it might be useful; it can't be any worse than what doctors are doing without it.
They used Google before (Score:2)
Can't possibly be worse.
Hmm (Score:1)
Wonder how long before they decide a doctor's receptionist can diagnose people using ChatGPT or whatever wrapper it's using. Horrible idea.
Re: (Score:2)
> Wonder how long before we bust the first dozen offices that already decided a doctor's receptionist can diagnose people using ChatGPT or whatever wrapper it's using.
FTFY. The Wild West is indeed wild.
Re: (Score:1)
> For some things you don't even need ChatGPT. If you have the flu, it would be really nice if you could just call the doctor to get the prescription instead of having to pay a visit where the doctor says "Yeah here is the prescription, bye and come back if it doesn't get better". Sometimes you really don't need a long diagnosis.
What meds for the flu?
I mean, there is Tamiflu (sp?)... but that's really only effective if you catch it at the beginning... but the best diagnosis is generally, treat the symptoms,
Re: (Score:2)
For seasonal flus you're better off taking a preventative approach than trying to treat it after the fact. By the time the symptoms are showing up there's not much that can be done, at least not in terms of hastening the recovery, beyond getting additional rest and letting your body fight the infection. I just supplement extra vitamin C and zinc during cold season and try to make sure that I don't get generally rundown from lack of sleep or stress and that's been enough to keep me from getting really sick.
Re: (Score:3)
In the absence of a doctor, the vet, the nurse and the orderly become the doctor.
Now the AI as well.
Good luck surviving the hallucinations.
Re: (Score:3)
My experience over the past few years is that maybe the receptionists should take a crack at it because the fucking doctors keep trying to kill me.
Re: Hmm (Score:1)
My experience is that medical professionals are almost always wrong at first and will only find the correct diagnosis after multiple attempts. Even when I point at the exact spot and tell them in detail what the problem is, they insist on exhausting their ideas first. One time they ordered multiple scans that deliberately ignored the area I mentioned and only figured it out when seeing the issue almost out of frame on the third imaging attempt.
Re: (Score:2)
> My experience over the past few years is that maybe the receptionists should take a crack at it because the fucking doctors keep trying to kill me.
Have you tried seeing a physician instead of Dr. Ruth?
Re: (Score:2)
That'd be great if we could have an AI model give a quick diagnosis and suggest extra tests.
We're not there yet, but it'll be a good time/cost saver for everyone.