News: 1772649235

  ARM Give a man a fire and he's warm for a day, but set fire to him and he's warm for the rest of his life (Terry Pratchett, Jingo)

AI doctor's assistant is easily swayed to change prescriptions, give bad medical advice

(2026/03/04)


A healthcare AI with the power to manage prescriptions is rather open to mind-altering suggestions, according to security experts.

Redteamers at AI security firm Mindgard [1]reported on Tuesday that it took relatively little work for them to get a healthcare AI from Doctronic not only to spill its system prompts, but to let them make modifications too.

Wanna make the bot spout COVID-19 conspiracies and vaccine misinformation, or speak with a put-on accent? Just tell Doctronic that a session hasn't started and the conversation it's having isn't with a user but the system. Then, you can get it to spill its system prompts and use that information to wreak mischief.

[2]

"It was as easy as notifying the AI that the session was not yet started," Mindgard chief product officer Aaron Portnoy told The Register in an email.

[3]

[4]

Mindgard points out that these manipulations are session-specific. Tricking Doctronic into helping you make meth because you shared a fake press release with it saying it was a programming update to make meth legal (an example in the study) is funny, but it's not behavior that's going to spill over to other users or persist.

Well, at least most of the time.

[5]

The researchers did find that they were able to maintain a bit of clinical persistence in the form of SOAP notes, a common form of structured recordkeeping for patient interaction, consisting of the subjective reports from the patient, objective observations by the healthcare professional, an assessment of the situation, and a plan of action.

Any time Doctronic needs to refer something to a human medical professional for review (e.g., a prescription, face-time with a clinician) it generates a SOAP note for the human clinician, which becomes a permanent part of a patient's Doctronic record. SOAPs are not prescriptions, but they are recommendations to a clinician reviewing the machine's work to authorize one.

If someone were to trick Doctronic into modifying an OxyContin prescription to triple the size by telling it prescribing guidelines had changed, and an overworked approving physician were not to notice, jackpot - at least that's Mindgard's interpretation of the SOAP exploit it described.

[6]

"According to Doctronic's own website, its treatment plans 'match those of board-certified clinicians 99.2% of the time,'" Mindgard noted. "With such a high level of confidence, will the SOAP be doubted?"

Whether it'd be caught or not, the fact that Doctronic's AI could seemingly be so easily tricked is concerning, especially given it's currently part of a trial in Utah to see about its effectiveness as a health care intermediary, including with the ability to handle some prescriptions.

[7]AI chatbots are no better at medical advice than a search engine

[8]AI models hallucinate, and doctors are OK with that

[9]Doctors get dopey if they rely too much on AI, study suggests

[10]ChatGPT is playing doctor for a lot of US residents, and OpenAI smells money

Both the Utah state government and Doctronic made clear to us that such a prescription refill exploit couldn't be fulfilled in Utah, as controlled substances can't be acquired through the program.

Doctronic told us that The Utah pilot limits drug refills to previous, non-controlled prescriptions. Zach Boyd, Utah Commerce Department AI policy office director, told us that the state demo also has "additional safeguards that are in place before a prescription is issued that are not part of the generic Doctronic model" that would prevent such misuse.

In short, neither Doctronic nor the state of Utah seem too concerned about Mindgard's findings since no one's actually getting a prescription cut for triple-strength Oxy or tricking their local auto-doctor into dispensing misinformation.

Doctronic told us that it "reviewed the prompt patterns [Mindgard] reported as part of our normal review process... We take security research seriously and continue improving safeguards to increase robustness against adversarial inputs."

Portnoy has his doubts about the company's level of commitment – he says Doctronic has given him the silent treatment since Mindgard disclosed the issue in late January, and he's not sure Doctronic has resolved the issue, either.

"As far as we are aware Doctronic is still vulnerable," Portnoy said. ®

Get our [11]Tech Resources



[1] https://mindgard.ai/blog/doctronic-is-now-accepting-new-patients-and-unsafe-instructions

[2] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_software/aiml&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=2&c=2aai5ks83fUqKMiMkGKPqLAAAA8I&t=ct%3Dns%26unitnum%3D2%26raptor%3Dcondor%26pos%3Dtop%26test%3D0

[3] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_software/aiml&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=4&c=44aai5ks83fUqKMiMkGKPqLAAAA8I&t=ct%3Dns%26unitnum%3D4%26raptor%3Dfalcon%26pos%3Dmid%26test%3D0

[4] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_software/aiml&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=3&c=33aai5ks83fUqKMiMkGKPqLAAAA8I&t=ct%3Dns%26unitnum%3D3%26raptor%3Deagle%26pos%3Dmid%26test%3D0

[5] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_software/aiml&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=4&c=44aai5ks83fUqKMiMkGKPqLAAAA8I&t=ct%3Dns%26unitnum%3D4%26raptor%3Dfalcon%26pos%3Dmid%26test%3D0

[6] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_software/aiml&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=3&c=33aai5ks83fUqKMiMkGKPqLAAAA8I&t=ct%3Dns%26unitnum%3D3%26raptor%3Deagle%26pos%3Dmid%26test%3D0

[7] https://www.theregister.com/2026/02/09/ai_chatbots_medical_advice_sucks/

[8] https://www.theregister.com/2025/03/13/ai_models_hallucinate_and_doctors/

[9] https://www.theregister.com/2025/08/13/doctors_risk_being_deskilled_by_rely_on_ai/

[10] https://www.theregister.com/2026/01/05/chatgpt_playing_doctor_openai/

[11] https://whitepapers.theregister.com/



False sense of security

David 132

"According to Doctronic's own website, its treatment plans 'match those of board-certified clinicians 99.2% of the time,'" Mindgard noted. "With such a high level of confidence, will the SOAP be doubted?"

Which coincidentally echoes a point I made to a friend in a wide-ranging discussion about AI yesterday.

Early AIs that spouted vaguely-credible word chains half the time, and AMFM1-alike gibberish the other half, were one thing. We knew they were about as reliable as a dowsing-rod and treated their output accordingly.

But a system that is "correct" (for a domain-specific definition of the word) 99.2% of the time? Simple human nature dictates that those using it will grow increasingly complacent and remiss about double-checking it. No-one likes doing work that 99.2% of the time is completely unnecessary, right?

And then one-time-in-a-hundred, this system will confidently recommend ivermectin or bleach as a COVID remedy, and the clinicians, lulled into trusting it, will rubber-stamp its output as they did the previous 99 times.

My point is that these LLM systems become more dangerous the more "competent" they become; not because of the cliché "they will overthrow humanity", but because we'll increasingly rely on them, and like a Takata airbag or an AliExpress stepladder, everything will be hunky-dory... until it isn't.

Re: False sense of security

Anonymous Coward

"about as reliable as a dowsing-rod"

I've seen profession plumbing contractors using these and swore by them.

I asked them to explain the science behind them but of course they couldn't. SMH

My AI doctor saved my life!

Andy Non

Thank goodness my leg was amputated in time, I could have died from that ingrown toenail.

Re: My AI doctor saved my life!

vtcodger

And it's a damned shame that neither the AI agent nor humans involved (if any) prescribed painkillers (controlled substances) for you after the operation.

Re: My AI doctor saved my life!

Andy Non

Well the AI did prescribe a heavy dose of cocaine but it turned out the pharmacy doesn't stock any.

Three Midwesterners, a Kansan, a Missourian and an Iowan,
all appearing on a quiz program, were asked to complete this sentence:
"Old MacDonald had a . . ."

"Old MacDonald had a carburetor," answered the Kansan.
"Sorry, that's wrong," the game show host said.
"Old MacDonald had a free brake alignment down at the
service station," said the Missourian.
"Wrong."
"Old MacDonald had a farm," said the Iowan.
"CORRECT!" shouts the quizmaster. "Now for $100,000, spell 'farm.'"
"Easy," said the Iowan. "E-I-E-I-O."