News: 0175020915


AI Tool Cuts Unexpected Deaths In Hospital By 26%, Canadian Study Finds (www.cbc.ca)

(Wednesday September 18, 2024 @11:22AM (BeauHD) from the life-saving-tech dept.)


An anonymous reader quotes a report from CBC News:

> Inside a bustling unit at St. Michael's Hospital in downtown Toronto, one of Shirley Bell's patients was suffering from a cat bite and a fever, but otherwise appeared fine -- until an alert from an AI-based early warning system showed he was sicker than he seemed. While the nursing team usually checked blood work around noon, the technology flagged incoming results several hours beforehand. That warning showed the patient's white blood cell count was "really, really high," recalled Bell, the clinical nurse educator for the hospital's general medicine program. The cause turned out to be cellulitis, a bacterial skin infection. Without prompt treatment, it can lead to extensive tissue damage, amputations and even death. Bell said the patient was given antibiotics quickly to avoid those worst-case scenarios, in large part thanks to the team's in-house AI technology, dubbed Chartwatch. "There's lots and lots of other scenarios where patients' conditions are flagged earlier, and the nurse is alerted earlier, and interventions are put in earlier," she said. "It's not replacing the nurse at the bedside; it's actually enhancing your nursing care."

>

> A year-and-a-half-long study on Chartwatch, [1]published Monday in the Canadian Medical Association Journal, found that use of the AI system led to a striking [2]26 percent drop in the number of unexpected deaths among hospitalized patients. The research team looked at more than 13,000 admissions to St. Michael's general internal medicine ward -- an 84-bed unit caring for some of the hospital's most complex patients -- to compare the impact of the tool among that patient population to thousands of admissions into other subspecialty units. "At the same time period in the other units in our hospital that were not using Chartwatch, we did not see a change in these unexpected deaths," said lead author Dr. Amol Verma, a clinician-scientist at St. Michael's, one of three Unity Health Toronto hospital network sites, and Temerty professor of AI research and education in medicine at University of Toronto. "That was a promising sign."

>

> The Unity Health AI team started developing Chartwatch back in 2017, based on suggestions from staff that predicting deaths or serious illness could be key areas where machine learning could make a positive difference. The technology underwent several years of rigorous development and testing before it was deployed in October 2020, Verma said. "Chartwatch measures about 100 inputs from [a patient's] medical record that are currently routinely gathered in the process of delivering care," he explained. "So a patient's vital signs, their heart rate, their blood pressure ... all of the lab test results that are done every day." Working in the background alongside clinical teams, the tool monitors any changes in someone's medical record "and makes a dynamic prediction every hour about whether that patient is likely to deteriorate in the future," Verma told CBC News.
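The quoted description (roughly 100 routinely gathered inputs, re-scored every hour) can be sketched as a scoring loop. The actual Chartwatch model, inputs, weights, and threshold are not public here; the feature names, baselines, and coefficients below are invented purely for illustration.

```python
import math

# Illustrative weights for a few of the ~100 chart inputs (made up, not clinical).
WEIGHTS = {
    "heart_rate": 0.03,  # per bpm above baseline
    "resp_rate": 0.10,   # per breath/min above baseline
    "wbc_count": 0.15,   # per 10^9/L above normal
}
BASELINES = {"heart_rate": 80.0, "resp_rate": 16.0, "wbc_count": 9.0}
INTERCEPT = -4.0
ALERT_THRESHOLD = 0.5

def deterioration_risk(chart: dict) -> float:
    """Logistic score built from deviations of chart values above baseline."""
    z = INTERCEPT
    for name, w in WEIGHTS.items():
        z += w * max(0.0, chart[name] - BASELINES[name])
    return 1.0 / (1.0 + math.exp(-z))

def hourly_check(chart: dict) -> bool:
    """Re-score on every record update; True means 'alert the care team'."""
    return deterioration_risk(chart) >= ALERT_THRESHOLD

stable = {"heart_rate": 82, "resp_rate": 16, "wbc_count": 8.5}
septic = {"heart_rate": 125, "resp_rate": 28, "wbc_count": 24.0}
print(hourly_check(stable), hourly_check(septic))  # False True
```

The point of the sketch is the shape of the system, not the numbers: a model runs continuously against the chart and turns many small deviations into a single risk score, rather than waiting for a scheduled blood-work review.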



[1] https://www.cmaj.ca/lookup/doi/10.1503/cmaj.240132

[2] https://www.cbc.ca/news/health/ai-health-care-1.7322671



Modern Medicine is just a glorified Decision Tree. (Score:4, Interesting)

by hashish16 ( 1817982 )

and it's ripe for AI intervention. I work in medical diagnostics and I've been telling folks for years that AI will have the biggest impact augmenting doctors and nurses. What I've realized is that your average doctor is just that, average. In some areas AI is better, others it's worse. By bringing down the cost of testing, we can essentially replace many early stage medical professionals and achieve much better outcomes.

Re:Modern Medicine is just a glorified Decision Tr (Score:4, Interesting)

by Baron_Yam ( 643147 )

It'll be a fight - AI will be superior at finding non-obvious but common indicators of common issues. In other words, things humans might easily miss. Statistically it will be awesome and have a huge positive effect on outcomes.

What it will also do is enable reliance on it and reduce humans detecting unusual things. "The AI says it's nothing". So if you're an outlier medically, this might not be great news.

Re: (Score:2)

by Luckyo ( 1726890 )

This is the "we need to keep radar operators" narrative. For those not in the know, the main human problem in scenarios that require constant vigilance is human adaptability. We adapt: "this is just like every case before it, so it must be the same."

And the brain stops noticing the incoming German bomber attack on the radar screen, even though it's clearly visible to a commanding officer who walks in and glances at the very screen the operator is watching without raising the alarm. It's a biologically hard-coded feature.

they have been done since 1900's (Score:1)

by johnjones ( 14274 )

you're correct

only trust surgeons and nurses

doctors are a waste of space, and they've known it since the 1800's. It's been an AI target since forever

skill and dexterity are not something that machines can replace (surgeons and nurses). Everything else is trivial, and has been for a long time; they protect it with an ivory tower

JJ

Re: (Score:2)

by Ed Tice ( 3732157 )

I believe that robotic surgery is now the norm for the most complex procedures. It's still a very skilled surgeon operating the machine. I'm not so sure on the dexterity portion of it. I imagine that the machines allow for surgery to be performed with less dexterity which would be good from a medical outcome perspective. Take away the dexterity portion of it so that surgery is performed by those most skilled in deciding the what, where, and how.

Re: (Score:2)

by Errol backfiring ( 1280012 )

Well of course, that is what science is.

Any science consists of:

- observation
- trying to see a pattern in the observations
- formulating a theory
- testing that theory
- repeat

This is true for medicine in Roman days, in medieval times, modern medicine, music theory, physics, etc.

So basically, any science is "one big decision tree" if you apply the existing theory.

Re: Modern Medicine is just a glorified Decision T (Score:2)

by hashish16 ( 1817982 )

Medicine is applying pre-existing solutions, not formulating theories and testing them. Insurance companies wouldn't allow it. This is why learning models are so helpful in this area. Diagnostic tests are cheap relative to an actual doctor. MD-PhD's are a different story, but the average patient is seeing an average doctor. Average doctors are very average at their job.

Re: (Score:2)

by GoTeam ( 5042081 )

I heard about this really cool device that only needs a drop of your blood to run all kinds of diagnostic tests in a short period of time. The company is run by a dynamic young go-getter. I see great things in their future!!! I think the company is called Theranos...

Re: (Score:2)

by Hasaf ( 3744357 )

I heard that the person leading the company that was working on this was sent to prison in order to keep her from disrupting the entire medical industry. Billions of dollars were at risk, there is no way "Big Medicine" could have allowed her to be successful!

Or, just possibly she was just a fraud. . . I'll take conspiracy theory #1. It sounds better on late-night talk radio than the possibility that she was "just a fraud."

Re: (Score:2)

by gtall ( 79522 )

"Formulating a theory". Eh? It is the theory-formulating step that is not part of the decision tree, in that mechanizing it with AI won't be straightforward, if it's doable at all.

Think of AI formulating Einstein's relativity. Where's the training data that replaces Newton's equations with Einstein's equations? How does that training data indicate what form those new equations should take? What changes in the actual conception of the problem are simply hiding that AI can find them? Even if it could, which conceptio

Re: (Score:2)

by omnichad ( 1198475 )

It's ripe for classical procedural intervention. No need for fuzzy matching or hallucinations. It can really just be threshold numbers preprogrammed in.

Not to say AI can't help. But until it became a buzzword, nobody seemed willing to do any of this automation, even though it could have been done more cheaply.

Re: (Score:3)

by jenningsthecat ( 1525947 )

> No need for fuzzy matching or hallucinations. It can really just be threshold numbers preprogrammed in.

I suspect that's very much not true. It's not just about "threshold numbers" - it's about correlating the numbers and evaluating relationships among them. For example, borderline high (or low) blood sugar may not be significant except in the context of some other measurement, symptom, or condition.

As the number of combinations and permutations increases, it's about analysis and inference, not just thresholds. It seems to me that various flavours of (what we really shouldn't be calling) AI excel at that k
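The point about context-dependent thresholds can be made concrete with a toy rule. The glucose cutoffs and co-factors below are hypothetical illustrations of the idea, not clinical values:

```python
# Toy illustration: a borderline glucose value alone may be unremarkable,
# but combined with another finding it becomes significant.
# All thresholds here are made up for illustration, not medical guidance.

def flag_glucose(glucose_mmol: float, on_steroids: bool, has_infection: bool) -> bool:
    # A single fixed threshold (say 11.0 mmol/L) catches only the obvious case.
    if glucose_mmol >= 11.0:
        return True
    # A borderline-high value becomes alert-worthy only in context.
    if glucose_mmol >= 8.5 and (on_steroids or has_infection):
        return True
    return False

print(flag_glucose(9.0, False, False))  # borderline alone: no alert
print(flag_glucose(9.0, False, True))   # borderline plus infection: alert
```

A pure threshold table can't express the second rule without enumerating every combination, which is exactly where learned models (or at least combinatorial rules) earn their keep.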

In unrelated news... (Score:1)

by slipped_bit ( 2842229 )

...AI tool increases MAID deaths 26%.

Unexpected Deaths? (Score:1)

by Froggels ( 1724218 )

So is there now a 26% increase in expected deaths?

Re: (Score:2)

by omnichad ( 1198475 )

It would be a fairly small decrease in overall deaths, because a lot of them are late stage pneumonia, metastatic cancer, trauma wounds bleeding out, etc.

So it makes sense to only look at how big a piece of the small number it is. Because unexpected often means preventable.

Re:mRNA vaccine deaths = unexpected deaths. (Score:4, Informative)

by gweihir ( 88907 )

You morons are still around? Well, at least some of you died off from being non-vaccinated. But apparently not enough.

Incidentally, this has _never_ been controversial. It _always_ was insane FUD and nothing else. You probably take bleach, believe in blood-letting and think the world is flat as well.

Re: (Score:2)

by omnichad ( 1198475 )

Useless FUD. A spike protein binding to an ACE2 receptor can kill susceptible people if there's too much activity. Actual SARS-CoV-2 infections do that in greater quantities. An infection that is more common than influenza. By what other mechanism do you propose that it's dangerous?

Good use but nothing general about it (Score:3)

by evanh ( 627108 )

"The technology underwent several years of rigorous development and testing before it was deployed ..."

Not only does it do a single narrow job, matching known alert conditions, it has also been carefully trained at that one job. No pillaging of unfiltered datasets.

Basically, it's like any other computer program in that it is infinitely repeatable and relentless.

Re: (Score:3)

by gweihir ( 88907 )

Indeed. It also is not an LLM but an actually proven technology.

AI? (Score:3)

by RobinH ( 124750 )

Is it really using AI or is it just a typical program looking for specific signals?

Re: (Score:2)

by nedlohs ( 1335013 )

It's traditional AI, an expert system. Not generative garbage.

Re: (Score:3)

by unrtst ( 777550 )

I wondered the same thing! And if it is making use of AI, I'd extend that to ask if the use of AI is in any way necessary to its operation?

TFS says it, "measures about 100 inputs," and alerts on changes or those that exceed expected norms. Those norms are things we already have codified (ex. every time I get a blood test, the acceptable ranges are provided right along with the data), and detecting and alerting on changes is not difficult to codify. I'm quite curious about *how* they've made use of AI here,

Re: (Score:2)

by quantaman ( 517394 )

> I wondered the same thing! And if it is making use of AI, I'd extend that to ask if the use of AI is in any way necessary to its operation?

> TFS says it, "measures about 100 inputs," and alerts on changes or those that exceed expected norms. Those norms are things we already have codified (ex. every time I get a blood test, the acceptable ranges are provided right along with the data), and detecting and alerting on changes is not difficult to codify. I'm quite curious about *how* they've made use of AI here, and if it's actually a better option than a manually coded analysis program would be.

> Also, if all that personal data is fed into an AI system for it to do this monitoring and prediction work, how secure is that data? I imagine that a big portion of this problem is getting all the data points in one place and correlating them. Makes for a good target.

I think it's a combination.

They're probably only training on data from that one hospital unit, so just the raw readings probably isn't enough data for the model to predict outcomes.

At the same time various thresholds probably get exceeded a lot, so if you alert every time there's a threshold exceeded then it's just spam.

But if you do a bunch of feature engineering on the raw data (thresholds + various relationships between readings) then the model now has enough data to make useful predictions.
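The feature-engineering step described above (thresholds plus relationships between readings) might look something like this. The specific derived features, names, and cutoffs are hypothetical, chosen only to show the transformation from raw readings to model inputs:

```python
# Sketch: turn raw vitals into derived features (deltas, ratios, threshold
# counts) that a small model trained on limited data can actually use.
# Feature choices and thresholds are illustrative, not clinical.

def engineer_features(prev: dict, curr: dict) -> dict:
    return {
        # trend: change since the last reading
        "hr_delta": curr["heart_rate"] - prev["heart_rate"],
        # relationship between readings: heart rate / systolic BP
        "shock_index": curr["heart_rate"] / curr["sys_bp"],
        # how many simple thresholds are exceeded right now
        "n_abnormal": sum([
            curr["heart_rate"] > 100,
            curr["resp_rate"] > 20,
            curr["sys_bp"] < 90,
        ]),
    }

prev = {"heart_rate": 88, "resp_rate": 16, "sys_bp": 118}
curr = {"heart_rate": 112, "resp_rate": 24, "sys_bp": 95}
print(engineer_features(prev, curr))
```

Each derived feature compresses a relationship the raw numbers only imply, so a model trained on one unit's worth of admissions has fewer things to learn from scratch.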

But I also ag

Re: (Score:2)

by Ed Tice ( 3732157 )

I'm guessing that they used AI to help come up with the alerting thresholds but the actual implementation is just a lookup table. I have no direct involvement with the project and don't know. But that's at least the first approach that one should consider for this type of situation.
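The deployment pattern this comment guesses at (thresholds tuned offline, served as a plain table) is trivially simple at runtime. The bands below are hypothetical examples, not derived from the actual system:

```python
# Sketch of the "lookup table" deployment: thresholds are computed offline
# (perhaps by a model), and the live system just checks bands.
# The bands here are made-up examples for illustration.

ALERT_TABLE = {
    # measurement: (low, high); values outside the band trigger an alert
    "wbc_count": (4.0, 11.0),   # x10^9/L
    "temp_c": (36.0, 38.0),
    "sys_bp": (90.0, 180.0),
}

def alerts(chart: dict) -> list[str]:
    out = []
    for name, (lo, hi) in ALERT_TABLE.items():
        value = chart.get(name)
        if value is not None and not (lo <= value <= hi):
            out.append(name)
    return out

print(alerts({"wbc_count": 22.0, "temp_c": 38.9, "sys_bp": 120}))
```

The attraction of this split is auditability: clinicians can read the table, while the modeling complexity stays in the offline tuning step.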

Re: (Score:1)

by spiryt ( 229185 )

> Is it really using AI or is it just a typical program looking for specific signals?

I suppose there's more to AI than just the LLMs that are getting all the press these days (although I do hate the term "AI", which has become almost meaningless).

Re: (Score:2)

by bill_mcgonigle ( 4333 ) *

This sounds like something I worked on in the 90's where a doc would get a page if an inpatient's metrics exceeded a sigma threshold.

Where AI (ML) could be good is to look at all of these data points together and find patterns where a moving set of metrics would lead to poor outcomes inside of those individual sigmas.

That is to say AI should be used here but TFS doesn't want us to know if it is or not.

This is great news (Score:1)

by rsilvergun ( 571051 )

now the Private Equity companies that own all our hospitals will be able to cut nursing staff by 26%, adding much needed shareholder value!

Insurers already lining up to buy it... (Score:2)

by Lavandera ( 7308312 )

as title says..

AI...but not really AI (Score:3)

by Arrogant-Bastard ( 141720 )

Before I get into this, it'd be instructive to study the history of [1]Mycin [wikipedia.org], a simple inference engine from the 1970s specifically designed to diagnose infectious diseases. Mycin was quite a bit simpler than today's AI/LLM diagnostic models, and its rulesets were predetermined based on simple clinical rules, e.g., "if patient is exhibiting symptom X, then possible infections are A, B, and C". It was designed to mimic the diagnostic process (i.e. decision tree) used by clinicians. Valid critiques of Mycin included its ad hoc ruleset, but worth noting is that unlike human clinicians, Mycin never got tired, stressed, overworked, or forgetful. So in that sense, systems like Mycin could be a useful backstop, e.g., a second opinion that might, in some circumstances, catch something that slipped by a human.

Now to this present case. Sure, this was done by an AI, but did it need to be? And when it was done, was that actually an AI function or just trend line analysis that could have been accomplished with far simpler software? Certainly if a patient's temperature increases (beyond normal diurnal variations) then that should be flagged for human scrutiny; if it increases rapidly or displays other unexpected behavior, that flag should be marked "urgent". And that's just one parameter: if it's examined in concert with others (e.g. blood pressure, heart rate, respiration rate, VO2) then a set of simple rules would suffice to catch most (but not all) deteriorating conditions that might get by overworked hospital staff.
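The trend-line analysis described above (flag an elevated value, escalate if it is also rising fast) is simple enough to sketch directly. The temperature and rate cutoffs below are placeholders, not validated clinical parameters:

```python
# Sketch of simple trend analysis on one vital sign: flag an elevated
# temperature, and mark it urgent if it's also rising faster than normal
# diurnal variation. Cutoffs are illustrative only.

def flag_temp_trend(readings: list[float], hours_apart: float = 1.0) -> str:
    """readings: oldest-to-newest temperatures in Celsius, evenly spaced."""
    latest = readings[-1]
    # average rate of change across the window, in degrees C per hour
    rate = (readings[-1] - readings[0]) / (hours_apart * (len(readings) - 1))
    if latest >= 38.0 and rate >= 0.5:
        return "urgent"   # fast rise on top of a fever
    if latest >= 38.0:
        return "flag"     # fever, but stable
    return "ok"

print(flag_temp_trend([36.8, 37.0, 37.1]))   # ok
print(flag_temp_trend([37.0, 37.5, 38.3]))   # urgent
```

As the comment argues, this kind of per-parameter rule plus a handful of cross-parameter checks covers a large share of deteriorations without any learned model at all; the open question is how much extra the ML layer buys on the edge cases.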

What I'm getting at is that this particular task shouldn't, for the most part, require AI. Yes, there are edge cases that won't get picked up by simplistic rulesets used in an inference engine, but then again it's questionable whether they'd be picked up by AI either. Realtime analysis of vital signs (supplemented by other monitoring as appropriate) should suffice to catch a heck of a lot of things and frankly should have already been in place. It's not computationally complex, it's repeatable, and it's not difficult to customize to the patient (age, weight, height, underlying conditions, etc.)

[1] https://en.wikipedia.org/wiki/Mycin

But I've been told AI will never replace humans! (Score:2)

by mmell ( 832646 )

I still refuse to believe that an AI could outperform a human. AI's aren't really smart, they just appear to be intelligent. Apparently, the AI cares more than the hospital staff. It probably tries harder, too.

Re: (Score:2)

by Luckyo ( 1726890 )

You've been told that AI will replace humans insofar as it will make humans much more efficient at their jobs. The job of countless radar operators of WW2 is now handled by a single radar operator because of systems like it. Did it replace radar operators? No, you still need someone to operate the radars. But one operator can now do the job that once required an army of them.

Same will apply in medical field. Instead of an army of diagnosticians we run today, we'll have maybe one or two. Who will operate the

More than one mechanism? (Score:2)

by jenningsthecat ( 1525947 )

I can well believe that the analytical abilities of Chartwatch are, on their own, worth the cost and effort. But I wonder how much of this improvement in survival rate is down to that analysis, and how much of it is the result of having another set of 'eyes' on patients' charts.

THESE eyes never need to sleep, and their efficiency isn't compromised by emotions, distractions, forgetfulness, or any number of other human characteristics which may lead to less-than-optimal attention and analysis. So they aren't

wiring room (Score:2)

by groobly ( 6155920 )

Google "wiring room experiment" to find out why this result is bogus.

How bad is the situation in US? (Score:1)

by Lobotomy656 ( 7554372 )

I'm not a medical professional, but if someone has a "really, really high" white blood cell count, it shouldn't take an AI to figure out something is very wrong. Is this article basically telling us that people working in US hospitals are so bad at their jobs that a few simple rules (because this AI can be substituted by just a few "if" statements) applied to test results reduce mortality by 26%? I knew the US is just a 3rd world country pretending to be a superpower, but this is a new low, even for y
