CEO of America's Largest Public Hospital System Says He's Ready To Replace Radiologists With AI (radiologybusiness.com)
- Reference: 0181197026
- News link: https://slashdot.org/story/26/04/01/0619250/ceo-of-americas-largest-public-hospital-system-says-hes-ready-to-replace-radiologists-with-ai
- Source link: https://radiologybusiness.com/topics/artificial-intelligence/ceo-americas-largest-public-hospital-system-says-hes-ready-replace-radiologists-ai
> Katz -- who has led the 11-hospital organization since 2018 -- said he sees great potential for AI to increase access to breast cancer screening. Hospitals could potentially produce "major savings" by letting the technology handle first reads, with radiologists then double-checking any abnormal screenings. Fellow panelist David Lubarsky, MD, MBA, president and CEO of the Westchester Medical Center Health Network, said his system is already seeing great success in deploying such technology. The AI Westchester uses misses very few breast cancers and is "actually better than human beings," he told the audience. "For women who aren't considered high risk, if the test comes back negative, it's wrong only about 3 times out of 10,000," Lubarsky said.
>
> Katz asked fellow hospital CEOs if there is any reason why they shouldn't be pushing for changes to New York state regulations, allowing AI to read images "without a radiologist," [2]Crain's reported. In this scenario, rads could then provide second opinions if AI flags any images as abnormal. Sandra Scott, MD, CEO of One Brooklyn Health, a small hospital facing tight margins, agreed with this line of thinking, according to Crain's. "I mean, I'm in charge of a safety-net institution. It would be a game-changer," Scott said about AI being used to replace rads.
[1] https://radiologybusiness.com/topics/artificial-intelligence/ceo-americas-largest-public-hospital-system-says-hes-ready-replace-radiologists-ai
[2] https://www.crainsnewyork.com/health-care/cny-health-care-ceo-forum-20260325/
Radiologists (Score:5, Funny)
Say they're ready to replace their CEO with AI. Patient outcomes have improved significantly. Shareholders are crying.
Re: Radiologists (Score:2)
Depends on who trained the AI... Some of us expect AI to be better than humans, more rational, but it has the same weaknesses we do. It all depends on how it is brought up. It's an interesting philosophical theme that leads to questioning the meaning of life, the universe, and everything. If it is trained well, it will say 42. In a few seconds instead of billions of years.
Re: Radiologists (Score:2)
That's a joke right? :/
Re: Radiologists (Score:2)
This technology fabricates citations to non-existent medical research. There were news headlines about it just a year ago.
People who expect it to be "more rational" will get what they deserve. But their customers deserve better.
Re: Radiologists (Score:2)
You do understand progress right?
Re: (Score:2)
> You do understand progress right?
Right. Now it fabricates unjustified citations to real existing medical research so that you won't manage to catch it or disprove the references. We understand progress.
Re: (Score:2)
Absolutely. With the money saved by replacing him with AI, they can hire quite a few more radiologists, and nurses, and doctors...
Re: (Score:3, Insightful)
> Shareholders are crying.
Really?
If we're looking for an actual downside here, fire all the radiologists and put CEOs in their place to be personally liable for ALL diagnostic readings until AI gets it perfect enough to be defended 100% in every court case.
Perhaps then we'll see how much of a loophole "AI" is with regards to dismissing a Recession.
Re: (Score:2)
Yeah I like this one. If a human gets it wrong they can be liable. So if we replace it with AI, let's make the CEO liable. Fantastic idea.
Re: (Score:2)
Dream on. We can't even hold Waymo's officers responsible for their robotaxis zooming past stopped school buses.
Re: (Score:2)
I was thinking along the same lines...
I'd say let the CEO do it. And when it goes south - which it will - make sure the CEO is held personally accountable for any misdiagnoses and deaths.
Re: (Score:2)
>> Shareholders are crying.
> Really?
> If we're looking for an actual downside here, fire all the radiologists and put CEOs in their place to be personally liable for ALL diagnostic readings until AI gets it perfect enough to be defended 100% in every court case.
> Perhaps then we'll see how much of a loophole "AI" is with regards to dismissing a Recession.
It will just be another line on the forms you sign, like acknowledging the risk of the radiation dose you're signing up for, or the high powered magnets vs. metal stuff in your body, or the contrast enhancing stuff they inject you with, or all the other risks.
I'm not even a lawyer, it's just kind of obvious that you don't have a reasonable expectation of a 100% accurate reading, or zero risk. Your bar has to be somewhere else.
Re: (Score:2)
Well, he did say "for some imaging tasks". That's probably a reasonable goal...but you've got to be *very* selective.
Re: (Score:2)
> Shareholders are crying
Replace them with AI as well.
Mitchell H. Katz, MD swallows the AI koolaid (Score:2)
Like “The Cloud” was going to get rid of the in-house computer department, this AI kool-aid is going to massively fail to deliver. It's a useful tool as long as it is applied to specific use cases and verified by a human. But in this scenario, since you've fired all your radiologists and AIs are prone to hallucinations, you can't be sure any diagnosis is accurate. You'd get lots of false positives and false negatives, with no one left to verify the results.
Re: (Score:2)
He posts anonymously.
Re: (Score:3)
You expect a bot to use their real name? Bots don't have a real name!
Re: (Score:1)
But the Glacier Bay is cheaper, thus it's what most people want.
You first! (Score:1)
I'll agree as long as Mitchell H. Katz, MD signs off on AI making binding decisions for all of his and his family's healthcare!
Less Liability When AI Fucks Up - Can't sue the AI (Score:2)
So in our new world of irresponsibility and negligence by AI who takes on the liability and pays out the injury awards? Probably, some reverse Centaur...
There is a book coming out on this [1]topic soon! [macmillan.com]
We'll all be so happy when we are employed by AI which cannot legally be held liable for all the mistakes it makes and we see everyday.
[1] https://us.macmillan.com/books/9780374621575/thereversecentaursguidetolifeafterai/
Re: (Score:1)
There is a massive shortage of doctors, and we cannot afford to train and hire more.
To expand care with the same money, we have to innovate and let technology help us scale the very expensive and rare humans we have.
The question isn't "would it be better to have a highly trained radiologist take the first pass, or an AI?" The question is: would I rather screen 100 women with a 90% accuracy rate (human), or 1000 women with an 85% accuracy rate (AI)?
I also believe you can continually train the AI to get better.
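A back-of-envelope version of that trade-off in Python. The prevalence figure is an assumed illustrative number (roughly 5 cancers per 1000 screens), not a citation, and "accuracy" is treated loosely as sensitivity:

```python
# Compare expected cancers caught: fewer screens read accurately by humans
# vs. many more screens read slightly less accurately by AI.
# Prevalence is an assumed illustrative figure, not a real statistic.

def expected_cancers_caught(n_screened, sensitivity, prevalence=0.005):
    """Expected number of true cancers detected in a screened group."""
    return n_screened * prevalence * sensitivity

human = expected_cancers_caught(100, 0.90)   # 100 women, 90% "accuracy"
ai    = expected_cancers_caught(1000, 0.85)  # 1000 women, 85% "accuracy"

print(f"human-read screens: {human:.2f} expected cancers caught")  # 0.45
print(f"AI-read screens:    {ai:.2f} expected cancers caught")     # 4.25
```

Under those assumptions the wider net catches roughly ten times as many cancers, which is the whole argument in one line of arithmetic.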
False Positives Vs False Negatives (Score:5, Insightful)
There are two distinctly different types of errors when it comes to these kind of tests:
False Positives: This is where the test in question falsely says "You have Cancer!" when in fact you do not have it.
False Negatives: This is where the test in question falsely says "You are Healthy" when in fact you have cancer.
False Positives cost money and time, but they are fairly easy to double-check, as they should be uncommon.
False Negatives cost human lives and are almost impossible to double-check, as most people should test negative for cancer.
For an AI test, you want the errors skewed toward false positives. If the AI saves you money by not requiring humans to look everything over, then spending some money and time double-checking flagged cases is a fair trade. If double-checking costs too much, then do not use the AI.
False Negatives should be a no-no. If the AI has more false negatives than human radiologists do, then do not use the AI test. No one cares how much money you are saving if people are dying.
Note, with regards to jobs, this will likely be relatively flat. There are not that many humans doing this job - they take the results from radiologist exams from all over the country and send them to just a few companies. Those companies find the few people that do it best and hire them. I bet we are talking about less than a hundred people in the US, especially as the best of the best will be kept to double check the results.
Re: (Score:2)
This all depends on tuning the algorithms' ROCs. You are advocating for ultra low false negatives, allowing higher false positives. It's unlikely that will pass muster, because it costs more money than higher false negatives. Furthermore, false negatives are money-makers, as future therapy once the cancer progresses is quite lucrative.
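What "tuning the operating point" means in practice, as a minimal sketch: the same model yields different false-negative/false-positive trade-offs depending on the score threshold chosen. The scores below are synthetic and the numbers hypothetical, purely for illustration:

```python
# Sketch: one fixed classifier, three operating points. Lowering the
# threshold trades missed cancers (false negatives) for more healthy
# patients flagged (false positives). All scores are synthetic.
import random

random.seed(0)
# Synthetic "cancer probability" scores: positives score higher on average.
positives = [random.gauss(0.7, 0.15) for _ in range(100)]
negatives = [random.gauss(0.3, 0.15) for _ in range(9900)]

for threshold in (0.5, 0.35, 0.2):
    fn = sum(s < threshold for s in positives)    # missed cancers
    fp = sum(s >= threshold for s in negatives)   # healthy flagged
    print(f"threshold {threshold}: {fn} false negatives, {fp} false positives")
```

The ultra-low-false-negative operating point the parent advocates is the lowest threshold here, and it is exactly the one that generates the most follow-up work.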
Re: (Score:2)
False negatives are lucrative to the treatment company, not to the detection company.
And the medical field has some (not all, or even most) ethical people. While some get into it for the money, lots get into it for other reasons. Enough ethical people are likely to keep the nightmare you propose from coming about.
Re: (Score:3)
False positives, in fact, are the cause of all kinds of human misery- often leading to unneeded surgeries, and sometimes surgeries with very serious consequences.
They're a part of why there are screening guidelines.
Re: (Score:2)
> If the AI has more false negatives than human radiologists do, then do not use the AI test. No one cares how much money you are saving if people are dying.
If the standard is "No one cares how much money you are saving if people are dying," I would argue that the false-negative rate of AI-only reads would need to be lower than that of AI first reads double-checked by a radiologist, not merely lower than radiologist reads alone.
Re: (Score:2)
The shareholders VERY MUCH care about saving money and will kill any number of people that are not themselves to do it!
Re: (Score:2)
> False Positives cost money and time, but it is fairly easy to double check them as they should be uncommon.
You don't understand the math of false results.
The population of people who actually should have a negative result is far larger than the population of people that should have a positive result.
A small percentage of a large population [the actually negative] results in a significant number of false positives, while a small percentage of a small number of people [the actually positive] results in a very small number.
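The parent's arithmetic, sketched with illustrative numbers (prevalence, sensitivity, and specificity here are assumptions for the example, not taken from any real screening program):

```python
# Why false positives dominate at low prevalence: even a fairly specific
# test produces far more false positives than true positives when almost
# everyone screened is healthy. Illustrative numbers only.

def screening_counts(population, prevalence, sensitivity, specificity):
    sick = population * prevalence
    healthy = population - sick
    true_pos  = sick * sensitivity
    false_neg = sick - true_pos
    false_pos = healthy * (1 - specificity)
    return true_pos, false_pos, false_neg

tp, fp, fn = screening_counts(100_000, 0.005, 0.90, 0.95)
print(f"true positives:  {tp:.0f}")   # 450
print(f"false positives: {fp:.0f}")   # 4975 -- over 10x the true positives
print(f"false negatives: {fn:.0f}")   # 50
```

So with these assumed numbers, a positive result is wrong more than 90% of the time, even though the test itself is "95% specific." That is the base-rate effect the parent is describing.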
Re: (Score:2)
False Positive: Type I Error
False Negative: Type II Error
Oh that's not concerning at all. (Score:1)
Therac-25 ?
double checking (Score:2)
Double checking. Haha. They barely look at the images when single checking. You think they are going to put any effort into double checking? What a joke.
LOL! (or: statement correction) (Score:2)
> [CEO of America's Largest Public Hospital System] argued the technology presents an opportunity to simultaneously cut costs and expand access.
LOL! More like: "[CEO of America's Largest Public Hospital System] argued the technology presents an opportunity to simultaneously cut staffing costs and expand profits ."
My Understanding (Score:2)
Is that most situations could be more efficiently diagnosed by AI, but there's still a fair share of cases where experienced radiologists would be needed. In any case, I think it's just matter of time before AI can actually replace them. I mean, it's like the perfect use case for it.
All radiologists do is analyze digital images (Score:5, Informative)
For years now, radiology has been a poor career choice, because it only makes sense to send those digital images to wherever the doctors are cheapest. It turns out one thing AI really is much better at than humans is analyzing digital images, so yes, radiology careers will soon be extinct. All the job growth is in health care, but it's in the jobs that require you to be in the same room as the patient. Administrators and back-office staff are all getting laid off. The people who clean the rooms, body fluids and all? Hospitals can't get enough of them. (Five of my relatives work at OHSU. Four are in housekeeping; one is a pharmacy technician. They have pharmacy robots now...)
Re: (Score:3)
Problem with "AI and digital images" is that classifier is only as good as its training data.
The cost to train goes up dramatically the more training data you give it. This means there is a financial incentive to not train on edge cases, which means your AI *will not catch them.*
Like all things, it's not the tech I fear. It's the executives in charge of monetizing it.
so, who gets sued for misdiagnosis? (Score:2)
so, who gets sued for misdiagnosis? As long as liability remains with them- the CEO for making the decision.
But also... let me look at crystal ball...
-Costs go up, not down
-premium tier charge for human review
-waivers of liability
and of course- worse patient care outcomes
AI is not an effin' replacement for life-and-death decision making. It is there to inform and assist, not make the decisions. Let me hallucinate a lack of cancer... or better yet, let me hallucinate a cancer...
Re: (Score:2)
The lone Radiologist they kept on staff will be sued. This is an "Accountability Sink", and the lone Radiologist will be what's known as a "Reverse Centaur". They'll just use up and wear out all of the fired Radiologists. There'll be plenty of them.
Cory Doctorow wrote an article exactly about Radiologists going obsolete here:
https://doctorow.medium.com/https-pluralistic-net-2025-12-05-pop-that-bubble-u-washington-8b6b75abc28e
Hilarious timing (Score:3)
Just yesterday I stumbled on this substack post about a research paper whose authors found that AI scored well on x-ray evaluations even when the AI took the test WITHOUT ACCESS to the x-ray images.
[1]https://drjo.substack.com/p/wh... [substack.com]
The moral of this story is that properly evaluating AI performance in classification tasks requires very very carefully designed tests, because neural nets are very very good at picking up correlations between the desired outputs and utterly unintentional signals in the inputs.
[1] https://drjo.substack.com/p/who-needs-actual-x-rays
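One generic sanity check that follows from that finding: re-run the evaluation with the actual images blanked out. If the score barely drops, the benchmark is leaking labels through side channels. A toy sketch, with a made-up model and made-up metadata (nothing here is from the paper):

```python
# Blank-input sanity check: a "model" that ignores the image and exploits
# a label-correlated metadata field (a hypothetical "scanner" tag) scores
# identically with and without images -- the red flag to look for.

def accuracy(model, dataset):
    return sum(model(img, meta) == label for img, meta, label in dataset) / len(dataset)

def shortcut_model(image, meta):
    # Ignores the image entirely; predicts from metadata alone.
    return 1 if meta["scanner"] == "oncology_ward" else 0

# Synthetic benchmark where positives disproportionately come from one site.
dataset = (
    [("img", {"scanner": "oncology_ward"}, 1) for _ in range(45)]
    + [("img", {"scanner": "clinic"}, 1) for _ in range(5)]
    + [("img", {"scanner": "clinic"}, 0) for _ in range(90)]
    + [("img", {"scanner": "oncology_ward"}, 0) for _ in range(10)]
)

with_images    = accuracy(shortcut_model, dataset)
blanked_images = accuracy(shortcut_model, [(None, m, y) for _, m, y in dataset])

print(f"accuracy with images:    {with_images:.2f}")    # 0.90
print(f"accuracy without images: {blanked_images:.2f}") # 0.90 -- red flag
```

A benchmark that passes this check can still be flawed, but one that fails it tells you the headline accuracy number is measuring the dataset, not the radiology.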
Obviously, patients will not pay less (Score:2)
They will just get worse service. Anything else would be un-American.
the real win (Score:2)
will be when health insurance companies are replaced by AI
Where's the Proof? (Score:1)
Where are the numbers? How about testing a radiologist + AI, rather than each against the other? I'll bet most people would prefer to have both involved in their care. I think they've skipped over the false negatives bit too. How do you ensure you're not missing a bunch of stuff? You'll need humans to find what the AIs miss. I'm not saying keep all the humans, but I am saying the vision and arguments presented are far from complete and, as usual, focus on the upfront costs rather than the overall cost.
CEO should pay for it (Score:2)
People who advocate for AI taking over tasks should be held personally liable for the inevitable failures of the AI system.
Here comes the accountability sink (Score:2)
An accountability sink is a system or structure within an organization that obscures or deflects responsibility for decisions, making it difficult to identify who is accountable when things go wrong. This often occurs when decision-making is delegated to complex rules or automated processes, preventing effective feedback and learning from mistakes.
Quoting thoughts from Cory Doctorow:
So there will be one Radiologist on staff whose job will be to vet and certify all AI determinations. This person will be the "Reverse Centaur".
Re: (Score:2)
This almost exactly matches the observed behavior from [1]my comment [slashdot.org]... which is an anecdote already dating back to 2009.
[1] https://slashdot.org/comments.pl?sid=23954882&cid=66072694
I've worked in medical imaging (Score:2)
Twice, in fact -- once in an academic research lab and once at a company that designed and built medical imaging equipment.
In both cases we worked on image classification using digital image processing and statistical pattern recognition. (In one of the two cases we also used syntactic pattern recognition and machine learning.) It's very, very, very hard to make this accurate enough for clinical use even if you pour effort and time and money into it. There's no way this technology should be deployed w
This isn't exactly new (Score:3)
I worked as a programmer at a medical billing company back in 2009, and let me tell you it was eye-opening. We had radiologists working remotely (in 2009!) with multiscreen setups that would show an original image on the left of one screen, a computer-enhanced version on the other side of the screen, with a computer-generated opinion pre-generated at the bottom of the image (again: 2009 already had this). The other screen, usually rotated 90 degrees, would show the minimal required relevant patient history/demographics on the top and offer a place to enter the radiologist's opinion below, along with a button to copy over the computer-generated opinion.
Let's game out their options.
Let's say they agree with the computer, and they're right. No extra reward, they're just doing their job.
Let's say they agree with the computer, and they're wrong. Well, that must have been a hard case. Oh well.
Let's say they disagree with the computer, and they're right. Again, just doing their job.
But now if they disagree with the computer, and they're wrong, that is a world of malpractice lawsuit about to drop on their heads.
That is, every incentive this person has is to just always agree with the computer. There is no great bonus for doing better, and potentially huge consequences when they disagree. (And, by the way, this is now the training data for more recent AI options).
And it's in this context that we had at least one doctor billing $300,000.
Per month.
So, in this case at least, yes please bring on the AI. Because it's already doing it, and I'm sure the AI won't have to cost as much.
Great idea (Score:1)
AI has never, ever gotten an image wrong ever. Anyway, "Sir, you have a 1910 model freight train in your kidneys. It simply cannot be mistaken."
Hospitals could potentially produce "major savings (Score:2)
Hospitals could potentially produce "major savings"... savings which, in our for-profit healthcare system, will not be passed down to the pati... er, customers.
Look at calendar before posting. (Score:2)
Yep, and WSJ as well.
March 31 article (Score:2)
The linked article is dated March 31, 2026.
Re: Steps (Score:2)
Allows them to employ fewer people.