Too much AI has big drawbacks for doctors — and their patients


Artificial intelligence in medical care is here to stay, but it may do more harm than good, especially if those implementing it lose sight of the critical importance of a physician's clinical judgment.

As a primary-care physician, my job is to evaluate and re-evaluate a patient in an ongoing, personalized way that even the best AI could never attain.

Here's an example: An 80-year-old patient of mine with chronic heart failure drank and ate too much on a recent Caribbean cruise and ended up in a hospital, his lungs filled with fluid.

A cardiac echo revealed an ejection fraction (a measure of how well the heart is pumping) of only 15%.

In fact, a recent study concluded AI might have assessed that ejection fraction better than the cardiologist who did, and such assessment is clearly going to be an important role for AI.

But the actual management of the patient went well beyond a simple number.

Repeated reassessment was required to initiate the right therapeutic response each time his blood pressure dropped, he gained a few pounds or he became slightly short of breath.

No AI could have managed this patient.

This particular patient didn't like to complain, and years of experience guided me in how to weigh his character in a way no AI could have considered.

If my patient had consulted the popular AI app ChatGPT for quick answers in real time, many of those answers wouldn't have had the nimbleness to help him. (Yes, some are pushing such uses of ChatGPT.)

Remember, AI is limited by the amount of data you put into it.


Some in the medical field are pushing for using AI tools like ChatGPT to diagnose patients.
Christopher Sadowski

Instead, I practiced the art of medicine.

I kept adjusting his blood-pressure medicines and his diuretics.

With less resistance to pump against, his heart function improved to an ejection fraction of more than 30%, and prolonged, costly hospitalizations were prevented.

AI could be there on the back end to accurately reassess heart function but could never have managed the patient along the way as I did.

There are other anticipated roles for AI too.

Insurance companies and healthcare systems can save money in the short run by implementing AI to replace traditional functions.

One of these is pre-certifications or pre-authorizations, where a patient requires special permission from his or her insurer for a particular test or treatment that may go beyond standard protocol.

I may want to order an MRI, for example, because of the slightest tingling or weakness in an extremity that could be indicative of a much larger problem.

This might not meet AI's criteria, but how am I going to argue effectively with a computer rather than an insurance company's medical director?

Or slight shortness of breath and fatigue might not meet an insurance company's AI criteria for approval of a stress echocardiogram even though I feel it's indicated.

Or a calcium-scoring CT scan to look for coronary artery disease may be turned down because an algorithm determines the patient is too young.

All these issues depend on the particular patient, and it's my role to advocate for that patient with the insurer and its medical director.


The rise of AI in medicine could lead to reduction in quality of care.
AP Photo/Michael Dwyer, File

But there is growing pressure for cost-saving AI to take over approvals and rejections, which adds another thick layer of bureaucracy to an already-arduous process.

Indeed, growing use of AI in medical practice threatens to superimpose a one-size-fits-all model of care that has been expanding since the day the Affordable Care Act passed.

True, there are estimates AI will lead to $1.3 billion in savings for health-insurance companies this year.

But at what cost to quality of care?

We must consider that short-term savings can cause longer-term losses as diagnoses are missed or treatments are delayed.

Cigna is already using algorithms to mass-reject health claims (without even actually reading them), and few of those rejections are appealed, ProPublica just found.

How can any doctor find the time or wherewithal to appeal?

And then there's malpractice. Practicing physicians like me are concerned we will be held to a standard set by artificial intelligence.

What if I disagree with a computer's assessment but am later proven wrong?

That would leave me and doctors like me open to liability and push us further toward rigid, robotic care to avoid being sued.

Conversely, increased use of AI to diagnose or decide on clinical care puts a hospital or other health system in a position of liability when the AI isn't up to the task or decides wrongly.

"With growing integration of artificial intelligence and machine learning in medicine, there are concerns that algorithm inaccuracy could lead to patient injury and medical liability," a recent Milbank Quarterly article noted.

But the problem with the authors' solution of expanding a "liability framework" to cover the addition of AI is that it means yet another layer of bureaucracy between the patient and the actual care he or she requires.

AI padding the interface is a bad solution to the current health-care bloat.

Artificial intelligence, when used properly, is going to be a useful healthcare tool.

But doctors must control it, rather than the other way around, to safeguard the essential doctor-patient relationship.

Marc Siegel, MD, is a clinical professor of medicine and medical director of Doctor Radio at NYU Langone Health and a Fox News medical analyst.