I am reflexively skeptical of any technology that promises to streamline, simplify, or otherwise facilitate basic functions generally performed, however imperfectly, by the human brain. I fear such innovations are luring our species into some dystopian wilderness where our gradually numbed neural pathways become increasingly irrelevant to the performance of everyday tasks.
Apart from those trifling concerns, I’m totally OK with the spell-check feature in my writing software.
As you might expect, people smarter than me tend to voice a more nuanced opinion on the topic of artificial intelligence — especially as it relates to AI’s nascent incursion into our healthcare system and its subsequent interactions with elderly patients. Take, for example, Peter Abadir, MD, and Rama Chellappa, PhD, who laid out their vision of AI earlier this year in The Journals of Gerontology. Abadir, a gerontologist at Johns Hopkins University School of Medicine, and Chellappa, an AI researcher at Johns Hopkins, believe the technology can revolutionize healthcare for seniors.
“The goal is to foster an environment where AI not only coexists with traditional healthcare practices but also enhances them, always keeping the well-being and dignity of older adults at the forefront,” they write.
Among the enhancements Abadir and Chellappa tout are a Stanford Hospital algorithm that assesses a patient’s mortality risk, encouraging end-of-life conversations with those expected to die within three to 12 months; a Google AI system that outperforms radiologists at detecting lung cancer; a Johns Hopkins algorithm that helps surgeons identify the best candidates for spinal surgery; and a Bayesian Health data integration system that helps physicians detect sepsis sooner.
As impressive as these developments may appear to technological illiterates like me, AI is likely to be tested more rigorously in the moral and ethical realms than in its ability to illuminate the cerebral inadequacies of mere humans. And I can’t help but find it slightly disingenuous that its promoters, including Abadir and Chellappa, seem to feel it necessary to repeatedly emphasize that robots will never replace physicians, surgeons, and gerontologists, while constantly celebrating AI’s ability to increase efficiency. Never do these boosters seem to acknowledge that healthcare administrators tend to stake their livelihoods on doing more with less.
But it’s not the prospect of losing her job to a robot that has Sarah Hull, MD, a cardiologist and clinical ethicist at Yale School of Medicine, questioning the enthusiasm surrounding AI in a healthcare setting. Her critique, delivered in a recent JAMA interview, is centered on the belief that medicine is “as much a moral endeavor as a technical one.”
To illustrate the difference between a human interaction with a patient and one relying on an algorithm, Hull describes a diagnostic dilemma she faced early in her career. Listening to a patient’s heartbeat, she detected a soft clicking noise. The patient’s health history seemed to indicate a low risk for a heart attack, so she decided against ordering an echocardiogram. But the decision troubled her overnight, and she changed her mind the next day. The test subsequently revealed a malfunctioning aortic valve that required surgery.
“If the task of diagnostic evaluation had been delegated to a futuristic stethoscope turbocharged with AI, its bell and diaphragm would sit quietly in a drawer overnight, unworried about what it ‘heard,’” she explains.
The technology might be useful as a supplemental mechanism that helps a physician interpret data or as part of an information-gathering effort to inform decision-making, she argues, but providers need to create guardrails to prevent its use for tasks that require moral agency. If, for example, a robot has shown the ability to outperform a surgeon on a particular procedure, and it is allowed to operate, then who is held accountable when a complication arises?
“Someone needs to own the ethical repercussions of that outcome,” she says. “So, I think that question needs to be answered before we would ever move to that scenario.”
And though AI technology could relieve some administrative burdens at the clinic level, Hull believes we should perhaps lower those expectations. Electronic medical records were supposed to provide similar relief, she notes, but they ultimately proved to be every bit as burdensome as paper files.
And efficiency, after all, isn’t everything. If these tools push our healthcare system to prioritize productivity at the expense of quality care, their implementation will have proven to be a detriment rather than a benefit.
“Patients already feel that they’re shuffled in and out too quickly at times, that the system is overburdened,” she says. “I think it is in many ways, but I don’t think the answer to unburdening the system is by trying to squeeze more and more throughput out of the same number of clinicians just by saying, ‘Well, use AI so you can see people faster.’”
Before we get to that point, Hull suggests that healthcare providers and AI engineers need to engage with a diverse range of stakeholders — especially members of historically vulnerable communities — to identify their priorities and concerns about this technology.
“What would they like to see AI do for them? Not just what would we as the profession like to see AI do for us,” she says. “I think that’s going to be an absolutely critical piece in terms of ensuring the most ethical deployment of AI going forward.”