
Anyone who has spent any time sitting in an examination room while their doctor taps away at a computer has probably found themselves yearning in vain for a brief whiff of empathy. So, you may be pleased to learn of recent research suggesting that our healthcare system is not completely devoid of practitioners who will offer a caring word on occasion. It’s just that they happen to be chatbots.

Researchers collected a set of questions patients had posted on a social media forum, along with the answers doctors had given, then fed the same questions to ChatGPT and compared its responses with the physicians’. The results, published in JAMA Internal Medicine, revealed that the bot offered more empathetic — and higher-quality — feedback.

“We do not know how chatbots will perform responding to patient questions in a clinical setting,” the authors noted, “yet the present study should motivate research into the adoption of AI assistants for messaging.”

It’s just the latest sign that artificial intelligence (AI) is quietly — and not so quietly — insinuating itself into our already fragile healthcare system. The incursion is primarily a response to an epidemic of burnout among doctors, but it can also be viewed as a brazen attempt to streamline processes and boost profits.

Much has already been made of the trend among Medicare Advantage insurers to employ algorithms that more efficiently deny claims without regard to a physician’s opinion, but the potential effects of AI on the doctor-patient relationship have only recently come under scrutiny.

There’s no question that doctors are overwhelmed by paperwork, a fact that has led to the rise of medical scribes at some clinics and hospitals. My trip to the ER last summer, for instance, featured a consultation with a doctor who was accompanied by both a nurse and a woman who took notes. It allowed the doctor to focus more intently on the chastened septuagenarian and his sudden hypertensive crisis. When I met a few days later with my general practitioner, on the other hand, she spent the bulk of our time together pecking away on her computer.

Could an AI device handle those administrative duties more seamlessly, allowing physicians to connect more effectively with their patients? Several companies are betting that it could. A San Francisco startup called Glass Health, for instance, has developed a chatbot that “listens” as doctors relay patient information to it — and even generates a list of possible diagnoses along with a treatment plan.

“The physician quality of life is really, really rough. The documentation burden is massive,” Glass Health founder and CEO Dereck Paul, MD, tells National Public Radio. “Patients don’t feel like their doctors have enough time to spend with them.”

The Glass AI chatbot, he argues, would relieve doctors of those burdensome duties while delivering reliable information. “We’re working on doctors being able to put in . . . a patient summary, and for us to be able to generate the first draft of a clinical plan for that doctor,” he explains. “So, what tests they would order and what treatments they would order.”

Paul notes that Glass AI is programmed with data from actual practicing physicians — a “virtual medical textbook” — that will help the chatbot avoid some of the botched responses users of ChatGPT often report. But some researchers who have studied this fast-evolving technology are less sanguine.

Mark Succi, MD, a researcher at Massachusetts General Hospital, has found that the diagnostic performance of chatbots generally mirrors that of a third- or fourth-year med student. And MIT computer scientist Marzyeh Ghassemi, PhD, says her research has revealed that the bots tend to reflect a level of racial bias similar to that which exists in our healthcare system.

Princeton University computer science professor Arvind Narayanan, PhD, echoes these concerns in a recent JAMA interview. He cites a study in the journal Science that analyzed the performance of an algorithm used by many hospitals to predict patient risks and recommend treatment.

“What it found was that the algorithm had a strong racial bias in the sense that for two patients who had the same health risks — one who is white and one who is Black — the algorithm would be much more likely to prioritize the patient who is white,” Narayanan says. “Like all AI algorithms, it’s trained on past data from the system. Since most hospitals had a history of spending more on patients who are white than on patients who are Black, the algorithm had learned that pattern.”

The algorithm, he points out, was working perfectly. The impact on the Black patient? Not so great.

“What we really need are evaluations of medical professionals using these tools in their day-to-day jobs on an experimental basis, and for AI experts to evaluate them in actual clinical use,” he argues. “Until we have those kinds of evaluations, we should have very little confidence in how these are going to work in the real world.”

A chatbot may offer more empathy than my doctor, in other words, but I’d take a proper diagnosis and effective treatment plan over an occasional caring comment any day.

Craig Cox

Craig Cox is an Experience Life deputy editor who explores the joys and challenges of healthy aging.
