If, like me, you harbor reasonable doubts that the American healthcare system can become any more impersonal than it already is, you probably haven’t yet fallen prey to its newfound reliance on artificial intelligence. Frances Walker has.
As Casey Ross and Bob Herman report in STAT News, the 85-year-old Walker was hospitalized in 2019 after shattering her left shoulder and transferred to a nursing home to recover. An algorithm employed by her Medicare Advantage insurer crunched her data (diagnosis, age, living situation, physical function, etc.) and searched for similar cases amid a database of some 6 million patients before deciding she would be ready to safely return to her apartment, where she lived alone, in 16.6 days — at which time her care would no longer be covered.
When day 17 arrived, Walker remained in great pain, could not push a walker on her own, and was unable to dress herself or make it to the bathroom. Ignoring the actual medical reports, her insurer cut off payments to the nursing home, forcing Walker to deplete her savings and enroll in Medicaid to pay for three more weeks of treatment. Meanwhile, she appealed the decision, and waited for more than a year before a federal judge ordered the insurer to compensate her, noting that the company’s decision to follow the algorithm’s logic was “at best, speculative.”
Cost-containment is an obvious priority for insurers, especially since the Affordable Care Act in 2010 introduced a payment system that encouraged healthcare providers to spend less while delivering better treatment outcomes. That policy reform spurred the development of AI companies providing database algorithms to prevent nursing homes, in particular, from padding their revenues by keeping patients long after they were ready to return home.
The most prominent of those companies, NaviHealth, had secured contracts to manage post-acute care for more than 2 million people by 2015. Recognizing AI’s revenue potential, Medicare Advantage giant UnitedHealth five years later shelled out $2.5 billion to acquire the firm.
But the rise of AI-generated coverage decisions in recent years has tilted the playing field dramatically against patients. Traditional Medicare coverage pays for up to 100 days of care in a nursing home after a three-day hospital stay, for instance, but Medicare Advantage firms are using algorithms to justify cutting off payments much earlier.
“It happens in almost all these cases,” Christine Huberty, a Wisconsin attorney who offers free legal services to Medicare patients, tells STAT News. “But [the algorithm’s report] is never communicated with clients. That’s all run secretly.”
And as the AI-generated denials became more common, Medicare Advantage patient appeals skyrocketed, rising by 58 percent between 2020 and 2022, Ross and Herman report. Nearly 150,000 appeals were filed in 2022 alone.
When Brian Moore, MD, who heads a team at North Carolina–based Atrium Health that reviews medical necessity criteria, began to look more closely at the uptick in claim denials, he was struck by the sloppiness of the AI’s assessments. Patients suffering from serious stroke complications couldn’t get coverage for treatment at rehabilitation hospitals, for example. And amputees were blocked from postsurgical recovery services. “It was eye-opening,” he says. “The variation in medical determinations, the misapplication of Medicare coverage criteria — it just didn’t feel like there [were] very good quality controls.”
But try to elicit a response from an insurer about its algorithmic approach to healthcare and you’re likely to be disappointed. “They say, ‘That’s proprietary,’” explains Amanda Ford, RN, who handles rehabilitation services at Lowell General Hospital in Massachusetts. “It’s always that canned response: ‘The patient can be managed in a lower level of care.’”
That’s what Holly Hennessy encountered when her 89-year-old mother, Dolores Millam, was notified that UnitedHealthcare had cut off coverage 15 days after she’d entered a nursing home to recover from surgery to repair a broken leg. Millam’s doctor had ordered her to stay off her leg for at least six weeks, and medical reports at the time coverage was denied revealed that she “is not yet safe to live independently.”
“I must have made — I’m not kidding — 100 phone calls just to figure out where she could go [and] why this was happening,” Hennessy recalls. “It’s simply a process thing to them. It has nothing to do with care.”
Hennessy’s appeals were denied by UnitedHealthcare as well as by an outside “quality improvement organization,” while her mom spent three more months in the nursing home before she had recovered sufficiently to return home — running up a nearly $40,000 bill. A federal judge ultimately ruled in their favor, finding no justification for denying care to someone who was a “safety risk.”
This AI revolution and its consequences have not escaped the notice of the Centers for Medicare and Medicaid Services, which in December proposed new rules prohibiting insurers from denying coverage “based on internal, proprietary, or external clinical criteria not found in traditional Medicare coverage policies,” Ross and Herman report. If approved later this spring, the guidelines would also require insurers to establish a “utilization management committee” to conduct an annual evaluation of the company’s practices.
The new rules wouldn’t bar internal coverage criteria — even if they’re used by an algorithm — but they’d have to be “based on current evidence in widely used treatment guidelines or clinical literature that is made publicly available.”
UnitedHealth and its brethren will no doubt flex their political muscles to dilute any changes that may threaten their profit margins, but the larger question about AI’s emergence in our healthcare system (and everywhere else) may relate less to institutional prerogatives than to cultural absorption — and acquiescence. As Ezra Klein put it last week in the New York Times, “What we cannot do is put these systems out of our mind, mistaking the feeling of normalcy for the fact of it.”
It’s tempting, Klein notes, to assume that AI development will conform to human-based timelines, allowing us to determine the pros and cons of, say, ChatGPT-infused virtual doctors, before they become the sole option for patients. But his conversations with AI leaders suggest that we’re “wildly” underestimating the pace at which this industry is moving us toward an “unrecognizably transformed world.”
“There is a natural pace to human deliberation,” he writes. “A lot breaks when we are denied the luxury of time.” Our enthusiasm for free-market healthcare solutions, such as NaviHealth, tends to produce answers before we’re prepared to ask prudent questions. An impersonal system then becomes a little less humane, and real humans — Frances Walker, Dolores Millam — needlessly suffer.
“That is the kind of moment I believe we are in now,” Klein argues. “We do not have the luxury of moving this slowly in response, at least not if the technology is going to move this fast.”