
AI Now? Really?

March 14, 2017
"The test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the ability to function." - F. Scott Fitzgerald

Once again, I am failing Fitzgerald’s test of first-rate intelligence. It happens frequently. This week the cause is artificial intelligence (AI) for healthcare. I blame HIMSS 2017.

HIMSS always has three types of topics: Perennial Favorites, Freak-outs, and the Next Big Thing. Perennial Favorites are the usual suspects like population health, interoperability, analytics, telehealth and rev-cycle optimization. Freak-outs are the annual “big scary thing we all need to freak out about immediately.” This year it was cybersecurity, and the freak-out seems mostly justified and a bit overdue. Lastly, we have the coveted “Next Big Thing,” or NBT. The NBT is always transformational and will “change everything.” It’s bright and shiny and it’s coming “real soon.” The NBT often involves a black box that most of us can’t really understand and requires a high degree of trust in the domain experts. For 2017, it’s AI for healthcare – specifically the near-term application of AI as an independent actor when it comes to diagnosis and treatment.

The definition of AI is malleable but the premise seems straightforward. Healthcare is a vast, dense and complex subject matter. The human mind, for all its power, has limitations. We get tired and distracted. We rush. We forget or we are ignorant. We can only hold a limited number of facts or ideas in our heads at any moment. We are prejudiced and exercise bad judgment. AI can, in theory, mitigate some of these inherent human limitations. Functioning as a kind of “auxiliary brain,” AI will help us think more clearly and completely. Some contend that eventually the roles will reverse and the humans will become the auxiliary brain, or maybe just the “muscle” that delivers AI-directed care. Or perhaps the robots will do that part too. I’ll be at the Tiki Bar if you need a human.

All of this makes sense from a purely academic, theoretical perspective. The limits of human capacity and their impact on healthcare outcomes are real and well documented. There is something very, very appealing about the notion that AI could make us smarter and less prone to error. I can easily hold that single idea in my head and still function. I even think it’s plausible and close at hand when I see what can be done today with predictive analytics. It’s when I begin to think about the larger context that my head starts to hurt. Three specific issues make me skeptical about AI for healthcare at this juncture: opportunity cost, high reliability, and zebras.

Healthcare is subject to limited resources, so every strategic choice carries an opportunity cost: “if we do this, then we can’t do that.” I worry the opportunity cost of pursuing AI right now is too high. There are so many other important issues and opportunities of higher priority. There is low-hanging fruit when it comes to better care, lower cost and higher satisfaction that does not require highly advanced IT or biomedical technology. Perhaps it’s the former family doc in me, but it seems like we have plenty we could do to improve the care of chronic diseases like diabetes and hypertension. It’s also clear we must improve at recognizing and treating acute conditions like sepsis, heart attacks and strokes. And don’t forget better prenatal care, prevention and wellness. Or palliative and end-of-life needs. This is where the vast burden of illness, suffering and cost lies, and where we often fall short on best practices and evidence-based care. AI likely has little of immediate value to offer here and can divert resources and attention from these harder (and frankly less sexy) needs. And make no mistake, AI does require significant resources: hardware, software, people and time.

My second big concern relates to high reliability in healthcare. When my kids roll their eyes and say, “Yeah Dad, I know,” my usual response is, “There’s knowing and there’s doing. They are not the same.” The hallmark of high-reliability organizations (HROs) is that they both know and do consistently. For example, it’s not enough to “know” how to recognize and treat sepsis. You must “do” by delivering the right care in a consistent and reliable way every time. The goal is zero defects in care. Now I can see where AI might help me recognize sepsis sooner or consider some nuance in the treatment based on unusual circumstances. That’s the “knowing” part. But I think it’s a stretch to claim it will make the mechanics of the delivery system, the “doing,” more consistent. That has far more to do with workflow, culture and teamwork, equipment, training and a host of other technical and soft-skills issues. Airplanes and nuclear power plants are safe because they take a comprehensive, evidence-based, well-resourced approach to high reliability. Healthcare needs to do the same. It’s not clear that AI has much to offer here in the near term.

Which brings us to the zebras. There’s an old expression in healthcare: “When you hear hoof beats, think horses, not zebras.” It’s a clever way of making the point that common things occur commonly. Sure, that patient with new hypertension may have pseudopheochromocytoma, an extremely rare cause of high blood pressure: that’s a zebra. But it’s far more likely to be “the horse”: routine essential hypertension. I expect AI will help us remember to consider the zebras and determine whether they are worth evaluating, and that’s a good thing. But my understanding is that this is an infrequent problem. Most medical errors occur not because rare diagnoses are missed but because we fail to recognize the commonplace or simply screw up the process of delivering care. AI may help, but I suspect it will be on the margins.
