Editor’s Note: Throughout the next week, in our annual Top Ten Tech Trends package, we will share with you, our readers, stories on how we gauge the ongoing evolution of the U.S. healthcare system.
Reading about the future of healthcare these days likely means there will be some reference to artificial intelligence (AI). It’s one of those “buzz terms” that is being used in a variety of ways across the sector, though applications are still quite early in most cases. But make no mistake—for healthcare stakeholders of all types, AI is a term that’s on their minds.
A big reason AI in healthcare has become such a popular concept is the mainstream media coverage of IBM Watson, an artificial intelligence supercomputer that was thrust into the world of healthcare just a few years after it defeated record-setting Jeopardy! champions in 2011. Watson Health, a unit of IBM, was launched at the 2015 HIMSS conference and employs thousands of people. However, along with Watson’s popularity has come intense scrutiny, especially in the last year.
A STAT News report from September 2017 was one of the first major stories detailing how Watson has been performing in hospitals, specifically examining Watson for Oncology—a solution that aims to help physicians quickly identify key information in a patient’s medical record, surface relevant articles and explore treatment options, with the goals of reducing unwanted variation in care and giving physicians more time with their patients.
But the piece found that Watson for Oncology has struggled in several key areas, noting that while IBM sales executives say that Watson for Oncology possesses the ability to identify new approaches to cancer care, in reality, “the system doesn’t create new knowledge and is artificially intelligent only in the most rudimentary sense of the term.” A more recent report, also from STAT, included internal documents from IBM Watson Health which indicated that the Watson for Oncology product often returns “multiple examples of unsafe and incorrect treatment recommendations.”
Another newsworthy story last year concerned a partnership between IBM and MD Anderson Cancer Center, part of the University of Texas, which soured to the point where the cancer center’s $62 million project to deploy Watson was scrapped. Lynda Chin, M.D., who oversaw the Watson project at MD Anderson before it fell apart, told STAT reporters that making the technology functional in healthcare proved quite challenging. “Teaching a machine to read a record is a lot harder than anyone thought,” she told STAT, noting how her team spent countless hours trying to get the machine to deal with the idiosyncrasies of medical records.
Meanwhile, in a recent interview with Healthcare Informatics, Francine Sandrow, M.D., chief health information officer (CHIO) at the Corporal Michael J. Crescenz Veterans Affairs Medical Center in Philadelphia, notes that her team had been working on a project in which Watson was used to identify patients who were at risk for post-traumatic stress disorder (PTSD) but had not actually been diagnosed with it. The project focused on feeding patients’ charts into the Watson engine, says Sandrow, who is involved in several Veterans Health Administration clinical informatics initiatives.
Unfortunately, she says, “They de-funded [the project] before we got to the results part.” She explains, “When you’re dissecting a chart, the first thing you have to do, when you’re training a computer to recognize [something], is define the terms that would be included as triggers for a particular condition.” So, for post-traumatic stress disorder, she continues, the high volume of terms meant that there weren’t too many charts that would be eliminated. In other words, there were too many indicators for the Watson machine to effectively pull out those patients at risk. “I’m not certain that they would be able to get the specificity that they were looking for. There’s a lot of subtle indicators for PTSD, and human behavior, that I think it would have clouded up the ability of the computer to recognize it, simply from the chart,” Sandrow says.
IBM, according to STAT, has reiterated to its customers that all data included in Watson for Oncology is based on real patients and that the product has won praise around the world for its recommendations. Discussions have also emerged on just how much the company should be blamed—versus the end user—for implementation struggles. To this point, Leonard D'Avolio, Ph.D., an assistant professor at Harvard Medical School and CEO and co-founder of healthcare technology company Cyft, notes, “Who is at fault there? IBM or the provider team that bought the product for marketing and hoped it would fulfill a vision?”
Of course, Watson is just one example of an AI technology that has sparked debate. But given IBM’s immense industry standing and how the tech giant has marketed Watson, for one of its top tech trends this year, Healthcare Informatics set out to ask industry leaders what they were seeing and hearing about the AI supercomputer, and how its performance has affected the broader artificial intelligence landscape.
Humans versus Computers
Bill Kassler, M.D., is the deputy chief health officer at IBM Watson Health, and as a physician, he offers a dual perspective on AI, coming from both ends of the spectrum: healthcare practitioner and technology company executive. When asked about the skepticism that has surrounded Watson of late, Dr. Kassler says that doctors, hospital administrators and other healthcare decision makers are generally conservative and operate in resource-constrained ways. “They are skeptical about technology, drugs, and anything else that’s new. That’s the baseline culture.”
Bill Kassler, M.D.
Kassler contends that even though IBM must work around this challenge, its AI offerings remain quite popular worldwide. Indeed, IBM Watson Health’s Oncology and Genomics business has doubled in revenue year after year since 2015, and its AI offerings are now being used in more than 230 hospitals around the world. Last year at this time, that number was just 55 hospitals, he says.
For traditional physicians, one of the primary critiques of AI is that the computer’s treatment recommendations may differ from the doctor’s. For instance, a physician who makes decisions based on decades of experience might not take kindly to a computer recommendation that he or she firmly believes is not the best option for the patient.
Kassler says he gets asked this question frequently, and attests that studies have been done on how often the Watson computer agrees with a panel of patient care experts. He references one particular study, published last year in the journal The Oncologist, that was led by oncologists at the University of North Carolina’s Lineberger Comprehensive Cancer Center. The oncologists tested Watson for Genomics on more than 1,000 retrospective patient cases. More than 99 percent of the time, Watson agreed with the physicians, but beyond that, in more than 300 cases, Watson found clinically actionable therapeutic options that the physicians had not identified.
To this point, Kassler acknowledges that if the technology simply always agrees with the human, there is “limited utility.” While it can reduce unwanted variation and improve quality, “what you really want is for that system to surface new insights,” he says. In a separate study of Watson for Oncology that Kassler mentions, covering nearly 2,000 high-risk breast cancer patients, Watson identified a new tumor mutation and offered actionable recommendations 30 percent of the time.
As such, Kassler says, “If there’s a conflict [between computer and human], our hope is that Watson will deliver a list of recommended treatment options, the doctor will look at that and [compare] what his or her patient has with the other factors that Watson has included, and will then choose to accept the computer’s recommendations or not. And then the doctor will tell Watson why he or she made that decision so that Watson can learn from it,” he explains.
Expanding on this point, Yan Li, Ph.D., an assistant professor of information systems and technology at California-based Claremont Graduate University, notes that most AI technologies are in the form of a black box—that is, providing an output (recommendations) from a set of inputs without an explanation as to why. “It is very difficult for an experienced clinician to trust such an output without a logical explanation, especially if the output is different from his or her experience-based judgment,” Li asserts.
Is it Worth the Battle?
More broadly speaking, the reason so many innovators are bullish on leveraging AI in healthcare has to do with the computer’s learning and computational capabilities—specifically the speed and volume at which it can consume information, Li says. “To provide high-quality care, medical practitioners must continuously update their clinical knowledge and keep current with the research literature,” she says, referencing a study that estimated it would take a physician approximately 627.5 hours per month to evaluate newly published research in primary care. But for computers, Li says, “processing this literature would take a matter of a few hours, and even less if we horizontally scale up the computation power.”
At the same time, there is a fair share of challenges beyond the aforementioned trust issue. Li notes that in their current state, most AI solutions require training. “It is not the computer; rather, it is the computational algorithm that is trained based on historical data, and then makes predictions, classifications, or inferences based on input data. AI algorithms fall short in not considering relevant clinical information that may not be captured in the training data,” she says, offering the example of a diagnostic conversation between the patient and the clinician.
There is additionally a fear conundrum: the concern that AI technologies will eventually diminish the need for certain human jobs as they have begun to do in many other sectors. But the experts interviewed for this piece believe that this apprehension is mostly unwarranted. “It’s not a valid fear. It’s just something that sells stories because talking about replacing humans is something that’s super interesting,” says Cyft’s D'Avolio. Sanket Shah, an instructor for the University of Illinois at Chicago’s Department of Biomedical and Health Information Sciences, agrees with D'Avolio, noting, “Physicians need not fear being replaced by AI. Physicians are the providers of care and AI is one of the many tools they use to administer that care and improve their craft.”
Leonard D'Avolio, Ph.D.
In the end, when all the concerns and potential benefits are weighed together, most experts remain bullish on how AI can provide key clinical decision support to improve patient outcomes and lower costs. D'Avolio believes that many health system leaders have recently broadened “what was once a narrow view of AI and machine learning within their organizations.”
What’s sorely needed, most leaders in this space agree, is better education on exactly how AI offerings will work in healthcare organizations. In this sense, Watson’s successes and failures can offer lessons moving forward. In the first STAT report, the authors wrote, “The actual capabilities of Watson for Oncology are not well-understood by the public, and even by some of the hospitals that use it.”
Of course, at what level a provider might leverage AI might also depend on several other factors. IBM’s Kassler notes, “If you are a small, one-person family practice in rural Vermont that is now just starting to use Excel spreadsheets for population health registries, yes, it’s too early [to start using AI]. But if you are a large integrated delivery network looking to invest in and be part of the development and perfection of this technology, it’s a great time,” he says.
As such, it is tough to say whether AI is at a crossroads at this moment, and the question will likely remain a meaningful health IT trend in the years to come. As Kassler acknowledges, “For those on the leading edge, it’s a great time to get involved, but it’s not for everyone.”