Editor’s Note: Part 1 of this article, which covered how AI is being applied in healthcare right now, can be read here.
Although the use of artificial intelligence (AI) in healthcare is still in its early stages, prognosticators are quite bullish on how AI platforms could be incorporated in the future to improve patient care. Indeed, a 2016 study by market researcher Frost & Sullivan projected that the market for AI in healthcare will reach $6.6 billion by 2021, representing a 40 percent compound annual growth rate.
The study specifically noted that “Clinical support from AI will strengthen medical imaging diagnosis processes. In addition, the use of AI solutions for hospital workflows will enhance care delivery. Overall, AI has the potential to improve outcomes by 30 to 40 percent while cutting treatment costs by as much as 50 percent.” Researchers attested that AI is already being leveraged at a high level in other sectors, and predicted that “AI systems are poised to transform how we think about disease diagnosis and treatment.” They added, “By 2025, AI systems could be involved in everything from population health management to digital avatars capable of answering specific patient queries. On a global scale, in regions with high underserved patient populations, AI is expected to play a significant role in democratization of information and mitigating resource burdens.”
While the idea is to have AI systems learn and understand new medical functions, and in turn empower doctors to make better evidence-based decisions at the point of care, there has been significant discussion about whether the technology’s potential is so powerful that it could one day actually replace human doctors. Indeed, the issue has been written about in major media outlets, with one article in Fortune even quoting athenahealth CEO Jonathan Bush as saying, “The human is wrong so freaking often, it’s a massacre. Nobody ever goes after the radiologist—they’re wrong so often we don’t blame [th]em.”
However, most healthcare observers refrain from going as far down that road as Bush did. Many will even say that there is no chance AI will ever replace doctors. They attest that the job of artificial intelligence and machine learning is to mimic human cognitive functions and to eliminate repetitive work for doctors—not to eliminate the doctors themselves.
Jason Bhan, M.D., a family physician who is the co-founder of New York City-based AI company Prognos, cautions folks not to get too far ahead of themselves. “A lot of people are talking about replacing the doctor, but I am not at all convinced. It’s actually more like ‘beat the doctor,’ or ‘help the doctor in a friendlier way,’” he says. Bhan notes that as he’s going through a patient’s chart, what he doesn’t want is the computer telling him what to do. “No doctor would be thrilled by that,” he admits. But, he adds, “We understand how to take care of our patients and we do want to be helped. That’s where there’s a huge opportunity for AI to help clinicians in their decision making.”
Bhan brings up an example of looking at a patient chart, where he can draw from his years of clinician experience and predict that the patient has a significant chance of getting diabetes in the next few years. “But machines can look at those patients, bounce it against millions of other patients like that, and say this patient has an 80 percent chance of developing diabetes in the next few years. That really changes my management,” he says. “With the clinical data and the lab data, you can hone that timeframe down into something that’s actionable. That’s where we see AI going.”
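The kind of quantified risk estimate Bhan describes is typically produced by a statistical model trained on historical patient records. As a rough illustration only, the sketch below hand-rolls a tiny logistic model over a few lab values; the feature names, weights, and bias are invented for the example and are not real clinical coefficients or anything drawn from Prognos’ products.

```python
import math

# Invented, purely illustrative weights -- NOT real clinical coefficients.
WEIGHTS = {"a1c": 0.9, "bmi": 0.12, "fasting_glucose": 0.05}
BIAS = -14.0

def diabetes_risk(labs):
    """Logistic model: map a patient's lab values to a 0-1 risk estimate.

    In practice these weights would be learned from millions of patient
    records, which is what turns a clinician's hunch into a number.
    """
    score = BIAS + sum(WEIGHTS[k] * labs[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-score))

# Hypothetical patient chart
patient = {"a1c": 6.2, "bmi": 31.0, "fasting_glucose": 112.0}
print(f"estimated diabetes risk: {diabetes_risk(patient):.0%}")
```

The value of such a model, as Bhan suggests, is not that it replaces clinical judgment but that it attaches a concrete, comparable probability and timeframe to it.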
Meanwhile, senior executives from consulting firm Sapient Healthcare note that the CIOs they talk to within provider organizations, as well as the physicians in the trenches themselves, have expressed some concern that AI could replace physicians, but the consultants are working to quell those fears. “The real story is that AI will augment your [work] and let you do more interesting things. And there’s truth to that,” says Larry Lefkowitz, Ph.D., chief scientist at SapientRazorfish, a company under Sapient that launched this year. “Also, looking at the strengths and weaknesses of [AI], the technology can be very complementary. An example of that could be physicians and researchers using tools to get their hands on information more readily to help them make the decisions. In those cases, the system isn’t making the decision and the researcher doesn’t want to spend loads of time trying to find the right information, so you have a win-win,” says Lefkowitz.
Peter Borden, managing director at Sapient, notes that people are using the term “augmented intelligence”—meaning that AI is not replacing people, but rather trying to make things more effective. “But that fear of how it will affect people’s lives has to get figured out,” Borden says. “As strong as the business case might be for an organization, if the people internally don’t know how it will affect them, it won’t get adopted.”
Lefkowitz gives an example himself of how AI could supplement a radiologist’s work, as radiology is one area in healthcare where AI and machine learning are already being leveraged in critical situations. He says that numerous studies have shown that a human has a certain error rate and an automated system has a certain error rate, but when used together they have a much lower error rate. In particular, he explains, “Radiologists almost never get a false-positive [result on a mammogram], so if they say it’s a cancer or whatever it might be, they are almost always right, but they’re likely to miss many cases. But on the flip side, the machine learning approaches almost never get a false-negative and tend to be more conservative. So you can combine that and have the machine learning take the first pass at it, [meaning] virtually nothing will get through, and you will be able to present a much smaller number of cases for a human analyst to then look at. So you are again allowing the human to focus on what they do best,” says Lefkowitz.
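Lefkowitz’s first-pass workflow can be made concrete with a small simulation. The sketch below uses made-up sensitivity and specificity numbers (they are illustrative assumptions, not figures from the studies he cites): a conservative ML triage stage clears most healthy exams while missing almost no cancers, so the radiologist only reviews the much smaller flagged set.

```python
import random

random.seed(0)

# Hypothetical rates, purely illustrative -- not drawn from any study.
PREVALENCE = 0.05      # fraction of exams with cancer
ML_SENSITIVITY = 0.99  # ML first pass: almost no false negatives...
ML_SPECIFICITY = 0.70  # ...but conservative: flags 30% of healthy exams

# Simulate 100,000 screening exams (True = cancer present)
cases = [random.random() < PREVALENCE for _ in range(100_000)]

flagged, missed = [], 0
for has_cancer in cases:
    if has_cancer:
        if random.random() < ML_SENSITIVITY:
            flagged.append(has_cancer)  # sent on for human review
        else:
            missed += 1                 # rare triage-stage miss
    elif random.random() > ML_SPECIFICITY:
        flagged.append(has_cancer)      # false alarm, human will clear it

print(f"{len(cases)} exams -> radiologist reviews only {len(flagged)} "
      f"({100 * len(flagged) / len(cases):.0f}%); "
      f"cancers missed at triage: {missed} of {sum(cases)}")
```

Under these assumed numbers, the machine clears roughly two-thirds of the workload while letting only a handful of cancers slip past the first pass, which is the “win-win” division of labor Lefkowitz describes.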
While clinician pushback may or may not be a real barrier to AI being leveraged more in healthcare, other concrete challenges do exist. Experts who were interviewed for this article all say that access to “good and clean” data remains a real problem. In fact, Bhan calls it the “biggest issue we have right now in this space.” Pundits point out that healthcare data sets are not yet big enough, and that the correct answers to be learned are often ambiguous or even unknown. Much of this stems from the human body being quite complex, with lifestyle and environmental factors playing a role but being hard to measure.
What’s more, the comfort level of humans using the technology could also pose challenges. Borden says that in his conversations with CIOs, it’s not necessarily that there is pushback against AI, but rather they want to know that it’s supported by the business. And for that to be the case, there have to be well-defined strategies around leveraging AI that incorporate easing people into the program. “Certainly, the idea of having a holistic view of data in order to do analyses is core to the roadmaps for every CIO. So we don’t see much pushback on that,” Borden says. “But the businesses are wary; they know that there’s huge potential, but they intuitively feel the risk about what this change will mean. It’s a change management program, so easing the program into the organization is key,” he says.
Bhan points out that everyone is quick to say that healthcare is 10 years behind other industries in terms of adopting technology, so it will certainly take time to leverage AI at a high level. “I would never go to a doctor and say ‘here’s some awesome tool that will tell you the likelihood of a patient getting a disease;’ they are simply not ready for that. The entire system has to ease their way into it, and you do that by finding innovators,” he says.
These innovators could be in the payer, pharma, or provider industry, and the key is to find those innovators and get them to buy in, Bhan says. “You just need to proceed slowly, since doctors are a conservative lot in general, and for the right reason, since it is important not to make mistakes and validate why you make certain decisions. This is not like picking up an iPhone and starting to use it. You need to put the patient first. There’s a lot of complexity,” he says.