Editor’s Note: Part 1 of this article, which covered how AI is being applied in healthcare right now, can be read here.
Although the use of artificial intelligence (AI) in healthcare is still in its early stages, prognosticators are bullish on how AI platforms could be incorporated in the future to improve patient care. Indeed, a 2016 study by market research firm Frost & Sullivan projected that the market for AI in healthcare will reach $6.6 billion by 2021, a 40 percent compound annual growth rate.
The study specifically noted that “Clinical support from AI will strengthen medical imaging diagnosis processes. In addition, the use of AI solutions for hospital workflows will enhance care delivery. Overall, AI has the potential to improve outcomes by 30 to 40 percent while cutting treatment costs by as much as 50 percent.” Researchers attested that AI is already being leveraged at a high level in other sectors, so it’s only a matter of time before “AI systems are poised to transform how we think about disease diagnosis and treatment.” They added, “By 2025, AI systems could be involved in everything from population health management to digital avatars capable of answering specific patient queries. On a global scale, in regions with high underserved patient populations, AI is expected to play a significant role in democratization of information and mitigating resource burdens.”
While the idea is to have AI systems learn and understand new medical functions, and in turn empower doctors to make better evidence-based decisions at the point of care, there has been significant discussion about whether the technology could one day become powerful enough to actually replace human doctors. Indeed, the issue has been written about in major media outlets, with one article in Fortune even quoting athenahealth CEO Jonathan Bush as saying, “The human is wrong so freaking often, it’s a massacre. Nobody ever goes after the radiologist—they’re wrong so often we don’t blame [th]em.”
However, most healthcare observers refrain from going as far down that road as Bush did. Many will even say there is no chance AI will ever replace doctors. They attest that the job of artificial intelligence and machine learning is to mimic human cognitive functions and to eliminate repetitive work for doctors, not the doctors themselves.
Jason Bhan, M.D., a family physician who is the co-founder of New York City-based AI company Prognos, cautions folks not to get too far ahead of themselves. “A lot of people are talking about replacing the doctor, but I am not at all convinced. It’s actually more like ‘beat the doctor,’ or ‘help the doctor in a friendlier way,’” he says. Bhan notes that as he’s going through a patient’s chart, what he doesn’t want is for the computer to tell him what to do. “No doctor would be thrilled by that,” he admits. But, he adds, “We understand how to take care of our patients and we do want to be helped. That’s where there’s a huge opportunity for AI to help clinicians in their decision making.”
Bhan brings up an example of looking at a patient chart, where he can draw from his years of clinician experience and predict that the patient has a significant chance of getting diabetes in the next few years. “But machines can look at those patients, bounce it against millions of other patients like that, and say this patient has an 80 percent chance of developing diabetes in the next few years. That really changes my management,” he says. “With the clinical data and the lab data, you can hone that timeframe down into something that’s actionable. That’s where we see AI going.”
Meanwhile, senior executives from consulting firm Sapient Healthcare note that the CIOs they talk to within provider organizations, as well as the physicians in the trenches themselves, have expressed some concern that AI could replace physicians, but the consultants are working to quell those fears. “The real story is that AI will augment your [work] and let you do more interesting things. And there’s truth to that,” says Larry Lefkowitz, Ph.D., chief scientist at SapientRazorfish, a company under Sapient that launched this year. “Also, looking at the strengths and weaknesses of [AI], the technology can be very complementary. An example of that could be physicians and researchers using tools to get their hands on information more readily to help them make the decisions. In those cases, the system isn’t making the decision and the researcher doesn’t want to spend loads of time trying to find the right information, so you have a win-win,” says Lefkowitz.
Peter Borden, managing director at Sapient, notes that people are using the term “augmented intelligence”—meaning that AI is not replacing people, but rather trying to make things more effective. “But that fear of how it will affect people’s lives has to get figured out,” Borden says. “As strong as the business case might be for an organization, if the people internally don’t know how it will affect them, it won’t get adopted.”