Artificial intelligence (AI) has been a hot topic lately. Much has been said about its promise to improve our lives, as well as its threat to replace jobs ranging from receptionists to radiologists. These wider discussions have naturally led to some interesting questions about the future of medicine. What role will human beings have in an ever-changing technology landscape? When AI becomes a better "doctor," what will become of doctors? How will patients and medical professionals adjust to these changes?
While it is, of course, hard to make accurate predictions about the distant future, my experience, both as a doctor and now as the CEO of a software company that uses AI to help doctors deliver safer care, gives me some insight into what the intermediate future will hold for the medical profession.
Medicine is one of the great professions in every culture in the world—an altruistic, challenging, aspirational vocation that often draws the best and the brightest. Doctors spend years in training to make decisions, perform procedures, and guide people through some of their most vulnerable points in life. But medicine is, for the most part, still stuck in a pre-internet era. Entering a hospital is like walking into a time capsule to a world where people still prefer paper, communication happens through pagers, and software looks like it’s from the 1980s or 1990s.
But this won’t last. Three giant forces of technology have been building over the last few years, and they are about to fundamentally transform healthcare: the cloud, mobile, and AI. Of the three, AI is the force doctors understand least; after all, even technophobic doctors now spend a lot of time using the internet on their smartphones, so the cloud and mobile are at least familiar. Yet AI is the one that will likely have the biggest impact on the profession.
A lot of people believe that AI will become the primary decision maker, replacing human doctors. In that eventuality, Dr. AI will still need a human “interface,” because it is likely patients will need the familiarity of a human to translate the AI’s clinical decision making and recommendations. I find it an intriguing thought—going to the doctor’s office and seeing a human whose job it is to read the recommendations of a computer just to offer the human touch.
But to understand what the future could hold, we must first understand the different types of problems that need to be solved. Broadly, problems can be split into simple, complicated, and complex ones. Simple and complicated problems can be solved using paradigmatic thought (following standardized sets of rules), something computers excel at. What makes complex problems unique is that they require judgment based on more than just numbers and logic. For the time being, the modern machine learning techniques that we classify as “AI” are not well suited to solving complex problems that require this deeper understanding of context, systems, and situation.
Given the abundance of complex problems in medicine, I believe that the human “interfaces” in an AI-powered future won't simply be compassionate people whose only job is to sit and hold the hand of a patient while reading from a script. These people will be real doctors, trained in medicine in much the same way as today—in anatomy, physiology, embryology, and more. They will understand the science of medicine and the decision making behind Dr. AI. They will be able to explain things to the patient and field their questions in a way that only people can. And most importantly, they will be able to focus on solving complex medical problems that require a deeper understanding, aided by Dr. AI.
I believe that the intermediate future of medicine will feel very similar to aviation today. Nobody questions whether commercial airline pilots should still exist, even though computers and autopilot now handle the vast majority of a typical flight. Like these pilots, doctors will let "auto-doc" automate the routine busy work that has regrettably taken over a lot of a clinician’s day—automatically tackling simple problems that only require human monitoring, such as tracking normal lab results or following an evidence-based protocol for treatment. This will let doctors concentrate on the far more complex situations, like pilots do for takeoffs and landings.
Dr. AI will become a trusted assistant who can help a human doctor make the best possible decision, with the human doctor still acting as the ultimate decision maker. Dr. AI can pull together all of the relevant pieces of data, potentially highlighting things a human doctor may not normally spot in an ocean of information, while the human doctor can take into consideration the patient and their situation as a whole.
Medicine is both an art and a science, requiring doctors to consider context when applying evidence-based practices. AI will certainly take over the science of medicine in the coming years but most likely won't take over the art for a while. As a result, in the near future, doctors will need to evolve from being scientists who understand the art of medicine into artists who understand the science.
Dr. Gautam Sivakumar is the CEO of Medisas.