UPMC’s Dr. Andrew Watson Pushes the Envelope on Clinical Language Understanding

April 23, 2014
A slow-to-react provider community has begun to embrace natural language processing
Andrew Watson, M.D.

The goal of clinical decision support (CDS) is to aid healthcare providers' decision making by making relevant health information easily accessible at the point and time it is needed. As such, natural language processing (NLP) technology is instrumental in using free-text information to drive CDS.

The University of Pittsburgh Medical Center (UPMC) is one healthcare organization that has been proactive in the use and development of natural language processing, with the goal of improving the quality and efficiency of care, first across UPMC—the largest non-governmental employer in Pennsylvania, with more than 62,000 employees—and then more broadly across the healthcare industry. Currently, most medical data is captured as unstructured free text and cannot be easily analyzed.

Recently, Andrew Watson, M.D., CMIO and medical director for the Center for Connected Medicine at UPMC—a center that promotes a new model of healthcare that integrates information technologies to put patients at the center of care—spoke with HCI Assistant Editor Rajiv Leventhal about UPMC’s strategies around NLP, as well as the overall state of the health IT industry following the Healthcare Information and Management Systems Society (HIMSS) conference in February. Below are excerpts of that interview.

What strategies does UPMC use to unlock its data? How are voice recognition and NLP helping with that?

Eighty percent of our data used to be unstructured text, so speech-to-text and NLP are our main methods of attack. We had to get our data into the input mechanisms, and we have put our entire system onto voice recognition and NLP. As a result, we have seen cost savings north of $12 million per year in transcription costs, since you’re no longer paying transcription companies, as well as all the downstream benefits that we never talk about. Getting structured data is the beginning of our analytics program. You are converting the process to something that is more consistent, reproducible, and specific. The same goes for coding: if you dictate congestive heart failure (CHF), for example, the product that we use will actually prompt you to specify whether it is systolic or diastolic, because the implications of those two diagnoses are very different. You can’t just enter a blanket term like abdominal pain, because it can cover entirely different disease processes.
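The specificity prompt Dr. Watson describes can be sketched in a few lines: when a dictated term is too broad, the system refuses to file it as-is and asks the clinician to pick a more specific diagnosis. This is a minimal illustration, assuming a simple term-to-refinements lookup; the terms and refinements below are examples for illustration, not UPMC's actual product logic or code set.

```python
# Sketch of a specificity prompt for dictated diagnoses.
# BLANKET_TERMS maps overly broad terms to more specific alternatives
# (illustrative entries only, not a real clinical code set).

BLANKET_TERMS = {
    "congestive heart failure": ["systolic heart failure", "diastolic heart failure"],
    "abdominal pain": ["appendicitis", "cholecystitis", "pancreatitis"],
}

def refine_diagnosis(dictated, choose=None):
    """Return the dictated term, or force a more specific choice if it is a blanket term."""
    options = BLANKET_TERMS.get(dictated.lower())
    if options is None:
        return dictated  # already specific enough; accept as dictated
    if choose is None:
        # In a real system this would be an interactive prompt back to the clinician.
        raise ValueError(f"'{dictated}' is too broad; specify one of {options}")
    return choose(options)

# Dictating plain "congestive heart failure" triggers the prompt;
# here the clinician's choice is simulated by picking the first option.
refined = refine_diagnosis("congestive heart failure", choose=lambda opts: opts[0])
```

A specific term passes through unchanged, while a blanket term with no choice callback raises an error—mirroring the product's refusal to accept an ambiguous diagnosis.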

There is also a trend toward chronic disease management and multidisciplinary clinics. I work with Crohn’s disease patients, and they are some of the most expensive patients in the U.S. They might see a GI, a nutritionist, a psychiatrist, and a surgeon. If they see one of those specialists and you don’t use speech-to-text, all of a sudden the other parts of the care team don’t quickly see what the other specialists are doing. That dilutes the visit, and you lose about half of the visit’s quality. If you use speech-to-text, everyone gets the text all the way through. So there are clear, tangible benefits.

Do physicians now understand the importance of more accurate documentation?

The provider community by and large is a slow-to-react community; it always has been when it comes to cultural change. It’s a community that is excellent with science and fairly good with technology, but the overall culture of our craft is rooted in the guild system, and we’re slow to react. As a country, we have struggled with EHRs, so new developments associated with EHRs—such as advanced coding—carry a negativity with them. It comes down to understanding and working with physicians so they see that a) we have to do this for patients’ longitudinal care records and care management, and b) it’s critical to simply taking care of patients, since part of the art of medicine is the e-art of medicine. You have to do it; it’s no different from hand washing, wearing your seatbelt, or quitting smoking.

So would you say engaging in voice recognition and NLP is now essential?

If you don’t have accurate or timely information, you can injure patients, they can get readmitted, and they can die. In medicine, the absence of information is harmful, and you need real-time data for medical decision making. Without that, you’re running a risk. It’s like driving a car in that you need to know if it’s night out, raining, or snowing.

There is also something else that we don’t think about—a lot of times my staff would transcribe and print without speech-to-text, so they had to print the letters, which we would edit with a pencil. Then they would re-type it and send it. Figure that one out! One of our staff members had a brace on because she sprained her wrist filing charts. Shame on me for not realizing that my staff would sometimes spend five to six hours a day filing charts. It’s just not a good use of time when you can be talking to patients.

UPMC is a healthcare giant. For those health systems that aren’t at your size, what would your advice to them be going forward in the new healthcare?

First, I.T. does not have to be expensive, but it has to be well implemented and understood. If you are a rural hospital with 30 beds, something should not be feared just because it seems high-tech. There is always that sense of fear, and it’s a fallacy. Clear provider peer-to-peer education is critical for adoption. In smaller hospitals, cultural change is even harder, so it’s incumbent on those who can transform the industry to get educated providers on the ground to help these folks transform.

At the HIMSS conference, was there an underlying theme that you were seeing or hearing from health IT executives?
