In health IT circles, from the federal government down to the physician practice level, the conversation about reducing the ever-increasing clinician burden has ramped up in the past several months to the point that it has become one of the most highly discussed industry issues. While doctors complaining about electronic health records (EHRs) is nothing new, and has been covered ad nauseam in both the trade press and the mainstream media, their frustration has gained significant momentum of late.
Those who contend that EHRs create more work for physicians, rather than less, frequently point to a study published in the fall of 2016 in the Annals of Internal Medicine that got a massive amount of attention amongst health IT folks. Researchers for this study concluded that for every hour physicians provide direct clinical face time to patients, nearly two additional hours are spent on EHR and desk work within the clinic day. And, outside office hours, physicians spend another one to two hours of personal time each night doing additional computer and other clerical work. In an accompanying editorial published in the journal, Susan Hingle, M.D., from SIU (Southern Illinois University) School of Medicine, wrote, “[Christine] Sinsky and colleagues confirm what many practicing physicians have claimed: Electronic health records, in their current state, occupy a lot of physicians' time and draw attention away from their direct interactions with patients and from their personal lives.”
What’s more, Hingle also noted in her editorial that half of the practices studied had documentation support services (dictation or a documentation assistant) available to physicians. To this end, findings from another recent study revealed that dictation and natural language processing (NLP)—a technology that allows providers to gather and analyze unstructured data, such as free-text notes—may be helpful in reducing these burdens.
This research, published last year in JMIR Medical Informatics, examined the use of NLP on time spent on clinical documentation, data quality, and EHR usability. Researchers looked at 118 documented notes and tested four different clinical documentation approaches among 31 physicians in three specialties: a purely NLP approach; a purely standard approach using the keyboard and mouse; and two hybrid approaches. The researchers concluded, “In this study, the feasibility of an approach to EHR data capture involving the application of NLP to transcribed dictation was demonstrated. This novel dictation-based approach has the potential to reduce the time required for documentation and improve usability while maintaining documentation quality.”
The lead researcher for this study, David R. Kaufman, Ph.D., associate professor, department of biomedical informatics at Scottsdale-based Arizona State University, wrote at the time of publication that “The process of documentation in EHRs is known to be time consuming, inefficient, and cumbersome. The use of dictation coupled with manual transcription has become an increasingly common practice. In recent years, NLP–enabled data capture has become a viable alternative for data entry. It enables the clinician to maintain control of the process and potentially reduce the documentation burden. The question remains how this NLP-enabled workflow will impact EHR usability and whether it can meet the structured data and other EHR requirements while enhancing the user’s experience.”
Kaufman, in a more recent interview with Healthcare Informatics, says his team hoped to see that the NLP approach increased efficiency without degrading document quality. “It defeats the purpose if it’s fast but you only get 70 percent of the quality; that’s certainly not a good tradeoff in healthcare,” he says. “So we are asking the question if the concept is viable, does it result in potential improvements, and can you retain quality? I think the answer is provisionally yes,” he adds, noting that further research in a clinical setting, rather than a simulated one, is needed.
For this study, Kaufman and his colleagues used MediSapien, a medical transcription NLP platform from Islandia, N.Y.-based ZyDoc. The NLP-NLP approach took a median of 5.2 minutes for cardiologists to document a note, 7.3 minutes for nephrologists, and 8.5 minutes for neurologists; the standard-standard approach took an average of 16.9, 20.7, and 21.2 minutes, respectively. Both hybrid models took intermediate amounts of time.
Kaufman, a cognitive psychologist by training who says he’s mostly interested in human-computer interaction and human factors issues, says that there are various conventional workarounds for manual documentation entry, such as macros or copy-and-paste, “but they all have their problems and none are satisfying [techniques].” Says Kaufman, “NLP isn’t new to EHRs but we’re at a point where it’s coming of age as a viable alternative [to manual entry]. In the last five years, we have seen a transformation.”
How Can NLP Assist?
Some of the most pioneering healthcare organizations are now figuring out how to effectively use NLP to preserve the patient narrative in the note for all care team members involved. Indeed, as there’s an explosive growth of unstructured clinical data available in EHRs, this is where NLP “shines and serves a need,” says Amy Czahor, vice president, optimization and analytics services, RecordsOne, a Naples, Fla.-based healthcare solutions company.
“NLP and machine learning can help when you need to get that unstructured free text and you’re able to make sense of it,” she says, offering a few examples: “How many times was a specific metric about a patient recorded? How many times did this surgeon say a specific word in an operative report rather than look for specific measured outcomes from an EHR? So it allows doctors to be able to dictate and tell the story in their own words without a template, because ultimately, NLP can go back through that free text.” Adds Czahor, “From an aggregate standpoint, NLP allows us to identify common words that have cross meanings and aggregate those into trends and identify priorities. From a clinical perspective, it allows us to identify gaps in documentation, or hints about real terms that we need in the record to be able to code it, and [then] go back, code, and query.”
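The aggregate use case Czahor describes, counting how often particular terms surface across free-text notes, can be sketched in a few lines. The sample notes, term list, and function below are invented for illustration; this is a toy keyword tally, not the machine-learning-driven analysis a commercial engine performs.

```python
import re
from collections import Counter

# Invented sample operative-note snippets for the illustration.
NOTES = [
    "Patient tolerated the procedure well. Estimated blood loss minimal.",
    "Blood loss approximately 50 mL. Patient stable throughout.",
    "No complications. Blood pressure remained stable.",
]

# Invented terms of interest.
TERMS = ["blood loss", "stable", "complications"]

def count_term_mentions(notes, terms):
    """Count how often each term of interest appears across a set of notes."""
    counts = Counter()
    for note in notes:
        text = note.lower()
        for term in terms:
            counts[term] += len(re.findall(re.escape(term), text))
    return counts

print(count_term_mentions(NOTES, TERMS))
# Counter({'blood loss': 2, 'stable': 2, 'complications': 1})
```

A real engine would normalize synonyms and map terms to clinical concepts rather than matching literal strings, but the aggregation step, rolling free-text mentions up into counts and trends, follows this same shape.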
Elizabeth Marshall, M.D., director of clinical analytics at Linguamatics, a U.K.-based NLP-based text mining software provider, feels that structured data does a very good job of telling the “what” of a patient’s story, as in what has happened to them (what the patient’s conditions, procedures, and labs are, for instance), but is limited when it comes to telling the “why,” as the why is predominantly hidden in unstructured form. “To me, this is a wonderful contribution of NLP,” says Marshall, a former health research scientist for mental health at the Department of Veterans Affairs.
For example, she continues, if the patient has documented uncontrolled diabetes, this can be well represented in structured form. “But, why is it uncontrolled? Maybe the patient simply doesn’t want to take the medications, or perhaps he or she is unable to get to the pharmacy, or maybe he or she has a form of cognitive impairment and forgets to take his or her meds. What we need to know to answer that question is, what’s the underlying issue? Social determinants of health play a major role in this and they are often trapped in clinical notes. Knowing the reason why is the first step to addressing the problem, and unstructured data may be the primary place to find those answers,” Marshall says.
However, NLP isn’t just a simple word search, as some may think, she adds. “It’s far more than that; NLP can look for linguistic patterns and concepts, and this is important in medicine because there are many ways to say the same thing using different words.” For example, Marshall explains, a patient mentioning a disease or pre-existing condition during a clinical visit doesn’t necessarily mean he or she has the disease. “Context is key, and NLP is able to find the context that would be missing in a basic word search. So, does the patient have this condition? Or is it a family history, and that’s why it was mentioned? Or is it mentioned in a negated form, which means the clinician ruled the condition out?” she asks.
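The context-sensitivity Marshall describes is often handled with rule-based cue detection; the well-known NegEx approach works roughly this way. Below is a deliberately minimal sketch with invented cue lists; production systems use far richer lexicons, scope rules, and statistical models.

```python
# Tiny, NegEx-inspired sketch of context detection around a condition
# mention. The cue lists are invented placeholders for illustration.
NEGATION_CUES = ["no evidence of", "denies", "ruled out", "negative for"]
FAMILY_CUES = ["family history of", "mother had", "father had"]

def classify_mention(sentence, condition):
    """Classify a condition mention as affirmed, negated, or family history."""
    s = sentence.lower()
    idx = s.find(condition.lower())
    if idx == -1:
        return "not mentioned"
    window = s[:idx]  # only examine text preceding the mention
    if any(cue in window for cue in FAMILY_CUES):
        return "family history"
    if any(cue in window for cue in NEGATION_CUES):
        return "negated"
    return "affirmed"

print(classify_mention("Patient denies chest pain.", "chest pain"))        # negated
print(classify_mention("Family history of diabetes.", "diabetes"))         # family history
print(classify_mention("Assessment: uncontrolled diabetes.", "diabetes"))  # affirmed
```

Even this toy version shows why a basic word search falls short: the same surface string, “diabetes,” yields three different clinical meanings depending on the words around it.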
At UPMC, Striving for Better Accuracy
At the 20-plus-hospital University of Pittsburgh Medical Center (UPMC) health system, there is a feeling of immense pressure on clinicians, perhaps more than ever before, attests Rasu Shrestha, M.D., chief innovation officer, UPMC. Indeed, notes Shrestha, physicians’ caseloads continue to grow, and the severity of their patients’ conditions is often rising as well. “And the options we have around how we treat these patients continue to go up, too. What all this means is that there is a lot of pressure amongst clinicians to up their game, not just doing the clerical work that’s required around the care process, but also meeting the demands of their large caseloads. And at the same time you cannot falter; if one thing goes wrong, there’s a human life at the other end,” Shrestha says.
All of this mounting pressure is a core reason why leaders at UPMC turned to NLP years ago to address a critical pain point: examining disease burden and documentation, and getting “way more accurate in how we were looking at disease burden,” says Shrestha.
It was about five years ago that UPMC began evaluating a number of different NLP algorithms and engines, eventually settling on one from Health Fidelity, at the time a Silicon Valley startup in which UPMC ultimately invested. Shrestha notes that the company’s NLP engine was formidable, but its initial product was not; so through UPMC Enterprises, the venture arm of the health system, a product called HCC Scout was created. The product uses NLP and big data to identify documentation in the clinical record that supports the coding of specific conditions relevant to the risk adjustment model.
Shrestha notes that HCC (hierarchical condition category) coding—used by insurance companies to determine patients' future medical needs—“is really critical [when talking about] reimbursement and ICD-10 codes.” Indeed, UPMC implemented HCC Scout, with its engine that scans documentation relevant to risk adjustment coding, at one of its hospitals, and in the first year alone the hospital captured upwards of $29 million in annual revenue, he says.
Shrestha explains that the NLP engine looks at documents as clinicians generate them and, drawing on the learnings that have been put in place and all of the coding work that has been done, picks out relevant terms or terminologies in those documents and segregates them. A coder might not have access to the patient’s medical records in a single consolidated view, but the NLP engine does, he says.
“It can look not at just the specific data point that’s being put in by the clinician, but it also has access to the entirety of the patient’s medical record and the consolidated view from various providers,” Shrestha says. He adds, “So it can then correlate that and really impact the accuracy of the coding itself. It makes specific recommendations; there are also automated transactions that happen as a result of the logic within the HCC Scout product itself. We have seen that as a result, it dramatically impacts the coding efficiency, it is able to improve the quality of the documentation itself, and is able to decrease the level of burden amongst clinicians.”
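The general pattern Shrestha describes, scanning a consolidated record and surfacing candidate risk-adjustment codes for a human coder to review, might be sketched as follows. The term-to-HCC mapping and sample record below are invented for the example; this is not the actual HCC Scout logic or the CMS model.

```python
# Invented, simplified mapping from condition phrases to HCC categories.
# A real system maps normalized clinical concepts, not literal strings.
TERM_TO_HCC = {
    "diabetes with complications": "HCC 18",
    "chronic obstructive pulmonary disease": "HCC 111",
    "congestive heart failure": "HCC 85",
}

def suggest_hcc_codes(consolidated_record):
    """Return candidate HCC categories found anywhere in the patient's record."""
    text = " ".join(consolidated_record).lower()
    return {term: hcc for term, hcc in TERM_TO_HCC.items() if term in text}

# Hypothetical consolidated record spanning multiple providers' notes.
record = [
    "Cardiology note: longstanding congestive heart failure, NYHA class II.",
    "Primary care note: diabetes with complications, on insulin.",
]
print(suggest_hcc_codes(record))
# {'diabetes with complications': 'HCC 18', 'congestive heart failure': 'HCC 85'}
```

The key design point mirrors Shrestha’s description: because the function sees the whole consolidated record rather than a single note, it can surface a condition documented by one provider that another provider’s coder would otherwise miss.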
As with most technologies, NLP is far from perfect, and some skeptics point out that due to healthcare having so many gaps in its data ecosystem, there is a limit to the effectiveness NLP can have compared with other industries. And in the clinical documentation sphere specifically, NLP technology can have its fair share of “screw-ups.”
RecordsOne’s Czahor brings up a humorous anecdote in which the NLP engine mapped a certain term in a note to “menopause”—even for male patients. Another example she gives is when “glass” mapped in the NLP engine to methamphetamine, since at one point in time that was a street term for the drug. “So we couldn’t figure out why methamphetamine abuse was coming up for a patient wearing glasses,” Czahor says. Mistakes like those, she notes, serve as motivation for more precision and accuracy on the back end. “Without the work on the back end, these are the things that could make their way out into the real-time patient care environment. Imagine how upset a patient who wore glasses would be if something like methamphetamine abuse appeared on a claims form?”
There is also the issue of physician pushback, which often arises when new technology enters the healthcare picture. Shrestha says that when clinicians, coders, and administrators start to see the benefits of a well-designed product, it makes all the difference. The key to all this, he notes, is “designing the technology by leveraging the principles of human-centered design, using agile development methodologies where there is an iterative approach to software development, and then doing it all hand-in-hand with the clinicians, experts from health plans and health services groups, and the coders, rather than be disconnected in a sterile room outside the health system.”
Moving Forward in a Value-Based World
It’s hard to deny that the shift towards value-based care starts in the clinical record. As Shrestha says, “The accuracy demands as we move from volume to value [have increased]; there is an inordinate level of emphasis on it.”
Indeed, accurate documentation will be especially important in meeting the requirements of value-based care and in supporting risk adjustment and patient stratification. “As you shift from volume to value, you will start to see more NLP leveraged towards quality aspects and quality reviews, and the elements that tie into value-based purchasing from a risk adjustment perspective. Clinical validation is a big topic right now for HIM [health information management] and CDI,” says Czahor. She predicts that NLP will be leveraged for validation purposes as well: when a doctor documents acute respiratory failure, for example, and the healthcare organization has a standard definition or clinical consensus for that condition, the question is not just whether the condition is present but undocumented, but whether, when it is documented, it is clinically valid. “Does the patient meet the clinical parameters for severe sepsis or those high-targeted diagnoses? You can’t hire a clinical staff team necessarily to go out and clinically validate all these charts, but you can leverage artificial intelligence, NLP and machine learning to do that initial evaluation and find the outliers,” she says.
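The clinical validation workflow Czahor envisions, flagging charts where a documented diagnosis is not supported by discrete clinical parameters, could look something like this in miniature. The threshold values and field names below are invented placeholders, not a real clinical definition of severe sepsis.

```python
# Hedged sketch of automated clinical validation: compare what the note
# documents against discrete data and flag outliers for human review.
# The criteria here are placeholders, not a validated sepsis definition.

def validate_severe_sepsis(chart):
    """Flag charts where severe sepsis is documented but key criteria are absent."""
    documented = "severe sepsis" in chart["note"].lower()
    # Placeholder criteria: suspected infection plus an elevated lactate value.
    criteria_met = (
        chart.get("suspected_infection", False)
        and chart.get("lactate", 0.0) >= 2.0
    )
    if documented and not criteria_met:
        return "outlier: documented but criteria not met"
    if documented:
        return "documented and supported"
    return "not documented"

chart = {
    "note": "Assessment: severe sepsis.",
    "suspected_infection": True,
    "lactate": 1.1,  # below the placeholder threshold
}
print(validate_severe_sepsis(chart))  # outlier: documented but criteria not met
```

As Czahor suggests, the value here is triage: the automated pass surfaces the small set of outlier charts, and clinical staff validate only those rather than reviewing every chart by hand.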
What’s more, Shrestha says that organizational leaders at UPMC understand that meaningful use was “the flavor of the day” over the past several years, but the focus now is not just on quantity, which is what the meaningful use program measured, but on quality, which is what MACRA (the Medicare Access and CHIP Reauthorization Act) measures. He says, “There has been a dramatic shift in how the industry is looking at the value of more accurately documenting the quality of the care we are providing and then translating that into better outcomes, coding, and better revenue eventually as well.”