An influential 2007 paper published in the Journal of Biomedical Informatics identified several “grand challenges in clinical decision support.” The authors’ list included:
• improving the human-computer interface;
• disseminating best practices in CDS design, development, and implementation;
• summarizing patient-level information;
• prioritizing and filtering recommendations to the user; and
• creating an architecture for sharing executable CDS modules and services.
Five years later, those remain daunting challenges. Last week, I saw a great presentation at the Children’s Hospital of Philadelphia (CHOP) addressing a topic that touches on a few of these areas: how to translate the most relevant clinical guidelines into precise, actionable terms in the right place in the workflow.
Richard Shiffman, M.D., associate director of Yale University's Center for Medical Informatics, described how software tools can help improve the clarity and detail level of clinical guidelines and make them more useful as decision support tools.
“There’s a great challenge in representing guideline knowledge electronically: taking that textual narrative and turning it into computer-based decision support,” Shiffman said. He cited one study in which developers encoding the same immunization guidelines into different clinical decision support systems produced very different recommendations for the same patient. “That’s scary,” Shiffman said. “It says that starting from the same spot, and ending in three very different spots, we could really run into trouble in this clinical decision support business.”
Shiffman and his colleagues working on the GLIDES (GuideLines Into DEcision Support) Project have tried to create a systematic and replicable way to translate that guideline knowledge from one organization to another. Both CHOP and the Geisinger Health System are involved in the research effort.
Guidelines can be problematic when they use passive voice and ambiguous, vague, or underspecified language that can be interpreted in more than one way, Shiffman said. He and his colleagues have developed templates that lead guideline developers through a series of steps to answer the questions: (1) under what circumstances? (2) who? (3) ought (with what level of obligation?) (4) to do what? (5) to whom? (6) how and why? In line with Institute of Medicine recommendations, the templates also require developers to appraise evidence quality, benefits, and harms.
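The six-question template amounts to a structured decomposition of a guideline recommendation. As a purely illustrative sketch (the field names and the asthma example below are mine, not the actual GLIDES schema or wording), a decomposed action statement might look like this in Python:

```python
from dataclasses import dataclass

# Hypothetical structure capturing the six questions a guideline action
# statement should answer, plus the evidence appraisal the Institute of
# Medicine recommends. Field names are illustrative, not a real schema.
@dataclass
class ActionStatement:
    circumstances: str       # (1) under what circumstances?
    actor: str               # (2) who?
    obligation: str          # (3) with what level of obligation? ("must"/"should"/"may")
    action: str              # (4) ought to do what?
    recipient: str           # (5) to whom?
    how_and_why: str         # (6) how and why?
    evidence_quality: str    # appraisal of the supporting evidence
    benefits_and_harms: str  # expected benefits weighed against harms

# An invented example recommendation, phrased in active voice so it has
# a single unambiguous reading:
example = ActionStatement(
    circumstances="patient presents with persistent asthma symptoms",
    actor="clinician",
    obligation="should",
    action="prescribe a daily low-dose inhaled corticosteroid",
    recipient="the patient",
    how_and_why="as controller therapy, to reduce airway inflammation",
    evidence_quality="high",
    benefits_and_harms="improved symptom control vs. minor local side effects",
)

def render(s: ActionStatement) -> str:
    """Flatten the structured statement back into one active-voice sentence."""
    return (f"When {s.circumstances}, the {s.actor} {s.obligation} "
            f"{s.action} for {s.recipient}, {s.how_and_why}.")
```

The point of forcing every slot to be filled is that omissions become visible: a statement with no actor or no level of obligation cannot be constructed, which is exactly the kind of underspecification the templates are designed to catch.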
Shiffman’s presentation about improving the translation of guidelines to clinical decision support screens that physicians see also provided an interesting insight into clinicians’ reluctance to use CDS. The GLIDES team tried out its translation tools with asthma guidelines in two practice settings at Yale New Haven Health System: a pediatric pulmonology clinic and a pediatric primary care clinic. “It turns out that even though we had input from the pulmonologists all along, they didn't like it,” Shiffman said. “They are specialists, and they didn’t like being told what to do.” They used it as a documentation tool, most often at the end of the day when the patients had already left and the recommendations couldn’t be followed, he added.
Primary care physicians, on the other hand, didn’t mind being told what to do and liked the CDS tool developed from the asthma guideline, he said. “We didn’t give up. Our next step was to try to get pulmonologists to use it by interfering with their workflow.” Patients had been filling out demographic information on a piece of paper, and physicians were recording their notes on that same sheet. The team developed an iPad application: the patient entered the information on the iPad, it flowed wirelessly into the Centricity EHR, and to see it, physicians had to open the decision support. “That has changed their use of the system from zero to more than zero, but not 100 percent,” Shiffman said.
The team also compared what the decision support recommended with what the pulmonologists actually did. In 90 percent of the cases, the two were in agreement, he said. In 5 percent the physician was right, because there were extra factors involved in the decision-making. But in the remaining 5 percent of cases, what was being recommended was more appropriate than what the physicians were doing.