When physicians follow computer alerts embedded in electronic health records, their hospitalized patients experience fewer complications and lower costs, leave the hospital sooner, and are less likely to be readmitted, according to a study of inpatient care released on Aug. 15 by researchers affiliated with the Cedars-Sinai health system in Los Angeles and with Optum, an Eden Prairie, Minn.-based information- and technology-enabled services company (through the division of Optum that was formerly the Advisory Board Company).
As the press release published that day noted, “The research examined alerts that popped up on physician computer screens when their care instructions deviated from evidence-based guidelines. The alerts were based on an initiative called Choosing Wisely, which identifies common tests and procedures that may not have clear benefit for patients and should sometimes be avoided. For example, an alert might pop up on the screen if a physician orders a CT scan when it’s unnecessary and likely won’t improve the patient’s outcome. The alert would serve as a reminder that the order could expose the patient to unnecessary radiation and costs. The Choosing Wisely alerts were backed by the American Board of Internal Medicine Foundation and created by various physician subspecialty societies.”
Speaking of the study, Scott Weingarten, M.D., M.P.H., chief clinical transformation officer at Cedars-Sinai and a senior author of the study, said in a statement quoted in the Aug. 15 press release, “Sometimes the best care for certain patient conditions means doing less. We have seen that real-time aids for clinical decision-making can potentially help physicians reduce low-value care and improve patient outcomes while lowering costs.”
The release went on to note that “Many leaders in the healthcare industry have targeted unnecessary care as a means of improving patient safety while cutting wasteful spending. One 2010 estimate from the Institute of Medicine found that ‘unnecessary services’ contribute to about $210 billion in wasteful healthcare spending in the United States each year. The study, conducted by investigators from Cedars-Sinai and Optum Advisory Services, was published in The American Journal of Managed Care. It examined data from inpatient visits at Cedars-Sinai Medical Center from October 2013 to July 2016 in which one or more of the 18 most frequent alerts was triggered.”
What’s more, the release noted about the study, “For 26,424 of the inpatient visits studied, the treating physician followed either all or none of the Choosing Wisely guidance. In 6 percent of visits, physicians in the ‘treatment group’ followed all triggered alerts; in the remaining 94 percent of visits, physicians in the ‘control group’ followed none of the triggered alerts. An alert was triggered, for example, if a physician tried ordering a sedative for a sleepless older patient or an appetite stimulant for an older patient who was ill and losing weight. Sedatives can put seniors at risk for falls, bone fractures and car accidents, and appetite stimulants can put seniors at risk of fluid retention, stroke and death.”
Further, “The authors found a significant difference in health outcomes and costs between the two groups. For patients whose physicians did not follow the alerts, the odds of complications increased by 29 percent compared to the group whose physicians followed the alerts. Likewise, the odds of hospital readmissions within 30 days of the patients’ original visits was 14 percent higher in the group whose physicians did not follow the alerts. Patients of these physicians also saw a 6.2 percent increase in their length of stay and an additional 7.3 percent – or $944 per patient – in costs, after adjusting for differences in patient illness severity and case complexity.”
And the release quoted Harry C. Sax, M.D., executive vice chair of surgery at Cedars-Sinai and a senior author of the study, as stating that “Sometimes doctors order tests that they think are in the patient’s best interest, when research doesn’t show that to be the case. Unnecessary testing can lead to interventions that can cause harm. This work is about giving the right care that patients truly need.”
Shortly after the public release of the study, Anne Wellington, managing director of the Cedars-Sinai Accelerator—which, according to its website, is “transforming healthcare quality, efficiency, and care delivery by helping entrepreneurs bring their innovative technology products to market”—and who was a coauthor of the study, spoke with Healthcare Informatics Editor-in-Chief Mark Hagland about the study’s results and its implications for healthcare leaders. Below are excerpts from that interview.
Tell me a little bit about the group that came together to embark on this study?
There was a group of us working from three organizations—Cedars-Sinai, where we had a lot of the patients in the study and the physicians; Stanson Health, where I was working at the time, which had been providing the alerts; and the Advisory Board Company, now Optum, which included Andy Heekin. Scott Weingarten was one of the founders of Stanson Health, and we had the foundation of the EMR-embedded clinical decision support through the solution from Zynx [the Los Angeles-based Zynx Health]. We also had support from Stanson Health.
What was the origin of the study?
We had created a library of alerts based on the Choosing Wisely initiative from the ABIM Foundation, which targets low-value care by advising physicians about tests and treatments that are commonly overused. We translated those into alerts and a decision support program. We wanted to evaluate the use of those guidelines and alerts, so we created the content to essentially scan the patient chart; when there was a match between the guideline and the situation, and the provider was about to potentially violate that guideline, we could give them information on the fact that a specialty society had a guideline around that situation. When we looked at some of the inpatient stays, we saw that in stays where the physicians followed those guidelines, the patients had fewer complications, lower costs while admitted, shorter lengths of stay, and a lowered likelihood of readmission.
Can you explain a bit about the mechanics of the study?
We had introduced the Choosing Wisely Alerts into the system in October 2013, and they’ve been live in the system since then. Stanson and some of the physicians at Cedars have been monitoring them; they’re still active today. From October 2013 through the end of July 2016, we examined all the patient encounters where the patients were admitted and the providers had experienced one or more alerts. We looked at 18 high-volume alerts, and looked at Choosing Wisely as a full concept. And for those patient encounters where the providers might have seen multiple alerts, what was the overall impact of those alerts?
Can you speak to the results? Particularly the four metrics around the 7.3-percent reduction in cost of care, 6.2-percent decrease in length of stay, the 29-percent improvement in terms of complications, and the 14-percent reduction in readmissions?
Right, so we compared inpatient encounters, for those four metrics, where providers agreed with and followed the recommendations through the CDS, compared with where providers ignored or overrode those recommendations.
What are your thoughts on the qualitative significance of those results?
I think one fine point to put on this, which we want to be careful about, is that there was a group of physicians practicing within Choosing Wisely guidelines who never saw alerts, because we would trigger alerts only if a provider was about to fall outside the guidelines. So the study was about adhering to the alerts rather than overriding them—not so much about always practicing within the guidelines or not. That’s a nuance. Indeed, we focused on encounters in which physicians followed all of the alerts, or followed none of them. We also looked at every physician who was following a patient: all the physicians involved in the encounter had to follow all the alerts, or none.
So they adhered to the guidelines across all the alerts, or none?
Yes, those were the two categories we analyzed. Where it was mixed, we excluded those from our analysis.
Is there anything to say about the mixed situations, in which physicians followed some, but not all, of the alerts provided to them?
We looked at nearly 30,000 overall encounters; a small percentage, 1,400, fell into that mixed group, and I don’t have any specific explanation for those.
What is your view of the Choosing Wisely program specifically, in this context? How effectively do guidelines work, in practice?
I think this is exciting, because when we look at guidelines generally, they are evolving rapidly. And simply publishing guidelines and releasing them into the world doesn’t necessarily help those who need them. So clinical IT can form a bridge between all those who work within the EHR (electronic health record) and all those who are preparing these guidelines. This provides good research on the best way to provide high-quality care to patients.
What are your qualitative thoughts on why physicians might reject use of the guidelines? What it might mean?
I have two thoughts. The Choosing Wisely initiative itself, the guidelines they issue—they’re pretty clear that they want to foster further conversation between providers and patients. So when they recommend against certain tests and treatments, they want to engage with providers about things that are commonly overused, but they don’t go so far as to say, this is never appropriate for this type of patient or population. On the technical side, we can assess a patient based on the data in the EHR and offer a technical recommendation. But the person closest to the patient, who can see the reality and not just the data captured, is the physician. So sometimes, based on the specific recommendation in the chart, they may opt to continue with the treatment.
And of course, clearly, the Choosing Wisely program isn’t attempting to substitute for clinical judgment?
That’s correct. The comparison Scott likes to make is that you’re sort of trying to use this as a kind of blind spot monitor. It’s not a substitute for judgment; it’s just something that can help alert you to something that might be worth further consideration.
What would you say to CIOs, CMIOs, and other healthcare IT leaders, about what’s been learned here?
The study shows that clinical decision support that’s carefully conceived and deployed can reduce low-value care, when we look at the metrics around the cost and value of care measured in the study. And looking at the deployment of it, making sure the decision support is monitored and evaluated—to understand that it’s being adhered to, valued, and helpful to providers—is part of the key to having those positive impacts.
These guidelines should theoretically work in any decent EHR, correct?
It’s hard to put a definition on “decent EHR,” so that’s a little bit challenging to answer. But I would say that, using data elements available in most EHRs, it’s possible to embed similar guidelines.
Is there anything you’d like to add?
I think the study has been really exciting. Personally, I love to see applications where technology can fill in and do what computers do best—evaluate a lot of information and do it in a very quick and efficient manner. So it’s exciting to pair the power of these technology solutions, with the insight and care of the physician, and see positive outcomes for the patient.