
Can Quality Reporting, Clinical Decision Support Go Hand in Hand?

March 18, 2014
Medical University of South Carolina groups measures, then builds reporting into the underlying care guidelines in the EHR

In his reporting from the HIMSS conference in Orlando, my colleague Rajiv Leventhal described the exhaustion providers feel about the array of regulatory changes and federal programs they must report on, including meaningful use, ICD-10, PQRS, HIPAA, and ACO measures.

Rajiv noted that at the ONC Town Hall on Feb. 24 at HIMSS, a member of a Kentucky regional extension center asked whether there was any effort under way to align all of the regulatory requirements on physicians so they don’t just pile one on top of the next. “When can we simply practice medicine?” the attendee asked in frustration.

I mention this because I saw a presentation at HIMSS that addressed this problem head-on. Executives from the 700-bed Medical University of South Carolina (MUSC) described how they are working to develop a single, comprehensive organizational blueprint for meeting quality measure requirements.

One hint that MUSC has spent some time thinking about this topic is that they have an executive with the title “manager of regulatory analytics.” The woman who holds that title, Itara Barnes, described how, like most large healthcare organizations, MUSC faces an extensive list of measures required by certification and accreditation bodies, federal and private payers, public health reporting programs, and its own organizational quality initiatives. Previously, reporting response efforts operated in silos. She said a complicating factor is that measure concepts are used in multiple programs but applied with differing specifications, versions of specifications, and submission mechanisms.

MUSC decided to step back and review measures across care settings and group them by family (a group of clinical quality measures related to a single care process or disease state, such as diabetes) to identify common structured data requirements and assess their impact on measurement.

“Where measures are used for multiple programs or have multiple versions of specifications, we are defining one comprehensive workflow that collects data used for all programs,” Barnes said.
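
To make the grouping concrete, here is a minimal Python sketch of the set arithmetic it implies. The measure IDs, program names, and data-element names are illustrative assumptions, not MUSC's actual data model: the union of a family's data requirements tells workflow designers what a single workflow must capture, and the intersection shows which elements feed every program.

    from collections import defaultdict

    # Illustrative measure records: each reporting program applies its own
    # specification of a measure concept and names the structured data it
    # needs. IDs and element names here are assumptions for the sketch.
    measures = [
        {"id": "PQRS-1", "family": "diabetes", "program": "PQRS",
         "data_elements": {"diabetes_dx", "hba1c_result", "hba1c_date"}},
        {"id": "CMS122", "family": "diabetes", "program": "Meaningful Use",
         "data_elements": {"diabetes_dx", "hba1c_result", "hba1c_date",
                           "encounter_date"}},
        {"id": "ACO-27", "family": "diabetes", "program": "ACO",
         "data_elements": {"diabetes_dx", "hba1c_result"}},
    ]

    # Group measures by family, then compare data requirements across programs.
    families = defaultdict(list)
    for m in measures:
        families[m["family"]].append(m)

    for family, members in families.items():
        needed = set.union(*(m["data_elements"] for m in members))
        shared = set.intersection(*(m["data_elements"] for m in members))
        print(f"{family}: one workflow captures {sorted(needed)}; "
              f"{sorted(shared)} satisfy every program")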

MUSC considers data collected through all relevant activities and points in the care process, not just the single point targeting the measure’s clinical quality action. “This is a team sport,” Barnes said. “Everyone has a place in this process. When they understand where they touch the measure and how they can influence the outcome, we get more buy-in.”

And rather than keeping the focus on meeting reporting requirements, MUSC is building the reporting into the underlying evidence-based care guidelines in the EHR. “We are developing comprehensive evidence-based clinical decision support tools to drive cultural transition and compliance and to integrate data capture into workflow in a meaningful way,” said Elizabeth Crabtree, director of evidence-based practice and an assistant professor at MUSC.

The organization’s Clinical Decision Support Oversight Committee works to design and implement CDS tools that drive evidence-based practice with data capture for reporting built in, so that quality measures are inextricably linked to care. “We have shifted our focus on measurement to look at the processes of care. That way, providers are more engaged in quality measures,” Crabtree said. “They are not just focused on reporting and regulations. It’s becoming meaningful, and more of a feedback loop.”

“I see embedding data capture for reporting in evidence-based order sets as icing on a cake,” she said. “You wouldn’t like the cake without it. They go hand in hand in a nice fashion.”
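
As a rough illustration of the pattern Crabtree describes, the sketch below shows a CDS rule whose alert logic and reporting feed share the same structured entry. The patient structure, field names, and six-month interval are assumptions made for the example, not MUSC's implementation.

    from datetime import date, timedelta
    from typing import Optional

    HBA1C_INTERVAL = timedelta(days=180)  # assumed guideline interval

    def hba1c_reminder(patient: dict, today: date) -> Optional[str]:
        """Point-of-care alert for a diabetic patient overdue for HbA1c.

        The structured entry that drives the alert is also appended to the
        record, so reporting becomes a byproduct of the care workflow.
        """
        if "diabetes_dx" not in patient["problems"]:
            return None  # outside the measure's denominator
        last = patient.get("last_hba1c_date")
        if last is None or today - last > HBA1C_INTERVAL:
            patient["care_gaps"].append(  # data capture, reused for reporting
                {"family": "diabetes", "gap": "hba1c_overdue", "noted": today})
            return "HbA1c due: no result on file within the last 6 months."
        return None

    patient = {"problems": {"diabetes_dx"},
               "last_hba1c_date": date(2013, 7, 1),
               "care_gaps": []}
    print(hba1c_reminder(patient, date(2014, 3, 18)))  # alert fires
    print(patient["care_gaps"])                        # reporting feed populated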


Thanks for highlighting the exhaustion and burnout of clinicians--this is a looming challenge for the entire country. In 2004, I had the title "Director of Quality Measures" at a large academic health system. Same objective--tackle the quality measures head-on before they spiraled out of control. Except, a decade ago, we didn't have the array of acronyms, measure developers, and formats for quality--we had one home-grown measure construct from the Wisconsin Collaborative for Healthcare Quality (and a couple of HEDIS groupies that didn't want to let go of the past).

Lessons learned before we had EHRs (they were still called EMRs), before we had HIEs (they were still called RHIOs...and they, too, didn't have a business model), before we had eCQMs (they were called "performance measures")--it's not about the measure. It was never about the measure. Reporting of measures to external stakeholders...that must become a byproduct of using the data.

The goal for every practice in the US should be to get out of the Health IT implementation business and get into the performance improvement business. The far harder job facing health systems is performance improvement. Changing culture, building systems of care, constructing new compensation programs--that's the heavy lifting that needs to be done.

Michael Barbouche
Forward Health Group, Inc.

Dear Michael,
Thanks so much for your insightful observations in response to this blog post. Interestingly, I saw a presentation a few days ago by Mark Van Kooy, M.D., of the consulting firm Aspen Advisors on much the same theme. Dr. Van Kooy argued that people who focus on the faults of the EHR system and its vendor are missing the opportunity to zero in on performance improvement. He gave examples of pairs of health systems using exactly the same software, one struggling, the other flourishing. The difference? Clinical IT governance, change management, and a focus on performance improvement. Some of the flourishing organizations hire a senior-level performance improvement executive. Very successful approaches are available, he said, and the variation is not due to the EHR platform.