You often hear people complain about how the bureaucracy of the meaningful use program has hampered innovation. Much less often do you hear people credit meaningful use with giving their organization the discipline to do great things with their data. But during a Dec. 3 webinar hosted by HIMSS, Ray Manahan, director of government programs for Providence Health & Services, credited MU with getting the 34-hospital organization to start collaborating around data.
“When did we as an organization start to home in on teamwork and processes around data? I would say it is fairly new,” Manahan said. “We started about five years ago. Meaningful use really was it. Getting us to collaborate can be difficult because we are so large.”
Providence, which has 4,500 providers in 10 regions spread across five Western states, developed a scorecard to track how well its hospitals and providers were doing on MU measures and create a simple snapshot for senior executives, with green, yellow and red indicators. Its hospitals hit 100 percent compliance with MU in 2014, he said. (Although he quickly added that modifications going on with CMS will complicate things for Providence. “We are expecting some reds and yellows because the complexities will grow, especially in areas of interoperability,” he said.)
What lessons did they learn from the MU data? Manahan said the data must be on time, clear and transparent. Calling out areas of risk was important, he said. The data must also produce specific “asks” and clarify ownership, with the understanding that acting on those asks will require extensive collaboration.
Manahan said, “We took that success with meaningful use and asked: how can we expand this to other programs?” Providence has used the same scorecard framework to address hospital-acquired conditions, value-based purchasing and readmission reduction, tracking a host of weighted quality measures in each area.
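To make the scorecard idea concrete, here is a minimal sketch of how weighted quality measures might roll up into a single green/yellow/red status. The weights, thresholds and function names are hypothetical illustrations; the article does not describe Providence's actual calculation.

```python
# Hypothetical sketch: combine weighted quality measures into one
# stoplight status for a scorecard. Weights and cutoffs are illustrative,
# not Providence's real methodology.

def weighted_score(measures):
    """measures: list of (performance between 0 and 1, weight) tuples."""
    total_weight = sum(weight for _, weight in measures)
    return sum(perf * weight for perf, weight in measures) / total_weight

def status(score, yellow_at=0.80, green_at=0.95):
    """Map a composite score to a red/yellow/green indicator."""
    if score >= green_at:
        return "green"
    if score >= yellow_at:
        return "yellow"
    return "red"

# Two hypothetical readmission-reduction measures, one weighted double.
readmission_measures = [(0.97, 2.0), (0.90, 1.0)]
print(status(weighted_score(readmission_measures)))  # yellow
```

A snapshot like this is what lets senior executives scan one page of colors rather than hundreds of individual measure values.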
These programs have more layers of complexity, he said. Whereas meaningful use was about EHRs and relied on vendors for help, these inpatient programs require more intensive work from quality teams in the hospitals. And getting reports on these programs to senior executives with the look and feel of the MU scorecard “was more difficult than I thought,” he said. “These programs change constantly. CMS may decide to include a new measure or remove one,” he said.
He stressed that Providence established its own benchmarks for these programs, and then held an internal dialogue about how to track performance against them.
He said one key to success has been the creation of a clinical quality measure crosswalk, which is a system-wide tool that lists all clinical quality measures across provider and hospital quality programs.
Providence tracks about 300 quality measures each for ambulatory and inpatient settings. Tracking them all helps the organization understand whether a measure needs to be built into the Epic EHR, and whether it is being pursued because of an associated penalty or because it drives value. “If you have not done an inventory of clinical quality measures, I highly encourage you to do so,” he said.
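A crosswalk like the one Manahan describes can be as simple as a structured inventory, one row per measure. The sketch below shows the idea; every measure, program and field name here is a made-up example, not Providence's actual crosswalk.

```python
# Toy clinical quality measure crosswalk: one row per measure, recording
# setting, the programs that use it, and why it is tracked. All entries
# are hypothetical illustrations.

crosswalk = [
    {"measure": "30-day readmission rate", "setting": "inpatient",
     "programs": ["Readmission Reduction"], "reason": "penalty"},
    {"measure": "Diabetes HbA1c control", "setting": "ambulatory",
     "programs": ["Meaningful Use"], "reason": "value"},
]

# One payoff of the inventory: quickly list measures tracked only
# because a penalty is attached to them.
penalty_only = [row["measure"] for row in crosswalk
                if row["reason"] == "penalty"]
print(penalty_only)  # ['30-day readmission rate']
```

Even a flat list like this answers the questions Manahan raises: which program a measure belongs to, and whether it is there for a penalty or for value.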
Manahan said another key to success is finding champions and getting them on board early. He admitted that some regions of Providence don’t like the scorecard system. “Some are all for it; others are hesitant,” he said. “You will find your champions — clinicians or epidemiologists who walk the talk with the data. They can help you deliver the message.”