Leaders at the Rochester, Minn.-based Mayo Clinic have been making tremendous progress lately in an area of great interest across U.S. healthcare: they have been building an enterprise-wide data analytics program. And that was the subject of a presentation on Sep. 28 by Dwight Brown, Mayo’s director of enterprise analytics. Brown spoke on the topic at the Health IT Summit in New York, sponsored by Healthcare Informatics.
Brown offered his presentation, “The Mayo Clinic, Data Mapping and Building a Successful Advanced Data Analytics Program,” to healthcare leaders gathered at the Convene conference center in New York’s Financial District. Joining him was Sanjay Pudupakkam, principal and owner of the Wellington, Fla.-based Avatar Enterprise Business Solutions, which partnered with Brown and his colleagues at Mayo in building their enterprise analytics foundation.
Describing the origins of the work to build an advanced, enterprise-wide data analytics program, Brown told his audience, “We undertook this initiative between five and six years ago, focusing on clinical workflow. Why did we look at this? With healthcare payment reform, it is important to have a good grasp of your data and to be able to perform data analysis. To remain competitive and viable, patient care organizations need to be able to use data to positively affect the quality of care, contain costs, and manage and administer for quality,” Brown said. “It was also important to ensure the reliability and integrity of the data we had. It’s not enough to put the data in place; you have to have good clinical workflow, or you’ll never get the data set you need.”
Looking back on the situation at the start of the initiative, Brown told his audience, “The problem we ran into is that our internal and external quality measurement was growing too fast; we had all kinds of measures the government was requiring us to report, and the majority of those forms of data were having to be abstracted manually. It was non-discrete and required manual chart abstraction. And that,” he said, “was a problem for the Mayo Quality Management Services Department. We had to grow more than threefold over a period of three years,” and even that growth was not keeping pace with the accelerating demand for data reports and analysis.
“So we employed Sanjay to come help us,” Brown said, referring to Pudupakkam. “We had kind of an idea of what we needed to do, but needed help. We needed to automate the manual processes to free up our Quality Management staff. It was way too difficult, and way too time-consuming to train people in manual chart abstraction processes—it was taking a year to train them. We also needed a centralized quality measures metadata repository. And we needed a roadmap.” Emphasizing the size and scope of the initiative they were plunging into, he added, “This was not just a one-time project.”
The core of the challenge, Brown noted, was the sheer number of quality measures that Mayo leaders needed to respond to and support. Indeed, he said, the analysis performed by the core leadership team he assembled, which included a quality administrator, an associate quality manager, and Brown’s quality measurement system team, found that the organization was having to work with 275 quality measures altogether.
The external quality measures included those related to:
> Meaningful use
> VTE (venous thromboembolism) prophylaxis
> Leapfrog Group
> Hospital outpatient measures
> Core measures
> AHRQ (Agency for Healthcare Research and Quality) safety indicators
> Minnesota community measures
> Minnesota-based inpatient psychiatric measures
> Minnesota public reporting measures
Internal measures included those related to:
> Hospital standardized mortality ratio
> Mortality and morbidity
> Arizona surgical-based reporting
> Infectious diseases
> Adverse events
> Specialty councils
“The challenge,” Brown told his audience, “was this: how do we automate or semi-automate so many measures? What data elements are needed to support these measures? What data elements are common and which are different among these measures? Are these data elements even being captured consistently across the EMR? Which data elements are discrete and which ones are non-discrete? How do we prioritize the measures for automation?”
An additional hurdle was the fact that Brown and his colleagues struggled with the reality of having to work with several different electronic health record (EHR) systems. “At the time,” he said, “we had two instances of Cerner and one of IDX.” So, he said, “First, we had to take a measure decomposition approach” to the initiative. As a result, “Each project grouping was bundled into measure groups. That included PQRI, meaningful use, routine internal reporting, routine regulatory reporting, quality measures for specialty consults, and ETS to MIDAS Conversion & our DON Mart,” he said; and, he added, “we had to de-duplicate measures.” The end result? “We found that we had 500 unique data elements across 40 source systems, for those 275 measures—which now number over 320.”
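The measure-decomposition step Brown describes can be sketched in rough terms: break each quality measure down into the data elements it depends on, then take the union across measures to get the de-duplicated element inventory. The measure names and data elements below are invented for illustration and are not Mayo’s actual definitions.

```python
# Hypothetical sketch of measure decomposition and de-duplication:
# map each measure to its required data elements, then invert the
# mapping to find unique elements and which measures share them.
from collections import defaultdict

# Illustrative inputs only -- real measure and element names differ.
measures = {
    "VTE prophylaxis": ["admission_date", "vte_prophylaxis_flag", "discharge_disposition"],
    "MU weight counseling": ["height", "weight", "counseling_documented"],
    "AHRQ safety indicator": ["admission_date", "discharge_disposition", "icd_procedure_codes"],
}

def decompose(measures):
    """Return a mapping of each unique data element to the measures that use it."""
    element_to_measures = defaultdict(set)
    for measure, elements in measures.items():
        for element in elements:
            element_to_measures[element].add(measure)
    return element_to_measures

elements = decompose(measures)
unique_count = len(elements)  # analogous to Mayo's 500 unique elements
shared = {e for e, ms in elements.items() if len(ms) > 1}  # common across measures
```

At Mayo’s scale, the same inversion over 275-plus measures is what surfaced the 500 unique data elements across 40 source systems.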
The key challenges that Brown and his team uncovered and addressed included the following:
> Multiple, disparate data sources (over 40 sources of data altogether)
> Data elements were difficult to locate—there was no “single source of truth”; in fact, data was being entered inconsistently
> Multiple sources for a single data element—some data elements were discrete in the EHR but went unused, and some data elements had up to five different sources
> A mix of what turned out to be discrete, non-discrete, and mixed data elements
> Over 75 percent of the data needed was found in text fields
> Identifying measures and data elements required walking through the abstraction process at each site
> The team also identified and recommended changes to standardize data entry and chart abstraction
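One of those challenges—multiple sources for a single data element—amounts to choosing an authoritative value when several systems carry one. A minimal sketch, assuming a hand-ranked priority list of source systems (the system and value names here are invented, not Mayo’s actual configuration):

```python
# Hypothetical resolution of "multiple sources for a single data element":
# rank the candidate source systems, then take the highest-priority source
# that actually carries a value, approximating a single source of truth.
SOURCE_PRIORITY = ["cerner_rochester", "cerner_arizona", "idx", "legacy_lab"]

def resolve(element_values):
    """Return (source, value) from the highest-priority source with data, else None.

    element_values maps a source-system name to the value that system
    recorded for one data element (or None if it recorded nothing).
    """
    for source in SOURCE_PRIORITY:
        value = element_values.get(source)
        if value is not None:
            return source, value
    return None

# e.g. discharge disposition recorded in two of the four systems
resolved = resolve({"idx": "home", "legacy_lab": None, "cerner_arizona": "home-with-care"})
```

The harder organizational work—deciding which system outranks which—is what the walkthroughs at each site were for; the lookup itself is trivial once that ranking exists.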
“Meaningful use really precipitated this,” Brown told his audience. “We had to provide the quality measures and report them on an automated basis. Here’s one example, around one meaningful use measure. For patients with a certain BMI [body mass index], you’re supposed to provide weight counseling for those patients. But physicians would say, I’m going to talk with the patient, but document that in my text note. So when patients would check in, we would document their weight and height. And if they were over a certain BMI, the system would kick out a ‘fat sheet’ for that patient, without intervention from the physician,” making it unnecessary to require physicians in practice to add that data point into their physician notes.
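The check-in workflow Brown describes is, at its core, a simple screening rule: height and weight captured at registration drive the counseling handout, with no extra documentation required from the physician. A hedged sketch, with an assumed BMI cutoff (the actual measure’s threshold may differ):

```python
# Sketch of the automated BMI screening rule described above.
# The threshold is an illustrative assumption, not the measure's actual cutoff.
BMI_THRESHOLD = 25.0

def bmi(height_m, weight_kg):
    """Body mass index = weight (kg) divided by height (m) squared."""
    return weight_kg / (height_m ** 2)

def needs_counseling_sheet(height_m, weight_kg):
    """Trigger the counseling handout when BMI meets or exceeds the threshold."""
    return bmi(height_m, weight_kg) >= BMI_THRESHOLD

# e.g. a patient measured at check-in: 1.75 m, 85 kg
flagged = needs_counseling_sheet(1.75, 85.0)
```

Moving the rule into the check-in workflow is what made the data element discrete: the decision fires off structured height and weight fields rather than off a free-text physician note.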
So, what has been learned from all of this? “One of the key takeaways we’ve had so far,” Brown said, “is this: we used a big Excel spreadsheet to document all the data elements. We still use that; we call it the Sanjay Sheet. We use it to continue to look at the data elements.” The key learning there has been the awareness of the need to systematically and strategically approach the management of data collection, analysis, and reporting. In that regard, he noted, “We have quality informatics specialists now.” Brown and his colleagues have taken several staff members, primarily nurses, and have trained them to be able to analyze quality measure-related data at a granular level. “Sanjay helped to train these folks,” he noted, speaking of Pudupakkam. “Among the questions we had to address: how do you look for the data elements? How do you look for the metadata? How do you track the data? They really brought the knowledge and expertise to understand available data,” he said of the Avatar consultants.
And, asked by an audience member how he and his colleagues acquired quality informatics specialists, Brown said, “Did we essentially build the set of skills for the quality informatics specialists? We utilized our nurse abstracters, and we had a few with an interest in informatics, so they went back for training; one actually went back and got a master’s in informatics. We also had other informatics specialists within the organization. So we did some of that, and combined it with the workflow we had created with the meta data repository. And altogether, we built a set of two or three individuals who could then lead the initiative. We had a couple of people who had that education who could be the team leaders and work together with the nurse specialists. They had all spent so much time with the medical record as it was, that it was a pretty easy leap for us.”
Looking at the big picture around the initiative so far, Brown said, “This has been a foundational initiative that has laid the groundwork for future IT applications and analytics projects.” In addition, he said, “Our focus on clinical workflow processes forced compliance on the data entry of vital data elements.” What’s more, he said, success has required a “clear, focused methodology.”
The results have been very gratifying, Brown said. “At the outset of this initiative, 75 percent of the data elements” required for quality measure reporting “were in text fields; by the end of 2015, only 50 percent were.” The methodology used also provided a jump start on the meaningful use eCQMs, or electronically reported clinical quality measures, he noted.
Asked by an audience member whether the initiative has enabled improved self-service, Brown said, “Yes, absolutely. In the past,” he conceded, “we in analytics did the ‘dump and run, don’t ask us what the data means’” approach. “Now, it’s much more collaborative. And, getting data into a good format for end-users remains an issue. Ultimately, we want to help enable more extensive self-service. But it’s in evolution right now, because we don’t have every metric in some sort of self-service portal; but it’s certainly moving in that direction.”
And, asked by an audience member what role culture has played in enabling the initiative, Brown said, “We’re a large organization. And we’re a committee-driven organization. And so to a certain extent, we were actually working against the culture of the organization, in that we needed to move quickly on it. Once people saw that we had good success in these metrics, we were able to get a lot more cooperation” from end-users across the organization around analytics and reporting issues. “But really, it was us moving against our typical culture” with a hard-driving, fast-moving approach, he conceded. “But we had to show our success in order to not have somebody come down and say, that’s it, you’re done. When you have success happening, it’s really hard for people to sunset things.” In the end, he said, the initiative has proven how successful an effort like this can be, by virtue of its being strategic, comprehensive, collaborative, and forward-thinking.