
A Look at Mayo Clinic’s Daring Enterprise Analytics Leap

September 30, 2016
by Mark Hagland
Mayo Clinic’s leaders have made tremendous progress around quality measures reporting

Leaders at the Rochester, Minn.-based Mayo Clinic have been making tremendous progress lately in an area of great interest across U.S. healthcare: they have been building an enterprise-wide data analytics program. That work was the subject of a Sept. 28 presentation by Dwight Brown, Mayo’s director of enterprise analytics, at the Health IT Summit in New York, sponsored by Healthcare Informatics.

Brown offered his presentation, “The Mayo Clinic, Data Mapping and Building a Successful Advanced Data Analytics Program,” to healthcare leaders gathered at the Convene conference center in New York’s Financial District. Joining him was Sanjay Pudupakkam, principal and owner of the Wellington, Fla.-based Avatar Enterprise Business Solutions, which partnered with Brown and his colleagues at Mayo in building their enterprise analytics foundation.

Describing the origins of the work to build an advanced, enterprise-wide data analytics program, Brown told his audience, “We undertook this initiative between five and six years ago, focusing on clinical workflow. Why did we look at this? With healthcare payment reform, it is important to have a good grasp on data centers and to be able to perform data analysis. To remain competitive and viable, patient care organizations need to be able to use data to positively affect the quality of care, contain costs, and manage and administer for quality,” Brown said. “It was also important to ensure the reliability and integrity of the data we had. It’s not enough to put the data in place; you have to have good clinical workflow, or you’ll never get the data set you need.”

Looking back on the situation at the start of the initiative, Brown told his audience, “The problem we ran into is that our internal and external quality measurement was growing too fast; we had all kinds of measures the government was requiring us to report, and the majority of those forms of data were having to be abstracted manually. It was non-discrete and required manual chart abstraction. And that,” he said, “was a problem for the Mayo Quality Management Services Department. We had to grow more than threefold over a period of three years,” and even that growth was not keeping pace with the accelerating demand for data reports and analysis.


“So we employed Sanjay to come help us,” Brown said, referring to Pudupakkam. “We had kind of an idea of what we needed to do, but needed help. We needed to automate the manual processes to free up our Quality Management staff. It was way too difficult to train people in the manual chart abstraction processes; it was taking a year to train them. We also needed a centralized quality measures metadata repository. And we needed a roadmap.” Emphasizing the size and scope of the initiative they were plunging into, he added, “This was not just a one-time project.”
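Brown did not detail the repository’s design, but a centralized quality measures metadata repository of the kind he describes is, at bottom, a catalog linking each measure to the data elements that feed it. The following is a minimal, hypothetical sketch of such a layout in Python with sqlite3; every table and column name here is an illustrative assumption, not Mayo’s actual schema.

```python
# Minimal, hypothetical sketch of a centralized quality-measures
# metadata repository; all table and column names are assumptions,
# not Mayo's actual design.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE measure (
    measure_id     INTEGER PRIMARY KEY,
    name           TEXT NOT NULL,
    measure_group  TEXT,   -- e.g. 'meaningful use', 'stroke', 'internal'
    source         TEXT    -- external reporting body or internal owner
);
CREATE TABLE data_element (
    element_id     INTEGER PRIMARY KEY,
    name           TEXT NOT NULL,
    source_system  TEXT,   -- which of the many source systems holds it
    is_discrete    INTEGER -- 1 = discrete field, 0 = free-text/manual
);
-- Many-to-many link: one data element can feed many measures, which
-- is what lets a shared element be captured once and reused.
CREATE TABLE measure_element (
    measure_id  INTEGER REFERENCES measure(measure_id),
    element_id  INTEGER REFERENCES data_element(element_id),
    PRIMARY KEY (measure_id, element_id)
);
""")
```

With the measure-to-element links in one place, questions such as “which elements are shared” or “which measures depend on a non-discrete field” become simple queries rather than manual chart reviews.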

The core of the challenge, Brown noted, was this: a huge number of quality measures that Mayo leaders needed to respond to and support. Indeed, he said, the analysis performed by the core leadership team he gathered around himself, which included a quality administrator, an associate quality manager, and Brown’s quality measurement system team, found that the organization was having to work with 275 quality measures altogether.

The external quality measures included those related to:


>  Meaningful use

>  Stroke

>  VTE (venous thromboembolism) prophylaxis

>  Leapfrog Group

>  Hospital outpatient measures

>  Core measures

>  AHRQ (Agency for Healthcare Research and Quality) safety indicators

>  Minnesota community measures

>  Minnesota-based inpatient psychiatric measures

>  Minnesota public reporting measures


Internal measures included those related to:

>  Hospital standardized mortality ratio

>  Mortality and morbidity

>  Arizona surgical-based reporting

>  Infectious diseases

>  Adverse events

>  Specialty councils

“The challenge,” Brown told his audience, “was this: how do we automate or semi-automate so many measures? What data elements are needed to support these measures? What data elements are common and which are different among these measures? Are these data elements even being captured consistently across the EMR? Which data elements are discrete and which ones are non-discrete? How do we prioritize the measures for automation?”
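Questions like “which data elements are common and which are different” reduce to set arithmetic over a measure-to-element mapping. The short Python sketch below illustrates the idea; the measure names and data elements are invented for illustration, not drawn from Mayo’s actual measure set.

```python
# Hypothetical sketch: which data elements are shared across measures
# and which are unique? Measure and element names are invented.
measures = {
    "stroke":          {"admit_date", "discharge_date", "nihss_score"},
    "vte_prophylaxis": {"admit_date", "anticoagulant_order", "risk_score"},
    "core_measures":   {"admit_date", "discharge_date", "disposition"},
}

all_elements = set().union(*measures.values())
shared = {e for e in all_elements
          if sum(e in elems for elems in measures.values()) > 1}
unique = all_elements - shared

print(f"{len(all_elements)} elements total, {len(shared)} shared:",
      sorted(shared))
# Shared, discretely captured elements are the strongest candidates
# for automation: capturing one of them well pays off across every
# measure that depends on it.
```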

An additional hurdle was the fact that Brown and his colleagues had to work with several different electronic health record (EHR) systems. “At the time,” he said, “we had two instances of Cerner and one of IDX.” So, he said, “First, we had to take a measure decomposition approach” to the initiative. As a result, “Each project grouping was bundled into measure groups. That included PQRI, meaningful use, routine internal reporting, routine regulatory reporting, quality measures for specialty consults, and ETS to MIDAS Conversion & our DON Mart,” he said, adding, “we had to de-duplicate measures.” The end result? “We found that we had 500 unique data elements across 40 source systems, for those 275 measures—which now number over 320.”
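The decomposition-and-de-duplication step Brown describes can be pictured as folding every measure group’s element lists into a single de-duplicated pool, which is how hundreds of measures can resolve to a bounded set of unique data elements. A hypothetical sketch, with invented group, measure, and element names:

```python
# Hypothetical sketch of measure decomposition and de-duplication:
# fold each measure's data elements into one de-duplicated pool.
# Group, measure, and element names are invented for illustration.
from collections import defaultdict

decomposed = [
    ("PQRI",           "PQRI-001",  {"bp_systolic", "encounter_date"}),
    ("meaningful use", "MU-CQM-02", {"bp_systolic", "med_list_reviewed"}),
    ("internal",       "INT-07",    {"encounter_date", "mortality_flag"}),
]

unique_elements = set()
elements_per_group = defaultdict(set)
for group, measure, elements in decomposed:
    unique_elements |= elements
    elements_per_group[group] |= elements

naive_total = sum(len(e) for _, _, e in decomposed)
print(f"{naive_total} element references de-duplicate to "
      f"{len(unique_elements)} unique elements")
```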

The key challenges that Brown and his team uncovered and addressed included the following:

>  Multiple, disparate data sources (over 40 sources of data altogether)

>  Data elements were difficult to locate—there was no “single source of truth”; in fact, data was being entered inconsistently