At Cleveland Clinic, Embedding Data Analytics Into the Core Culture

April 30, 2015
by Mark Hagland
Eric Hixson and his colleagues are deepening and broadening Cleveland Clinic’s journey into strategic data analytics to support clinical performance improvement

Eric Hixson, Ph.D., is senior program administrator in the Business Intelligence Department, which operates within the Medical Operations division at Cleveland Clinic, the integrated health system based in Cleveland. Hixson helps lead a team of about 85 data professionals who perform numerous functions for the entire Cleveland Clinic. Hixson and his colleagues handle data warehousing and data management, and they manage a structured data repository, called the Clarity Repository, which sits behind the electronic medical record and facilitates data mining, data acquisition, and report generation from information originating in the EMR.

Hixson spoke recently with HCI Editor-in-Chief Mark Hagland regarding some of the current work that his team has been engaged in. On May 19, Hixson will deliver a presentation entitled “Analytics Strategy: Enablement, Innovation, Transformation,” in which he will discuss his team’s work at Cleveland Clinic, and its implications, as part of the Health IT Summit in Boston, sponsored by the Institution for Health Technology Transformation, or iHT2 (a sister organization of Healthcare Informatics under our corporate umbrella, the Vendome Group, LLC). Below are excerpts from their interview.

Your team is engaged in a whole range of analytics work for your colleagues at Cleveland Clinic. Tell me a bit about the Clarity Repository, to begin with?

Certainly. The Clarity Repository is a structured data repository that sits behind the electronic medical record. It permits data mining, data acquisition, and report generation from information originating in the EMR, in a way that makes it easier on the analysts and also insulates the live online system clinicians are using for direct patient care from analytic activity. It doesn’t slow it down. The EMR is optimized for direct patient care; the repository is optimized for reporting and data mapping.

What are the latest things you’re working on?

We’re where a lot of organizations are right now, in that we have access to more and more information and data from multiple domains, information and data that are patient-generated, machine-generated, and clinical operations-generated; and the organization is increasingly viewing that data as an asset. So what we’re increasingly challenged to do as an organization is to identify the value of that asset and leverage it to the maximum extent, to really influence decision-making to provide care at lower cost and higher quality. So we have to get our arms around the breadth of what’s available and then identify the use cases for the information. Having access to a dozen more domains of information, and being able to analyze anything we can get hold of, is nice; but if I don’t also have a meaningful question to answer and a conversation or decision to influence, then it’s interesting academically, but has limited operational value.

So our focus really is on understanding what we have, and organizing ourselves to be able to provide that to users, either in a self-service-type environment or via pre-processed reporting; but doing it in a way that will further their usage of information, not just putting it into a report. We talk about analytics execution, and this is really built on a data culture that’s been developing for a long time and has been nurtured by senior management. But executing successfully really means focusing now on what we need to do. What are the important areas that need information? And where does data fill a gap, to influence decisions, direct or indirect, about patient care? So our focus really is on the outcome I’m trying to influence, not so much on the product itself. So, what does it do, rather than what it is? That is our focus.

What have been the biggest challenges and opportunities on this journey so far?

One of the biggest challenges has simply been the scale and breadth of what we’re working on. We now have access to an incredible amount of information. So how do we manage it in a cost-effective and efficient way, so that when individuals ask questions, we can optimally facilitate their use of data, in as close to a self-service mode as possible? We have different internal audiences, with different analytics capabilities. So even in the context of a self-service environment, we’ve really got a continuum of users, of sophistication of questions, and of sophistication in the ability to use the answers. So we’re trying to do this efficiently, using tools that fit into their workflow. So that’s one large challenge.

Another challenge is actually a function of our success, in that the organization is collectively becoming much more data-savvy; they’re asking very good, sophisticated questions that require sophisticated analysis. And keeping up with that demand is a challenge, but it’s the right kind of challenge to have.

This seems partly like an IT governance question, and partly like an IT operations question, correct?

I would agree, but the two questions are inseparable. You don’t get good operation and use of data without good governance; and you don’t get good governance without good operational management. So I view those as a necessary pairing. And the execution flows down the organization from the executive level, and that vertically aligns: what are we here to do? Our organization is considering very significant expenditures in terms of infrastructure; how is that optimally going to be deployed, and how will we demonstrate value in terms of hardware, software, and staff? You don’t do that well without governance. And if you don’t have the expertise and people and tools to do it, you have governance, but nothing to deliver. So structure and strategy are inseparable.