Can EHR Data Identify Patients Who May Be Falling Through the Cracks of Your Healthcare System?

August 5, 2016
Researchers describe challenges to patient safety improvement efforts

We hear lots of talk about the potential of big data and learning health systems, but for those to become more than buzzwords, health systems have to devote more resources to using their clinical data for process improvement and patient safety research.

I had the opportunity this week to interview two patient safety researchers from the Michael E. DeBakey VA Medical Center in Houston: Hardeep Singh, M.D., M.P.H., chief of the Health Policy, Quality & Informatics Program in the Center for Innovations in Quality, Effectiveness and Safety, and Elise Russo, the center’s research coordinator. They were among the co-authors of a paper just published online titled, “Challenges in patient safety improvement research in the era of electronic health records.” 

As they explained to me, their earlier work in the VA system had been to study diagnostic errors and cases where abnormal test results get lost in follow-up.

“We have used EHR data in order to identify patients who may be falling through the cracks of the healthcare system,” Singh said. For example, if a patient goes to primary care and is then unexpectedly admitted to the hospital within 10 days, the researchers look for a possible diagnostic error on the first visit. Singh calls these trigger algorithms. The second example is more like a prospective trigger — someone has an abnormal test result that you would expect to be followed up on, but it isn’t. The patient might have a chest x-ray that shows a nodule. “At the VA, we code the data so it is already flagged,” Singh said. “If you don’t see a CT scan within 30 days, or a bronchoscopy or biopsy within 30 days, the computer knows that there is no action, that can get flagged. We have used longitudinal data to look for diagnostic and follow-up errors.”
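The two triggers Singh describes can be sketched in a few lines of code. This is a minimal illustration, not the VA's actual implementation — the function names, record fields, and follow-up procedure list are all hypothetical:

```python
from datetime import date

def unplanned_admission_trigger(visit_date: date,
                                admission_date: date,
                                window_days: int = 10) -> bool:
    """Retrospective trigger: a primary care visit followed by an
    unexpected hospital admission within the window suggests a
    possible diagnostic error on the first visit."""
    delta = (admission_date - visit_date).days
    return 0 < delta <= window_days

def missed_followup_trigger(abnormal_result_date: date,
                            followup_dates: list[date],
                            window_days: int = 30) -> bool:
    """Prospective trigger: an abnormal result (e.g., a chest x-ray
    showing a nodule) with no follow-up action (CT, bronchoscopy,
    biopsy) recorded within the window gets flagged."""
    return not any(
        0 <= (f - abnormal_result_date).days <= window_days
        for f in followup_dates
    )
```

The key point in both cases is that the triggers run over coded, longitudinal data: the computer only "knows that there is no action" if visit dates, results, and follow-up procedures are captured as structured fields it can compare.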

“The work we have done is in research mode,” he added, “but we are trying to operationalize it within the VA, and hopefully one day outside the VA as well.”

The paper they wrote described their efforts to do similar patient safety research in three health systems outside the VA using other commercial EHRs, including Epic. But they found several roadblocks, including policy-level restrictions, a lack of structured data, and difficulties working with IT staff.

In their paper they noted that researchers must be able to access and review EHR data to conduct patient safety research. “However, we found superfluous restrictions on remote data access for researchers. This was best illustrated at Site A, where the organization's internal research oversight team would not provide approval for remote access to the organization's EHR despite approval by the local institutional review board (IRB).”

“As outside researchers, we encountered a lot of these problems,” Russo said. “Security was the major concern we encountered. We figured that since we had IRB approval, they should let us access what we have approvals for. But at pretty much every site, that was not the case, even when we had approvals. There was always something blocking us, including rules that we had to have an employee of the institution on our team to help with chart reviews. Obviously a lot of this was related to their fears about data breaches.”

Lack of structured data

Singh said the lack of structured data they encountered was “a bit of a shock to us.” As they wrote in their paper, the sites had variable amounts of structured EHR data (i.e., lack of “normal” or “abnormal” codes for test results), and “often the same field was structured at one site and unstructured at another site, making cross-site automated comparisons difficult or impossible.” All three sites met Stage 1 Meaningful Use requirements. “However, at all three sites we were informed by IT staff that there was no method for the computer to automatically identify significantly abnormal radiology, pathology, microbiology, and certain clinical laboratory results,” they wrote.

“Our point is if you are going to collect all this data electronically, you have to structure which lab is abnormal,” Singh said. “Some of the structure has to be around abnormal stuff, so you can look for it.”

IT personnel issues

Despite clinical leadership buy-in, the patient safety researchers reported experiencing barriers to working with IT personnel because of their competing operational priorities at all three sites. “We found that organizational IT personnel at all sites were significantly resource-constrained and had many competing priorities, particularly related to MU implementation and EHR upgrade-related issues. This resulted in delays in understanding several data-related issues and in getting EHR queries operationalized,” they wrote.

In the “key lessons” section of their paper, they suggest that “all organizations (not just those with “research” as part of their mission) should dedicate additional IT personnel and implement near real-time clinical data warehouses with easy-to-use report writing capabilities to support quality improvement and patient safety improvement efforts. This would allow current IT staff to focus on operational activities. Unfortunately, our experiences reveal that the IT workforce for health care is often ill-prepared, lacks the necessary tools and resources, and is deficient in the clinical and workflow insights and experience necessary to address both research and non-research tasks related to extraction and analysis of EHR data.”