As an early step in the development of a learning health system, the National Institutes of Health (NIH) is sponsoring large-scale pragmatic clinical trial demonstration projects that rely heavily on EHR data from multiple health systems. In order to promote transparency, reuse, and data quality, informatics researchers and data analysts are working to identify best practices and advocate for cultural and policy changes related to using EHRs to identify populations for research.
Rachel Richesson, Ph.D., M.P.H., associate professor of informatics in the Duke University School of Nursing, recently gave an online presentation to describe the work of the NIH Research Collaboratory’s Phenotypes, Data Standards, and Data Quality Core group.
First, Richesson described the clinical information system landscape that researchers face. “There is little that is standardized in terms of data representation in EHRs today,” she said. And what appears to be standard is not always so. Each health system has multiple sources of ICD-9 and ICD-10 codes, lab values, and medication data.
Also, EHRs have no standard representation or approach for phenotype definition — that is, a way to define populations with certain conditions such as chronic pain or uncontrolled diabetes.
An additional challenge is that multi-site pragmatic clinical trials pull information from many ancillary systems, as well as the EHRs, into a single research database to support the study. A common process used in data warehouses is extract, transform and load (ETL). This has to happen for each organization contributing data to the trial, and at each step errors can be introduced or data sources missed completely. One trial studying colon cancer has had trouble identifying colonoscopies done outside the health center because they are described in PDF or narrative reports but not captured as coded data.
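The per-site ETL step described above can be illustrated with a minimal sketch. Everything here is hypothetical (the site rows, field names, and the single ICD-9-to-ICD-10 mapping are illustrative assumptions, not drawn from any Collaboratory trial); the point is that each site's local representation must be transformed into a common one, and that unmappable codes should fail loudly rather than be dropped silently:

```python
# Minimal ETL sketch: harmonize per-site EHR extracts into one research table.
# All site data, field names, and code mappings below are hypothetical.

SITE_A_ROWS = [
    {"pt": "A-001", "dx": "724.2", "sys": "ICD-9"},   # low back pain (ICD-9)
    {"pt": "A-002", "dx": "M54.5", "sys": "ICD-10"},  # low back pain (ICD-10)
]

# Transform: map each site's local coding onto one target vocabulary.
ICD9_TO_ICD10 = {"724.2": "M54.5"}  # illustrative single mapping

def transform(row, site):
    code = row["dx"]
    if row["sys"] == "ICD-9":
        code = ICD9_TO_ICD10.get(code)
        if code is None:
            # Unmapped codes are a classic silent source of error;
            # surface them instead of dropping rows invisibly.
            raise ValueError(f"unmapped ICD-9 code {row['dx']} at site {site}")
    return {"patient_id": f"{site}:{row['pt']}", "icd10": code}

def load(sites):
    """Run extract/transform for every contributing site and
    append the harmonized rows into one research table."""
    research_db = []
    for site, rows in sites.items():
        for row in rows:
            research_db.append(transform(row, site))
    return research_db

db = load({"A": SITE_A_ROWS})
```

In a real trial this step repeats for every contributing organization, which is why the article stresses that errors can creep in at each site independently.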
Besides co-leading the Phenotypes, Data Standards, and Data Quality Core, Richesson is also the co-lead of the Rare Diseases Task Force for the national distributed Patient Centered Outcomes Research Network (PCORnet), specifically promoting standardized EHR-based condition definitions (“computable phenotypes”) for rare diseases and helping to develop a national research infrastructure that can support observational and interventional research for various types of conditions. Before joining the Duke faculty in 2011, Richesson spent seven years at the University of South Florida College of Medicine, directing strategy for the identification and implementation of data standards for a variety of multi-national, multi-site clinical research and epidemiological studies housed within the USF Department of Pediatrics, including the NIH Rare Diseases Clinical Research Network (RDCRN) and The Environmental Determinants of Diabetes in the Young (TEDDY) study.
In her recent presentation, she gave a few examples of the use of EHRs in the Collaboratory trials:
• The Collaborative Care for Chronic Pain in Primary Care (PPACT) needs to identify patients with chronic pain for the intervention. This is done in different EHR systems using a number of “phenotypes” for inclusion – e.g., neck pain, fibromyalgia, arthritis, or long-term opioid use. Harmonizing that data has proven challenging. “They have had to monitor large groups of codes that represent these conditions, particularly after the change to ICD-10 to make sure there were no changes in coding behavior,” she said.
• The Strategies and Opportunities to Stop Colorectal Cancer (STOP CRC) trial needs to continually identify screenings for colorectal cancer from each site, so it must maintain a master list of codes (CPT and local codes) related to fecal immunochemical test orders across multiple organizations.
• The Trauma Survivors Outcomes and Support (TSOS) trial needs to screen patients for PTSD on Emergency Department admission. Yet the clinical information systems used in the 24 sites’ emergency departments vary widely in their ability to screen for substance-related disorders and mental health conditions. (Richesson’s Ph.D. dissertation from the University of Texas Health Science Center at Houston involved the integration of heterogeneous data from multiple emergency departments.)
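The common thread in these examples is the computable phenotype: an explicit, machine-checkable set of codes and criteria that defines a study population. A minimal sketch of the idea follows; the specific ICD-10 codes, the opioid-supply threshold, and the inclusion logic are illustrative assumptions, not the actual PPACT definition:

```python
# Sketch of a computable phenotype as an explicit code set plus logic.
# Codes and threshold are illustrative, not a real trial's definition.

CHRONIC_PAIN_ICD10 = {
    "M54.5",  # low back pain
    "M79.7",  # fibromyalgia
    "M15.9",  # polyosteoarthritis, unspecified
}
LONG_TERM_OPIOID_DAYS = 90  # hypothetical days-supplied threshold

def meets_phenotype(dx_codes, opioid_days_supplied):
    """A patient enters the (hypothetical) chronic-pain cohort if any
    qualifying diagnosis code is present, OR if opioid days supplied
    meets the long-term-use threshold."""
    has_dx = bool(CHRONIC_PAIN_ICD10 & set(dx_codes))
    return has_dx or opioid_days_supplied >= LONG_TERM_OPIOID_DAYS
```

Making the code set and the boolean logic explicit like this is what allows the definition to be published, reused across sites, and re-checked when coding behavior shifts (for example, after the ICD-9 to ICD-10 transition).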
“These examples give an idea of how crucial EHR data is to the functioning of these trials, and they underscore the need to actively and iteratively reach out to IT staff to understand what data each system collects and how it fits into the workflow,” Richesson said. “That is a universal experience of the projects in the Collaboratory.”
Transparency and Reproducibility of Pragmatic Clinical Trials
Ultimately, these clinical trial demonstration projects are going to be reporting their results in journals and describing the characteristics of the patients in the intervention and control groups. They will need to point to definitions for diabetes, hypertension, etc. Today there is a wide variety of phenotype definitions based on lab codes, medication codes, or any combination of the two.
“There is huge variety in how conditions could be defined using EHR data,” Richesson explained. “To support transparency and reproducibility in these pragmatic clinical trials, we want to allow readers and consumers to identify what the definition was and how the data were obtained and used. Having an explicit phenotype definition would certainly be useful in this area. Our Core has been working toward explicit reporting in these trials.”
Because there is a need for transparent reporting, the Collaboratory’s Phenotypes, Data Standards, and Data Quality Core group has put together a list of the elements it believes should be included in the reporting of pragmatic trials. Those elements should be available for public comment in the next month or so, she said. The group also believes that there needs to be a repository — perhaps at the National Library of Medicine — where researchers could put detailed specifications on how they collected data and defined each clinical phenotype (EHR-based condition definition) used. They also strongly recommend doing a data quality assessment, including a description of different data sources or processes used at different sites.
The goal, Richesson said, is to come up with a limited number of validated and sound definitions for clinical populations in pragmatic clinical trials. “Our approach has really changed over the last couple of years,” she said. “When we first started, we were thinking we would list the definitions and post them as standards: This is how you will define chronic pain, suicide attempts, etc.,” she explained. “Now we see this as one step in a bigger process to facilitate thoughtful use of different definitions. The idea is that we post what we have, and then we provide justification and guidance. We can describe the purpose of phenotype definition and have information describing the researchers’ thinking. We could point to other repositories hosting phenotype definitions. We could maintain this information and keep it updated and support dialog,” she added.
“Our Core is trying to provide easy access to definitions. As we move toward learning healthcare systems, we really want to reduce the number of definitions out there and reduce unnecessary variation across phenotype definitions,” Richesson said, adding that research and clinical use cases should move toward using the same definitions. “The idea of evidence-based practice brings us to the conclusion that for health services research and comparative effectiveness research, using these same phenotype definitions used in research is a goal we should ultimately move toward. We want to be able to identify equivalent populations as we implement best practices and evaluate how an implementation is performing in real life.”