In a January webinar about the draft Trusted Exchange Framework and Common Agreement (TEFCA), Genevieve Morris, principal deputy national coordinator for health information technology, mentioned that ONC is working with HL7 and the SMART team at Boston Children’s Hospital on the idea of a FHIR-based population-level data API (application programming interface).
I found that idea intriguing. Then today I saw a tweet by Ken Mandl, M.D., who leads the SMART Platforms initiative, announcing that a report had been released based on a December 2017 meeting held to discuss the population-level data API.
The meeting brought together stakeholders including ONC Director Don Rucker, M.D., and representatives from payers, health systems, and EHR vendors. The issues they raised helped identify promising use cases, a path for future work, and some limitations.
According to the report about the meeting, Dan Gottlieb of the SMART team at Boston Children’s Hospital presented an overview of a draft FHIR population-level data API proposal. The proposal builds on the summer 2017 SMART meeting, where moving large amounts of data between systems in a standard way was identified as a key need.
Difficulties parties face when sharing EHR data include:
• Extremely manual, labor-intensive processes;
• Use of custom data fields and proprietary data models;
• Variance in data extraction across systems; and
• FHIR’s current inefficiency for large queries.
Gottlieb noted that there is a need for a standard way to share data that avoids constant customization and delivers the data in a file that can be written and read as a stream.
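A file that can be "written and read as a stream" points toward a line-delimited serialization such as NDJSON (newline-delimited JSON), the format the FHIR bulk-data work has gravitated toward: one resource per line, so a consumer can process an export of any size one record at a time. A minimal sketch in Python (the resource contents below are invented for illustration):

```python
import io
import json

# Each line of an NDJSON export is one complete FHIR resource, so a
# reader can process arbitrarily large exports record by record
# instead of parsing one giant JSON document in memory.
ndjson_export = io.StringIO(
    '{"resourceType": "Patient", "id": "p1"}\n'
    '{"resourceType": "Patient", "id": "p2"}\n'
)

def stream_resources(fp):
    """Yield one parsed resource per non-blank line."""
    for line in fp:
        line = line.strip()
        if line:
            yield json.loads(line)

ids = [r["id"] for r in stream_resources(ndjson_export)]
print(ids)  # ['p1', 'p2']
```

The same generator works unchanged against a file handle or an HTTP response streamed line by line, which is the point of the format.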
The proposal extends FHIR and lays out design goals, initial scope, architecture, flow, and security. The proposed timeline calls for publishing version 1.0 of a FHIR Implementation Guide for population-level data in 2019.
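The flow in the draft follows an asynchronous kick-off/poll/download pattern: the client requests an export, the server replies with a status URL, the client polls until the export is ready, then downloads the finished files. A sketch of assembling the kick-off request is below; the `$export` path, `_type` parameter, and `Prefer: respond-async` header follow the public draft but should be treated as assumptions, and the base URL and group name are invented.

```python
from urllib.parse import urlencode

def kickoff_request(base_url, group_id=None, resource_types=None):
    """Build the pieces of a bulk-export kick-off request.

    Draft flow: the client makes this async call, the server answers
    202 Accepted with a status URL, the client polls that URL, and on
    completion downloads the listed NDJSON files.
    """
    # Export a defined cohort via /Group/{id}/$export, or all
    # patients via /Patient/$export.
    path = f"/Group/{group_id}/$export" if group_id else "/Patient/$export"
    params = {}
    if resource_types:
        # Limit the export to specific resource types.
        params["_type"] = ",".join(resource_types)
    url = base_url + path + ("?" + urlencode(params) if params else "")
    headers = {
        "Accept": "application/fhir+json",
        "Prefer": "respond-async",  # request the async (202 + poll) flow
    }
    return "GET", url, headers

method, url, headers = kickoff_request(
    "https://ehr.example.com/fhir",
    group_id="diabetics",
    resource_types=["Patient", "Observation"],
)
print(url)
```

Keeping the kick-off cheap and deferring the heavy lifting to a background job is what lets a single standard call cover exports that may take hours to assemble.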
Gottlieb said that open policy and technical questions need to be addressed, and it is important to form a community ecosystem to develop, implement, and refine the population-level data API.
“As these components become more plug and play, it will be nice to have the community start building an open source ecosystem that institutions can use when working with the data,” Gottlieb said.
One goal is to create automated communication between back-end services and EHRs/clinical systems. The idea is that once a connection is set up, systems should be able to communicate with each other without a user having to log in.
In the meeting’s wrap-up, none of the attendees opposed the general technical direction that was articulated for a FHIR-based population-level data API. Several participants discussed the need to address specific details and others expressed support for moving forward to pilot and get experience.
The most interest among EHR vendors, payers and providers was in instances where population-level data are already being aggregated, transferred and extracted, but doing so is difficult, inefficient and costly. Some EHR vendors expressed reluctance to spend time and effort where they have already created interfaces, but were open to a population-level data API for new situations where no interface exists.
Possible use cases mentioned were:
• Exporting population-level data to automatically compute quality measures.
• Gathering data required under CMS alternative payment models in a more automated way, making the process faster and much less expensive.
• Aligning various CMS data requirements with private payer data requirements.
• Using an API to allow information needed under CMS Innovation Center care models for specific diseases to be loaded into an app.
• Using population-level data to identify the most appropriate patients to enroll in care management programs.
• Combining data of different types, from different sources, such as claims data and laboratory data. This provides a greater ability to mine data for clinical purposes, such as providing early notice of pre-diabetic patients.
• Amassing a significant enough volume of population-level data to provide the ability to effectively use machine-learning technologies to analyze data. For machine learning to be valuable, enormous amounts of data are required.