Fine-Tuning an EHR ‘Flight Simulator’ | David Raths | Healthcare Blogs

Fine-Tuning an EHR ‘Flight Simulator’

May 7, 2015
Last year, more than 1,000 U.S. hospitals used the tool to self-assess operational EHR systems for safety performance

Most people would likely assume that if a clinician enters a medication order with a wrong dosage, or one that would cause fetal harm in a pregnant woman, the electronic health record would issue a warning or refuse to allow the order to be placed. But as patient safety expert David Classen, M.D., pointed out in a recent ONC-hosted webinar, there is enormous variation in the capability of operational EHRs to pick up on medication errors that could cause fatalities.

Classen, an associate professor of medicine at the University of Utah and chief medical information officer of Pascal Metrics, has served on Institute of Medicine patient safety committees and helped develop the National Healthcare Quality Report and Patient Safety Data Standards. 

As an advisor to the Leapfrog Group and the National Quality Forum, Classen was involved in the development and implementation of a CPOE/EHR Flight Simulator that has been used to evaluate hundreds of inpatient and ambulatory EHR systems after implementation.

The EHR Flight Simulator simulates the operational safety performance of EHRs in actual use, he explained. “This tool was developed 10 years ago and focused on targeting the harm, and actual adverse events, and encouraging quality improvement.”

When the Flight Simulator was first run with 62 hospitals, on average they picked up only 53 percent of medication orders that would cause a fatality, he said. Overall performance varied from 10 percent to 82 percent of problematic cases being detected by the EHR system.

“If you looked at the data broken out by EHR vendors represented in the test, you saw more variability within vendors than between vendor groups,” he said. “That strongly suggested to us that performance in terms of safety has more to do with local implementation than the vendor software itself. And that has been borne out by other research.”

The researchers are looking for things the EHR could do to prevent actual harm to patients, he added. “We were fortunate enough to have multiple databases where people had linked harm to patients all the way back to the order in the EHR. Based on that, we were able to build a test that actually evaluates the operational EHRs’ ability to prevent safety problems.”

Last year more than 1,000 hospitals in the United States took the Web-enabled test, he said. Here is how it works: Simulated cases are entered into an operational EHR. Each of these simulated cases has some serious problem, such as an order that is clearly an overdose, and researchers look to see if the EHR picks it up. “One would assume that of course it does, but in actual operations we find a high degree of variance,” Classen said.

Once they have loaded the scenarios into their systems and reported on how the EHR responded, hospitals get feedback on how they did across categories of evaluation such as therapeutic duplication, drug-drug interactions, dose limits based on a patient’s diagnosis or on age and weight, and dosages based on laboratory studies such as renal function.
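The scoring approach described above — simulated unsafe orders loaded into the EHR, then detection rates reported by category — can be sketched in a few lines. This is a hypothetical illustration of that kind of per-category scoring, not the actual Leapfrog/Pascal Metrics test; the category names and sample data are assumptions for demonstration only.

```python
# Hypothetical sketch of scoring an EHR "flight simulator" run.
# Each simulated case records its safety category and whether the
# operational EHR actually flagged the unsafe order. (Illustrative data.)
from collections import defaultdict

simulated_results = [
    ("therapeutic_duplication", True),
    ("therapeutic_duplication", False),
    ("drug_drug_interaction", True),
    ("dose_limit_age_weight", False),
    ("dose_limit_diagnosis", True),
    ("renal_dose_adjustment", False),
]

def score_by_category(results):
    """Return per-category detection rates (fraction of unsafe orders flagged)."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for category, was_flagged in results:
        total[category] += 1
        flagged[category] += int(was_flagged)
    return {c: flagged[c] / total[c] for c in total}

scores = score_by_category(simulated_results)
overall = sum(f for _, f in simulated_results) / len(simulated_results)
```

A report built this way makes the article’s point concrete: an EHR can score well overall while leaving whole categories, such as renal dose adjustment, undetected.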

Although this data has yet to be published, Classen said that between 2009 and 2013, hospital performance on the simulator clearly improved. “They have learned and improved, which was one of the key things we were hoping they would use this test for,” he said, “and they have.” Hospitals have done well at picking up allergies and cross allergies and drug-drug interactions, he noted, but certain other categories have not seen improvement, such as dosage adjustment for renal function.

“That is interesting, given all the focus on meaningful use,” he said.  “That may explain why we have been unable to make major improvements in safety as we have installed all these systems, because they may not be performing as we hoped they would. To get the impact from health IT, we need to understand that it is just one part of a larger socio-technical environment. To be successful we not only have to have the best technology, we have to have a socio-technological approach. Just hoping that technology alone was going to improve safety — that was probably not going to come to pass. There are many other factors. What this test is documenting is that we need to continually monitor and improve these systems, and it must go on in operational systems. It can’t just go on in vendor systems on the shelf.”

Classen called improvement a shared responsibility, much like in the aviation industry where safe operation is dependent on both Boeing producing a safe plane and United flying it safely. “We think that is the approach that should go on in healthcare as well,” he said.

He mentioned that he and David Bates, M.D., have received funding from AHRQ to expand, update and broaden this test to other categories including usability.

The update will make the test compatible with the latest versions of the leading EHR vendor products and the latest hospital formularies. In addition, a new database will be created to host and administer the test. The investigators will then work with four study hospitals using four different leading vendor applications to refine the test further. They will also track how many hospitals nationally are using the system, how they perform on the test, and the extent to which they improve over time.

Classen also noted that an ambulatory version of the test has been developed, but it has not been released yet.
