
A Tragic Air Crash Helps Define HCIT Safety Needs (Part 1)

July 20, 2012

Healthcare Safety Lessons from the Inter-Tropical Convergence Zone

I have long been an advocate of relating aviation safety to our needs in healthcare IT.  In recent years I've been gratified to see that a number of my colleagues and knowledgeable healthcare IT advocates have adopted this approach as well.  We do this to clarify the challenges of improving patient safety, quality of care, and cost control, and to ground the evolution of our technology in real-world needs.

To this end, I am about to present the first installment of a 3-part series using the lessons learned from the tragic crash of Air France Flight 447 (AF447) as they relate primarily to Clinical Decision Support.  I invite your comments along this journey so that together we can evolve this blog.  That said, let’s begin.

There have been literally dozens of articles and thought leader blogs written about safety lessons from aviation for healthcare.  One that I found particularly interesting is an article (here) by Laura Landro on the Health Blog of The Wall Street Journal that considers aviation an “inspiration for improving patient safety.”  

Further, on the Healthcare Informatics website, David Raths posted a blog entitled “Does Healthcare Need an NTSB?”  In it, David points out that “IOM has recommended creating a coherent structure for reporting health IT-related errors,” and raises the question, “... I wonder if there is an equivalency in terms of what pilots and mechanics learn from airplane mishaps and what clinical teams would learn from medical error investigations.”  His blog is a thought-provoking read. 

Earlier this month, the final crash report on AF447 was released.  The implications for HCIT safety, usability and hazard governance are profound.  The crash occurred on June 1, 2009.  All 228 people onboard were killed, and it took three years to unravel the mysterious components of the story.  

Several elements of this tragedy are particularly salient to Clinical Decision Support capabilities and to the human systems paired with them, both of which we see as necessary to improving health, healthcare, and cost control.  These factors are:

1. Sensor failure precipitating a lethal cascade
2. Sudden autopilot withdrawal
3. Team competence dynamics
4. Black box incident reconstruction
5. Real-time management
6. Physics and physiology: the Coffin Corner
7. Safety regulation
8. Privacy and individual rights

In essence, the sequence of events that led to the crash of AF447 began with a failure of the air speed sensor system.  Of note is that this simple system—keep that in mind, simple system—uses a series of what are called Pitot static tubes mounted externally on the aircraft’s fuselage to determine air speed.  The plane’s forward motion forces air into the tubes and the pressure it creates is computed to determine air speed. 
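For readers who want to see the arithmetic, the principle is simple enough to sketch in a few lines of code.  Everything below is illustrative only (the names, the standard sea-level density constant, the simplified incompressible-flow formula); it is not how an air data computer is actually implemented, but it shows how a blocked tube translates directly into a collapsing airspeed value.

```python
import math

SEA_LEVEL_AIR_DENSITY = 1.225  # kg/m^3, standard-atmosphere value (assumed)

def indicated_airspeed(total_pressure_pa: float, static_pressure_pa: float) -> float:
    """Airspeed (m/s) from pitot (total) and static pressure, simplified form."""
    dynamic_pressure = total_pressure_pa - static_pressure_pa
    if dynamic_pressure <= 0:
        # A fully blocked (iced) pitot tube senses no ram-air pressure,
        # so the computed airspeed collapses even though the plane is at cruise.
        return 0.0
    return math.sqrt(2.0 * dynamic_pressure / SEA_LEVEL_AIR_DENSITY)

print(indicated_airspeed(111_325, 101_325))  # ~127.8 m/s with normal ram-air pressure
print(indicated_airspeed(101_325, 101_325))  # 0.0 once the tube stops sensing ram air
```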

This simple process triggered the disaster because all three of the Airbus A330's Pitot tubes froze.  As a result, the sensor system fed the plane's autopilot unreliable, sharply dropping airspeed readings.  Unable to trust its inputs, the autopilot informed the cockpit crew it could no longer perform its function and disengaged automatically.
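To make the cascade concrete, here is a hypothetical sketch (emphatically not the A330's actual logic, and with made-up names and thresholds) of why automation that cannot trust its redundant sensors ends up handing control back, abruptly, to the humans.

```python
from statistics import median
from typing import Optional

PLAUSIBLE_CRUISE_FLOOR_MS = 60.0   # assumed: anything slower at cruise is suspect
MAX_SENSOR_SPREAD_MS = 15.0        # assumed: allowed disagreement between sensors

def usable_airspeed(readings_ms: list[float]) -> Optional[float]:
    """Return an airspeed the autopilot can act on, or None if the data cannot be trusted."""
    plausible = [r for r in readings_ms if r > PLAUSIBLE_CRUISE_FLOOR_MS]
    if len(plausible) < 2:
        return None  # most or all sensors implausible, e.g. every pitot tube iced over
    if max(plausible) - min(plausible) > MAX_SENSOR_SPREAD_MS:
        return None  # redundant sensors disagree with each other
    return median(plausible)

if usable_airspeed([0.0, 0.0, 0.0]) is None:  # all three tubes frozen
    print("AUTOPILOT DISENGAGED: unreliable airspeed, manual control required")
```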

All of this occurred at high altitude and high speed, where the aircraft's stall speed (the speed at which it quits flying and begins falling through the sky) is very close to the plane's actual speed.  Further complicating this deadly situation was an inaccurate report by a ground station about the severe storm AF447 had entered, the storm that likely caused the tubes to freeze in the first place.

A stall at high speed and altitude is very serious.  In the aviation industry, this regime is known as the "Coffin Corner."  Managing such an emergency requires a few more degrees of sophistication than the challenges pilots are normally subjected to in typical flight simulators.  Succinctly, the following list, compiled from the official report, provides insight into the deadly chain of events:

     PF = Pilot Flying; PNF = Pilot Not Flying 

The immediate factors leading to the stall that befell AF447 were so severe that the pilots were unable to recover, resulting in a high-impact crash into the ocean below.  In fact, many flight crews faced with the same situation would likely not have been able to recover either.  That conclusion is at the crux of this issue.  However, the broader issues, what took place in the months before and after the event, are perhaps even more pertinent to the grand strategic vision for employing HCIT in the pursuit of better health, healthcare and cost control.

In Part 2 of this blog, we will begin to explore in detail the eight factors of this crash that relate to healthcare IT issues and challenges.  If you would like to read more about AF447, I recommend a CNN article that captures the most salient points.

Your thoughts are welcome so together we can evolve this blog topic.

Joseph I. Bormel

CMO and Vice President

QuadraMed Corporation





Although you note that you'll be dealing with the eight points mentioned here in subsequent posts, I must admit I'm a little puzzled by the first two.

The first focuses on a failed sensor system on the aircraft. That's a fact the board of inquiry discovered. Fine. But are you then concluding that at the present time CDS within EMRs is failing, too? I think that's a stretch considering the systems are still evolving and none is close to maturity.

Connecting the second point to CDS is also a bit thin.  I know that the concept of CDS is not new, but in reality it is in its infancy in terms of real-world development.  Existing software may not have the capability to come up with the support a doc might need or should have, but that's a question of intuition.  I think we're quite a long way from that kind of sophistication.  Am I missing something?

Obviously, I'm looking forward to the next installment.  As a sidebar, I've attended a few of your past presentations using aviation analogies, going back about six or so years to a fascinating talk you gave on High Reliability Organizations.  It's good to see other HCIT professionals using aviation to underscore what it is we need to do to reach our goals.


Jack, Thanks for your comment.

Regarding the sensor failure example in healthcare, think about how we administer medications that lower heart rate.  There are often several checks to ensure that the patient's heart rate is not so low that we would want to withhold the next dose.  Now imagine that the heart rate information is missing or wrong, so we give the drug and the patient dies.  Missing the accurate data we were counting on can create a hazard.  That's analogous to the sensor failure.

Similarly, if I were giving the medication while assuming that the EHR would stop me if the heart rate were too low, but for some reason it didn't, my reliance on that decision support would have similarly contributed to the harm.

Both risks could be mitigated in theory.  In busy practice, the process needs to be designed with a preoccupation with failure (i.e., identifying critical data that is needed but missing and assuring its collection before medication administration), with continual testing and simulation of the system, and with containment if harm reaches the patient (in the form of an antidote or increased monitoring).
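As a minimal sketch of that design stance, consider a hypothetical pre-dose check.  The thresholds, names, and freshness window below are assumptions for illustration, not taken from any particular EHR; the point is simply that missing or stale data blocks the action rather than silently allowing it.

```python
from datetime import datetime, timedelta
from typing import Optional, Tuple

HR_HOLD_THRESHOLD_BPM = 50              # assumed hold parameter for a rate-lowering drug
MAX_READING_AGE = timedelta(hours=4)    # assumed freshness requirement for the reading

def ok_to_give_dose(last_hr_bpm: Optional[int],
                    last_hr_time: Optional[datetime],
                    now: Optional[datetime] = None) -> Tuple[bool, str]:
    """Fail safe: any missing, stale, or out-of-range heart rate blocks the dose."""
    now = now or datetime.now()
    if last_hr_bpm is None or last_hr_time is None:
        return False, "No heart rate on record: obtain a reading before dosing"
    if now - last_hr_time > MAX_READING_AGE:
        return False, "Heart rate reading is stale: re-measure before dosing"
    if last_hr_bpm < HR_HOLD_THRESHOLD_BPM:
        return False, f"Heart rate {last_hr_bpm} bpm is below the hold threshold"
    return True, "Checks passed"

print(ok_to_give_dose(None, None))  # blocks the dose instead of assuming all is well
```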

So, just as in aviation, making the cockpit smarter for the expected cases does not necessarily make it smarter in general.  If we inadvertently encourage providers to be less attentive to these details, we will have CDS-associated unintended consequences.