
A Tragic Air Crash Helps Define HCIT Safety Needs (Part 2)

August 1, 2012

Healthcare Safety Lessons from the Inter-Tropical Convergence Zone

In Part 1 of this series, we reviewed the crash of Air France Flight 447.  From the final report on the tragedy, I identified eight factors contributing to the loss of everyone aboard that I believe relate directly to Clinical Decision Support in healthcare IT.  To refresh your memory, here is the list:

   1.  Sensor failure precipitating a lethal cascade

   2.  Sudden autopilot withdrawal

   3.  Team competence dynamics

   4.  Black box incident reconstruction

   5.  Real-time management

   6.  Physics and physiology, the Coffin Corner

   7.  Safety regulation

   8.  Privacy and individual rights

In this installment we will examine the first four factors in depth and how they relate to HCIT Clinical Decision Support (CDS). 

1.  Sensors: Adequate input data relies on “sensors,” in this case the pitot tubes, which measure air speed.  It was known that these pitot probes, even with their heaters, were susceptible to icing at the freezing temperatures encountered at cruise altitude.  Unfortunately, the replacement probes were scheduled for installation at a date that was too late to prevent the crash of AF447. 

The application of this concept to healthcare IT is broad and complex.  For one, it includes information that is needed but never arrives, perhaps because the integration of information is incomplete.  This is a common situation today.  It also includes information that arrives too late, which is likewise common, occurring when that information is entered in another system that communicates in batch mode only after some final event, such as a signature or validation.  And it includes information that arrives on time but is wrong, as was the case with AF447. 

Reasoning over information that you know contains errors, without knowing where those errors are located, takes a great deal of added sophistication.  There are ways to do this using a “belief” function; you can learn more by reading about the Dempster-Shafer theory used in some of today’s HCIT CDS systems.
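To make the “belief function” idea concrete, here is a minimal, illustrative sketch of Dempster’s rule of combination, the core operation in Dempster-Shafer theory.  It is not drawn from any particular CDS product; the two “monitors,” their mass assignments, and the VALID/FAULTY frame are invented for the example.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two independent mass functions.

    m1, m2: dicts mapping frozenset hypotheses -> mass (each sums to 1).
    Returns the combined mass function, renormalized to discard conflict.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass assigned to contradictory evidence
    if conflict >= 1.0:
        raise ValueError("Sources are completely contradictory")
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

# Two hypothetical "monitors" reporting on whether a reading is VALID or FAULTY.
# Mass placed on the full frame {VALID, FAULTY} represents "don't know".
frame = frozenset({"VALID", "FAULTY"})
monitor_a = {frozenset({"VALID"}): 0.6, frame: 0.4}
monitor_b = {frozenset({"FAULTY"}): 0.5, frame: 0.5}

print(combine(monitor_a, monitor_b))
```

The useful property for CDS is that each source can assign part of its weight to “don’t know” (the full frame), so missing or suspect data degrades the conclusion gracefully instead of forcing a false choice.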

The bottom line here is simple.  As we rely more on HCIT, the risk of making mistakes due to missing, late, and wrong information will increase.  Our practices will need to improve to mitigate the risks.

2.  Autopilot Withdrawal: The use of an automatic system (autopilot), which ceased functioning when starved of adequate input data.  

The kinds of “laws” or flight control modes used for autopilots in aviation have no real parallel in healthcare IT today.  This is the case because, with a few notable exceptions, we haven’t automated routine decision making in a comparable way.  However, when I chatted with noted healthcare safety expert David Classen last month, he explained a concept called “Clumsy Automation” that he and others have written about for more than a decade. 

One characteristic of this kind of automation is that it can make easy situations easier to manage, and harder situations even more difficult to manage when there are problems.  There is further discussion, along with relevant recommendations, in Dr. Classen and Dr. Peter Kilbridge’s article in the Journal of the American Medical Informatics Association. For clarification, consider this working definition from the Harvard Journal of Law and Technology:

“The term ‘clumsy automation’ was coined by E.L. Wiener to denote the role awkward systems often play in provoking human errors in such technologically complicated areas as commercial aviation.  Awkward interfaces occasion error by increasing rather than diminishing the cognitive workload of human operators at times when they are preoccupied with other tasks demanding attention.  Operational failures often stem from interfaces that are not compatible with the finite cognitive capacity and competence of a technological system's human overseer.”

The bottom line here is simple, too.  As we rely more on HCIT to synthesize and make recommendations, there will be situations where “doing the right thing” falls through the cracks.  Our practices will need to build in enough time and human expertise to reason over the decisions we make.

3.  Teams: The critical role of teams in solving problems, and the fact that the most junior of the pilots was blamed for “not acting swiftly enough.”  In the final crash report, considerable attention was focused on the issues caused by the “PF,” or pilot flying, and on communication with the broader team. 

The issues of teamwork during medication ordering, dispensing and administration, or teamwork in the operating room are well known.  The AF447 crash reminds us that the most junior members of our professional team may be in the driver’s seat when things go wrong.  Therefore, HCIT design, implementation and simulation need to explicitly consider what is “swift enough” for the common scenarios in healthcare, especially when the safety index is narrow.

Therefore, the bottom line here is that, as we rely more on HCIT, staffing needs to be situationally appropriate where possible.  Our practices will need to focus more on matching provider skill levels with patient acuity, and ensuring timely communication of vital information to the most appropriate members of the care team.

4.  Black Box: The role of the black box and comparable logs in general to sort out what happened.  

Can we, in healthcare, expect to use log files to adequately deconstruct sentinel events after they have occurred?  Should we routinely capture (and create transcripts of) the comments of our care teams? 
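As a thought experiment, and not a description of any vendor’s actual logging facility, the kind of record that would make such reconstruction possible might look like the hypothetical structured audit entry sketched below.  The file name, field names, and example values are all invented; the point is that the log captures what the clinician saw and did, with timestamps, much as a flight data recorder captures control inputs and alarms.

```python
import json
from datetime import datetime, timezone

AUDIT_FILE = "cds_audit.jsonl"   # hypothetical append-only "black box" log

def record_event(user_id, patient_id, action, details):
    """Append one timestamped audit event in JSON Lines format.

    Capturing what the clinician saw (alerts shown, values displayed,
    overrides chosen) is what makes later reconstruction possible.
    """
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "patient": patient_id,
        "action": action,          # e.g. "alert_shown", "alert_overridden"
        "details": details,        # the data context at that moment
    }
    with open(AUDIT_FILE, "a") as f:
        f.write(json.dumps(event) + "\n")

record_event(
    user_id="rn_4821",
    patient_id="mrn_0042",
    action="alert_overridden",
    details={"alert": "potassium 6.8 mmol/L", "reason": "hemolyzed specimen suspected"},
)
```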


Comments

I agree that there are many parallels between HC IT patient safety and the airline industry's situation and efforts. There are lessons to be learned there, and some wheels HC need not reinvent. Some of the air industry solutions could inform HC solutions; but some of the apparent parallels are really not parallel: as you point out, an Airbus autopilot system is not the same as an EHR.
Looking forward to Part 3.


Joe,
This blog is coming together well. After reading Part 1, I did write a comment questioning your second factor concerning the autopilot. However, I think you’ve done well explaining it in detail in this post.

If I have this straight, in short, you’re saying that HCIT CDS has not yet evolved to the point where there can be a true comparison to the autopilot, but that’s really not a problem at this time. What we can learn, as our systems receive enhancements, is how to avoid the fallibilities of autopilots and thereby avoid tragic consequences in the future. Am I on track so far?

Further, I find that the last sentence in that section is a superb lead-in to the third factor, about teams. In fact, as I read on, it struck me how much these two factors appear to be interdependent. Is this presumption correct?

If I’m at all moving in the right direction, then I question the comment by Anonymous. It seems to me that first, we all know there are massive differences between an Airbus and HCIT. You pointed out in your description of factor two that the autopilot was not, at least at this time, a direct fit to HCIT CDS, so why restate the obvious?

Anonymous also wrote that some of the apparent parallels are not parallels at all.

Again, according to this post, the autopilot example would apply if HCIT CDS were more mature, but that’s not the important part. Your point is that we need to learn from the autopilot so we don’t become a direct parallel and make similarly deadly mistakes.

Since you’re the only one I know of who is pointing to “parallels,” where else does Anonymous think you’ve misstated them in your blog? Obviously, I found this comment, at best, superfluous.

Keep up the good work and post Part 3 soon. Thanks,

Jack

Jack and Anon,
Thanks for your comments.
Regarding the parallels between aviation and HCIT, the question itself is a bit too broad.

Aviation is a broad domain encompassing air traffic control, cockpit systems, and various ground-to-air-to-ground automated systems.

Similarly, HCIT includes support and interaction for direct care providers (a subset of a flight cockpit), as well as those systems that are one or more steps removed from that, including traditional departmental systems like Laboratory, plus scheduling, acuity, HIM, and revenue cycle. Most if not all of these systems have support for workflows that, at a minimum, assure reflexive behaviors and automate the population of queues based on rules and analytics.

The patient-facing systems most closely related to flight cockpits in the Air France story are those that reason over physiologic data, including indwelling catheter-based blood pressure sensors, ventilators, and electrical waveforms like telemetry/EKGs. Providers in ICUs are used to data errors from these sources. Leads come off, bubbles occur in lines, and other factors routinely cause what might be termed "data integrity" problems. It's uncommon for providers to be confused by them, in part because these problems are so common.

But, as the story highlights, the world is changing. More data is routinely being captured and displayed. More remote monitoring is being used. And more complex care is being delivered. And, as illustrated by the communication between the pilot flying (PF) and the pilot not flying (PNF), the risks of team fragmentation can be just as lethal in healthcare delivery as in aviation.

The bottom line is that the parallels are more than strong enough to warrant learning from aviation, at all of the levels elaborated in this post.

The reason the pitot tubes on the Air France Airbus were not replaced sooner was the high cost ($90,000 each; each aircraft has three pitot tubes). The icing issues associated with these sensors were known for many months prior to the crash, and economics dictated the decision to go slowly with their replacement across the fleet of Air France planes. I'm sure there are many parallels in the healthcare system.

Knowing the importance of accurately measuring air speed, the aircraft had three pitot tubes installed for redundancy. In this case all three failed simultaneously, hence the inability to measure air speed directly. I am not aware of a similar redundant configuration in the instrumentation used to measure physiologic parameters in patients. Therefore clinicians must recognize the potential for false data and make their clinical decisions with that in mind.

Unlike the assertion in Part 1 of this series - that the failures that occurred on the flight would most likely have had a similarly catastrophic outcome with other flight crews - I have read articles indicating this is not necessarily true. There are many other instruments in the cockpit that could have provided valuable data to ascertain air speed and the overall attitude of the aircraft. Pilots call this process "cross checking": using other instruments to verify the data you are receiving (or not receiving) from any one particular instrument. The fact that this crew did not perform any cross checking is indicative of training shortcomings (the lack of cross checking was confirmed by the voice recordings from the cockpit). In healthcare, I would hope there would be a similar ability to cross check data against other instrument outputs and physical observation of the patient to confirm the conclusions being drawn and the clinical decisions that follow from them.
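To sketch the cross-checking idea in code: below is a purely illustrative comparison of two independent sources of the same physiologic parameter. The tolerance, the function and variable names, and the example values are invented for the sketch, not clinically validated.

```python
def cross_check(primary, secondary, tolerance_pct=15.0):
    """Compare a primary reading against an independent secondary source.

    Returns (value, flag): the value to display and a flag indicating whether
    the two sources disagree by more than the allowed tolerance, in which case
    the clinician should verify at the bedside rather than trust either number.
    """
    if primary is None and secondary is None:
        return None, "NO_DATA"
    if primary is None or secondary is None:
        return primary if primary is not None else secondary, "SINGLE_SOURCE"
    disagreement = abs(primary - secondary) / max(abs(primary), abs(secondary)) * 100
    if disagreement > tolerance_pct:
        return None, "DISCREPANT"   # do not silently pick one; prompt verification
    return (primary + secondary) / 2.0, "OK"

# Arterial-line MAP vs. oscillometric cuff MAP (hypothetical values, mmHg)
print(cross_check(52, 88))   # -> (None, 'DISCREPANT'): check the line, check the cuff
print(cross_check(74, 78))   # -> (76.0, 'OK')
```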

Another indication of training shortcomings is that the pilot flying the plane was pulling back on the flight control stick, trying to pull the nose of the plane upwards, when he should have been doing the opposite, pushing the nose down to gain air speed. He continued to provide the wrong inputs despite receiving 75 loud audible warnings from cockpit alarms indicating the plane was stalling.

An area you haven't touched on is what I like to call the man-machine user interface. Decades of research have been spent on optimizing the layout and design of the instruments in the cockpits of planes. Colors of indicators, fonts, image contrast, movement, location and position, etc. have been evaluated in the hope of giving the machine the optimal ability to indicate to its human operators the condition of the aircraft and its flight performance. Now consider the design and layout of the various screens of information that are presented to clinicians by our healthcare software applications. I would argue that many of the systems in use today have poorly designed user interfaces that make it easy to miss data that is presented, or difficult to find the data you are looking for.

There is room for significant improvement in the user experience and the man-machine interface clinicians must use to interact with the healthcare IT systems that have been installed. Perhaps healthcare application software vendors could consider designing a highly engineered and optimized dashboard of key indicators that would be common across all platforms, so that clinicians could rely on some standard presentation of key data regardless of the particular EMR they are interacting with.
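Purely as an illustration of what such a common "key indicators" presentation might mean in practice (the indicator set, thresholds, and field names below are invented, not taken from any standard or product), a declarative spec plus a simple renderer could look like this:

```python
# Hypothetical cross-vendor "key indicators" banner spec; values are illustrative only.
KEY_INDICATORS = [
    {"id": "map",  "label": "MAP (mmHg)",       "low": 65, "high": 110},
    {"id": "spo2", "label": "SpO2 (%)",         "low": 92, "high": 100},
    {"id": "hr",   "label": "Heart rate (bpm)", "low": 50, "high": 120},
]

def render_banner(observations):
    """Render the same banner layout regardless of the underlying EMR.

    `observations` maps indicator id -> latest value (or None if missing).
    Out-of-range and missing values are flagged explicitly rather than
    blending into the page, so they are harder to overlook.
    """
    lines = []
    for ind in KEY_INDICATORS:
        value = observations.get(ind["id"])
        if value is None:
            status = "MISSING"
        elif value < ind["low"] or value > ind["high"]:
            status = "OUT OF RANGE"
        else:
            status = "ok"
        lines.append(f'{ind["label"]:<18} {value!s:>8}  [{status}]')
    return "\n".join(lines)

print(render_banner({"map": 52, "spo2": 97, "hr": None}))
```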

Much more could be said here, but then I'd need a blog of my own.

Ted Vaczy

Ted,
I just returned from vacation (I didn't fly to the beach!) and was thrilled to read your reply. Thank you. It was a treat to see all of our common connections in LinkedIn! The world is getting more "Social" by the day!

A couple of observations to share regarding your comments:

- Part Three of the series is out:
http://www.healthcare-informatics.com/blogs/joe-bormel/tragic-air-crash-helps-define-hcit-safety-needs-part-3
I'd love your take on it, especially the "NTSB for healthcare" dimension.

- Training and the human/user interface were clearly central problems, as you shared. The fact that the pilot and co-pilot joysticks were not ganged (so the movement of one was not reflected in the other pilot's stick) is a known design difference between the major classes of aircraft involved. It did contribute to the team not knowing what other key members were doing.

- Thanks for the pricing information on the pitot heaters; I didn't have that data. My research on the three independent probe heat computers, or PHCs (final report, page 35), found that no malfunction was identified. From the 2011 New York Times article by Wil Hylton, page 6: "Five days later, when Flight 447 took off in Rio, the probes were still in an Air France warehouse, and none of them had been installed. All three pitots on Flight 447 were the Thales AA." [The older technology, associated with two close calls on Air Caraibes.] So Air France apparently had already paid for, or at least received, the upgraded pitot tubes; it just had not installed them.

- Lastly, the inference that other crews, potentially better trained, could have made the same error was based on Captain Sullenberger's statement to that effect. He was focusing on cockpit design issues (the link is in my original post).

Since writing this post, I have been reading Nobel laureate (Economics) Daniel Kahneman's book, Thinking, Fast and Slow. It's clear, based on very solid cognitive research, that highly trained and experienced pilots can become blind, even to stall warnings. This is related to how System One and System Two function to solve problems.

As Gladwell elaborates through police training examples in Blink, the answer to the cognitive realities of human "fast" thinking is to institute processes that deliberately activate System Two, creating the slower time it needs to overcome System One's shortcomings and ensuring that the "cross checking" you so appropriately reminded us of actually occurs. In healthcare, team rounds are one mechanism to overcome individual blindness. Kahneman's book even provides evidence for the case that EHRs could deliberately switch to harder-to-read fonts and displays, since these cause users to slow down and process information more carefully.

Thanks again for your terrific comments.
