Healthcare Safety Lessons from the Inter-Tropical Convergence Zone
In Part 1 of this series, we reviewed the crash of Air France Flight 447. From the final report on the tragedy, I identified eight factors that contributed to the loss of everyone aboard, and I believe each relates directly to Clinical Decision Support in healthcare IT. To refresh your memory, here is the list:
1. Sensor failure precipitating a lethal cascade
2. Sudden autopilot withdrawal
3. Team competence dynamics
4. Black box incident reconstruction
5. Real time management
6. Physics and physiology, the Coffin Corner
7. Safety regulation
8. Privacy and individual rights
In this installment we will examine the first four factors in depth and discuss how they relate to HCIT Clinical Decision Support (CDS).
1. Sensors: Adequate input data relies on “sensors,” in this case Pitot-static tubes, which measure airspeed. It was known that the Pitot probes required heaters to ensure operation in freezing temperatures. Unfortunately, the heater upgrade was scheduled for a date too late to prevent the crash of AF447.
The application of this concept to healthcare IT is broad and complex. For one, it includes information that is needed but never arrives, perhaps because the integration of information is incomplete. This is a common situation today. Let’s also consider information that arrives too late. This is also common, occurring when such information is entered in another system that communicates in a batch mode after some final event occurs, such as a signature or validation. It also includes information that may arrive on time but is wrong, as was the case with AF447.
Reasoning over information that you know contains errors requires considerable added sophistication, because you don’t know where those errors are located. There are ways to do this using a “belief” function; you can learn more by reading about Dempster-Shafer theory, which is used in some of today’s HCIT CDS systems.
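To make the idea concrete, here is a minimal sketch of Dempster’s rule of combination, the core of Dempster-Shafer theory: each “sensor” assigns mass to sets of hypotheses (including an ignorance set), and the rule fuses them while discounting conflicting evidence. The monitor names and the toy mass values below are purely hypothetical, not drawn from any real CDS system.

```python
# Minimal sketch of Dempster's rule of combination.
# A mass function is a dict mapping frozenset-of-hypotheses -> mass (sums to 1).
from itertools import product

def combine(m1, m2):
    """Fuse two mass functions via Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("sources are completely conflicting")
    # Renormalize by the non-conflicting mass
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

def belief(m, hypothesis):
    """Belief = total mass committed to subsets of the hypothesis."""
    return sum(mass for s, mass in m.items() if s <= hypothesis)

# Two hypothetical monitors give partially conflicting evidence
FLU, SEPSIS = frozenset({"flu"}), frozenset({"sepsis"})
EITHER = FLU | SEPSIS  # the "don't know" set

monitor_a = {FLU: 0.6, EITHER: 0.4}     # leans toward flu, some ignorance
monitor_b = {SEPSIS: 0.3, EITHER: 0.7}  # weak evidence for sepsis

fused = combine(monitor_a, monitor_b)
print(round(belief(fused, FLU), 3))  # → 0.512
```

Note how the explicit “don’t know” mass lets each source admit uncertainty instead of forcing a point estimate, which is exactly what makes this framework attractive when inputs may be missing or wrong.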
The bottom line here is simple. As we rely more on HCIT, the risk of making mistakes due to missing, late, and wrong information will increase. Our practices will need to improve to mitigate the risks.
2. Autopilot Withdrawal: The use of an automatic system (autopilot), which ceased functioning when starved of adequate input data.
The kinds of “laws” or flight control modes used for autopilots in aviation have no real parallel in healthcare IT today. This is the case because, with a few notable exceptions, we haven’t automated routine decision making in a comparable way. However, in chatting with noted healthcare safety expert David Classen last month, he explained a concept called “Clumsy Automation” he and others have written about for more than a decade.
One characteristic of this kind of automation is that it can make easy situations easier to manage, and hard situations even more difficult to manage when problems arise. There are more discussions and relevant recommendations in David and Dr. Peter Kilbridge’s article from the Journal of the American Medical Informatics Association. For clarification, consider this working definition from the Harvard Journal of Law and Technology:
“The term ‘clumsy automation’ was coined by E.L. Wiener to denote the role awkward systems often play in provoking human errors in such technologically complicated areas as commercial aviation. Awkward interfaces occasion error by increasing rather than diminishing the cognitive workload of human operators at times when they are preoccupied with other tasks demanding attention. Operational failures often stem from interfaces that are not compatible with the finite cognitive capacity and competence of a technological system's human overseer.”
The bottom line here is simple, too. As we rely more on HCIT to synthesize and make recommendations, there will be situations where “doing the right thing” falls through the cracks. Our practices will need to build in enough time and human expertise to reason over the decisions we make.
3. Teams: The critical role of teams in solving problems, and the fact that the most junior of the pilots was blamed for “not acting swiftly enough.” In the final crash report, considerable attention was focused on the issues caused by the “PF,” or pilot flying, and on communications with the broader team.
The issues of teamwork during medication ordering, dispensing and administration, or teamwork in the operating room are well known. The AF447 crash reminds us that the most junior members of our professional team may be in the driver’s seat when things go wrong. Therefore, HCIT design, implementation and simulation need to explicitly consider what is “swift enough” for the common scenarios in healthcare, especially when the safety index is narrow.
The bottom line here is that, as we rely more on HCIT, staffing needs to be situationally appropriate where possible. Our practices will need to focus more on matching provider skill levels with patient acuity, and on ensuring timely communication of vital information to the most appropriate members of the care team.
4. Black Box: The role of the black box and comparable logs in general to sort out what happened.
Can we, in healthcare, expect to use log files to adequately deconstruct sentinel events after they have occurred? Should we routinely capture (and create transcripts of) the comments of our care teams?
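If we do want log files that can survive the scrutiny a sentinel-event review demands, one design worth considering is a hash-chained, append-only audit log, the software analogue of a flight recorder, where altering any past entry invalidates everything after it. The sketch below is a hypothetical illustration; the actor names, event fields, and clinical details are all invented for the example.

```python
# Sketch of a tamper-evident, hash-chained audit log ("black box" style).
import hashlib
import json
import time

class AuditLog:
    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value for the chain

    def record(self, actor, action, detail, ts=None):
        """Append an event; its hash covers the previous entry's hash."""
        entry = {
            "ts": ts if ts is not None else time.time(),
            "actor": actor,
            "action": action,
            "detail": detail,
            "prev": self._prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self):
        """Recompute the chain; editing any entry breaks every later link."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("rn_smith", "med_administered", {"drug": "heparin", "dose_units": 5000})
log.record("cds_engine", "alert_fired", {"rule": "dose_range_check"})
print(log.verify())  # True on an untampered log
log.entries[0]["detail"]["dose_units"] = 50000  # simulate an after-the-fact edit
print(log.verify())  # False: the chain no longer verifies
```

A structure like this does not answer the harder questions above about transcripts and privacy, but it does show that the engineering side of a clinical “black box” is well within reach.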