
Are there HCIT lessons in Volcanic Ash Clouds?

April 28, 2010
by Joe Bormel, M.D.
| Reprints

What have airlines learned about dealing with volcanic ash, in light of last week's shutdown of much of the European airspace? (If you're a listener rather than a reader, see the podcast link under Source links below.) Are there lessons for HCIT? Yes.
 

Governments were slow to understand the world-wide impact of the shutdown and based decisions to close airspace on theoretical models with little data collected or few tests done, complained Giovanni Bisignani, director general and chief executive of the International Air Transport Association (IATA), a Geneva-based airline-industry group. (source link)

I'm not trying to be exhaustive, but here are some highlights. I've provided some source links for those of you who might be interested, if only to read the titles.

Highlights:

  • The decision to shut down all of the airspace was an over-reaction, probably driven by over-reliance on weak computer models and a failure to consult with airlines and government agencies (including government agencies in the US).
  • The problem of volcanic ash and jet air travel is far from new. Alaska Air and KLM both have more than two decades of relevant experience. Bottom line: collect the right data early and continually, validate your models, and you can safely fly around volcanic ash.
  • The congested airspace in Europe, hundreds of airports and 28,000 flights per day make it all the more imperative to build, maintain and validate airspace models in real time, and to use them.
  • "Your data is bad and my patients are sicker." This volcanic ash was different (involving glaciers), but crisis response was contrary to best practices.

The Lessons For HCIT:
 

  1. Train the decision makers ahead of time; this is critical to avoid ignorant responses. A preoccupation with what can happen and how systems fail is critical to HCIT system design. Not a new lesson for HCI readers, but timely validation. Training requires planning, simulators, testing and often external expertise. Are you testing your data backups by restoring them on another system? (See the sketch just after this list.)
  2. Computer modeling is becoming essential in our complex world. Accurate and early input data is crucial. Using the data is crucial. Getting experience with using the models is critical. Those of us who use our car's GPS, even when we don't need it, are practicing this lesson. You cannot know the false-positive rates if you only use the technology during a crisis.
  3. Learn from others; this takes patience, time and discipline, but it's essential. Alaska Air (think of the Mount St. Helens eruption in 1980) deals with volcanic ash on a regular basis and has for years. Apparently, it's not just Americans being too parochial to learn from the Europeans!
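To make point 1 concrete, here is a minimal sketch of an automated restore test. It assumes a SQLite-style backup file purely for illustration; the file names, table names and row-count floors are hypothetical placeholders, and a production EHR backup would involve different tooling. The point stands regardless: a backup is only proven good once it has been restored somewhere else and checked.

```python
# Minimal sketch: prove a nightly backup is restorable by opening it on a
# separate path and running basic integrity and row-count checks.
# File names, table names and expected counts are hypothetical placeholders.
import shutil
import sqlite3

BACKUP_FILE = "/backups/ehr_nightly.db"        # hypothetical backup artifact
SCRATCH_COPY = "/tmp/restore_test.db"          # restore target, never the live path
EXPECTED_MIN_ROWS = {"patients": 10_000, "orders": 250_000}  # assumed floor values

def verify_restore() -> bool:
    # "Restore" by copying the backup to a scratch location.
    shutil.copyfile(BACKUP_FILE, SCRATCH_COPY)
    conn = sqlite3.connect(SCRATCH_COPY)
    try:
        # 1. Structural integrity of the restored file.
        status = conn.execute("PRAGMA integrity_check").fetchone()[0]
        if status != "ok":
            return False
        # 2. Sanity-check that key tables are populated, not silently empty.
        for table, minimum in EXPECTED_MIN_ROWS.items():
            count = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
            if count < minimum:
                return False
        return True
    finally:
        conn.close()

if __name__ == "__main__":
    print("restore test passed" if verify_restore() else "restore test FAILED")
```

A test like this only earns its keep if it runs on a schedule and someone is paged when it fails, which is the training and discipline point above.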

Losing all four engines and living to tell about it: In 1989, for example, a KLM Royal Dutch Airlines 747 encountered an ash cloud while descending through 25,000 feet toward Anchorage, Alaska. The pilots added power to try to climb out of the cloud, but that only made the engine damage worse. The wide-body jet lost all four engines, and about half its instruments failed. The pilots restarted the engines just one minute before impact and landed safely.

The crisis response is rarely pretty, whether to a natural act like a volcano or to a social or technology breakdown in HCIT. Having a well-designed, mature set of rules, policies and information-reporting systems in place is essential to avoiding panic-based over-responses. That's what I think volcanic ash clouds teach us, or at least remind us.

Source links:
 

How One Airline Skirts the Ash Clouds - Scott McCartney's The Middle Seat column in the WSJ, April 21, 2010

Volcanic Lessons: Build Better Models, Collect Data Faster - Scott McCartney's blog

podcast: http://podcast.mktw.net/wsj/audio/20100421/pod-wsjmidseat/pod-wsjmidseat.mp3

The ash cloud that never was: How volcanic plume over UK was only a twentieth of safe-flying limit and blunders led to ban

By David Rose, Matt Sandy and Simon Mcgee

Comments

Thanks for your comments, IA.

A dimension of "Trade-offs" is asymmetrical consequences. No matter what the probabilities are, some policy makers were undoubtedly thinking that any risk of an avoidable Airbus or Boeing crash was too much for them to accept (as an individual or an agency).

There's another dimension and difference between practicing medicine and practicing engineering. Last year, my dad, in his late eighties, received several carotid ultrasound studies for a history of a TIA eight years prior. He's been asymptomatic, and the trade-offs are multi-fold. Medicolegally, if he does have a stroke and his physician didn't explore his carotids proactively, the physician could be held liable. Economically, some are advantaged by performing a carotid endarterectomy no matter how low the probability of stroke or how small the probability of benefit.

It's also the difference between safety in aviation (error = passenger and pilot death) versus medicine (error = patient death, not {physician | nurse | IT director | executive administrator} death).

In short, I think you're exactly right. These decisions are a question of trade-offs that can be, and often are, largely independent of probabilities. This, of course, makes it even more imperative for organizations (institutions, to use your word) to have processes in place that make those constraints explicit and rational. Otherwise, irrational conflict avoidance will continue, shrouded by a misuse of probability theory.

Thanks again for your insight and application of Volcanic Ash lessons for HCIT.

Thanks IA and Jack for your fundamental observations.

One of our core assumptions is that, if we make integrated, high-quality (codified) data available at the low cost of zero clicks (usability) and sub-second screen flips (speed), organized densely and contextually enough to constitute relevant information, then we will, as a natural and unavoidable consequence, have better decision making.

The Icelandic volcanic ash cloud response provides clear evidence that, even when data is collected and wrapped in coherent decision support, boundary conditions (in the sense used by complexity science) must be pre-scripted if that data is to be used effectively in systems involving human behavior. Data, decision support and probabilistic decision-making policies that are inadequately constrained are not enough.

Jack, I think your point about coincident death (all passengers on one jet at one place and time) is an important distinction from the patient-safety, avoidable-death scenario. As we discussed above, it certainly does have a very different media signature, as you elegantly pointed out.

I think, however, it's premature to praise or condemn the decision to shut down the airspace completely. If the fact pattern is concordant with that of the many Alaska Air cases, where the cloud can be clearly identified and skirted, then the reaction was out of proportion to the risk, even factoring in the 250-passenger risk multiplier you offered as a theoretical compounder, and even factoring in the 28,000-flights-per-day multiplier.
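To make the compounding arithmetic explicit, here is a back-of-the-envelope sketch. The per-flight probability is an invented placeholder; the whole argument turns on that one number, which is exactly what better data and validated models are supposed to pin down.

```python
# Back-of-the-envelope compounding of a per-flight risk across European traffic.
# The per-flight probability is hypothetical, for illustration only.
flights_per_day = 28_000          # congested European airspace (from the post)
passengers_per_flight = 250       # Jack's risk multiplier
p_catastrophe_per_flight = 1e-6   # assumed per-flight probability if flying through ash

expected_catastrophes_per_day = flights_per_day * p_catastrophe_per_flight
expected_lives_at_risk_per_day = expected_catastrophes_per_day * passengers_per_flight

print(f"Expected catastrophic events/day: {expected_catastrophes_per_day:.3f}")
print(f"Expected lives at risk/day:       {expected_lives_at_risk_per_day:.1f}")
# Even a one-in-a-million per-flight risk compounds to ~0.028 events/day,
# i.e. roughly one expected hull loss every five weeks if the whole fleet kept flying;
# if the cloud can be identified and skirted, the effective probability drops far lower.
```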

Telegraph View: The decision was based on a computer model operated by the Meteorological Office's Volcanic Ash Advisory Centre, which suggested there was a cloud of ash covering northern Europe. This prompted a warning from the Met Office, which triggered the wider European ban, via Eurocontrol, the Brussels-based air traffic control centre. However, the model is no more than that - a mathematical model. There was no empirical evidence to back up its findings. Yesterday, the European Commission suggested that the American method of dealing with such episodes, whereby airlines decide whether to fly based on facts and supported by risk assessment, might offer a better approach.


It's not for us as interested bystanders to pass judgment on the decision to shut down; I do think it's valuable to examine the decision-making process and use it to be more judicious about managing with HCIT.

Thanks Kate. There were two thoughts in my mind throughout writing this. Disaster Recovery (DR) and Evidence-based Management (the broader kin to Evidence-based Medicine). This recent example illustrates that human systems and economic models don't favor that other old adage "Be Prepared."

Many CIOs cannot seem to afford to maintain critical redundancies in their technology platforms, or services like independent validation of their DR strategy and media. This is not an indictment; in too many of the cases I'm aware of, there's a broad knowledge and respect gap among the four pillars of HCIT:

  • the business executives,
  • the technology and systems executives including the CIOs and IT directors,
  • the clinicians, and
  • the project management professionals.


Thanks for validating the messages, including the importance of ongoing validation and the related use of monitoring systems.

Stacie,

Thanks for your comment. I, too, had not considered the centralized versus distributed nature of the decision making involving life and death risks.

You bring out the concept of shared decision making between the patient and provider. The IA comment above, about decisions having both a probability and a trade-off dimension, could have been made by an international expert in healthcare decision making. Such experts like to point out that most patients have zero experience with the metrics and techniques needed to make those decisions, metrics like Quality-Adjusted Life Years (QALYs). Two decades ago, when prostate cancer surgery was less refined, it often resulted in impotence and incontinence, often with no change in longevity. When presented that way, the rate of surgery would probably have been much lower.
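For readers unfamiliar with the metric, here is a deliberately oversimplified QALY comparison. The utility weights and survival figures are invented for illustration and are not clinical data.

```python
# Toy QALY comparison for a hypothetical treatment decision.
# QALYs = years of life x utility weight for the health state in those years.
# All weights and durations below are invented for illustration only.

def qalys(years: float, utility: float) -> float:
    return years * utility

# Option A: surgery whose side effects (incontinence, impotence) lower quality of life
surgery = qalys(years=10, utility=0.75)

# Option B: watchful waiting, slightly shorter survival but full quality of life
watchful_waiting = qalys(years=9.5, utility=0.95)

print(f"Surgery:          {surgery:.1f} QALYs")           # 7.5
print(f"Watchful waiting: {watchful_waiting:.1f} QALYs")   # ~9.0
# With these (hypothetical) numbers, the aggressive option is the worse trade-off,
# which is exactly the kind of result shared decision making is meant to surface.
```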

Although it's two decades later, comparable failures of ethical shared decision making in healthcare are among the drivers for healthcare reform.

This issue of centralized versus distributed decision making raises another question: is there a Master Builder problem here?

Is a master builder, as described in Atul Gawande's latest book, The Checklist Manifesto, "... a supreme, all-knowing expert with command of all existing knowledge ...", entitled to behave as an autonomous decision maker?

If not, then to manage the decisions you need a "Submittal Schedule": a checklist of communication tasks that is married to the project task list. Apparently, as described on page 65, PMOs in the construction industry have evolved to recognize that the complexity of their industry means individual, centralized decision making does not work when multiple specialists are required and unpredicted variation is a common occurrence.
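One loose way to picture what a Submittal Schedule adds, sketched in code with invented task names and roles (this is my illustration, not Gawande's):

```python
# Loose sketch of a "Submittal Schedule": each project task carries a checklist of
# required communications among specialists before the task can be closed.
# Task names and roles are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    required_signoffs: list = field(default_factory=list)   # who must be consulted
    completed_signoffs: set = field(default_factory=set)

    def ready_to_close(self) -> bool:
        # No single "master builder" decides alone; the task closes only after
        # every required specialist has weighed in.
        return set(self.required_signoffs) <= self.completed_signoffs

go_live = Task("CPOE go-live cutover",
               required_signoffs=["pharmacy", "nursing", "CMIO", "infrastructure"])
go_live.completed_signoffs.update({"pharmacy", "nursing"})
print(go_live.ready_to_close())  # False: CMIO and infrastructure not yet consulted
```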

The argument is that the Master Builder model, common in healthcare delivery, should be replaced. Even moving from the physician as Master Builder to physician-and-patient as joint masters is discordant with the decision-making work and the location of the necessary expertise.

You can see, as a PMP, why you might find this evolution in construction project management of interest to you.

Joe,
What a great post! Very thought-provoking. I found your example of what happened in 1989 interesting because you stated that Alaska Air and KLM had so much experience with ash. Don't know if you did this on purpose, or if it was merely coincidence, but the example certainly triggered my thinking — human reaction/evaluation versus available technology.

But for the first time, I think that the comments to your post from "Insightful Anonymous" are more than somewhat distressing. The statement that the actions at the time of the problem were "sloppy" is, at the least, uninformed based upon the available information, and in general insulting to those interpreting the technology in real time. What the Europeans did, in light of what was logically considered a crisis, was absolutely on target.

Inconvenience and economic impact do not in any way trump the potential for death in a situation in which those who may be threatened have no control. I'm quite proud of the fact that the Europeans chose preserving life over profitable expediency.

When individuals in the healthcare industry in the United States unintentionally kill people, through what the old term called "medical misadventures," they do so one by one, on a very quiet basis, more often than not unnoticed by the media, even when the number may total 100,000 or more annually. But when an airliner crashes and takes, say, 250 or more human beings to their untimely deaths at a moment in time, this is news; this calls for action. Heads will roll.

All that considered, the fact is commercial air travel is far safer than a required visit for simple surgery at your local hospital. What a pity. It's a matter of priority . . . it's a matter of valuing life . . . and personal responsibility, regardless of technology.

Perhaps I'm being far too philosophical or naive with this comment. But it seems to me that we're losing the sense of person, the sense of value of life as we press hard into a technology-based healthcare industry.

I fully support what the Europeans did in light of this situation. In fact, I had a close friend who was stranded for many days in London due to the "crisis." I can assure you, he was severely impacted by the cost of being grounded. However, he told me upon coming home that whatever the cost, it was better than taking a chance on dying if the engines of his plane flamed out. At a significantly lower level, I feel the same way about the inconvenience of going through airport security.

The models for volcano ash take into consideration the potential for disaster. It's up to human beings to determine if the risk of flying into them is worth the possible "unintended" consequences. How often, as we all in healthcare salivate over federal EMR/EHR funding, do we take into consideration how well the tech really works, and the consequences of our actions without thoughtful human intervention?

Jack

This is an interesting parallel that I hadn’t previously considered.  In contemplating the difference in risk/benefit analysis between airline travel during a disaster and healthcare delivery:

  • The monumental difference in healthcare delivery is that we generally have good procedure risk/benefit statistics which are transferable across locations, and when considering a procedure for a person, the risks/benefits can be assessed INDIVIDUALLY for decision-making. This shifts the risk/benefit analysis challenge to the patient and provider. Does the provider believe s/he knows best and prefer to make the decision for the patient without fully explaining all the risks/benefits? If the provider wants to, is able to spend the time, and is capable of relating the explanation, does the patient want to and have the capacity to understand the explanation and actively participate in the decision process? There is no 'right' answer here; it is more about alignment of the patient and provider.


  • With the compounding effect of 250+ people per flight and 28,000 flights per day, decisions about flying cannot be made individually.  Thus responsibility shifts to the airlines and the governing bodies who have a duty to put the general safety of the passengers and crews first.  Since they did not have their own data and had not tested the Alaska modeling for valid and reliable transferability, they were unable to determine how to keep the passengers and crew safe in flight.  It seems there was no alternative but to ground the flights in this situation.

It seems indisputable that to ensure adequate safety for flights and healthcare delivery during a disaster, there must be a plan, and it must be established, communicated, and tested to prove it valid and reliable, until those who would have to execute the plan can do so successfully while under the duress of a disaster.
 
Stacie DePeau, MBA, PMP

Decisions involve both probabilities and trade-offs. The HCIT comments focus on probabilities, but trade-off thresholds also need to be elicited in advance. What is the maximum number of flights/people grounded to "pay" for a 1/1,000 added risk due to volcanic ash?

Similarly for alerts in hospitals --- each hospital should define its maximum false alarm rate per adverse event averted (a clinical "willingness to pay").

Similarly for record matching algorithms --- what's the maximum error rate the institution is willing to tolerate? These are questions of preference and value, not just of probability.

I am told that to IT people, this language is foreign. But IT folks, and all engineers, know that all design involves designing against constraints. We should help institutions specify those constraints.
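A minimal sketch of what specifying those constraints might look like in practice; every threshold and observed value below is a hypothetical placeholder that an institution would have to set and measure for itself.

```python
# Hypothetical, institution-defined constraints made explicit, plus simple checks
# against observed performance. All thresholds and observations are invented.

CONSTRAINTS = {
    # Max false alarms the institution will tolerate per adverse event averted.
    "alert_false_alarms_per_event_averted": 50,
    # Max tolerated error rate for patient record matching.
    "record_match_error_rate": 0.001,
}

observed = {
    "alert_false_alarms_per_event_averted": 180,   # e.g., from an alert-log review
    "record_match_error_rate": 0.0004,             # e.g., from a matching audit
}

for name, limit in CONSTRAINTS.items():
    status = "within limit" if observed[name] <= limit else "EXCEEDS limit"
    print(f"{name}: observed {observed[name]} vs. limit {limit} -> {status}")
# Making the thresholds explicit turns "your data is bad" arguments into a
# discussion about preferences and values, which is where it belongs.
```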

The Importance of Assessment / Evaluation

The parallel to the use of theory versus assessment in HCIT is most clear with CPOE.  See Health Affairs, 29(4), pp.655-663 (2010-04-01).



I've covered this issue in more depth here, in "Big Decisions - Certification and Evaluation? Do meaningful use, qualified provider, and certification hit the mark?"

Joe,
Thank you for tackling this critical issue. I definitely believe there are takeaways from the volcanic ash incident and its after-effects that can be applied to HCIT.
The most significant point was "collect the right data early and continually, validate your models." If there is data that can help prevent disasters, or facilitate the cleanup/recovery period and help avoid losses, then we are foolish not to take advantage.
The other point that really stuck with me was: "You cannot know the false-positive rates if you only use the technology during a crisis." I can't tell you how many hospital executives I've spoken with on the topic of disaster recovery who have admitted that testing isn't as high as it should be on their priority list.
I really hope that those in high positions can apply the lessons learned by European airports to their own organizations. What's that old adage again about those who don't learn from the past?
Great job, Joe!

While I believe the reaction in Europe was sloppy, poorly targeted, and rumor-driven instead of data-driven, and I even believe that similar errors afflict HIT, I don't have a lot to say about the connection.

People behave that way in all areas of human activity.



Joe Bormel, M.D.
Healthcare IT Consultant
@jbormel