When I started this series (link to Part One), I began with the question of whether ARRA/HITECH, and specifically implementing certified solutions and evolving through Stage 1 of Meaningful Use, would improve care. As shown in the graphic, this breaks down into three sequential questions: 1) does HIT operationally deliver the improvement (see the "A Promise of HIT" text above); 2) does improving the informational quality of the content delivered play a small or large role; and 3) when doctors and other providers, "working at the top of their license," receive data, information, decision and knowledge support, does that translate into better care?
Spoiler alert! If you haven't read the earlier parts of this series, I'm about to ruin the suspense for you. Easily a dozen different investigators, working independently, have documented in the peer-reviewed literature that the rate of cognitive errors resulting in harm may be as high as 15 percent. My earlier posts outline the types of errors and what to do about them, with valuable, authoritative links. To a person, these researchers are frustrated. Since the percentage of perfect care delivered is rarely better than 50 to 80 percent (see the oft-quoted work of Elizabeth McGlynn et al. (NEJM 2003: The Quality of Health Care Delivered to Adults in the United States), or hospitalcompare.hhs.gov for the more recent numbers behind these estimates for hospital care), the reasons for this, and the role of HCIT, may need to be rethought. Or at least reprioritized.
Attention to cognitive errors, and the potential for improvement with HCIT, always lands in second, third, or last place on the performance improvement agenda, as well as on the research funding agenda. Addressing cognitive errors is not on anyone's critical path to attesting to or delivering sustainable Meaningful Use. This is elaborated in Parts Two and Three of this series, with references and recommended solutions to the problem of cognitive errors from the leading experts.
In this post, we are going to look briefly at how people (including doctors) make decisions and what that means in regard to making errors. Then, in Part Five, the final post in this series, we are going to pull all that together in terms of the "Four Individual Determinants."
Based on my reading and discussions with several leaders in the clinical cognition field, there are three ways we make decisions. The first two are both necessary and require different kinds of support, be that human coaching, process redesign, or HCIT-enabled cognitive enhancement:
1. Conscious, rational, and often procedural approaches to problem solving.
2. Semiconscious, automatic thinking that may involve pattern recognition, gut feel, or intuition, where the brain is clearly involved but has little or no ability to explain how it reached its conclusion.
3. Instinctual or reflexive thinking of the sort that animals can do that doesn’t involve higher, more evolved parts of the brain. We're not going to discuss this category any further here.
Often, the kinds of “deep smarts” (link to Dorothy Leonard’s work) displayed by experts in their fields draw heavily on this semiconscious mode. These kinds of smarts are very difficult to document, convert to computer algorithms, or transfer/teach to students, subordinates, or peers. Many of the experts we escalate tough cases to, who seem to know what's going on within moments of walking into a room, display deep smarts. One critical question that follows is: how early or late in a diagnostic process should we refer a patient for a consultation with a specialist?
The other challenge with semiconscious conclusions is that they aren’t always correct; being the product of a semiconscious process is no guarantee of accuracy. In his best-selling and highly recommended book, “Blink,” Malcolm Gladwell explores this kind of thinking, using terms like “thin slicing.” He reviews the work of researchers applying the implications of this brain functioning in especially fascinating ways to marital counseling and law enforcement. The implications for industries outside of healthcare have led to strict adherence to procedures, sometimes in the form of checklists, intended to reduce the danger that police, the military, pilots, nuclear power plant operators, and others will wrongly go with their gut when making decisions under irreducible time pressure. Aside from a few well-known examples in HCIT, like drug-allergy checking, there are very few routine disciplines practiced and ritualized by IT to address these safety concerns in healthcare.
One example, from the article "Patient Care, Square-Rigger Sailing, and Safety" by Steven J. Henkind, MD, PhD, and J. Christopher Sinnett, MA, MBA (JAMA, 2008), is extremely telling. The authors compare how change-of-shift hand-offs are accomplished in the Coast Guard with how they are handled in the hospital. In the USCG, shifts are staggered. A hand-off document is produced before the change of shift, reviewed by the incoming officer, and then presented by the incoming officer back to the outgoing officer for validation, thereby ensuring that accurate communication has occurred.
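To make that USCG pattern concrete, here is a minimal sketch of how software could ritualize the review-then-read-back sequence. All class and field names are hypothetical, invented for illustration; they are not drawn from the article or from any real EHR:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Handoff:
    """Illustrative shift hand-off with a mandatory read-back step,
    loosely modeled on the staggered USCG watch turnover."""
    patient_summaries: List[str]
    prepared_by: str          # outgoing clinician who writes the document
    reviewed_by: str = ""     # incoming clinician, set after reading
    validated: bool = False   # outgoing clinician confirms the read-back

    def review(self, incoming: str) -> None:
        # Step 1: the incoming clinician reads the hand-off document
        # before the shift change (shifts are staggered, so there is time).
        self.reviewed_by = incoming

    def read_back(self, confirmed_by_outgoing: bool) -> None:
        # Step 2: the incoming clinician presents their understanding back
        # to the outgoing clinician, who confirms or corrects it before
        # responsibility transfers. Review must happen first.
        if not self.reviewed_by:
            raise RuntimeError("read-back attempted before review")
        self.validated = confirmed_by_outgoing

h = Handoff(["Bed 4: chest pain, troponin pending"], prepared_by="Dr. A")
h.review("Dr. B")
h.read_back(confirmed_by_outgoing=True)
assert h.validated
```

The point of the sketch is the enforced ordering: the system will not mark a hand-off valid until the document has been read and the read-back confirmed, which is exactly the discipline most hospital shift changes lack.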
How many hospitals and EDs today use such a practice? Few. How many go one step further and do acuity-based staffing? Even fewer. The avoidable-death statistics are provided in the Henkind JAMA article. Given the nature of its staff (a high percentage of trainees) and its unforgiving work, I would peg the Coast Guard at around Seven Sigma in safety. The numbers are there. What's your calculation?
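For readers who want to take up that challenge, a sigma level can be computed directly from a defect rate. Here is a short sketch using only Python's standard library; the 1.5-sigma shift is the conventional Six Sigma long-term adjustment, and the example rates are illustrative, not figures from the article:

```python
from statistics import NormalDist

def sigma_level(defects: float, opportunities: float, shift: float = 1.5) -> float:
    """Convert a defect rate into a Six Sigma 'sigma level'.

    Computes the normal quantile of the yield (fraction defect-free)
    and adds the conventional 1.5-sigma long-term shift.
    """
    yield_fraction = 1.0 - defects / opportunities
    return NormalDist().inv_cdf(yield_fraction) + shift

# Six Sigma's canonical 3.4 defects per million opportunities:
print(round(sigma_level(3.4, 1_000_000), 1))   # → 6.0

# "Perfect care" delivered about 55% of the time (roughly the
# McGlynn et al. range) works out to under Two Sigma:
print(round(sigma_level(0.45, 1.0), 1))        # → 1.6
```

If the gap between roughly Two Sigma for routine care and the Six-to-Seven Sigma range achieved by aviation-style cultures seems implausibly large, run the numbers yourself; that is the exercise the Henkind article invites.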
The closing point of this fourth post on blind spots is this: people cannot reliably keep more than a few elements in their heads at the same time. We also consistently overestimate our ability to remember things without writing them down. For most of us, our ability to use or estimate probabilities is, to put it generously, underdeveloped. We need to take shortcuts, use semiconscious thought processes, and rely on a bit of guesswork to get through our days in a reasonably efficient manner. But studies of, and work in, other industries have shown that we can and must structure our work and use information technology more than we do today if we are truly going to improve care.
The ARRA-certified solutions, Meaningful Use targets, and even the usability improvements that are barely on the drawing board today address, at most, a third of the work that needs to be done in parallel to function at the level of the other industries referenced. And that’s not because healthcare is more complex, although it might be more complex than, say, financial services.
It’s because people, across industries, make decisions using emotional drivers and deep, semiconscious, non-analytic processes, as well as by applying procedures (read: checklists) when they’re available. Focused attention to these cognitive issues is essential. A singular focus on EHRs, semantic interoperability, document exchange of clinical summaries, and transaction-based decision support, while necessary, is far from sufficient to improve care. Layering clinical knowledge management on top of an EHR only compounds the challenge. We will pick up that point in Part Five!
CMO & VP, QuadraMed
This Post: http://tinyurl.com/BlindSpots4 Prior: BlindSpots part one, two, and three
"The more you sweat in training, the less you bleed in war."
Commonly heard in the Marine Corps during physical training — a variation of a statement by Gen. George Patton, “The more you sweat in peace, the less you bleed in war,” made after World War II when he was quoting a Chinese proverb.