
IT and HIEs are important, but can they stop a shooter?

January 14, 2011
by anonymous
The role of IT and health information exchanges (HIE) as preventative measures

I was rather appalled this week to receive an email inviting me to discuss, with an IT company executive, the role of IT and health information exchanges (HIE) as preventative measures against events such as last week's shootings in Tucson, Arizona.

IT and HIEs offer great promise to healthcare, but I am sorry: I do not think that either IT or an HIE can do much, if anything, to predict or stop heinous acts such as the shootings of innocent people.

For a start, how would IT and an HIE identify a person most likely to commit such an act? How would it differentiate between a person who merely behaves erratically and one who plans to kill others? It would have to depend upon past behavior, and although some disturbed people leave a solid trail of clues, many do not. So much for algorithms.

But let's say that an HIE could pinpoint, with a high degree of probability, someone very likely to hurt or kill another person: who would have access to that information? Privacy laws strictly safeguard personal health information, particularly mental health information. Without the express permission of the patient, not even other caregivers have access to that information. People with seriously delusional views of the world are unlikely to agree to have their health information made available to other care providers, let alone law enforcement personnel.

I sincerely wish there were a way to identify and intercept a person planning to harm others. What happened last week in Tucson was absolutely evil. Unfortunately, there is no surefire way. Not even healthcare IT.


Comments

Two new words to add to my vocabulary, Joe. And they accurately describe a reality in a realm where we cannot know all, ever.

Charlene,
I am thrilled that you wrote this post.

It is in the same spirit as my Thanksgiving post this year, which brought in a story about turkeys from Nassim Nicholas Taleb's best seller, The Black Swan. In short, whether prospectively (i.e., identifying a shooter before they shoot) or retrospectively (e.g., finding the shooter after they have shot, assuming they weren't identified), it's a stretch to say that this can be done through HIEs, other instances of data mining, or exploratory data analysis in general.

Aside from the fairly obvious problems of high expected false positives and false negatives, there is the problem you cited. The conclusive data is unlikely to be in there.
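To put rough numbers on the false-positive problem: the back-of-the-envelope Python sketch below uses invented prevalence, sensitivity, and specificity figures purely for illustration, and shows how screening for an extremely rare event buries the handful of true positives under false alarms.

# Back-of-the-envelope, with invented numbers: even a screening model with
# 99% sensitivity and 99% specificity collapses when the event is very rare.
prevalence = 1 / 1_000_000    # assumed rate of would-be shooters in the population
sensitivity = 0.99            # assumed P(flagged | truly at risk)
specificity = 0.99            # assumed P(not flagged | not at risk)

population = 10_000_000
true_pos = population * prevalence * sensitivity
false_pos = population * (1 - prevalence) * (1 - specificity)

ppv = true_pos / (true_pos + false_pos)   # Bayes' rule: P(truly at risk | flagged)
print(f"Flagged: {true_pos + false_pos:,.0f}; truly at risk among them: {true_pos:.1f}")
print(f"Positive predictive value: {ppv:.4%}")

With these assumed figures, roughly 100,000 people get flagged to find about ten, a positive predictive value of about one in ten thousand.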

Why is the conclusive data unlikely to be in there? Well, very few people talk about it. But it was explained to me in different ways by two brilliant individuals.

1) Intentionality (as defined and discussed here: http://en.wikipedia.org/wiki/Intentionality ). This was first brought to my attention over a decade ago by Dr. David McCallie. David is one of the few true academic and industrial informaticists. Simply said, you cannot tell, by looking at an EMR, audit logs, or other database sources, what a doctor's intention was when they ordered anything. When they ordered, for example, that levofloxacin, were they thinking "community-acquired pneumonia," or simply empiric coverage until cultures came back and symptoms were reassessed in 24 hours? In most cases, even today, you cannot reliably know the intention of the person entering the information (in this case, an order). Yes, with well-structured CPOE and problem lists (both of which have only been required with ARRA/HITECH's MU Core definitions), and ideally presupposing problem-based workflow, we will be able to get closer to intention.
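As a minimal illustration of the point, here is a hypothetical, highly simplified order record in Python. The field names are invented and do not reflect any real EMR schema; the intent behind the order simply is not in the stored data.

from dataclasses import dataclass
from typing import Optional

# Hypothetical, highly simplified CPOE order record; field names are invented.
@dataclass
class MedicationOrder:
    patient_id: str
    drug: str
    dose: str
    ordering_provider: str
    indication: Optional[str] = None   # the "intent" field, frequently left blank

order = MedicationOrder(
    patient_id="12345",
    drug="levofloxacin",
    dose="750 mg IV daily",
    ordering_provider="dr_x",
)

# Was this community-acquired pneumonia, or empiric coverage pending cultures?
# The stored record alone cannot say.
if order.indication is None:
    print("Intention was never captured; it cannot be recovered from the order.")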

If we cannot assess "intention" in these well-structured settings with the process data available, will we have the relevant patient data for someone who becomes a shooter?

2) Ice sculptures. This idea is elaborated in Taleb's book. If you look at the puddle after an ice sculpture has melted, you cannot distinguish whether that puddle was the result of a sculpture, the original block of ice, or a pile of ice cubes. Said more simply, the information is lost at the puddle stage. There is often not enough data in an HIE, or a puddle, to reconstruct history, be it clinical or psychiatric. To provide a clinical example: at the time of autopsy for a missed aortic dissection, we cannot go back and know whether the patient did or did not have femoral pulses unless those findings were explicitly recorded at the time.
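A toy sketch of the same idea, with made-up quantities: three different histories of the same ice all map to an identical puddle, so the mapping cannot be inverted.

# Three different "histories" of the same 10 kg of ice: a sculpture, the
# original block, and a pile of cubes. Melting keeps only the mass of water,
# so the map from history to puddle is many-to-one and cannot be inverted.
histories = {
    "swan sculpture": {"mass_kg": 10.0, "shape": "swan",  "pieces": 1},
    "original block": {"mass_kg": 10.0, "shape": "block", "pieces": 1},
    "ice cubes":      {"mass_kg": 10.0, "shape": "cube",  "pieces": 400},
}

def melt(ice):
    """Melting discards shape and piece count; only mass carries through."""
    return ice["mass_kg"]

puddles = {name: melt(ice) for name, ice in histories.items()}
print(puddles)                            # every history yields the same 10.0 kg puddle
assert len(set(puddles.values())) == 1    # indistinguishable at the puddle stage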

I have attended conferences where the presenters are extremely sober about these issues. One example is http://omop.fnih.org/ - the Observational Medical Outcomes Partnership (OMOP), a public-private partnership designed to help improve the monitoring of drugs for safety. The key is systematic and standardized collection, as well as powerful analysis tools and methods. In one such example, the researchers Paul Stang, Bram Hartzema, and Patrick Ryan demonstrate the SAS programs OSCAR and the NATHAN extension. They work in part because the data sources are far more robust and systematic than is typical of today's HIEs. They also work because both the dependent and independent data of interest are what is being explicitly collected. There is no attempt, at least by my read, to infer intention.
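For flavor only, and emphatically not the OSCAR or NATHAN programs themselves, here is a generic Python sketch of the kind of explicit drug-versus-outcome analysis such drug-safety work depends on: a 2x2 contingency table with an odds ratio and confidence interval, using invented counts.

import math

# NOT the OSCAR/NATHAN SAS programs mentioned above -- just a generic 2x2
# drug-versus-outcome analysis with invented counts, to show the kind of
# explicitly collected data such drug-safety work depends on.
exposed_with_event,   exposed_without_event   = 40, 9_960
unexposed_with_event, unexposed_without_event = 100, 89_900

odds_ratio = (exposed_with_event * unexposed_without_event) / (
    exposed_without_event * unexposed_with_event
)
se_log_or = math.sqrt(
    1 / exposed_with_event + 1 / exposed_without_event
    + 1 / unexposed_with_event + 1 / unexposed_without_event
)
ci_low  = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"Odds ratio {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
# Both exposure and outcome are explicitly recorded; no intent is inferred.

The point of the sketch is the data requirement, not the statistics: both the exposure and the outcome are collected explicitly and systematically, which is exactly what is missing when people hope to infer intent from an HIE.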

I have attended others, typically with presenters from other industries who have a lot of passion for analytics but do not share that sobriety. Sorry, no examples, for obvious reasons. For them, I wrote: Swan 1, Turkey 0! The commenters did a better job of speaking to the overwhelming emotional passion that drives the wild enthusiasm for unfounded methods and conclusions.

anonymous