
What Hospitals Don't Know Can Hurt You

July 1, 2008

Ninety-three percent of CIOs surveyed by HIMSS feel that medication safety is an issue technology can address (the second most common answer was cited by only fifty-four percent of respondents). Perhaps as a result, sales of technological solutions for increasing medication safety in hospitals (e.g., CPOE, bar coding, smart pumps) are booming.


However, most hospitals are missing a fundamental element of any improvement program - an accurate picture of the number of adverse drug events (ADEs) they currently experience. There are no authoritative statistics, but it's likely that more than 90% of US hospitals use self-reporting as the basis of their ADE incidence or medication error rates. There are many reasons why self-reported ADE and error rates are understated, including time pressures, a lack of provider awareness that an error or event occurred, and fear of personal consequences. What's certain is that self-reporting doesn't produce good numbers: research shows that self-reported ADE rates are typically 10 to 100 times lower than actual rates. Using self-reporting to measure ADEs is like viewing the world through a rolled-up newspaper - only a small fraction of the whole is visible.


Surprisingly, there's a cheap, relatively easy approach to ADE detection and reporting that is far more accurate than self-reporting. The "ADE trigger tool", based on Classen's work at Intermountain Health Care, uses sample chart reviews to determine ADE incidence rates and characteristics. Its use has become more common since the Institute for Healthcare Improvement (IHI) introduced its trigger tool in 1999; since then, hundreds of hospitals have used the IHI tool. Regular use requires a fraction of an FTE per hospital - a seemingly small cost for an accurate handle on the nation's #1 patient safety issue. Without a clear picture of current ADE incidence and characteristics, hospitals don't have a solid basis for improving medication safety.
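
For the curious, here is a minimal sketch, in Python, of the arithmetic behind a trigger-tool review. The record fields ("has_trigger", "review_confirms_ade") and the per-100-admissions rate are illustrative assumptions, not the IHI's actual specification - the real tool defines specific triggers (such as an antidote order or an abnormal lab value) and has its own sampling and reporting rules.

    import random

    def estimate_ade_rate(charts, sample_size, seed=0):
        """Estimate ADEs per 100 admissions from a random chart sample.

        Each chart is a dict with hypothetical fields:
          has_trigger         - a screening signal was found (cheap to check)
          review_confirms_ade - a reviewer confirmed a real ADE (the costly step)
        """
        random.seed(seed)
        sample = random.sample(charts, min(sample_size, len(charts)))
        # Only trigger-positive charts receive the expensive manual review.
        confirmed = sum(1 for c in sample
                        if c["has_trigger"] and c["review_confirms_ade"])
        return 100.0 * confirmed / len(sample)

    # Example: 20 confirmed ADEs in a 500-chart sample -> 4 ADEs per 100 admissions.

The key design point is the two-stage screen: cheap triggers narrow the charts that need expert review, which is what keeps the staffing cost to a fraction of an FTE.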


Why are almost all hospitals still using self-reporting when a cheap solution that is 10 to 100 times more accurate is available? Perhaps it's hospital politics: conflicts over ownership of medication safety reporting among pharmacists, nurses, physicians, the quality department, and others. It may be the cultural change required: trigger tool reporting shifts the focus from processes (errors) to results (events). It may be that hospitals don't want to know the real numbers, because they might then be liable for higher levels of harm, or responsible for rapidly improving the currently high levels of ADE incidence. Or it could be that key decision-makers are simply unaware of the advantages and low cost of trigger tools.


Whatever the reason, hospitals that buy million-dollar technologies to prevent drug errors while skimping on ADE measurement are penny-wise and pound-foolish.


To learn more about the IHI's ADE Trigger Tool, visit ihi.org; to read their initial report of ADE incidence at 80 hospitals, follow this link: http://www.ihi.org/IHI/Topics/PatientSafety/MedicationSystems/Literature/AdverseDrugEventTriggerTool.htm


Comments

Very well stated, Doug. A few observations to share:

One of the few authoritative sources on incidence rates you alluded to is, of course, MEDMARX:

http://www.usp.org/hqi/patientSafety/medmarx/

I have personally reviewed their experience, which has shown consistently stable results over many years. For interested readers, there are very useful benchmarks and definitions there.

Regarding the Trigger Tool, you're right! It's applicable to readily available data. I am aware of a few organizations that used it, under the appropriate legal protections and procedural controls. The results generally proved to be 'socially sensitive' - the larger the organization, the more so.

The issue of looking for and measuring errors continues to be challenging, as you outlined. This 1999 NYT article, I thought, captured it best:

http://query.nytimes.com/gst/fullpage.html?res9A06E7D61030F93AA25751C1A9...

In it, Dr. Donald M. Berwick, a member of the study panel convened by the National Academy of Sciences, said, "The first sign of a serious endeavor to deal with errors is that the number of reported errors should go way, way up."

Since 1999, the public dialogue has been very quiet, suggesting that we're still using that rolled-up newspaper you described in your post rather than a solid basis. My take is that it's so unsafe to talk about that we don't have any better aggregate data than MEDMARX.

For those who wish to learn more about one of the largest national approaches to patient safety: the Department of Veterans Affairs, referenced in the NYT article, has the most long-standing leadership role in this space that I am aware of. Follow this link: http://www.va.gov/ncps/index.html
