
Is Mine Better Than Yours?

August 16, 2009
by Mark Hagland

It’s human nature to compare ourselves with others; and it’s American consumer nature to want to compare goods and services against some kind of ratings standard. Indeed, a whole industry has sprung up to meet such needs, across broad areas of interest: think of Consumer Reports, J.D. Power, and even the guest-ratings capabilities within Hotels.com. So why should it still seem so surprising in healthcare?

Yet it amazes me that, while a number of pioneering hospital organizations around the country have plunged headfirst into the emerging world of publicly reported comparative data, many others are clearly still lagging behind. The laggards had better watch out, though: every day, the tools for comparison get a bit better, and every day, new public, private, and public-private organizations get involved, or more deeply involved, in the process.

Take, for example, the Centers for Medicare and Medicaid Services (CMS). Last month, CMS released national statistics for the first time on readmission rates within 30 days of discharge following care for heart attack, heart failure, and pneumonia. The Hospital Compare data show that 18.2% of patients admitted for pneumonia will be readmitted within 30 days of discharge; 19.9% of heart attack patients will be readmitted within 30 days; and fully 24.5% of heart failure patients will end up back in the hospital within a month (http://www.cms.hhs.gov/apps/media/press/release.asp?Counter=3477&intNumP...). Depending on how one looks at such statistics, they could be seen either as discouraging or as simply illuminating.

Meanwhile, CMS is making it ever easier to compare hospitals on a number of process measures, from giving patients the right antibiotic at the right time (within one hour prior to surgery), to the percentage of heart attack patients given a beta blocker at discharge, all available at the Hospital Compare website (http://www.hospitalcompare.hhs.gov/Hospital/Search/Welcome.asp?version=d...|3.5|WinXP&language=English&defaultstatus=0&pagelist=Home). Just as an example (and please note, that’s all this is, an illustrative example), I’ve just clicked to compare outcomes at nine hospitals local to me in Chicago (I won’t name the hospitals, but you could go in and noodle around and figure them out, if you really wanted to). Under the measure “percent of heart failure patients given discharge instructions,” the rates varied dramatically: from a high of 98 percent (at two hospitals), to 97, 91 (two hospitals), 90, and 80 percent, down to 54 percent. How about “percent of all heart surgery patients whose blood sugar (blood glucose) is kept under good control in the days right after surgery”? 97, 95 (two hospitals), 94 (two hospitals), 85, 84, and 82 percent, respectively (with one of the nine hospitals having no patients in that category). Interesting, very interesting. The blood glucose control measure for heart surgery patients is of particular interest to me, because my past research and reporting have indicated that it strongly points to good patient care management generally.
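To make that kind of side-by-side comparison concrete, here is a minimal sketch in Python, using anonymized hospital labels (the real institutions go unnamed here) and the discharge-instruction percentages quoted above; a real comparison would, of course, pull current figures directly from the Hospital Compare site.

```python
# Illustrative only: anonymized hospital labels and the percentages quoted above.
discharge_instruction_rates = {
    "Hospital A": 98, "Hospital B": 98, "Hospital C": 97,
    "Hospital D": 91, "Hospital E": 91, "Hospital F": 90,
    "Hospital G": 80, "Hospital H": 54,
}

# Rank the hospitals from best to worst on this single process measure.
ranked = sorted(discharge_instruction_rates.items(), key=lambda kv: kv[1], reverse=True)

for name, pct in ranked:
    print(f"{name}: {pct}% of heart failure patients given discharge instructions")

# The spread between the best and worst performers is what catches the eye.
spread = ranked[0][1] - ranked[-1][1]
print(f"Spread between best and worst: {spread} percentage points")
```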

Of course, some of the measures available now are being criticized by some as too simple to paint a full portrait of the complexity of care at various organizations. And certainly, over time, Medicare and other organizations will increase the sophistication of the measures presented at the Hospital Compare website and elsewhere. But even now, I have to say that, through a careful series of comparisons, I can get some interesting insights into important indicators among the local hospitals I might choose for, say, non-emergency heart surgery. To me, that very fact speaks to the need for hospital CIOs to work with the clinician and executive leaders in their organizations to begin making significant improvements in their organizations’ patient care and patient safety outcomes, with strong clinical IT facilitating performance improvement work all along.

And though most healthcare consumers aren’t yet at the point of understanding that blood glucose control among surgical patients is potentially quite a good measure of patient care management, the time is not far off when a significant plurality of consumers will use some of this readily available information to make decisions about their care. Indeed, “how’s my public outcomes data looking?” will soon (much sooner than some might suspect) be a very serious and market-moving question for hospital executives and clinician leaders.

Comments

Your post makes me think of how some doctors are requiring that their patients sign agreements not to post online reviews of their performance. I certainly would never sign such a document, given all that its presentation implies. I see these physicians as plugging a hole in a dam with a finger, not realizing the amount of water pressing against it. The bottom line for hospitals and physicians is to do good work and not worry about being measured, which you certainly will be, very soon.

Mark,
I agree with your theme. Performance measures are always telling.

I agree, as well, that being critical of individual measures is itself simplistic.

What is important is understanding what's driving the gap between current state and desired state, as well as who is doing the measurement and what's their goal. Pristine motives are rare.

Ten years ago, I used American Hospital Directory data to look at the top ten DRGs for large Detroit hospitals, focusing on average length of stay, Medicare reimbursement, and cost. Like your Chicago example, there were hospitals that appeared much better run than those at the other end of the performance spectrum.

Then something interesting happened. I happened to be sitting on a plane next to the Senior VP of operations for several of those Detroit hospitals, including the best and the worst performers. The lumped DRG patients in each of those hospitals were not even remotely comparable, and yet there were no tell-tales in the detailed AHD data; the case mix was washed out, or diluted, by other DRGs from other service lines. Circumstances gave some hospitals undeserved black eyes that did not portend inferior quality of patient care or management. (This conclusion is all the more powerful in that the number of cases involved in each group was large. And, to your point, the proxies for quality, i.e., length of stay and costs, have since given way to the current, additional measures you cited.)
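To make the case-mix point concrete, here is a small, purely hypothetical sketch in Python; the DRG labels, case volumes, and lengths of stay are invented for illustration and are not drawn from the AHD data. Two hospitals with identical performance within every DRG can still show very different aggregate average lengths of stay simply because they treat different mixes of patients.

```python
# Hypothetical illustration of case-mix confounding: both hospitals have
# identical average length of stay (LOS) within each DRG, but different
# patient mixes, so their aggregate averages diverge.
# DRG labels, case volumes, and LOS values are invented for illustration.
case_mix = {
    "Hospital X": {"simple pneumonia": (100, 3.0), "complex cardiac": (20, 9.0)},
    "Hospital Y": {"simple pneumonia": (20, 3.0), "complex cardiac": (100, 9.0)},
}  # DRG -> (number of cases, average LOS in days)

for hospital, drgs in case_mix.items():
    total_cases = sum(n for n, _ in drgs.values())
    total_days = sum(n * los for n, los in drgs.values())
    print(f"{hospital}: aggregate average LOS = {total_days / total_cases:.1f} days")

# Hospital X comes out at 4.0 days and Hospital Y at 8.0 days, even though,
# DRG for DRG, their performance is identical -- the difference is case mix.
```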

The performance measures data were often confounded in other ways as well.

My point is simply this: Understanding the gap is as important as having measurement data. Dealing with that gap with compassion and creativity in the form of substantive action is the mark of great leadership.

"All other things being equal," better performance measures are better. I whole heartedly agree with your call for CIOs to work with other senior executives to drive performance worth crowing about. I would add to that a call to carefully collect and track the data that drives those gaps. Having data that you're sinking is far less valuable than having data on the location and extent of your leaks.

(For more on the readmission performance measure, see my post, A Leap Of Faith.)

Joe and Anthony,
Thank you both for your very thoughtful comments and analysis! Joe, I certainly agree that the performance measures being used are improving over time. There's no question that the first generation of measures was very primitive, and that, years from now, we'll look back on the measures being used right now and see them the same way. That said, this kind of measurement is quite inevitable, I think. I also agree with you, Joe, that understanding the gap involved is as important as having measurements in place. And Anthony, I agree completely; I, too, would never sign an agreement abrogating my right to evaluate my physician's performance. It's a fascinating time, I think: we've gotten out of the starting gate in terms of meaningful clinical performance measurement, but the measures haven't yet achieved the depth they need in order to win total buy-in from all the stakeholders. Thanks again, both of you, for your excellent comments.

Mark Hagland

Editor-In-Chief
