Are Consumer-Directed Hospital Ratings Systems Confusing the Public—and Even Providers? | Mark Hagland | Healthcare Blogs

Are Consumer-Directed Hospital Ratings Systems Confusing the Public—and Even Providers?

March 14, 2015
The wildly divergent ways in which the entities sponsoring national hospital quality ratings programs evaluate care quality are creating a miasma of confusion.

The authors of an article in the current issue of Health Affairs have given all of us in healthcare, particularly anyone connected to, or with an interest in, the measurement of care quality, something to think about.

As we reported, the article, “National Hospital Ratings Systems Share Few Common Scores and May Generate Confusion Instead of Clarity,” was written by J. Matthew Austin, Ashish K. Jha, Patrick S. Romano, Sara J. Singer, Timothy J. Vogus, Robert M. Wachter, and Peter J. Pronovost. Austin and his colleagues spent some time carefully analyzing four well-known, consumer-directed hospital care quality ratings systems—those of U.S. News (its annual Best Hospitals list), HealthGrades, the Leapfrog Group, and Consumer Reports—and they proceeded to examine “the overlap among rating systems and how hospital characteristics corresponded with performance on each.”

What the researchers found was essentially a mishmash of wildly divergent approaches to evaluating hospital care quality. Just to compare and contrast the four approaches, the researchers noted that “Four well-known national entities released hospital ratings in 2012 and early 2013. In each case, the hospitals were rated at no cost to the hospital. U.S. News, a for-profit company known for its magazine and ratings of universities and graduate programs, has issued its Best hospitals list for 23 years. HealthGrades, a for-profit company that develops and markets quality and safety ratings of healthcare providers in addition to offering consulting services, has rated hospitals since 1998, releasing its annual Top 50 and Top 100 hospital lists, among many other things.”

Then, “The Leapfrog Group, a non-profit purchaser-based coalition advocating for improved transparency, quality and safety in hospitals, has supplemented its annual hospital survey since 2012 with assigning letter grades (A-B-C-D-F) to hospitals, reflecting how well they kept patients free from harm. And Consumer Reports… has issued a hospital safety rating… since 2012. In addition… the CMS Hospital Compare website, reporting programs sponsored by states and regional quality collaboratives… web-based consumer-driven rating systems… and hospital systems’ self-reported performance… serve as additional inputs into consumers’ decision-making.”

As if that landscape itself weren’t confusing enough, it turns out that all the different approaches have led to wildly different quality reports. So, the authors note, “Some research indicates that being named to U.S. News & World Report’s “Best Hospitals” list is associated with lower 30-day mortality, but other studies have found no association between the U.S. News list and readmissions, wide variation on a number of indicators, and discrepancies with other ratings systems such as the Centers for Medicare and Medicaid Services’ (CMS’s) Hospital Compare.”

What’s more, the authors found, “Hospital rating systems use a variety of methods for distinguishing ‘high’ performers from ‘low’ performers, often creating the paradox of hospitals’ simultaneously being considered best and worst depending on the rating system used. For example, 43 percent of hospitals classified as having below-average mortality by one risk-adjustment method were classified as having above-average mortality by another method.”

Now here’s where things get really interesting. The researchers wrote that “We examined four consumer-directed national hospital rating systems to identify whether they arrive at similar or different conclusions… There was disagreement across the four ratings, with each identifying different sets of high- and low-performing hospitals. In addition, Leapfrog rated a relatively large set of hospitals favorably compared to Consumer Reports, U.S. News, and HealthGrades.” How could this be? The methodologies used by the four entities are totally different, with “Leapfrog and Consumer Reports focus[ing] on hospital safety, although each defined safety differently.” Meanwhile, “U.S. News focused strictly on the “best medical centers for the most difficult patients,” whereas HealthGrades focused on general hospital quality over time. Leapfrog and Consumer Reports both used the whole hospital as their unit of analysis, but they included different types of measures in their ratings.”

The authors of the Health Affairs article go into much greater (and very valuable) depth on all this, but clearly, we’ve got a problem when, as they point out, of the 844 hospitals they looked at that had been ranked by all four systems, zero—that’s right, zero—were rated as “high performers” by all four systems; only three hospitals were rated as high performers by three of the four systems; and 85 were rated as high performers by two of the four ranking systems. Meanwhile, zero hospitals were rated as “low performers” by all three of the ranking systems that cited “low performers,” while just 15 were rated as low performers by two of those three systems.

What’s more, only 10 percent of the 844 hospitals rated as a high performer by one rating system were considered high performers by any of the others.

In other words, Houston, we’ve got a problem.

And those methodological gaps don’t even begin to take into account other issues around these systems—that the U.S. News annual report, which now does include some actual outcomes measures as an element in its “Best Hospitals” rankings, for years relied entirely on reputation as its sole criterion, as judged by physicians and others whom the magazine surveyed. Until a few years ago, the publication hadn’t even included any objective data elements in its methodology. Meanwhile, HealthGrades charges hospitals to participate in its ratings program, which inevitably skews everything in favor of hospitals whose senior executives believe they will benefit from participation in the program and from subsequent marketing opportunities.

What a mess.

And yet, fundamentally, there is no escape from these programs for hospital leaders nationwide. Each of these entities has its own strong rationale for publicly assessing hospital care quality. And there is zero likelihood that all of these entities, as well as the others currently assessing, rating, and ranking hospital care quality, will ever combine into some standardized, unified nationwide system. So we’re stuck with this patchwork quilt of quality assessment.

What healthcare and healthcare IT leaders can do is to work continuously to improve their patient care outcomes, and to improve their transparency to the general public and to all the stakeholder groups around healthcare. They have the right to be frustrated with the problems and issues around quality ratings systems; at the same time, we’re more or less stuck with this situation for the foreseeable future. So, as messy as the landscape is, the only way is forward, with the hope that continuous efforts at clinical transformation and clinical performance improvement will, over time, elevate many more hospitals into more favorable assessments from these various systems.

