Are Consumer-Directed Hospital Ratings Systems Confusing the Public—and Even Providers?

March 14, 2015
The wildly divergent ways in which the entities sponsoring national hospital quality ratings programs evaluate the quality of care are creating a miasma of confusion

The authors of an article in the current issue of Health Affairs have given all of us in healthcare, particularly anyone connected to or interested in the measurement of care quality, something to think about.

As we reported, the article, “National Hospital Ratings Systems Share Few Common Scores and May Generate Confusion Instead of Clarity,” was written by J. Matthew Austin, Ashish K. Jha, Patrick S. Romano, Sara J. Singer, Timothy J. Vogus, Robert M. Wachter, and Peter J. Pronovost. Austin and his colleagues carefully analyzed four well-known, consumer-directed hospital care quality ratings systems—those of U.S. News (its annual Best Hospitals list), HealthGrades, the Leapfrog Group, and Consumer Reports—and proceeded to examine “the overlap among rating systems and how hospital characteristics corresponded with performance on each.”

What the researchers found was essentially a mishmash of wildly divergent approaches to evaluating hospital care quality. Just to compare and contrast the four approaches, the researchers noted that “Four well-known national entities released hospital ratings in 2012 and early 2013. In each case, the hospitals were rated at no cost to the hospital. U.S. News, a for-profit company known for its magazine and ratings of universities and graduate programs, has issued its Best hospitals list for 23 years. HealthGrades, a for-profit company that develops and markets quality and safety ratings of healthcare providers in addition to offering consulting services, has rated hospitals since 1998, releasing its annual Top 50 and Top 100 hospital lists, among many other things.”

Then, “The Leapfrog Group, a non-profit purchaser-based coalition advocating for improved transparency, quality and safety in hospitals, has supplemented its annual hospital survey since 2012 with assigning letter grades (A-B-C-D-F) to hospitals, reflecting how well they kept patients free from harm. And Consumer Reports… has issued a hospital safety rating… since 2012. In addition… the CMS Hospital Compare website, reporting programs sponsored by states and regional quality collaboratives… web-based consumer-driven rating systems… and hospital systems’ self-reported performance… serve as additional inputs into consumers’ decision-making.”

As if that landscape itself weren’t confusing enough, it turns out that all the different approaches have led to wildly different quality reports. So, the authors note, “Some research indicates that being named to U.S. News & World Report’s “Best Hospitals” list is associated with lower 30-day mortality, but other studies have found no association between the U.S. News list and readmissions, wide variation on a number of indicators, and discrepancies with other ratings systems such as the Centers for Medicare and Medicaid Services’ (CMS’s) Hospital Compare.”

What’s more, the authors found, “Hospital rating systems use a variety of methods for distinguishing ‘high’ performers from ‘low’ performers, often creating the paradox of hospitals’ simultaneously being considered best and worst depending on the rating system used. For example, 43 percent of hospitals classified as having below-average mortality by one risk-adjustment method were classified as having above-average mortality by another method.”

Now here’s where things get really interesting. The researchers wrote that “We examined four consumer-directed national hospital rating systems to identify whether they arrive at similar or different conclusions… There was disagreement across the four ratings, with each identifying different sets of high- and low-performing hospitals. In addition, Leapfrog rated a relatively large set of hospitals favorably compared to Consumer Reports, U.S. News, and HealthGrades.” How could this be? The methodologies used by the four entities are totally different, with “Leapfrog and Consumer Reports focus[ing] on hospital safety, although each defined safety differently.” Meanwhile, “U.S. News focused strictly on the ‘best medical centers for the most difficult patients,’ whereas HealthGrades focused on general hospital quality over time. Leapfrog and Consumer Reports both used the whole hospital as their unit of analysis, but they included different types of measures in their ratings.”

The authors of the Health Affairs article go into much greater (and very valuable) depth on all this, but clearly, we’ve got a problem when, as they point out, of the 844 hospitals they looked at that had been ranked by all four systems, zero—that’s right, zero—were rated as “high performers” by all four systems; only three hospitals were rated as high performers by three of the four systems; and 85 were rated as high performers by two of the four ranking systems. Meanwhile, zero hospitals were rated as “low performers” by all three of the ranking systems that cited “low performers,” while just 15 were rated as low performers by two of those three systems.

What’s more, among the 844 hospitals, only 10 percent of those rated as a high performer by one rating system were considered high performers by any of the others.

In other words, Houston, we’ve got a problem.