
How do you measure SUCCESS?

March 1, 1998
by Polly Schneider

Faced with declining or flat revenues, shriveling reimbursements and escalating competition, hospitals and integrated delivery systems across the country are taking a harder look at a practice that is standard in most other industries: benchmarking. Managed care organizations, too, must begin to analyze internal and external data to survive, as medical costs continue to wreak havoc on HMO profits. Financial analysts are banking on the ability of HMOs to provide the public with information on costs, pricing and performance, and to implement medical management programs to get ahead in the market. Yet the providers of care hold the key to data that is largely unmined, or in many cases, uncollected.

The good news: A wealth of companies offer products and services that can help organizations manage performance data and benchmark processes. The bad news: There’s little consensus on "best practices" or the methodology to measure them, and meaningful comparative data is scarce.

Benchmarking in its simplest definition is the discipline of using comparative data to examine and measure the processes within an organization. Such data may come from inside the organization, from marketplace studies or from state and national databases that track industries--but the goal is to identify areas for improvement. In healthcare that means lower costs, better quality and hopefully, healthier patients.

The intent of benchmarking is not necessarily to emulate other organizations, but more to stimulate "out of the box" thinking, according to Jack Partridge, vice president of the Chi Systems division of Superior Consulting, Southfield, Mich. "Innovative solutions utilized by other organizations serve as building blocks for new approaches and may, in fact, result in a new and improved solution. It is a classic approach to not reinventing the wheel."

According to healthcare decision support firm HCIA, Inc., Baltimore, if all U.S. hospitals performed as well as the 100 hospitals in its annual survey of the nation’s top-performing acute-care hospitals, the industry could save more than $24 billion, reduce inpatient mortality and complication rates by 22 percent and average length of stay by half a day, and increase profitability by more than 50 percent. The survey measures hospitals on nine operational and clinical performance indicators and compares them with other organizations in the same peer group (determined by number of beds and teaching or non-teaching affiliation). Results from the 1997 survey revealed that the South and the West produced the highest number of top-performing hospitals. Such results are probably more attributable to the effects of competition, managed care penetration and consumer demand in local markets than to benchmarking, according to Dave Krone, senior director in HCIA’s Denver office. However, since the fourth quarter of 1997, industry analysts have been predicting that the easy cost-cutting measures in healthcare have already been exhausted. Implementing benchmarking programs may be the path to any further progress in the financial health of the industry.

But cost is just half of the equation organizations need to examine. Benchmarks that look only at financial and utilization statistics such as length of stay and number of FTEs rankle the medical community, causing division not only between clinicians and administrators but also between the administrators and a public that is becoming increasingly concerned about medical management’s effect on the quality of care. Clinicians are data-driven and will use information that can help them improve practices, notes Cynthia Burghard, a research director specializing in medical management and clinical outcomes at The Gartner Group’s healthcare IT division, Wakefield, Mass. Still, Burghard issues a common warning: "Coming out with some external norm that physicians have to meet is a sure formula for disaster."

Culture clash
Before addressing the practicalities of a benchmarking project--why you’re doing it, what you want to measure and how you’ll get the data--executives need to prepare for the inevitable resistance from the ranks. As one healthcare executive puts it: "No one likes to be in the spotlight."

The difference in perspective is hard to get around: Clinicians want the freedom to make what they consider the best decisions possible for the patient, while administrators, no matter how committed they are to boosting quality of care and patient satisfaction, need to focus on the bottom line.

Mercy Health System, an integrated delivery system in Philadelphia, is in year two of a three-year contract with MECON, Inc., San Ramon, Calif., a software and consulting firm specializing in operational benchmarks for healthcare. As part of a broader continuous quality improvement program, Mercy hired MECON to help the IDS analyze its performance at both the department and system level. So far, there has been little improvement in individual departments there, and participation from the medical staff has been low, according to a Mercy administrator who spoke on condition of anonymity. "I suspect it’s a cultural thing," the administrator says. As a result, Mercy is now using hiring approvals as leverage to improve performance. Departments with unmet performance goals are in some cases having to wait to hire new staff until processes improve. The thinking, says the administrator, is that after the improvements take place an extra staff member may not be necessary. Call it what you will--encouragement, motivation, delayed gratification, bribery--in some organizations change is being force-fed.