Should Quality Measure Results Determine Meaningful Use?

September 8, 2009
by David Raths

I have been intrigued by a few recent and perhaps related developments in the ongoing "meaningful use" saga. The first was the ONCHIT HIT Policy Committee Certification and Adoption Workgroup's recommendations submitted on Aug. 14. The committee suggested that HHS take over establishing certification criteria from CCHIT and focus them on meaningful use objectives involving interoperability, privacy and security, etc., at a higher level with less specificity about product features.

This may be an important and necessary shift in the direction of the certification regime. It's clear that just making sure that physicians and hospitals are purchasing feature-rich products is not going to guarantee that they are used to improve the quality of care. It may indeed make more sense to have a less specific certification program for software products and much more sophisticated procedures for measuring how providers are using the systems for quality and patient safety gains.

That leads, however, to a second development: an Aug. 26 letter from the Federation of American Hospitals to ONCHIT. FAH, an organization of investor-owned hospitals, argues that HHS will be overstepping ARRA's mandate if it takes the HIT Policy Committee's suggestion to use the results of quality measures, rather than just the capability to submit results, to determine who gets EHR incentive funding.

"It has been suggested that 'meaningful use' funding should be tied to provider performance on outcomes-related quality measures," the letter states. "However, our outside legal experts view the ARRA funding as tied only to accelerating the adoption and use of EHRs by providers and clinicians, and not to patient care achievements or outcomes that may be attained while using EHRs."

To make its point, FAH uses the example of readmission rates. The HIT Policy Committee recommended that HHS should adopt a measure for 2013 requiring a 10 percent reduction in preventable readmissions from 2012 to qualify as a meaningful EHR user. FAH notes that a provider's readmission rates are affected by several factors, many of which are not related to EHR use.

"If HHS expanded this policy beyond the submission (or reporting) of data, it will have the adverse impact of limiting provider adoption of EHRs because it will prohibit ARRA funding for those who do not satisfy the performance measures. This result would run directly counter to the reason Congress provided the funding."

This seems to be a key point of contention. Could a hospital be a widespread user of EHR and related systems but not be a "meaningful" user because it is not making enough progress on quality measures to meet HHS guidelines?

Does FAH have a good point here? I'd be interested to hear what members of the Healthcare Informatics online community think.

Comments

I think we need to ask if meaningful use will be of meaningful use in three years. To me, the very existence of certification and meaningful use is a direct result of hundreds of vendors battling untold standards and connecting to hundreds of RHIOs. Just having to certify implies an expectation of failure by those controlling the checkbook.

If it's my checkbook, I write the check on a single criterion: that you can connect to my national network. I leave it up to the providers to figure out whether their system is of value to them.

I can't think of any other industry that waits until something has been paid for and implemented to make the determination of whether what they did was meaningful.

Doesn't this strike anyone as absurd?

Thanks to Paul, Pam and Joe for their thoughtful responses. I have heard CIOs of large hospital systems say things similar to what Pam's clients tell her: that they have done a lot of work on EHR implementation and reporting quality measures, but still don't think their systems would allow them to report many of the items described in the meaningful use matrix. I like Joe's point that adoption and performance improvement achievement are sequential and need to be separated out.

David,

Great blog topic and well presented!

The issue of where the MU focus is versus where it should be absolutely illustrates a classic moving target. As Pam points out above, it's unreasonable to expect vendors to prepare for and commit to a moving target. For the vendors, the challenge also includes delivering solutions to an install base that is heterogeneous in terms of current capability and historical competency (often with skeletally short staffing). The target, by the way, keeps changing size and is surrounded by a cloud of large political and economic interests.

Vendor and Hospital Timelines

The vendors and hospitals are expected to hit that target through a process that denies the existence of the irreducible timelines of software GMP and implementation. It takes time to create and understand adequate requirements. That cannot be done concurrently with adequate QA and beta testing. There's a word for rolling out untested software: lethal. Everyone has their own painful stories on that.

The timelines also presume that, once vendors deliver, providers can take new GA code and do their own work (with or without vendor or third-party services) to bring that code into their live environment, with the necessary integrated testing, training, etc. As illustrated by the HIMSS EHR adoption model, this is much more than a single step of adding new capabilities.

CPOE roll-out generally presumes that the order process is reliable end to end, and, implicitly, that there are no paper steps or manual re-keying along the way. Doing it all quickly also presumes that interfaces never fail, or fail only infrequently, so no one has to check regularly.

The political process seems to be blind to the clear lessons of unintended consequences of too rapidly turning on new systems and rolling them out. See Anthony's CCHIT & the DMV, and my comments from David Classen's AHRQ-funded work on the importance of evaluating live systems (in addition to certifying them on the shelf).

So, David, to your question, does FAH have a good point? Well, that's a legal question, and I've promised the lawyers that I won't practice law if they don't practice medicine or informatics.

From a pragmatic and public-safety perspective, Classen et al. have presented more than enough evidence that HHS would be well served to study what has been learned about evaluation and speed of change. Systems that can catch 50 percent of harm-causing errors often function closer to the 10 percent level when implemented. This has been shown objectively for the three largest commercial CPOE vendors, and similarly concerning findings have been published for homegrown enterprise solutions. Given the complexity of the problem being addressed, these results almost certainly generalize to all CPOE systems with decision support.

From that perspective, FAH is probably correct (in my opinion). Adoption of EHR technology and performance improvement achievement are sequential pieces of work, and they need to be separated out. Publishing performance goals for 2015 in effect blurs, rather than separates, the issue of upgrading our national healthcare IT infrastructure.

Aside:
My observations on the readmissions issue in your post are here. If 100 percent of patient discharges for the HHS-tracked diagnoses contained documentation that evidence-based discharge criteria were met, a readmission rate of 10, 20, or 30 percent would and should be immaterial, both for the quality of care and for the true cost of that care.

This week I presented the MU criteria to a client who by all accounts is far ahead in the quality-metric space, albeit manually. They were overwhelmed. Another client has worked four years to perfect and standardize the extraction of 52 quality metrics from EMRs. The vendors aren't prepared, and all of the MU criteria just add to the burden.
