In January, the first reporting period began for the Quality Payment Program under the Medicare Access and CHIP Reauthorization Act of 2015 (MACRA). The program comprises two payment tracks that eligible Medicare clinicians can participate in, which will determine their payment adjustments in future years. Early in the program, government estimates suggest that most of these clinicians will participate in the Merit-Based Incentive Payment System (MIPS) track rather than the Advanced Alternative Payment Models (APMs) track.
While the federal government’s intent with this quality outcomes-based program is to reward Medicare physicians for demonstrating high-quality care, using technology meaningfully, and improving patient access and engagement, some are skeptical about how it will work. In an interview last fall, Niam Yaraghi, Ph.D., a fellow in the Brookings Institution's Center for Technology Innovation, opined that because of the physician self-reporting involved in MIPS, the system is “an open invitation to cheating.”
Rita Numerof, Ph.D., co-founder and president of St. Louis-based consulting firm Numerof & Associates, where she has 25 years of consulting experience, believes that with the way the MIPS composite score is determined, there are concerns for providers regarding the reliability and validity of specific individual measures as well as weights used to create that composite. Dr. Numerof recently spoke with Healthcare Informatics Managing Editor Rajiv Leventhal about what she is hearing about MIPS so far in 2017, what potential issues could arise, and how the future MACRA landscape looks. Below are excerpts of that discussion.
What are some things you are hearing about physicians reporting to MIPS, just a few months into the first reporting period?
The frustration that I think underlies most of the physicians with whom we work and talk is that they are very concerned about the quality of the reporting measures themselves. It’s not so much the reporting per se, but an issue with the validity and accuracy of the reporting, and the amount of bureaucracy associated with it.
The idea behind MIPS is to create one integrated rollup score that can be used to compare physician to physician within a city, a region, and across the state. And the effort was one of two alternatives created to replace the problematic SGR [sustainable growth rate] formula. So [physicians] liked the fact that they don’t have to worry about confronting the threat of losing a significant portion of their income each year, and that’s good news. With MIPS, you have a cost component, a quality component, a clinical practice improvement component, and the Advancing Care Information [ACI] category, which replaces the old meaningful use program, and there are calculations that begin this year for a penalty or payout in 2019. If you look at some of the things that are included in the various components, you get a sense of the complexity and question the validity of the relative comparison.
Rita Numerof, Ph.D.
Can you give an example of this?
Say that [you and I] are both primary care physicians and a patient is deciding which one of us to see. I have chosen to report publicly on the areas in which I’m doing a stellar job and you are doing the same thing for the areas you’re doing great in, but there is no correlation between those [areas] at all, and there are other measures we are also reporting on that may have no bearing on each other. We get [scores]; maybe I get a 4 and you get a 3.5, but does my score make me better? No, it depends on the elements that go into those scores, and were those elements even relevant to the patient’s choice of how to select a primary care doctor?
On the quality side, which in 2019 will be 60 percent of the total MIPS composite score, each clinician can select six out of 200 different quality measures to report on. So if we are both reporting on something different, how can that be a valid comparison? This is part of the legitimate concern on the part of a lot of doctors. It will be very time consuming and you ultimately don’t know how you will do in the end. So you can do a very good job doing your reporting, and if everyone improves more than you do, you will get dinged. It’s a zero-sum game at the end of the day; there will be winners and losers. There is also an expense for every doctor, which is somewhere between $8,000 and $10,000, and some have actually estimated a lot higher cost for [smaller] practices without support staff to do calculations and reporting. The only thing that is standard and legitimate by way of comparison is the cost piece, since that will be pulled directly out of Medicare claims data.
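Dr. Numerof’s comparability concern can be illustrated with a toy calculation. The sketch below is purely illustrative, not CMS’s actual scoring methodology: the category weights loosely follow the breakdown she cites for the 2019 payment year (quality at 60 percent), while the remaining weights, the clinician names, and the per-category scores are all hypothetical.

```python
def mips_composite(scores, weights):
    """Combine per-category scores (0-100) into one weighted composite.

    Illustrative only -- CMS's actual MIPS scoring is more involved.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 100%
    return sum(scores[cat] * w for cat, w in weights.items())

# Assumed weights: quality 60% (as cited above); the split of the
# remainder between ACI and improvement activities is a guess here.
weights = {"quality": 0.60, "aci": 0.25, "improvement": 0.15}

# Two hypothetical clinicians who chose different quality measures:
clinician_a = {"quality": 80, "aci": 70, "improvement": 90}
clinician_b = {"quality": 70, "aci": 95, "improvement": 85}

# Very different component profiles collapse into near-identical
# composites -- the apples-to-oranges comparison problem described above.
print(mips_composite(clinician_a, weights))  # 79.0
print(mips_composite(clinician_b, weights))  # 78.5
```

The point of the sketch is that once each clinician’s self-selected measures are rolled up into a single number, the composite no longer reveals whether the underlying measures were even comparable.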
Does self-reporting also create a problem since clinicians can “game” the system?