In 2015, a National Academy of Medicine (NAM) committee recommended a set of 15 core metrics (“Vital Signs”) to help streamline quality reporting. That report continues to drive discussions about reducing the burden of quality reporting as more voices, including CMS, call for a parsimonious set of measures. On Oct. 26, NAM hosted a discussion on quantifying the problem and next steps.
The session started with presentations by researchers who conducted quantitative and qualitative studies on measurement burden.
Nancy Dunlap, M.D., Ph.D., M.B.A., professor emerita at the University of Alabama-Birmingham, led a 2016 study that surveyed leaders from more than 20 health systems of varying size. The number of mandatory quality metrics they had to report ranged from 284 to more than 500.
In addition to reporting for outside entities, the health systems also collect metrics internally for quality improvement, she noted. The respondents said that the complexity of reporting is increasing, requiring more staff, and that many of the metrics change annually, with slight differences in how the various requesting groups define them.
“Although all the providers use EHRs, the large majority responded that only a portion of the metric reporting is automated,” she said. Documentation by the physician is crucial for accurate attribution and capture of information, and often physicians must re-examine their records to clarify terms and ensure that the relevant wording is used to describe the care delivered.
Standardizing the documentation input into EHRs can be helpful, but when definitions change, Dunlap said, the data fields within the EHR must be changed, and training and documentation must be updated. Health systems report that on average 50 to 100 individuals are involved in this process. Personnel costs were estimated at $3.5 million to $12 million per year, with most falling in the range of $5 million to $10 million annually. In addition, institutions may spend a substantial sum to recruit and train these individuals.
David Gans, senior fellow for industry affairs at the Medical Group Management Association, followed by describing a 2014 web-based survey that sought to measure the cost from the physician perspective.
The survey, conducted by researchers from Weill Cornell Medical College and MGMA, focused on cardiology, orthopedics, primary care and multispecialty practices.
The survey collected time estimates for physicians and staff across six categories of activity related to external quality measures. The researchers then converted those time estimates into estimates of what dealing with external quality measures costs the practices.
“We found 15 hours per week estimated in total effort per physician dealing with external quality measures,” Gans said. The cost varies by specialty, but on average practices reported spending about $40,000 per physician annually. And although the physicians spent a substantial amount of time collecting data, relatively little time was spent using the information. When asked if they use the measures for quality improvement, fewer than a third said yes.
“Unfortunately, what we found is that practices have poor opinions overall about quality reporting,” Gans said. Only 28 percent of respondents answered that the measures actually represent quality, and there were consistent concerns that the measures are not relevant to their specialty. Among the other findings:
• 81 percent said they spend substantial time and effort validating the information.
• 46 percent said there was a substantial burden due to multiple metrics measuring the same elements or different payers looking at slightly different aspects or time elements.
Gans summarized their sentiments this way:
• Quality measures do not adequately represent quality of care.
• Entering quality data decreases clinicians’ productivity.
• Providing quality data to external entities is very expensive.
• Quality measures, methods of reporting, and reporting periods should be standardized.
• It should be possible for an EHR to automatically collect and report quality measures.
• Measures should be specialty-specific. Orthopedists, in particular, said that current measures are not suitable for them.
In summarizing the findings of both studies, Dunlap said that the respondents believe that:
• Externally reported measures should be kept to a manageable level.
• Measures should be regularly evaluated to ensure that they drive actual improvements in care outcomes.
• Alignment and standardization of definitions among groups requesting metrics are needed.
• Metrics should be piloted and definitions finalized prior to widespread dissemination.
• EHRs should be designed to collect and report metrics more easily, and the field should move away from quality metrics derived from billing and administrative systems. Clinical metrics would be more useful.
Reaction From a Panel
In a reactor panel discussion, John Bernot, M.D., senior director of quality measurement at the National Quality Forum, began by noting that he is a practicing family medicine physician. “I get to experience quality measurement on the front lines, but also in the community working to reduce the burden,” he said. “I want to reiterate that the burden of measurement is real. It is a challenge to everyone in the field of quality measurement.”