Back in 2012, the Affordable Care Act (ACA) established the Hospital Readmissions Reduction Program (HRRP), which requires the Centers for Medicare & Medicaid Services (CMS) to reduce payments to Inpatient Prospective Payment System (IPPS) hospitals with excess readmissions for certain medical conditions.
The HRRP was designed to make hospitals pay closer attention to what happens to their patients after they get discharged, but yearly data has shown that this has not been an easy task for patient care organizations. In 2016, the government penalized more than half of the nation’s hospitals—a total of 2,597—for having more patients than expected return within a month. Last year, the news didn’t get a whole lot better. According to Kaiser Health News, almost the same number of hospitals—this time 2,573—will be punished by Medicare for failing to lower their rehospitalization rates. On the financial side, according to KHN, Medicare will withhold $564 million in payments over the next year; the maximum reduction for any hospital is 3 percent.
Hospitals and health systems thus have no shortage of motivation to keep patients from being readmitted unnecessarily. Officials at UnityPoint Health (UPH), a Des Moines, Iowa-based integrated health system with 29 hospitals, note that over the last few years several of UPH's hospitals missed their thresholds and were penalized to varying degrees.
Benjamin Cleveland, a data scientist at UPH, says, “Though readmissions often are influenced by factors largely outside of a healthcare system’s control, most systems conclude that discharge education, care coordination, and post-discharge intervention strategies offer the best chance at reducing their readmission rate.”
Cleveland adds that many readmission strategies are driven by three foundational components: which patients to focus on, what type of intervention should occur, and when the intervention should occur. “Which patients to focus on and when to target interventions lend themselves to the predictive modeling domain, while matching specific interventions with appropriate patients is best studied using controlled experiments,” he says. Rhiannon Harms, executive director of analytics at UPH, adds that analytics could answer the “who” and the “when” questions, which would then lead to care teams and providers being able to answer the “what” question around which type of intervention should occur.
Capturing the Comprehensive Picture
To this end, an enterprise-wide data analytics and artificial intelligence project, centered on preventing readmissions through patient-level “heat maps,” emerged at UPH; it proved innovative enough to earn first place in Healthcare Informatics’ 2018 Innovator Awards Program, Providers Division. Specifically, UPH Strategic Analytics sought to create a readmissions risk model that would be better at identifying high-risk patients than existing models such as the commonly used LACE [Length of stay, Acuity, Comorbidity and ED use] index and the HOSPITAL [Hemoglobin, discharge from an Oncology service, Sodium level, Procedure performed, Index Type of admission (urgent vs. elective), number of Admissions in the last year, and Length of stay] score. The model would also apply machine learning to identify when along the 30-day spectrum a patient is most likely to be readmitted, guiding care coordination efforts and ensuring successful transitions.
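As a point of comparison, the LACE index mentioned above reduces to a simple additive score. The sketch below follows the point weights of the published index (length of stay, acuity of admission, Charlson comorbidity index, and ED visits in the prior six months); the function name is hypothetical, and this is not UPH's model:

```python
def lace_score(los_days, acute_admission, charlson_index, ed_visits_6mo):
    """LACE readmission-risk index; point weights follow the published index."""
    # L: length of stay of the index admission
    if los_days >= 14:
        l_pts = 7
    elif los_days >= 7:
        l_pts = 5
    elif los_days >= 4:
        l_pts = 4
    else:
        l_pts = max(los_days, 0)  # stays of 1-3 days score 1-3 points
    # A: acuity -- 3 points if the admission was acute/emergent
    a_pts = 3 if acute_admission else 0
    # C: Charlson comorbidity index, capped at 5 points
    c_pts = charlson_index if charlson_index <= 3 else 5
    # E: ED visits in the six months before admission, capped at 4
    e_pts = min(ed_visits_6mo, 4)
    return l_pts + a_pts + c_pts + e_pts
```

Scores run from 0 to 19, and a common convention treats roughly 10 and above as high risk; that kind of coarse, cohort-level cutoff is exactly what the UPH team set out to improve on.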
These risk profiles would then be computed for every patient in the UPH system and individualized to each patient's unique clinical and social circumstances, ensuring a common “source of truth” for all parties across the care continuum as they coordinate the patient's journey out of the hospital, UPH officials explain.
In the development of this project, Cleveland says that UPH partnered heavily with its local care teams to get feedback from everyone involved in the entire process—inclusive of primary care physicians, hospitalists, inpatient case managers, outpatient care coordinators, home health nurses, and more. “What do they see that affects readmissions? What do they think is tied to different patient risks that we should try to incorporate? Who do we talk to from an informatics perspective on how to best pull that data in the EHR [electronic health record]? It’s really a cross continuum, multi-disciplinary effort across our system,” he says.
The result of all of those discussions, organization-wide, says Cleveland, was that a core goal emerged: to capture a patient’s entire scenario comprehensively. “You want to capture the sociodemographic variables; all of our patients have different backgrounds,” he says. “And then you want to capture the severity of the condition they are being seen for today—inclusive of visits, medications, lab tests, and procedures that indicate severity. You also want to look at their past medical history to see what the [true] level of morbidity burden that they are carrying is, along with whatever the problem of the day is. And finally,” explains Cleveland, “You want to capture how they interact with their healthcare. Are there appointment no-shows, are they always late, are they going to their follow-ups? How many inpatient and ED visits have they had over the last few years? That’s how you get a comprehensive picture of the patient,” he attests.
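Concretely, the four feature families Cleveland describes could be grouped into a single per-encounter record. This is a minimal sketch; the field names are hypothetical stand-ins, not UPH's actual schema:

```python
from dataclasses import dataclass

@dataclass
class EncounterFeatures:
    """One patient-encounter record grouping the four feature families
    described in the article. All field names are illustrative."""
    # Sociodemographic background
    age: int
    lives_alone: bool
    # Severity of the presenting condition during this visit
    abnormal_labs: int
    active_medications: int
    # Past medical history / morbidity burden
    chronic_conditions: int
    # How the patient interacts with their healthcare
    no_shows_last_year: int
    ed_visits_last_2y: int
    inpatient_visits_last_2y: int

example = EncounterFeatures(
    age=72, lives_alone=True,
    abnormal_labs=3, active_medications=11,
    chronic_conditions=4,
    no_shows_last_year=2, ed_visits_last_2y=5, inpatient_visits_last_2y=2,
)
```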
In all, the team pulled in some 300,000 inpatient encounters and all of the variables associated with those encounters at each point in time, and then used machine learning algorithms to appropriately weight each of the variables. Overall, says Cleveland, morbidity burden and clinical severity within that visit were the factors that were most influential, followed by healthcare utilization and social demographics.
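The weighting step Cleveland describes can be illustrated with a toy model. The sketch below fits logistic-regression weights to synthetic encounter data whose planted signal deliberately mirrors the article's finding (morbidity and severity dominate, followed by utilization and demographics); the data, coefficients, and fitting method are illustrative assumptions, not UPH's ensemble:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
# Synthetic encounters: columns stand in for the four feature families
X = rng.normal(size=(n, 4))  # [morbidity, severity, utilization, demographics]
# Planted signal mirrors the article's ordering of influence
true_w = np.array([1.5, 1.0, 0.4, 0.1])
p_true = 1 / (1 + np.exp(-(X @ true_w)))
y = (rng.random(n) < p_true).astype(float)

# Fit logistic-regression weights by full-batch gradient descent
w = np.zeros(4)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / n

for name, weight in zip(["morbidity", "severity", "utilization", "demographics"], w):
    print(f"{name}: {weight:+.2f}")
```

With enough encounters, the learned weights recover the planted ordering, which is the sense in which the algorithm "appropriately weights" each variable from historical data.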
A Predictive Model Like No Other
Included in the feedback process and in the project’s development was Christopher Hill, D.O., medical director for clinical performance at UPH, who says that traditional models, such as LACE, might be based upon a DRG (diagnosis-related group) or a specific diagnosis. Citing sepsis patients as an example, Hill says that, traditionally, “We may have looked at all sepsis patients as the same or maybe we looked at severe sepsis versus sepsis differently. But we did not look at the individual patient level where we take variables that statistically make a lot of sense, and we are seeing boots-on-the-ground activity around tying interventions to those variables and what we see on that heat map,” he says. “It’s quite different than traditional models because we are able to look at one patient’s individual risk which might be different than a cohort that we could have lumped [together] before,” he adds.
Speaking to the back-end technical side of the project, Cleveland says that continuously-learning ensemble algorithms are implemented and are supplemented with “a rich feature set to compute individualized predictions for all of our regions.” He adds, “The analytics engine operates in near real time, providing a robust risk profile across the care continuum to support not only the creation of a personalized post-discharge plan, but also simultaneously assessing the likelihood of plan success each day away from the hospital by tracking each patient’s appointment plan and alerting the care team to appointments that the patient is unlikely to show up for.”
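The idea of a day-by-day risk profile across the 30-day window, conceptually one row of the heat map, can be sketched as follows. Both the hazard shape and the patient multiplier here are invented for illustration and are not UPH's analytics engine:

```python
import math

def daily_risk_profile(baseline_hazard, patient_multiplier, horizon_days=30):
    """Turn a per-day readmission hazard into one patient's 30-day
    risk profile. All inputs are illustrative assumptions."""
    profile = []
    surviving = 1.0  # probability of no readmission before day d
    for d in range(horizon_days):
        hazard = min(baseline_hazard[d] * patient_multiplier, 1.0)
        profile.append(surviving * hazard)  # P(readmitted on exactly day d)
        surviving *= 1.0 - hazard
    return profile

# Assumed shape: hazard highest right after discharge, decaying over the month
baseline = [0.02 * math.exp(-d / 10) for d in range(30)]
profile = daily_risk_profile(baseline, patient_multiplier=2.0)
peak_day = max(range(30), key=lambda d: profile[d])
```

A care team could read such a profile the way the article describes the heat map being used: the peak-risk days suggest when a follow-up appointment or outreach call is most urgent.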
Indeed, UPH officials note that one benefit of the heat map is that it can inform the ideal scheduling time of post-discharge follow-up appointments in order to address issues contributing to readmission before it is too late. And beyond that, Hill says that the heat map allows UPH leaders to continue to mature the prescriptiveness with which they can make decisions. For instance, there might be patients who value home care services, or who might be better served in a skilled nursing facility soon after discharge since they are at such high risk in the first week. “So we are continuing to mature that. That is the work we are actively working on—tying in all of the interventions, getting more perspective, and understanding how to put all that picture together, from the data to the appropriate interventions and [seeing] if it changed the outcome or their future chance of readmission,” says Hill.
Harms speaks further to the personalization aspect of the heat map, noting that before it was developed, the focus may have been on getting all discharged patients into their primary care practice within seven days, as that was a measure followed by the care teams. “But now, a personalized view of the patient is [created], and maybe it’s realized that seven days is too late to intervene for that patient. We moved [away from] a one-size-fits-all approach of follow-up within a week,” she says.
UPH leaders who were interviewed for this story also note how important it was to create a data-driven culture that would be accepted by everyone involved. Speaking to the change management piece involved, Betsy McVay, vice president and chief analytics officer, UPH, says that all of the care team groups were asked what was important to them and what would impact their workflow. “And we were purposeful about addressing the ones that were important to them. It continues to support our overall system work to being very data- and information-driven, and being embedded in helping to enable clinical outcome improvement,” says McVay.
Currently, all UPH regions are either using this model or are getting started with it, officials say. The pilot site used to develop and adopt the readmission heat map has improved its risk-adjusted readmission index by 40 percent.
And moving forward, the project’s leaders have ambitious goals on how to continue evolving. From a technical standpoint, says Cleveland, “I would like to see how we can use analytics to inform the intervention that occurs with the patient. But I think our data and means for extracting data need to mature first.” He continues, “So for all of our readmitted patients, if I read the notes from the doctor and other care team members, are there indicators? And can I extract data in a meaningful way to actually find that signal that allows us to create the recommendations for those interventions?”
Meanwhile, Harms says she would like to continue exploring how predictive modeling can create a roadmap for the patient ahead, and how that supports the care team in helping patients navigate their healthcare journeys. She says, “In this particular example, we have layered a length-of-stay model with the readmission risk model as well as our no-show model in the clinics to better understand who is unlikely to show up for their appointments. Some other predictive modeling might look at six months out for risk of admissions, so we have an opportunity to continue to layer these together to better provide a view of what the next six months or longer might look like for our patients.”
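The model layering described above, with readmission risk, predicted length of stay, and no-show probability feeding one view, might be combined along the following lines. The scores, field names, and weighting rule are purely illustrative, not UPH's models:

```python
# Layering illustrative model outputs into one care-coordination view.
# All scores, names, and the weighting rule below are hypothetical.
patients = [
    {"id": "A", "readmit_risk": 0.35, "no_show_prob": 0.60, "pred_los_days": 6},
    {"id": "B", "readmit_risk": 0.12, "no_show_prob": 0.10, "pred_los_days": 3},
    {"id": "C", "readmit_risk": 0.28, "no_show_prob": 0.45, "pred_los_days": 9},
]

def coordination_priority(p):
    # High readmission risk matters more when the planned follow-up
    # visit is also unlikely to happen (high no-show probability).
    return p["readmit_risk"] * (1 + p["no_show_prob"])

ranked = sorted(patients, key=coordination_priority, reverse=True)
for p in ranked:
    print(p["id"], round(coordination_priority(p), 3))
```

A fuller version would also fold in the predicted length of stay and longer-horizon admission risk, per the layering described in the quote; this toy ranking only shows the mechanics of combining two model outputs into one prioritized worklist.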