
At Franciscan Health, an Analytics-Driven Initiative is Improving Patient Care and Reducing Costs

June 11, 2018
by Heather Landi
The health system identified a $655,000 gap in care costs between its best- and average-performing physicians

Many patient care organizations are operationally focused on improving clinical and financial performance to succeed in a value-based environment. One of the primary ways to drive performance improvement is to leverage data and analytics to address care variations in clinical practice.

Franciscan Health, a 14-hospital health system based in Mishawaka, Indiana, and serving patients in Indiana, Illinois and Michigan, is driving results in this area by using a technology solution to analyze the system’s rich data to assess performance and, ultimately, reduce costs. Back in late 2012, Franciscan Health executive leaders began a system-wide effort to address clinical quality improvement.

“The leadership really looked at where the direction is headed as far as fee-for-value and trying to identify ways to tackle care variation and clinical quality improvement initiatives, and they wanted it to be an effort across the system,” David Kim, director of strategic and decision support at Franciscan Alliance, says. “They created what we call clinical operations groups, and they are all physician-led committees that are headed by the chief medical officer and/or the vice presidents of medical affairs (VPMAs) for each of the facilities and regions that we have.”

The first key step to this work was top leadership prioritizing the effort, drafting guidance and getting the right people to the table, Kim says. In addition to being physician-led, the clinical operations groups also are multidisciplinary teams, including leaders from nursing, pharmacy, case management and social work, “across the whole patient care continuum,” Kim says. “We were trying to get major departments together to tackle some of the areas, whether it was utilization, patient flow, performance and quality measurement. It’s now been in existence for six or seven years, and it’s been a concerted effort to have everybody focused on an on-going basis.”

Working with Skokie, Ill.-based Kaufman Hall, a provider of enterprise performance management software and consulting services, project leaders used the company’s Peak Software platform to analyze utilization, quality and cost data and internal and external benchmarks. Specifically, the team looked at four key pieces of data that indicate performance: lengths of stay (LOS), readmissions, risk-adjusted mortality rates and adjusted direct costs.
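
The article does not detail how the Peak platform computes these measures, but the general shape of this kind of roll-up is straightforward. The Python/pandas sketch below, which uses hypothetical column names and benchmark values, shows how the four metrics might be summarized per attending physician and compared against a benchmark; it illustrates the concept rather than the vendor's implementation.

```python
import pandas as pd

def performance_summary(encounters: pd.DataFrame, benchmarks: dict) -> pd.DataFrame:
    """Summarize the four key metrics per attending physician and compare to benchmarks.

    Assumes `encounters` has one row per inpatient case with (hypothetical) columns:
    physician_id, los_days, readmitted_30d (0/1), died (0/1),
    expected_deaths (risk-adjusted expectation for the case), direct_cost.
    """
    grouped = encounters.groupby("physician_id")
    summary = pd.DataFrame({
        "avg_los": grouped["los_days"].mean(),
        "readmit_rate": grouped["readmitted_30d"].mean(),
        # observed-to-expected ratio as a simple risk-adjusted mortality index
        "mortality_oe": grouped["died"].sum() / grouped["expected_deaths"].sum(),
        "avg_direct_cost": grouped["direct_cost"].mean(),
    })
    # gap vs. benchmark: positive values mean worse than the benchmark
    for metric, bench in benchmarks.items():
        summary[f"{metric}_vs_benchmark"] = summary[metric] - bench
    return summary

# Usage with made-up benchmark values (keys must match the summary columns):
# report = performance_summary(encounters, {"avg_los": 4.2, "readmit_rate": 0.15})
```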


“The idea was to tackle care variation, looking at resource utilization, as well as looking at performance improvement for length of stay, readmissions and mortality rates and some of the quality metrics that we get monitored and measured on by CMS (the Centers for Medicare & Medicaid Services) and on pay-for-performance areas,” Kim says.


He continues, “Each region and facility was given some flexibility as to challenges specific to them, so, in other words, they would prioritize different conditions, but across the board we started off with targeted conditions like heart failure, pneumonia and sepsis. Those were common challenges across all the facilities, so those were some of the early wins, trying to build some momentum by targeting a few conditions rather than biting off too much at once.” He adds, “That started to have a halo effect; improving one condition, especially heart failure, for example, affects a large volume of patients, and it has a halo effect in terms of improving other conditions.”

Kim notes that the Peak software platform includes clinical performance benchmarks at the national, state and hospital level. “The platform was very flexible in terms of giving us an ability to target and customize and provide ‘apples-to-apples’ analysis,” he says. “Their system helps us to group, to customize and profile; having that flexibility was one of the key components in trying to drill into some of these high-level opportunities. Choosing the right content as well as the analytic engine to drill down was really paramount in our process.”

The analytics tool allowed project leaders to integrate data sources, perform custom analytics and access a large library of benchmarks. The IT team was able to leverage the health system’s Epic electronic health record (EHR) to mine data for detailed internal process metrics. “In order to put that into perspective, it was important to have another engine and comparison point with benchmarks,” Kim says. “We can compare ourselves historically, that’s one thing. We may pat ourselves on the back if we improve by half a day or so, but if we’re still a day off the benchmark that lays the groundwork to push ourselves a little bit further and not just settle with historical improvement.”

He continues, “Risk adjustments are a part of that too, especially when you work with physicians; they always come up with explanations, such as my patients are sicker or I have a more challenging population to work with. So, the analytic tool has done some risk adjustment for us. So, we know that it’s apples to apples that we’re comparing heart failures at various levels of acuity, pneumonia patients at various levels of acuity, and knowing that patients are very different, we had to treat those things condition by condition, rather than trying to roll them up and then have some challenges with identifying where some of the opportunities are.”
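
The article does not spell out the risk-adjustment method Franciscan used. A common, minimal approach is indirect standardization, in which each physician's observed rate is compared with the rate expected given the same mix of conditions and acuity levels. The sketch below assumes hypothetical column names and is offered only as an illustration of that general idea.

```python
import pandas as pd

def risk_adjusted_index(cases: pd.DataFrame, metric: str) -> pd.Series:
    """Indirectly standardized observed/expected index per physician.

    Assumes `cases` has columns: physician_id, condition, acuity_level,
    and a binary outcome column named by `metric` (e.g. "died" or "readmitted_30d").
    An O/E ratio of 1.0 means the physician matches the system-wide rate
    for the same mix of conditions and acuity levels.
    """
    # expected rate for each condition/acuity stratum, computed system-wide
    expected_rate = cases.groupby(["condition", "acuity_level"])[metric].transform("mean")
    cases = cases.assign(expected=expected_rate)
    per_md = cases.groupby("physician_id")
    return per_md[metric].sum() / per_md["expected"].sum()

# Usage: mortality_index = risk_adjusted_index(cases, "died")  # 1.0 = at expectation
```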

From Data Analysis to Actionable Insights

The aim of this “apples-to-apples” analysis was to produce actionable data that could be used to eliminate or decrease performance gaps. As a result of this analytics work, the data helped to identify high-performing physicians, or what the clinical operations groups refer to as “best performers,” and then also revealed dramatic variances between the health system’s best-performing and average physicians.

Specifically, the data indicated that the best-performing doctors had a zero percent mortality rate for heart failure patients, compared to 5.5 percent among the health system’s lowest performers. What’s more, the average LOS was 39 percent lower among the best-performing physicians, and 30-day readmission rates were 42 percent lower. In addition, among the best-performing doctors, direct costs were 25 percent lower.

“These ‘best performers’ were statistically better than the benchmark in length of stay, statistically better than the benchmark in mortality rate, better than our system average for readmissions; that’s an area we didn’t have national benchmarks for, so we used a system average, for example. And, then, after grouping them that way, we then analyzed their cost per patient and realized there were some significant differences in terms of the costs as well,” Kim says.

Upon an even deeper analysis, the team found that, after accounting for reduced respiratory treatments, fewer lab tests and less time spent in an intensive telemetry bed across 4,996 patient cases over two years, the best-performing physicians’ total cost of care was $654,609 lower than that of average-performing physicians. Franciscan Health has since leveraged these findings to help its lower-performing physicians bring their practice in line with their best-performing colleagues, with the goal of not only improving patient care but also reducing overall costs.
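
To put that figure in perspective, a quick back-of-envelope division (not stated in the article) shows the gap works out to a little over $130 per case across those two years:

```python
# Back-of-envelope arithmetic implied by the reported figures; not from the source.
total_gap_dollars = 654_609   # best- vs. average-performer total cost of care
patient_cases = 4_996         # cases reviewed over two years
print(round(total_gap_dollars / patient_cases, 2))  # ~131.03 dollars per case
```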

“By segmenting performance and creating tiers of performance by our attending physician groups, we were able to find not just the best performers, but also identify the outliers, and that’s equally impactful because you can then approach those groups of physicians and tackle the ‘why.’ What’s causing them to be significantly worse in each of these areas? And then try to address that aggressively as you would try to identify examples of who you want to emulate and figure out best practices among best performers,” Kim says.

Data and analytics played a vital role in bringing these key insights to light so that action could be taken. “Physicians by nature are competitive, so when they know they are below average, or an outlier, that does grab their attention. Once the physicians understand the information and know that it has been risk-adjusted, verified and validated, then they are more eager to engage on ‘where should I improve?’” Kim says. And, he adds, “Nine times out of 10 they know they can improve; they just need some data to back up that perception.”

Project leaders also recognized the importance of targeting frontline caregivers in this performance improvement work. New positions, called physician advisors, were created, and these positions function as the “right-hand men and women” for the chief medical officers or VPMAs. Physician advisors review the care variation data and consult with the “outliers” to focus on improvement.

The clinical operations groups also created interdisciplinary care coordination rounds (ICCRs), or daily rounding teams. “We have multidisciplinary frontline staff rounding on patients, identifying those who were beyond the benchmark in terms of expected length of stay, so we put a lot of additional practice, if you will, into hard-wiring some more preventative work. The aim was to tackle the issues as they come rather than continually look at these retrospectively. All of that, along with better engagement with the physician advisor position, helped to take the data and translate it into some meaningful results and response,” Kim says.

As a result of this data-driven performance improvement initiative, Franciscan Health has realized $25 million in cost savings since 2012, driven by reductions in length of stay and utilization. “For example, we reduced the use of red blood cells as a result of changing protocols, verifications and the number of units being administered, so that we reduced a lot of waste and unnecessary transfusion of units of blood,” Kim says. The clinical performance improvements also helped to reduce hospital stays, saving more than 25,000 risk-adjusted days over the past five years, which drove the bulk of the cost savings.
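
A similar back-of-envelope check (again, not stated in the article, and an upper bound, since the avoided days drove only the bulk of the savings rather than all of it) puts the implied value of each avoided risk-adjusted day at no more than about $1,000:

```python
# If all $25M were attributed to the 25,000 avoided days, the per-day value would be:
print(25_000_000 / 25_000)  # 1000.0 dollars per avoided risk-adjusted day (upper bound)
```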

Kim notes that while technology is foundational to this work, getting the right stakeholders to the table to help drive compliance and standardization is vital for success.

“As a data person, it’s important to have strong content and strong engines to help you analyze, risk adjust and drill into the data so you can dispel or confirm anecdotes. To me, it’s an iterative process, starting with the overall high-level metrics and getting feedback from all the stakeholders, nursing, physicians, clinical departments and support staff, to help us identify opportunities and drill into them. You don’t just want analysts in a back room crunching numbers; you want the data presented and those stakeholders engaged in talking about it and why it drives change, because what you measure is what you can change. I think all that became very tangible for us when we created these groups, so that they review the information and better understand how to interpret it and how to drill down into those opportunities,” he says.

While these efforts have driven significant results so far, the performance improvement work continues, Kim says. “We’re still working on improving and hard wiring changes and processes. Sometimes it takes another system push to reinvigorate efforts. We’re doing the same thing now with identifying new benchmarks and really trying to transform some of the processes that we have today.”

He adds, “It’s always a back and forth in terms of identifying new benchmarks, knowing that, nationally, everybody improves on length of stay and readmissions. Even though we may improve, we know it’s a moving target. Many penalties are based on a curve; if you’re in the bottom quartile, you have to work twice as hard to get out of that area since everybody is trying to improve in the same initiative.”

Moving forward, Franciscan Health leaders are focused on using data to shift from retrospective analysis to real-time care practice, he says. “A buzzword I hear now, and we certainly use it too, is predictive analytics, being able to better manage populations and using data to give us indicators as to, is this patient highly likely to be readmitted?”

“To me, that takes a lot of integration in the future,” he adds, “and that’s an area that we are constantly striving for, that integration of data, both for retrospective as well as predictive analytics. Retrospectively, we’re still connecting the dots on the cost of a complication, the cost of worsening performance, and linking also to patient satisfaction. That’s a key component that a lot of people are realizing—that highly satisfied patients typically are the ones that are hitting those benchmarks as well. Being able to link all of those areas and understand the correlations of each is a continued exploration for us,” he says.

 

 

 





Definitive Healthcare Acquires HIMSS Analytics’ Data Services

January 16, 2019
by Rajiv Leventhal, Managing Editor

Definitive Healthcare, a data analytics and business intelligence company, has acquired the data services business and assets of HIMSS Analytics, the organizations announced today.

The purchase includes the Logic, Predict, Analyze and custom research products from HIMSS Analytics, which is commonly known as the data and research arm of the Healthcare Information and Management Systems Society.

According to Definitive officials, the acquisition builds on the company’s “articulated growth strategy to deliver the most reliable and consistent view of healthcare data and analytics available in the market.”

Definitive Healthcare will immediately begin integrating the datasets and platform functionality into a single source of truth, their executives attest. The new offering will aim to include improved coverage of IT purchasing intelligence with access to years of proposals and executed contracts, enabling transparency and efficiency in the development of commercial strategies.

Broadly, Definitive Healthcare is a provider of data and intelligence on hospitals, physicians, and other healthcare providers. Its product suite provides comprehensive data on 8,800 hospitals, 150,000 physician groups, 1 million physicians, 10,000 ambulatory surgery centers, 14,000 imaging centers, 86,000 long-term care facilities, and 1,400 ACOs and HIEs, according to officials.

Together, Definitive Healthcare and HIMSS Analytics have more than 20 years of experience in data collection through exclusive methodologies.

“HIMSS Analytics has developed an extraordinarily powerful dataset including technology install data and purchasing contracts among other leading intelligence that, when combined with Definitive Healthcare’s proprietary healthcare provider data, will create a truly best-in-class solution for our client base,” Jason Krantz, founder and CEO of Definitive Healthcare, said in a statement.



Machine Learning Survey: Many Organizations Several Years Away from Adoption, Citing Cost

January 10, 2019
by Heather Landi, Associate Editor

Radiologists and imaging leaders see an important role for machine learning in radiology going forward; however, most organizations are still two to three years away from adopting the technology, and a sizeable minority have no plans to adopt machine learning, according to a recent survey.

A recent study* by Reaction Data sought to examine the hype around artificial intelligence and machine learning, specifically in the area of radiology and imaging, to uncover where AI might be more useful and applicable and in what areas medical imaging professionals are looking to utilize machine learning.

Reaction Data, a market research firm, got feedback from imaging professionals, including directors of radiology, radiologists, chiefs of radiology, imaging techs, PACS administrators and managers of radiology, from 152 healthcare organizations to gauge the industry on machine learning. About 60 percent of respondents were from academic medical centers or community hospitals, while 15 percent were from integrated delivery networks and 12 percent were from imaging centers. The remaining respondents worked at critical access hospitals, specialty clinics, cancer hospitals or children’s hospitals.

Among the survey respondents, there was significant variation in the number of annual radiology studies performed: 17 percent performed 100,000 to 250,000 studies each year; 16 percent performed 1 million to 2 million; 15 percent performed 5,000 to 25,000; 13 percent performed 250,000 to 500,000; and 10 percent performed more than 2 million studies a year.

More than three quarters of imaging and radiology leaders (77 percent) view machine learning as being important in medical imaging, up from 65 percent in a 2017 survey. Only 11 percent view the technology as not important. However, only 59 percent say they understand machine learning, although that percentage is up from 52 percent in 2017. Twenty percent say they don’t understand the technology, and 20 percent have a partial understanding.

Looking at adoption, only 22 percent of respondents say they are currently using machine learning—either just adopted it or have been using it for some time. Eleven percent say they plan to adopt the technology in the next year.

Half of respondents (51 percent) say their organizations are one to two years away (28 percent) or even more than three years away (23 percent) from adoption. Sixteen percent say their organizations will most likely never utilize machine learning.

Reaction Data collected commentary from survey respondents as part of the survey and some respondents indicated that funding was an issue with regard to the lack of plans to adopt the technology. When asked why they don’t ever plan to utilize machine learning, one respondent, a chief of cardiology, said, “Our institution is a late adopter.” Another respondent, an imaging tech, responded: “No talk of machine learning in my facility. To be honest, I had to Google the definition a moment ago.”

Survey responses also indicated that imaging leaders want machine learning tools to be integrated into PACS (picture archiving and communication systems) software, and that cost is an issue.

“We'd like it to be integrated into PACS software so it's free, but we understand there is a cost for everything. We wouldn't want to pay more than $1 per study,” one PACS Administrator responded, according to the survey.

A radiologist who responded to the survey said, “The market has not matured yet since we are in the research phase of development and cost is unknown. I expect the initial cost to be on the high side.”

According to the survey, when asked how much they would be willing to pay for machine learning, one imaging director responded: “As little as possible...but I'm on the hospital administration side. Most radiologists are contracted and want us to buy all the toys. They take about 60 percent of the patient revenue and invest nothing into the hospital/ambulatory systems side.”

And, one director of radiology responded: “Included in PACS contract would be best... very hard to get money for this.”

The survey also indicates that, among organizations that are using machine learning in imaging, there is a shift in how organizations are applying machine learning in imaging. In the 2017 survey, the most common application for machine learning was breast imaging, cited by 36 percent of respondents, and only 12 percent cited lung imaging.

In the 2018 survey, only 22 percent of respondents said they were using machine learning for breast imaging, while there was an increase in other applications. Among respondents who have adopted and use machine learning, the next most-cited applications were lung imaging (22 percent), cardiovascular imaging (13 percent), chest X-rays (11 percent), bone imaging (7 percent), liver imaging (7 percent), neural imaging (5 percent) and pulmonary imaging (4 percent).

When asked what kind of scans they plan to apply machine learning to once the technology is adopted, one radiologist cited quality control for radiography, CT (computed tomography) and MR (magnetic resonance) imaging.

The survey also examines the vendors being used, among respondents who have adopted machine learning, and the survey findings indicate some differences compared to the 2017 survey results. No one vendor dominates this space, as 19 percent use GE Healthcare and about 16 percent use Hologic, which is down compared to 25 percent of respondents who cited Hologic as their vendor in last year’s survey.

Looking at other vendors being used, 14 percent use Philips, 7 percent use Arterys, and 3 percent use Nvidia, while Zebra Medical Vision and iCAD were each cited by 5 percent of medical imaging professionals. The percentage of imaging leaders citing Google as their machine learning vendor dropped from 13 percent in 2017 to 3 percent in this latest survey. Interestingly, the number of respondents reporting the use of homegrown machine learning solutions increased to 14 percent, from 9 percent in 2017.

 

*Findings were compiled from Reaction Data’s Research Cloud. For additional information, please contact Erik Westerlind at ewesterlind@reactiondata.com.

 



Drexel University Moves Forward on Leveraging NLP to Improve Clinical and Research Processes

January 8, 2019
by Mark Hagland, Editor-in-Chief
At Drexel University, Walter Niemczura is helping to lead an ongoing initiative to improve research processes and clinical outcomes by leveraging NLP technology

Increasingly, the leaders of patient care organizations are using natural language processing (NLP) technologies to leverage unstructured data, in order to improve patient outcomes and reduce costs. Healthcare IT and clinician leaders are still relatively early in the long journey towards full and robust success in this area; but they are moving forward in healthcare organizations nationwide.

One area in which learnings are accelerating is medical research—both basic and applied. Numerous medical colleges are moving forward in this area, with strong results. Drexel University in Philadelphia is among that group. There, Walter Niemczura, director of application development, has been helping to lead an initiative that supports research and patient care efforts at the Drexel University College of Medicine, one of the nation’s oldest medical colleges (it was founded in 1848), and across the university. Niemczura and his colleagues have been partnering with the Cambridge, England-based Linguamatics to engage in text mining that can support improved research and patient care delivery.

Recently, Niemczura spoke with Healthcare Informatics Editor-in-Chief Mark Hagland, regarding his team’s current efforts and activities in that area. Below are excerpts from that interview.

Is your initiative moving forward primarily on the clinical side or the research side, at your organization?

We’re making advances that are being utilized across the organization. The College of Medicine used to be a wholly owned subsidiary of Drexel University. About four years ago, we merged with the university, and two years ago we lost our CIO to the College of Medicine. And now the IT group reports to the CIO of the whole university. I had started here 12 years ago, in the College of Medicine.


And some of the applications of this technology are clinical and some are non-clinical, correct?

Yes, that’s correct. Our data repository is used for clinical and non-clinical research. Clinical: College of Medicine, College of Nursing, School of Public Health. And we’re working with the School of Biomedical Engineering. And the College of Arts and Sciences, mostly with the Psychology Department. But we’re using Linguamatics only on the clinical side, with our ambulatory care practices.

Overall, what are you doing?

If you look at our EHR [electronic health record], there are discrete fields that might have diagnosis codes, procedure codes and the like. Let’s break apart some of that. Let’s say our HIV Clinic—they might put down HIV as a diagnosis, but in the notes might mention hepatitis B, but they’re not putting that down as a co-diagnosis; it’s up to the provider how they document. So here’s a good example: HIV and hepatitis C have frequent comorbidity. So our organization asked a group of residents to go in and look at 5,700 patient charts for patients with HIV and hepatitis C. Anybody in IT could say, we have 677 patients with both. But doctors know there’s more to the story. So it turns out another 443 had HIV in the code and hep C mentioned in the notes. Another 14 had hep C in the code, and HIV in the notes.

So using Linguamatics, it’s not 5,700 charts that you need to look at, but roughly 1,150: the 677 patients who had both codes, plus the roughly 460 whose second condition was mentioned only in the notes. Before Linguamatics, residents had to look at all 5,700 charts in cases like this one.
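
The Linguamatics queries themselves are not shown in the article. As a rough illustration of the underlying pattern (cross-checking coded diagnoses against term matches in free-text notes), here is a minimal Python sketch with hypothetical column names and illustrative code sets; production clinical NLP also has to handle negation, abbreviations and context, which a plain regex does not.

```python
import re
import pandas as pd

# Illustrative patterns for mentions of the two conditions in free-text notes.
HIV_RX = re.compile(r"\bHIV\b", re.IGNORECASE)
HEPC_RX = re.compile(r"\bhep(atitis)?\s*C\b", re.IGNORECASE)

def flag_comorbidity(charts: pd.DataFrame) -> pd.DataFrame:
    """Flag charts where HIV and hep C appear either as coded diagnoses or in the notes.

    Assumes `charts` has columns: patient_id, dx_codes (a Python set of ICD codes),
    note_text (concatenated free text). The code sets below are illustrative only.
    """
    hiv_codes = {"B20"}       # illustrative ICD-10 code for HIV disease
    hepc_codes = {"B18.2"}    # illustrative ICD-10 code for chronic hepatitis C
    out = charts.copy()
    out["hiv_coded"] = out["dx_codes"].apply(lambda s: bool(s & hiv_codes))
    out["hepc_coded"] = out["dx_codes"].apply(lambda s: bool(s & hepc_codes))
    out["hiv_noted"] = out["note_text"].str.contains(HIV_RX).fillna(False)
    out["hepc_noted"] = out["note_text"].str.contains(HEPC_RX).fillna(False)
    # review list: evidence of both conditions from any combination of codes and notes
    out["needs_review"] = (out["hiv_coded"] | out["hiv_noted"]) & (out["hepc_coded"] | out["hepc_noted"])
    return out
```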

So this was a huge time-saver?

Yes, it absolutely was a huge time-saver. When you’re looking at hundreds of thousands or millions of patient records, the value might be not the ones you have to look at, but the ones you don’t have to look at. And we’re looking at operationalizing this into day-to-day operations. While we’re billing, we can pull files from that day and say, here’s a common co-morbidity—HIV and hep C, with hep C mentioned in those notes—and is there a missed opportunity to get the discrete fields correct?

Essentially, then, you’re making things far more accurate in a far more efficient way?

Yes, this involves looking at patient trials on the research side, while on the clinical side, we can have better quality of care, and more updated billing, based on more accurate data management.

When did this initiative begin?

Well, we’ve been working with Linguamatics for six or seven years. Initially, our work was around discrete fields. The other type of work has to do with free text. Our rheumatology department wanted to find out which patients had had particular tests done; they’re looking for terms in the notes. When a radiologist does a report on your x-ray, it’s not like a test for diabetes, where a blood sugar number comes out; x-rays are read and interpreted. The radiologists gave us key words to search for: sclerosis, erosions, bone edema. There are about 30 words in all. Instead of reviewing everyone who had these x-rays or MRIs done, we found that roughly 400 patients had these terms in their reports, which greatly reduced the number of charts to review. The rheumatology department was looking to recruit patients who had had x-rays done and had these kinds of findings.

So the rheumatology people needed to identify certain types of patients, and you needed to help them do that?

Yes, that’s correct. Now, you might say, we could do word search in Microsoft Word; but the word “erosion” by itself might not help. You have to structure your query to be more accurate, and exclude certain appearances of words. And Linguamatics is very good at that. I use their ontology, and it helps us understand the appearance of words within structure. I used to be in telecommunications. When all the voice-over IP came along, there was confusion. You hear “buy this stock,” when the message was, “don’t buy this stock.”
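
Niemczura’s telecom analogy points at negation and context handling, which is exactly where simple keyword search breaks down. The sketch below is a crude stand-in (a window-based negation check with hypothetical cue words), not how Linguamatics actually works, but it shows why “bone edema” and “no bone edema” need to be treated differently.

```python
import re

NEGATION_CUES = re.compile(r"\b(no|not|without|denies|negative for)\b", re.IGNORECASE)

def positive_mentions(report: str, terms: list[str], window: int = 40) -> list[str]:
    """Return terms that appear in the report WITHOUT a nearby negation cue.

    A crude stand-in for what an NLP engine does with linguistic structure:
    'bone edema' counts, 'no bone edema' does not. `window` is the number of
    characters before the term that are checked for a negation cue.
    """
    found = []
    for term in terms:
        for match in re.finditer(re.escape(term), report, re.IGNORECASE):
            preceding = report[max(0, match.start() - window):match.start()]
            if not NEGATION_CUES.search(preceding):
                found.append(term)
                break  # one positive mention is enough for this term
    return found

# Example: only "sclerosis" is returned; "bone edema" is negated.
# positive_mentions("Mild sclerosis noted. No bone edema.", ["sclerosis", "erosions", "bone edema"])
```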

So this makes identifying certain elements in text far more efficient, then, correct?

Yes—the big buzzword is unstructured data.

Have there been any particular challenges in doing this work?

One is that this involves an iterative process. For someone in IT, we’re used to writing queries and getting them right the first time. This is a different mindset. You start out with one query and want to get results back. You find ways to mature your query; at each pass, you get better and better at it; it’s an iterative process.

What have your biggest learnings been in all this, so far?

There’s so much promise—there’s a lot of data in the notes. And I use it now for all my preparatory research. And Drexel is part of a consortium here called Partnership In Educational Research—PIER.

What would you say to CIOs, CMIOs, CTOs, and other healthcare IT leaders, about this work?

My recommendation would be to dedicate resources to this effort. We use this not only for queries, but to interface with other systems. And we’re writing applications around this. You can get a data set out and start putting it into your work process. It shouldn’t be considered an ad hoc effort by some of your current people.

 

 

