A New Jersey Medical Center Takes the Initiative in Early Sepsis Detection | Healthcare Informatics Magazine

A New Jersey Medical Center Takes the Initiative in Early Sepsis Detection

April 12, 2018
by Rajiv Leventhal

Sepsis management has long been a challenge for hospitals throughout the U.S., and severe sepsis carries high mortality rates and costs the healthcare system billions of dollars each year. Indeed, in America, sepsis kills more people than AIDS, breast cancer, and stroke—combined.

To help combat this national healthcare epidemic, more and more hospitals and health IT companies are teaming up to deploy technology that can help detect sepsis early on, as research has proven that early intervention will lead to higher survival rates. In New Jersey, for instance, the Cape May-based Cape Regional Medical Center (CRMC) has linked up with Dascena, a California-based healthcare startup that develops predictive algorithms which have the potential to facilitate the timely and accurate diagnosis of complex conditions.

In particular, Dascena’s sepsis detection algorithm, InSight, has been shown to significantly improve mortality rate, average length of stay, and readmission rate among several patient populations, including those at CRMC. Andrea McCoy, M.D., chief medical officer at Cape Regional Medical Center, says that the prime motivation behind the collaboration was the recognition that the most important factor in surviving sepsis is the early detection and early intervention for the patient. “We needed to find a way to identify patients sooner, compared to our existing process [for sepsis screening],” she admits.

Prior to implementing InSight, CRMC was using a twice-a-day manual process that occurred only when the nurses did their screenings. That included looking for certain criteria that suggested the patient might be septic, and beyond that, he or she had to have evidence of organ dysfunction as well, McCoy explains. But the issue with that process was that twice-daily screenings meant there were 11 hours when the patient wasn’t being specifically screened, unless there were dramatic changes in his or her health.

As such, subtle findings that were not present until later in the illness were being missed, leading to patients being diagnosed much later in their illness. Further, McCoy explains, there was previously no formal process in the ER to screen for sepsis. But now, InSight “enables us to screen those patients almost at their point of entry into the healthcare system, as well as have a continual evaluation for the earlier signs of sepsis on the inpatient unit,” she offers.



At a high level, the algorithm looks at trends and correlations in the variables that might contribute to sepsis, explains Ritankar Das, CEO of Dascena. While the tool requires only vital signs, it can incorporate other measurements as they become available. And it does all of that automatically; it picks everything up from the EHR (electronic health record), meaning there is no extra work for clinicians, Das says.

The tool is also trying to help clinicians understand how the pattern looks relative to the millions of patients it has seen before. So for instance, if that pattern is a high-risk one for developing sepsis or one of its downstream complications, such as severe sepsis or septic shock, then if there is a high enough risk, a notification gets sent to the provider. In essence, it is learning what has happened with lots of other patients in the past, using just a small amount of information that’s available in the patient chart, and doing that continuously—in turn, updating the risk profile as time evolves, Das elucidates.
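Dascena has not published InSight's internals here, but the continuous scoring-and-alerting loop Das describes can be sketched schematically. Everything below (the vital-sign field names, the toy scoring rule, and the alert threshold) is an invented placeholder, not the actual algorithm:

```python
# Schematic sketch of a continuous risk-scoring loop like the one Das
# describes. The scoring rule, field names, and threshold are invented
# placeholders standing in for a trained model; this is NOT InSight.

ALERT_THRESHOLD = 0.7  # tuned to balance early detection against alert fatigue

def risk_score(vitals_history):
    """Toy stand-in for a trained model: score the latest vitals plus a trend term."""
    latest = vitals_history[-1]
    score = 0.0
    if latest["heart_rate"] > 90:          # tachycardia
        score += 0.3
    if latest["resp_rate"] > 20:           # tachypnea
        score += 0.3
    if latest["temp_c"] > 38.0 or latest["temp_c"] < 36.0:  # fever or hypothermia
        score += 0.2
    # Trend term: a rising heart rate since the previous reading adds risk.
    if len(vitals_history) >= 2 and latest["heart_rate"] > vitals_history[-2]["heart_rate"]:
        score += 0.2
    return score

def on_new_vitals(patient_id, vitals_history, notify):
    """Called whenever the EHR feed delivers a new set of vitals; the risk
    profile updates continuously and a notification fires above threshold."""
    score = risk_score(vitals_history)
    if score >= ALERT_THRESHOLD:
        notify(patient_id, score)
    return score
```

In a real deployment the scoring function would be the trained model and `notify` would page the provider; the point of the sketch is only the shape of the loop: re-score on every new reading, alert above a tuned threshold.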

The most important findings from the CRMC study, which examined how InSight could improve sepsis-related patient outcomes, were a 9.55 percent reduction in patients’ average length of stay, a 60 percent reduction in in-hospital mortality among patients who presented with sepsis, and a 50 percent reduction in the sepsis-related 30-day readmission rate.

McCoy notes that one of the keys for success was finding the “sweet spot” for the tool’s sensitivity and specificity, “so that you don’t have too many false positives, but also so that you are not missing patients who might have sepsis—which is what happens with lots of the other screening models that are out there.” Indeed, this is what happened with CRMC’s manual screening process, McCoy acknowledges. She says that even with the tool, in the early stages they did experience some alert fatigue because CRMC’s team was initially “generous” in how it was setting limits. “But we found out that doctors were getting lots of calls for patients who didn’t have sepsis. So we had to find the right spot to identify patients who had the early signs of sepsis and not some other disease process. We still will get the occasional patient who doesn’t have sepsis—but it’s nothing like before,” she says.
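The “sweet spot” McCoy describes is a standard threshold-tuning exercise: sweep candidate alert thresholds over historical risk scores with known outcomes, and pick the highest threshold (fewest alerts) that still meets a sensitivity floor. The sketch below uses invented numbers and is not CRMC's actual tuning procedure:

```python
# Illustrative threshold tuning: given risk scores for past patients with
# known sepsis outcomes, find the highest threshold that still catches
# enough true cases. All data here is invented.

def sensitivity_specificity(scores, labels, threshold):
    """Sensitivity = true positives caught; specificity = true negatives spared an alert."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and not y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec

def pick_threshold(scores, labels, min_sensitivity=0.9):
    """Highest threshold (fewest alerts, best specificity) meeting the sensitivity floor."""
    best = None
    for t in sorted(set(scores)):  # ascending, so the last qualifying threshold wins
        sens, spec = sensitivity_specificity(scores, labels, t)
        if sens >= min_sensitivity:
            best = (t, sens, spec)
    return best
```

Raising the threshold trades alert volume (and false positives) against missed cases, which is exactly the alert-fatigue balance the CRMC team had to find.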

McCoy says that her team has been working with others in the state on early identification and to develop sepsis protocols. Since 2015, the Centers for Medicare & Medicaid Services (CMS) has had a national sepsis measure that assesses how well hospitals follow evidence-based protocol care, and just very recently in New Jersey, a law was passed that mandated education and sepsis screening protocols, says McCoy.

But she adds that implementing InSight has helped CRMC “stay ahead of the game,” since many screening protocols start looking later down the line. “Many patients will often present with vague symptoms, and sepsis is always out there hovering in the back of everyone’s minds. So having the ability to identify the early markers is what’s impactful in regard to patients’ outcomes,” McCoy says.



You Have to Learn to Walk Before You Can Run With Predictive Analytics

November 11, 2018
by David Raths, Contributing Editor
Health systems report obstacles in turning their big data into actionable insights

The title of a recent webinar says all you need to know about predictive analytics in healthcare: “Within Sight Yet Out of Reach.”

The Center for Connected Medicine, jointly operated by GE Healthcare, Nokia, and UPMC, put on the webinar and partnered with HIMSS on a survey on the state of predictive analytics in healthcare.

The survey of 100 health IT leaders found that approximately 7 out of 10 hospitals and health systems say they are taking some action to formulate or execute a strategy for predictive analytics. But despite the buzz and potential, there are obstacles for health systems that want to turn their big data into actionable insights.

Although 69 percent said they are effective at using data to describe past health events, 49 percent said they are less effective at using data to predict future outcomes. They cite a lack of interoperability and a shortage of skilled workers as barriers. “They want to put all that data to work to provide insights as we deliver care, but it is not an easy task,” said Oscar Marroquin, M.D., chief clinical analytics officer at UPMC. “They are having trouble getting access to the data in useful and standardized formats and don’t have the people in place to apply machine learning techniques.”

The top five use cases cited in the survey are:



• Fostering more cost-effective care

• Reducing readmissions

• Identifying at-risk patients

• Driving proactive preventive care

• Improving chronic conditions management

UPMC’s journey into the analytics space was jump-started by an institutional commitment to building the analytics program and a recognition that it needed to be a more data-driven organization. “We were never able to consume our data to drive how we deliver care until we had a dedicated team to do analytics,” Marroquin said. “Traditionally these functions were done as a side job by team members in IT systems. We have found having a dedicated team is absolutely necessary.”

Mona Siddiqui, M.D., M.P.H., chief data officer at the U.S. Department of Health & Human Services, says she is focused on the interoperability aspect across 29 agencies. “We are looking at how we are using data across silos to create more business value for the department,” she said. “We don’t have that infrastructure in place yet,” which leads to one-off projects rather than tackling larger priorities. She is focusing on enterprise-level data governance and interoperability structures. “I think the promise of big data is real, but I don’t think a lot of organizations have thought through the tough work required to make it happen. Practitioners start to see it as a buzzword rather than something creating real value. There is a lot of work that needs to happen before we see value coming from data.”

Noting the survey result about human resources, she added that “the talent pool is an incredible challenge. While we talk about sharing data and using it for business intelligence, we don’t resource our teams appropriately to fulfill that promise.”

She said the move to value-based care has made predictive analytics more important to health systems. “It is a data play from the ground up,” and now we are starting to see the real impact in terms of managing chronic conditions. “More organizations like UPMC are seeing this is about data and measurement and bringing in not just data they have, but resources and data they may not have had access to previously.”

Travis Frosch, senior director of analytics at GE Healthcare, said that hospitals generate petabytes of data per year, yet only 3 percent is tagged for later analytical use. “So 97 percent goes down the drain,” he added, suggesting that organizations need to start small. “If you are an organization that does not have maturity in analytics, start with traditional business intelligence to build the trust and foundation to move toward a higher level of analytics maturity,” Frosch said. “Pick projects that don’t require tons of data sources. If you get a good return on investment, you can open up the budget to further your analytics journey. But you have to have a unit in place to measure the impact.”


More From Healthcare Informatics


Survey: More Than Half of Healthcare CIOs Lack Strong Trust in Their Data

November 9, 2018
by Heather Landi, Associate Editor

For U.S. healthcare leaders, trusted data is more important than ever, as their organizations migrate from the fee-for-service model to value-based care. However, a recent survey of CIOs found that less than half of healthcare organizations show very strong levels of trust in their data.

The survey, by Burlington, Mass.-based Dimensional Insight, an analytics and data management solutions provider, is based on responses from 85 members of a professional organization of CIOs and other healthcare IT leaders about trust in data across their enterprises.

During this transition from fee-for-service to value-based care, healthcare organizations must weigh investments, risks, and trade-offs objectively with quantitative, trustworthy data. This kind of data-driven decision-making will be critical in shaping the initiatives and high-stakes choices required by value-based care. The transition will require increased, high-level collaboration among different constituencies within a healthcare enterprise. It also will require decisions to be quantitatively assessed against reliable, trustworthy data, the survey report notes.

The survey sought to gauge the current state of data trust and access: How much trust do CIOs and stakeholders have in their clinical, financial, and operational data these days? How many have direct, self-service access to the information they need to make data-driven decisions? Are healthcare organizations ready to invest funds to improve trust in data and self-service capabilities?

Overall, few organizations have very strong trust in their data while levels of self-service vary across the enterprise, according to the survey. Most healthcare organizations plan to invest money toward improving both data trust and self-service, the survey found.

As part of the survey, CIOs were asked to rate the index of trust in data within their various user communities, on a 1-10 scale, with 10 being the highest. The index of trust was defined as how strongly “user populations believe that they can trust the data provided to make decisions.”

Forty-eight percent of respondents assessed financial data as an 8 or above. The percentage of “8-and-up” responses was 40 percent for clinical and 36 percent for operational.
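The “8-and-up” figures are simple threshold shares over the 1-10 ratings. As a hypothetical illustration (the ratings below are invented, not the Dimensional Insight data):

```python
# Hypothetical illustration of the survey's "8-and-up" calculation:
# the share of 1-10 trust ratings at 8 or above. Ratings are invented.

def share_8_and_up(ratings):
    """Fraction of ratings that score 8 or higher on the 1-10 trust index."""
    return sum(1 for r in ratings if r >= 8) / len(ratings)

financial = [9, 8, 7, 10, 6, 8, 5, 8, 9, 4]  # invented sample of CIO ratings
```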

Clinical users have the lowest levels of self-service in making data-driven decisions. More than half of CIOs report that 30 percent or less of their clinical population has self-service access for data-driven decision-making.

Approximately three-quarters of healthcare organizations plan to increase investments to improve trust in data and self-service capabilities. At least 70 percent responded “yes” to investments in trusted data in each of the three realms. In addition, most organizations (68 to 78 percent) plan to increase their investments toward improving users’ capacity for self-service data analytics.

The survey demonstrates that healthcare organizations have a long way to go in developing rock-solid trust in their data and self-service access to it. The survey results also indicate that executives are aware of these challenges and are ready to dedicate resources to improving both trust and access.

“Trusted data is more important than ever, as healthcare organizations migrate from the fee-for-service model to value-based care,” Fred Powers, president and CEO of Dimensional Insight, said in a statement. “During this transition, healthcare organizations must weigh investments, risks, and tradeoffs against quantitative, trustworthy data. This kind of data driven decision-making will be critical in shaping the initiatives and high-stakes choices required by value-based care.”

Dimensional Insight executives also provide a number of recommendations for improving trust in data and increasing self-service capabilities:

  • Keep subject matter experts close to the data. Healthcare organizations will need programmers and data engineers to extract data from the source systems, but it is the subject matter experts who best understand the data and how it will be used.
  • Automate business logic transformations. More automation is better when it comes to the often complex logic required to transform raw data into meaningful information.
  • Promote transparency and visibility. The best way to make sure data is right is to let people — the frontline information consumers — at it.



Related Insights For: Analytics


Study: AI Falls Short When Analyzing Data Across Multiple Health Systems

November 7, 2018
by Heather Landi, Associate Editor

Artificial intelligence (AI) tools and machine learning technologies hold the promise of transforming healthcare, and there is ongoing discussion about how much of an impact AI and machine learning will have on the practice of medicine and on the business of healthcare overall.

In a recent study, researchers from New York City-based Mount Sinai Hospital and the Icahn School of Medicine at Mount Sinai found that AI may fall short when analyzing data across multiple health systems. In their conclusions, the researchers noted that the study findings indicate healthcare organizations should carefully assess AI tools and their real-world performance. The study was published in a recent special issue of PLOS Medicine on machine learning and health care.

As interest in the use of computer system frameworks called convolutional neural networks (CNN) to analyze medical imaging and provide a computer-aided diagnosis grows, recent studies have suggested that AI image classification may not generalize to new data as well as commonly portrayed, the researchers wrote in a press release about the study.

Early results in using CNNs on X-rays to diagnose disease have been promising, but it has not yet been shown that models trained on X-rays from one hospital or one group of hospitals will work equally well at different hospitals, the researchers stated. Before these tools are used for computer-aided diagnosis in real-world clinical settings, we must verify their ability to generalize across a variety of hospital systems, according to the researchers.

The study is timely given the interest in machine learning, particularly in the area of medical imaging. A survey from Reaction Data found that 84 percent of medical imaging professionals view the technology as being either important or extremely important in medical imaging. What’s more, about 20 percent of medical imaging professionals say they have already adopted machine learning, and about one-third say they will adopt it by 2020.

Breaking it down, 7 percent of respondents said they have just adopted some machine learning and 11 percent say they plan on adopting the technology in the next 12 months. Fourteen percent of respondents said their organizations have been using machine learning for a while. About a quarter of respondents say they plan to adopt machine learning by 2020, and another 25 percent said they are three or more years away from adopting it. Only 16 percent of medical imaging professionals say they have no plans to adopt machine learning.

That survey found that there has been very little adoption by imaging centers, and all of the current adopters are hospitals.

In this particular Mount Sinai study, researchers at the Icahn School of Medicine at Mount Sinai assessed how AI models identified pneumonia in 158,000 chest X-rays across three medical institutions: the National Institutes of Health; The Mount Sinai Hospital; and Indiana University Hospital. Researchers chose to study the diagnosis of pneumonia on chest X-rays for its common occurrence, clinical significance, and prevalence in the research community.

In three out of five comparisons, the CNNs’ performance in diagnosing diseases on X-rays from hospitals outside their own network was significantly lower than on X-rays from the original health system. However, the CNNs were able to detect the hospital system where an X-ray was acquired with a high degree of accuracy, and cheated at their predictive task based on the prevalence of pneumonia at the training institution, according to the study.
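The cross-site evaluation described above can be sketched as a train-on-one-site, test-everywhere loop. The toy “model” below deliberately learns only the training site's disease prevalence, which reproduces in miniature the confounding the study warns about; it is an illustration, not the study's CNN:

```python
# Sketch of a cross-site generalization check like the Mount Sinai study's:
# train per site, then score on every site's held-out data. The toy model
# and data are placeholders, not the study's CNN or X-ray datasets.

class PrevalenceModel:
    """Toy classifier that predicts the majority label seen in training.
    It mimics the confounder in the study: its accuracy tracks the training
    site's pneumonia prevalence rather than anything in the images."""
    def fit(self, X, y):
        self.guess = int(sum(y) > len(y) / 2)
        return self
    def predict(self, X):
        return [self.guess] * len(X)

def accuracy(y_true, y_pred):
    return sum(int(a == b) for a, b in zip(y_true, y_pred)) / len(y_true)

def evaluate_generalization(model_factory, datasets, metric):
    """datasets maps site name -> ((X_train, y_train), (X_test, y_test)).
    Train once per site, then score on every site's test split."""
    results = {}
    for train_site, (train, _) in datasets.items():
        model = model_factory().fit(*train)
        for test_site, (_, (X_test, y_test)) in datasets.items():
            results[(train_site, test_site)] = metric(y_test, model.predict(X_test))
    return results
```

With a high-prevalence site A (80 percent positive) and a low-prevalence site B (20 percent), this toy model scores 0.8 on its own site but only 0.2 cross-site: the same pattern of internally overstated performance the researchers report.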

Researchers concluded that AI tools trained to detect pneumonia on chest X-rays suffered significant decreases in performance when tested on data from outside health systems. What’s more, researchers noted that the difficulty of using deep learning models in medicine is that they use a massive number of parameters, making it challenging to identify the specific variables driving predictions, such as the types of CT scanners used at a hospital and the resolution quality of imaging.

“The performance of CNNs in diagnosing diseases on X-rays may reflect not only their ability to identify disease-specific imaging findings on X-rays but also their ability to exploit confounding information,” the researchers wrote in the study. “Estimates of CNN performance based on test data from hospital systems used for model training may overstate their likely real-world performance.”

These findings suggest that artificial intelligence in the medical space must be carefully tested for performance across a wide range of populations; otherwise, the deep learning models may not perform as accurately as expected, the researchers stated.

“Our findings should give pause to those considering rapid deployment of artificial intelligence platforms without rigorously assessing their performance in real-world clinical settings reflective of where they are being deployed,” senior author Eric Oermann, M.D., instructor in Neurosurgery at the Icahn School of Medicine at Mount Sinai, said in a statement. “Deep learning models trained to perform medical diagnosis can generalize well, but this cannot be taken for granted since patient populations and imaging techniques differ significantly across institutions.”

First author John Zech, a medical student at the Icahn School of Medicine at Mount Sinai, said, “If CNN systems are to be used for medical diagnosis, they must be tailored to carefully consider clinical questions, tested for a variety of real-world scenarios, and carefully assessed to determine how they impact accurate diagnosis.”


