
In Miami, Plunging into the Unknown World of Predictive Analytics

June 1, 2016
by Rajiv Leventhal

In 2014, University of Miami (UM) Health System and Lockheed Martin, a Bethesda, Md.-based global security and aerospace company with involvement in healthcare analytics, announced a multi-disciplinary partnership with the end goal of accessing patient data faster to allow for more preventive healthcare. According to David Seo, M.D., UM Health System’s chief research information officer and chief medical informatics officer (CMIO) of the Miller School of Medicine at the time of the announcement, the plan was to help clinical leaders “come up with actionable data that is truly important for the patient the physician is taking care of.”

At the time, Seo additionally noted, “This information should not just be about the patient's past. We need a data environment that can do complex statistical analysis to help us move away from reactive medicine and toward proactive medicine, in which we get to patients before they get sick and prevent the disease from occurring." Indeed, prior to this announcement, in 2013, the Lockheed Martin/UM Health System team established a data environment, implemented big data analytics and predictive modeling tools, and started to stratify patient data and conduct risk assessments.

David Seo, M.D.

It was a few years ago when Seo, now associate vice president, information technology for clinical applications and still CMIO at UM Health System, said he and other health IT leaders at the organization began to realize the evolution of where healthcare was going. “Patient-centered medical homes and ACOs [accountable care organizations] were the trends under the main idea of managing risk,” Seo says in a more recent interview with Healthcare Informatics. “I was getting multiple calls and visits from vendors offering analytics solutions, one after the other, and what became clear was they were not offering a true full suite of what a health system needs to manage risk. Our own EHR [electronic health record] vendor talked to us, but even what they could provide was limited. We knew we were headed toward a clinically integrated network and other things of that nature. We needed a company that had a long track record of understanding data analytics and security,” Seo says, referencing the partnership with Lockheed Martin.


Building from the Ground Up

Seo readily acknowledges that predictive analytics in healthcare “is still very much in its infancy no matter who you talk to.” Indeed, aside from the basics such as readmissions, true predictive analytics has not come to fruition, he notes. To this end, University of Miami started out with a diabetes risk model, and clinician leaders have shown that the model can fit within providers’ workflows, Seo says. He adds that the risk model can be ordered through the organization’s order entry system, or, in test environments, a patient can ask to have the risk model run on themselves. “The risk model returns a score, so you understand your risk of developing diabetes over the next five years, for example. And now we are engaged with our clinical staff to [look at] things such as what is the threshold we would set to apply an intervention, for instance,” Seo says.
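The score-then-threshold pattern Seo describes can be sketched in a few lines. The coefficients, features, and 20-percent threshold below are invented for illustration only; they are not UM Health System’s actual model, which the article does not specify.

```python
import math

# Hypothetical coefficients, for illustration only. A diabetes risk model
# of this general style (e.g., a logistic regression from the literature)
# maps patient features to a five-year risk probability.
COEFFS = {"intercept": -6.0, "age": 0.05, "bmi": 0.09, "family_history": 0.8}

def five_year_diabetes_risk(age, bmi, family_history):
    """Return a 0-1 probability from a logistic risk model (illustrative)."""
    z = (COEFFS["intercept"]
         + COEFFS["age"] * age
         + COEFFS["bmi"] * bmi
         + COEFFS["family_history"] * family_history)
    return 1.0 / (1.0 + math.exp(-z))

def needs_intervention(risk, threshold=0.20):
    """Flag patients whose predicted risk crosses a clinician-set threshold."""
    return risk >= threshold

# A clinician ordering the model through order entry would get back a score:
risk = five_year_diabetes_risk(age=55, bmi=32, family_history=1)
```

The key design question Seo raises is not the arithmetic but the `threshold` argument: where clinical staff set it determines who receives an intervention.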

Seo further emphasizes the importance of the health system’s work around different validations, which he says is a necessity before a risk model of this scope goes into production. He explains two key areas around validations. First, a phenotype or a diagnosis derived from EHR data needs to be validated for the system. “If I am going to say you do or do not have diabetes for example, that needs to be valid, and you need to understand what the positive predictive value of that phenotype is,” he says.
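The positive predictive value Seo refers to reduces to a simple ratio once a chart review has been done. The counts below are made up for illustration:

```python
def positive_predictive_value(true_positives, false_positives):
    """PPV = TP / (TP + FP): of the patients the EHR phenotype flags as
    having the condition, the fraction confirmed on chart review."""
    flagged = true_positives + false_positives
    return true_positives / flagged if flagged else 0.0

# Illustrative numbers: of 200 patients flagged by an EHR-based diabetes
# phenotype, chart review confirms the diagnosis in 180.
ppv = positive_predictive_value(true_positives=180, false_positives=20)  # 0.9
```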

Second, he says, the diabetes prediction needs to be valid for a specific population. “I like to say that population health will be local, so the diabetes model that we pull from the literature has been validated in a highly specialized population that perhaps is of different racial or ethnic origins from our south Florida population. So what we’re doing is validating the phenotype in our population, and also understanding what the performance of that model is in our population. These are two important steps before going live with this prediction model,” Seo says.
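Checking “what the performance of that model is in our population” typically means re-measuring discrimination on local data, for example with ROC AUC. A minimal sketch, using made-up scores rather than any real patient data:

```python
def auc(scores_pos, scores_neg):
    """Probability that a randomly chosen case scores higher than a
    randomly chosen control (the Mann-Whitney form of ROC AUC);
    ties count as half a win."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Illustrative local check: scores the published model assigns to local
# patients who did (scores_pos) and did not (scores_neg) develop diabetes.
local_auc = auc(scores_pos=[0.8, 0.6, 0.7], scores_neg=[0.3, 0.65, 0.2])
```

If the AUC measured locally falls well below the value reported in the literature, the model needs recalibration before going live, which is the gap Seo’s two validation steps are designed to catch.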

Seo adds that even though health IT leaders may think everything should work exactly as intended, they still need to go back and look at a statistically significant number of patients, by literally going into their charts and confirming that the phenotype or diagnosis that they declared using EHR data is actually relevant in real life. “When you’re dealing with big data, data quality, and missing variables, these things all come into play, so you need to make sure your starting points and basic assumptions are correct. It’s a necessity for using EHR data for predictions,” he says.

When talking about predictive models and big data approaches, Seo feels that validation is the leap that is not really thought about or considered among clinical folks. “It’s what our organization has learned,” he says. “In theory, it’s excellent to say that I can predict this or that, when in practical reality, if you’re using this data to clinically treat patients, there needs to be granular additional steps of validating data, validating your phenotypes, and validating your predictions. There is a large amount of work that needs to go in to make sure that what you’re doing is good for patients,” he says.

Navigating in a New Era

Moving forward, Seo stresses that while vendors are now rolling out the tools to make disease management easier, health systems need to re-engineer their operations since it’s not just about looking at the doctor-patient relationship anymore, but rather healthcare leaders have to think about it now in terms of one-to-many simultaneous relationships. “Healthcare organizations have to readjust their care delivery patterns to fit this population health idea,” he says. “It can be hard, and it’s not the way we traditionally practice medicine. And also, some of your population will be managed this way while others won’t be, while finally keeping in mind that you have pressures of new payment models,” Seo says, speaking to all of the challenges health IT leaders now face.

As such, while the disease management foundation has begun to be laid out, the final step is getting to people before they make the transition from good health to poor health, Seo says. “To get there, you have to put in a lot of preparation, be detail-oriented, and organizations need to recognize they might not have the skillsets required in-house. We can handle that disease management part with some assistance, but for true predictive medicine and personalized medicine, the skillsets currently don’t exist in medical centers. In larger academic medical centers, maybe so, but most places don’t have data scientists who deal with big data,” he says.

Seo further notes that two core issues in this area of analytics and disease management are provider behavior and patient engagement. Regarding the former, simply turning on alerts in a system, or sending alerts at the point of care, will lead to failure if that’s the objective someone is looking for, he says. “We have engaged with subject matter experts who are M.D.s, as we have been developing our work around diabetes, and they have been involved in a number of our activities and interactions. They have been fully participatory, they have bought in and [been supportive], and if you don’t get that, you won’t get true change in physician behavior,” Seo says. “Doctors can be very good in finding a way around something they don’t agree with, so that’s what you’ll get. Or you will get compliance without commitment to the process. Provider behavior starts with engagement early in the process from all levels.”

Regarding patient engagement, Seo says that in newer accountable care models, there’s accountability for all parties involved—including patients and their families. “You want to give patients a method to interact with their own information. Our mindset has been to give them as much data as we can in a safe and appropriate manner so in their discussions with physicians, they understand what’s going on and can participate in their own healthcare.”

In sum, Seo emphasizes that the “new” healthcare is a journey for providers, payers, and patients, and as such, everyone needs to come along at a pace that works for them. “One thing doesn’t need to change; this is about change on multiple fronts,” he says. “It’s really such an intriguing time in medicine.”





AMIA Charts Course to Learning Health System

Initiative seeks to create virtuous cycle where clinical practice is not distinct from research

In September 2015, at AcademyHealth’s Concordium 2015 meeting in Washington, D.C., I saw a great presentation by Peter Embi, M.D., who was then an associate professor and vice chair of biomedical informatics as well as associate dean for research informatics and the chief research information officer at the Wexner Medical Center at Ohio State University. 

That day Dr. Embi outlined some of the limitations of the traditional approach to evidence-based medicine —  that it is a research/practice paradigm where the information flow is unidirectional, and clinical practice and research are distinct activities, with the research design as an afterthought. “We want to leverage information at the point of care and in engagements with patients so we can systematically learn. That is what the learning health system is all about,” Embi said.

But in the current model, he noted, there is little consideration of research during planning of health systems. That limits the ability to invest in and leverage clinical resources to advance research. Also, there are no financial incentives for non-researchers to engage in research. Research as an afterthought also leads to regulatory problems and wasted investments.

Embi argued for moving from “evidence-based medicine” to an “evidence-generating medicine” approach, which he defined as the systematic incorporation of research and quality improvement into the organization. Rather than findings flowing only from research done looking back at historical data, this approach creates a virtuous cycle where clinical practice is not distinct from research.

Flash forward to 2019 and Dr. Embi is now president & CEO of Regenstrief Institute Inc., vice president for learning health systems at IU Health, and chairman of the Board of Directors of the American Medical Informatics Association (AMIA). And he is still advocating for a shift to evidence-generating medicine. He and AMIA colleagues recently published a paper in JAMIA offering more than a dozen recommendations for public policy to facilitate the generation of evidence across physician offices and hospitals now that the adoption of EHRs is widespread.

The paper cites several examples of current high-visibility research initiatives that depend on the EGM approach: the All of Us Research Program and Cancer Moonshot initiative, the Health Care Systems Research Collaboratory, and the development of a national system of real-world evidence generation system as pursued by such groups as the US Food & Drug Administration (FDA), Patient-Centered Outcomes Research Institute (PCORI), National Institutes of Health (NIH), and other federal agencies.

The paper makes several recommendations for policy changes, including that the Trump administration should faithfully implement 2018 Revisions to the Common Rule as well as establish the 21st Century Cures-mandated Research Policy Board. The administration must implement this provision to better calibrate and harmonize our sprawling and incoherent federal research regulations.

Another recommendation is that the HHS Office of Civil Rights (OCR) should refine the definition of a HIPAA Designated Record Set (DRS) and ONC should explore ways to allow patients to have a full digital export of their structured and unstructured data within a Covered Entity’s DRS in order to share their data for research. In addition, regulators should work with stakeholders to develop granular data specifications, including metadata, and standards to support research for use in the federal health IT certification program.

The AMIA authors also suggest that CMS leverage its Quality Payment Program to reward clinical practice Improvement Activities that involve research components. This would encourage office-based physicians to invest time and resources needed to realize EGM, they say.

Based on the paper’s findings, AMIA is launching a new initiative focused on advancing informatics-enabled improvements for the U.S. healthcare system. The organization says that a multidisciplinary group of AMIA members will develop a national informatics strategy, policy recommendations, and research agenda to improve:

• how evidence is generated through clinical practice;

• how that evidence is delivered back into the care continuum; and

• how our national workforce and organizational structures are best positioned to facilitate informatics-driven transformation in care delivery, clinical research, and population health.

A report detailing this strategy will be unveiled at a December 2019 conference in Washington, D.C.


Definitive Healthcare Acquires HIMSS Analytics’ Data Services

January 16, 2019
by Rajiv Leventhal, Managing Editor

Definitive Healthcare, a data analytics and business intelligence company, has acquired the data services business and assets of HIMSS Analytics, the organizations announced today.

The purchase includes the Logic, Predict, Analyze and custom research products from HIMSS Analytics, which is commonly known as the data and research arm of the Healthcare Information and Management Systems Society.

According to Definitive officials, the acquisition builds on the company’s “articulated growth strategy to deliver the most reliable and consistent view of healthcare data and analytics available in the market.”

Definitive Healthcare will immediately begin integrating the datasets and platform functionality into a single source of truth, their executives attest. The new offering will aim to include improved coverage of IT purchasing intelligence with access to years of proposals and executed contracts, enabling transparency and efficiency in the development of commercial strategies.

Broadly, Definitive Healthcare is a provider of data and intelligence on hospitals, physicians, and other healthcare providers. Its product suite provides comprehensive data on 8,800 hospitals, 150,000 physician groups, 1 million physicians, 10,000 ambulatory surgery centers, 14,000 imaging centers, 86,000 long-term care facilities, and 1,400 ACOs and HIEs, according to officials.

Together, Definitive Healthcare and HIMSS Analytics have more than 20 years of experience in data collection through exclusive methodologies.

“HIMSS Analytics has developed an extraordinarily powerful dataset including technology install data and purchasing contracts among other leading intelligence that, when combined with Definitive Healthcare’s proprietary healthcare provider data, will create a truly best-in-class solution for our client base,” Jason Krantz, founder and CEO of Definitive Healthcare, said in a statement.



Machine Learning Survey: Many Organizations Several Years Away from Adoption, Citing Cost

January 10, 2019
by Heather Landi, Associate Editor

Radiologists and imaging leaders see an important role for machine learning in radiology going forward; however, most organizations are still two to three years away from adopting the technology, and a sizeable minority have no plans to adopt machine learning, according to a recent survey.

A recent study* by Reaction Data sought to examine the hype around artificial intelligence and machine learning, specifically in the area of radiology and imaging, to uncover where AI might be more useful and applicable and in what areas medical imaging professionals are looking to utilize machine learning.

Reaction Data, a market research firm, got feedback from imaging professionals, including directors of radiology, radiologists, chiefs of radiology, imaging techs, PACS administrators and managers of radiology, from 152 healthcare organizations to gauge the industry on machine learning. About 60 percent of respondents were from academic medical centers or community hospitals, while 15 percent were from integrated delivery networks and 12 percent were from imaging centers. The remaining respondents worked at critical access hospitals, specialty clinics, cancer hospitals or children’s hospitals.

Among the survey respondents, there was significant variation in the number of annual radiology studies performed: 17 percent performed 100,000 to 250,000 studies each year; 16 percent performed 1 million to 2 million; 15 percent performed 5,000 to 25,000; 13 percent performed 250,000 to 500,000; and 10 percent performed more than 2 million studies a year.

More than three quarters of imaging and radiology leaders (77 percent) view machine learning as being important in medical imaging, up from 65 percent in a 2017 survey. Only 11 percent view the technology as not important. However, only 59 percent say they understand machine learning, although that percentage is up from 52 percent in 2017. Twenty percent say they don’t understand the technology, and 20 percent have a partial understanding.

Looking at adoption, only 22 percent of respondents say they are currently using machine learning—either just adopted it or have been using it for some time. Eleven percent say they plan to adopt the technology in the next year.

Half of respondents (51 percent) say their organizations are one to two years away (28 percent) or even more than three years away (23 percent) from adoption. Sixteen percent say their organizations will most likely never utilize machine learning.

Reaction Data collected commentary from survey respondents as part of the survey and some respondents indicated that funding was an issue with regard to the lack of plans to adopt the technology. When asked why they don’t ever plan to utilize machine learning, one respondent, a chief of cardiology, said, “Our institution is a late adopter.” Another respondent, an imaging tech, responded: “No talk of machine learning in my facility. To be honest, I had to Google the definition a moment ago.”

Survey responses also indicated that imaging leaders want machine learning tools to be integrated into PACS (picture archiving and communication systems) software, and that cost is an issue.

“We'd like it to be integrated into PACS software so it's free, but we understand there is a cost for everything. We wouldn't want to pay more than $1 per study,” one PACS Administrator responded, according to the survey.

A radiologist who responded to the survey said, “The market has not matured yet since we are in the research phase of development and cost is unknown. I expect the initial cost to be on the high side.”

According to the survey, when asked how much they would be willing to pay for machine learning, one imaging director responded: “As little as possible...but I'm on the hospital administration side. Most radiologists are contracted and want us to buy all the toys. They take about 60 percent of the patient revenue and invest nothing into the hospital/ambulatory systems side.”

And, one director of radiology responded: “Included in PACS contract would be best... very hard to get money for this.”

The survey also indicates that, among organizations that are using machine learning in imaging, there is a shift in how organizations are applying machine learning in imaging. In the 2017 survey, the most common application for machine learning was breast imaging, cited by 36 percent of respondents, and only 12 percent cited lung imaging.

In the 2018 survey, only 22 percent of respondents said they were using machine learning for breast imaging, while there was an increase in other applications. The next most-used application cited by respondents who have adopted and use machine learning was lung imaging (22 percent), cardiovascular imaging (13 percent), chest X-rays (11 percent), bone imaging (7 percent), liver imaging (7 percent), neural imaging (5 percent) and pulmonary imaging (4 percent).

When asked what kind of scans they plan to apply machine learning to once the technology is adopted, one radiologist cited quality control for radiography, CT (computed tomography) and MR (magnetic resonance) imaging.

The survey also examines the vendors being used, among respondents who have adopted machine learning, and the survey findings indicate some differences compared to the 2017 survey results. No one vendor dominates this space, as 19 percent use GE Healthcare and about 16 percent use Hologic, which is down compared to 25 percent of respondents who cited Hologic as their vendor in last year’s survey.

Looking at other vendors being used, 14 percent use Philips, 7 percent use Arterys, 3 percent use Nvidia, and Zebra Medical Vision and iCAD were each cited by 5 percent of medical imaging professionals. The percentage of imaging leaders citing Google as their machine learning vendor dropped from 13 percent in 2017 to 3 percent in this latest survey. Interestingly, the number of respondents reporting the use of homegrown machine learning solutions increased to 14 percent from 9 percent in 2017.

 

*Findings were compiled from Reaction Data’s Research Cloud. For additional information, please contact Erik Westerlind at ewesterlind@reactiondata.com.