
How a Data-Driven Approach Can Bolster the Fight Against Opioid Abuse

October 12, 2018
by Steve Bennett, Ph.D., Industry Voice

I want to tell you about Andy. Andy’s mom, Pam, is a colleague of mine. Growing up an only child, Andy was a happy kid. He was a straight-A student, loved to play the violin, and spent a year as an exchange student in Europe. Andy had two loving parents. But Andy suffered an injury in college and needed minor surgery to repair his sinuses. Following that surgery, his doctor prescribed opioid pain medication, to which he became addicted. Despite several years of effort, Andy was unable to shake the addiction, and he tragically lost his life to a heroin overdose two years after his surgery. This was a normal kid with a normal family, like mine and like yours.

Andy’s story is an important one. The opioid epidemic has led to the deadliest drug overdose crisis in the history of the United States, killing more than 64,000 people in 2016 alone, the most recent year for which numbers were available. This is a true national epidemic, and one that continues to get worse. For the first time in nearly 60 years, life expectancy for Americans has dropped for two years in a row due to the opioid epidemic.

The opioid crisis has been so difficult to curtail, in part, because of the inability to integrate data from various stakeholders and systems. With so many players and data sources, today’s information is partial, fragmented, and often not actionable.

While this disconnect applies directly to the opioid epidemic, it is a systemic problem that affects the healthcare community at large. Better data and analytics can help develop better treatment protocols for a wide array of medical and public health challenges. For opioids, that could mean better pain management programs, or more targeted remediation and rehabilitation for those who become dependent on drugs.

A Data-Driven Healthcare Approach: Making Information Real


Ample data has been collected on the opioid epidemic, but disparate sources are not communicating with one another. Addressing this disconnect and lack of communication is something that can provide researchers, lawmakers and the public with improved insights.

Data-driven healthcare can help provide this guidance by using available data and analytics to help create programs that can make a tangible difference on population areas that need the most help. By looking at the data, lawmakers, hospital administrators and doctors can begin to make impactful changes throughout the system.

While much can be learned from this data, most of it is not being analyzed in a way that brings true benefits. It sits in silos, or it is not organized in a way that is interoperable with other data systems.

The 21st Century Cures Act, which established the Health Information Technology Advisory Committee, shows the commitment of national leaders to improving healthcare information sharing. Analytics can take this data and turn it into something real. Subsequent visualization of this analyzed data presents the information in a way that can truly tell a story, making sense of data that analysts sometimes miss. Analytics can arrange and organize data in different ways and pick up previously undetected trends or anomalies. This information can be turned into real programs that produce real outcomes for those affected.
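To make that concrete, here is a minimal sketch, mine rather than the author’s, of how an analyst might flag unusual spikes in a monthly opioid-related time series against a rolling baseline; the file name and column names are hypothetical.

    # Minimal sketch: flag unusual monthly spikes in opioid-related counts.
    # The CSV path and the "month"/"overdose_count" columns are hypothetical.
    import pandas as pd

    df = pd.read_csv("monthly_overdose_counts.csv", parse_dates=["month"])
    df = df.sort_values("month").set_index("month")

    # Rolling 12-month baseline, shifted so the current month is not part of
    # its own baseline.
    baseline_mean = df["overdose_count"].rolling(12).mean().shift(1)
    baseline_std = df["overdose_count"].rolling(12).std().shift(1)

    # Flag months sitting more than two standard deviations above the baseline.
    df["z_score"] = (df["overdose_count"] - baseline_mean) / baseline_std
    anomalies = df[df["z_score"] > 2]
    print(anomalies[["overdose_count", "z_score"]])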

The data management and integration process can also help us understand where our knowledge gaps are, revealing flaws in data quality and availability. Organizations may learn that they lack sufficient data in a certain area where they want to learn more, but are currently limited. They can then make changes to data collection efforts or seek out different sources to fill these larger gaps. They can resolve data quality issues across systems and arrive at a consistent, reliable version of the truth.
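As one illustration of what resolving quality issues across systems can look like in practice, the sketch below, which is not drawn from the article and uses hypothetical extract and field names, normalizes a few fields from two source systems and drops duplicate records before analysis.

    # Minimal sketch: reconcile two hypothetical source extracts into one
    # de-duplicated table. File and column names are illustrative only.
    import pandas as pd

    def normalize(df: pd.DataFrame) -> pd.DataFrame:
        out = df.copy()
        out["patient_id"] = out["patient_id"].astype(str).str.strip().str.upper()
        out["dob"] = pd.to_datetime(out["dob"], errors="coerce")
        out["zip"] = out["zip"].astype(str).str.zfill(5)
        return out

    system_a = normalize(pd.read_csv("pdmp_extract.csv"))
    system_b = normalize(pd.read_csv("claims_extract.csv"))
    combined = pd.concat([system_a, system_b], ignore_index=True)

    # Keep one row per (patient_id, dob) and report how many duplicates were dropped.
    deduped = combined.drop_duplicates(subset=["patient_id", "dob"])
    print(f"dropped {len(combined) - len(deduped)} duplicate records")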

As organizations get better at assembling and managing the data, automating processes to generate standard reports and file exchanges can ease the burden on analysts. Streamlining the user interfaces for prescription drug monitoring programs and other systems allows analysts and medical informatics staff to spend less time working on the data itself and more time enabling and encouraging the use of predictive modeling and “what-if” scenario capabilities.
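As a hedged example of the kind of routine automation described above, a standard weekly roll-up could be generated from a prescription extract along these lines; the file, columns, and metrics are assumptions, not details from the article.

    # Minimal sketch: roll a hypothetical prescription extract up into a
    # standard weekly summary that could be scheduled instead of built by hand.
    import pandas as pd

    rx = pd.read_csv("prescriptions.csv", parse_dates=["fill_date"])
    weekly = (
        rx.groupby([pd.Grouper(key="fill_date", freq="W"), "county"])
          .agg(total_fills=("rx_id", "count"),
               total_mme=("morphine_mg_equivalent", "sum"))
          .reset_index()
    )
    weekly.to_csv("weekly_opioid_report.csv", index=False)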

Helping to Solve a Problem

The national opioid epidemic is a terrible and complex issue. It is not something that can be solved with just one action, approach or program. It is a layered issue that will require systematic changes to how patients are treated and how the healthcare system operates. Some of the nation’s best continue to work on providing operational solutions to these problems, but as the statistics show, they need more help.

A data-driven approach can be that help. Using data analytics to find better and deeper insights into the root problems of this epidemic can help decision-makers make real change. While opioids are the focus now, there will come a day when a new problem emerges. Having data and analytic solutions in place can prepare these organizations to tackle these future challenges as well.

64,000 people died in 2016 as a result of opioid abuse. But 64,000 is more than a large number – it’s also Andy and his family. With analytics and a data-driven approach, government and healthcare leaders can make better decisions that can help people in need.

Steve Bennett, Ph.D., is the director of SAS' global government practice. He is the former director of the National Biosurveillance Integration Center within the Department of Homeland Security.



Definitive Healthcare Acquires HIMSS Analytics’ Data Services

January 16, 2019
by Rajiv Leventhal, Managing Editor

Definitive Healthcare, a data analytics and business intelligence company, has acquired the data services business and assets of HIMSS Analytics, the organizations announced today.

The purchase includes the Logic, Predict, Analyze and custom research products from HIMSS Analytics, which is commonly known as the data and research arm of the Healthcare Information and Management Systems Society.

According to Definitive officials, the acquisition builds on the company’s “articulated growth strategy to deliver the most reliable and consistent view of healthcare data and analytics available in the market.”

Definitive Healthcare will immediately begin integrating the datasets and platform functionality into a single source of truth, their executives attest. The new offering will aim to include improved coverage of IT purchasing intelligence with access to years of proposals and executed contracts, enabling transparency and efficiency in the development of commercial strategies.

Broadly, Definitive Healthcare is a provider of data and intelligence on hospitals, physicians, and other healthcare providers. Its product suite provides comprehensive data on 8,800 hospitals, 150,000 physician groups, 1 million physicians, 10,000 ambulatory surgery centers, 14,000 imaging centers, 86,000 long-term care facilities, and 1,400 ACOs and HIEs, according to officials.

Together, Definitive Healthcare and HIMSS Analytics have more than 20 years of experience in data collection through exclusive methodologies.

“HIMSS Analytics has developed an extraordinarily powerful dataset including technology install data and purchasing contracts among other leading intelligence that, when combined with Definitive Healthcare’s proprietary healthcare provider data, will create a truly best-in-class solution for our client base,” Jason Krantz, founder and CEO of Definitive Healthcare, said in a statement.


Machine Learning Survey: Many Organizations Several Years Away from Adoption, Citing Cost

January 10, 2019
by Heather Landi, Associate Editor

Radiologists and imaging leaders see an important role for machine learning in radiology going forward; however, most organizations are still two to three years away from adopting the technology, and a sizeable minority have no plans to adopt machine learning, according to a recent survey.

A recent study* by Reaction Data sought to examine the hype around artificial intelligence and machine learning, specifically in the area of radiology and imaging, to uncover where AI might be more useful and applicable and in what areas medical imaging professionals are looking to utilize machine learning.

Reaction Data, a market research firm, got feedback from imaging professionals, including directors of radiology, radiologists, chiefs of radiology, imaging techs, PACS administrators and managers of radiology, from 152 healthcare organizations to gauge the industry on machine learning. About 60 percent of respondents were from academic medical centers or community hospitals, while 15 percent were from integrated delivery networks and 12 percent were from imaging centers. The remaining respondents worked at critical access hospitals, specialty clinics, cancer hospitals or children’s hospitals.

Among the survey respondents, there was significant variation in the number of annual radiology studies performed: 17 percent performed 100,000 to 250,000 studies each year; 16 percent performed 1 million to 2 million; 15 percent performed 5,000 to 25,000; 13 percent performed 250,000 to 500,000; and 10 percent performed more than 2 million studies a year.

More than three quarters of imaging and radiology leaders (77 percent) view machine learning as being important in medical imaging, up from 65 percent in a 2017 survey. Only 11 percent view the technology as not important. However, only 59 percent say they understand machine learning, although that percentage is up from 52 percent in 2017. Twenty percent say they don’t understand the technology, and 20 percent have a partial understanding.

Looking at adoption, only 22 percent of respondents say they are currently using machine learning—either just adopted it or have been using it for some time. Eleven percent say they plan to adopt the technology in the next year.

Half of respondents (51 percent) say their organizations are one to two years away (28 percent) or even more than three years away (23 percent) from adoption. Sixteen percent say their organizations will most likely never utilize machine learning.

Reaction Data collected commentary from survey respondents as part of the survey and some respondents indicated that funding was an issue with regard to the lack of plans to adopt the technology. When asked why they don’t ever plan to utilize machine learning, one respondent, a chief of cardiology, said, “Our institution is a late adopter.” Another respondent, an imaging tech, responded: “No talk of machine learning in my facility. To be honest, I had to Google the definition a moment ago.”

Survey responses also indicated that imaging leaders want machine learning tools to be integrated into PACS (picture archiving and communication systems) software, and that cost is an issue.

“We'd like it to be integrated into PACS software so it's free, but we understand there is a cost for everything. We wouldn't want to pay more than $1 per study,” one PACS Administrator responded, according to the survey.

A radiologist who responded to the survey said, “The market has not matured yet since we are in the research phase of development and cost is unknown. I expect the initial cost to be on the high side.”

According to the survey, when asked how much they would be willing to pay for machine learning, one imaging director responded: “As little as possible...but I'm on the hospital administration side. Most radiologists are contracted and want us to buy all the toys. They take about 60 percent of the patient revenue and invest nothing into the hospital/ambulatory systems side.”

And, one director of radiology responded: “Included in PACS contract would be best... very hard to get money for this.”

The survey also indicates that, among organizations that are using machine learning in imaging, there is a shift in how organizations are applying machine learning in imaging. In the 2017 survey, the most common application for machine learning was breast imaging, cited by 36 percent of respondents, and only 12 percent cited lung imaging.

In the 2018 survey, only 22 percent of respondents said they were using machine learning for breast imaging, while other applications saw increases. Among respondents who have adopted and use machine learning, the next most-used application was lung imaging (22 percent), followed by cardiovascular imaging (13 percent), chest X-rays (11 percent), bone imaging (7 percent), liver imaging (7 percent), neural imaging (5 percent) and pulmonary imaging (4 percent).

When asked what kind of scans they plan to apply machine learning to once the technology is adopted, one radiologist cited quality control for radiography, CT (computed tomography) and MR (magnetic resonance) imaging.

The survey also examines the vendors being used among respondents who have adopted machine learning, and the findings indicate some differences compared to the 2017 results. No one vendor dominates this space: 19 percent use GE Healthcare and about 16 percent use Hologic, down from the 25 percent of respondents who cited Hologic as their vendor in last year’s survey.

Looking at other vendors being used, 14 percent use Philips, 7 percent use Arterys, and 3 percent use Nvidia, while Zebra Medical Vision and iCAD were each cited by 5 percent of medical imaging professionals. The percentage of imaging leaders citing Google as their machine learning vendor dropped from 13 percent in 2017 to 3 percent in this latest survey. Interestingly, the number of respondents reporting the use of homegrown machine learning solutions increased to 14 percent, from 9 percent in 2017.

 

*Findings were compiled from Reaction Data’s Research Cloud. For additional information, please contact Erik Westerlind at ewesterlind@reactiondata.com.

 


Drexel University Moves Forward on Leveraging NLP to Improve Clinical and Research Processes

January 8, 2019
by Mark Hagland, Editor-in-Chief
At Drexel University, Walter Niemczura is helping to lead an ongoing initiative to improve research processes and clinical outcomes by leveraging NLP technology.

Increasingly, the leaders of patient care organizations are using natural language processing (NLP) technologies to leverage unstructured data, in order to improve patient outcomes and reduce costs. Healthcare IT and clinician leaders are still relatively early in the long journey towards full and robust success in this area; but they are moving forward in healthcare organizations nationwide.

One area in which learnings are accelerating is medical research—both basic and applied. Numerous medical colleges are moving forward in this area, with strong results. Drexel University in Philadelphia is among that group. There, Walter Niemczura, director of application development, has been helping to lead an initiative that supports research and patient care efforts at the Drexel University College of Medicine, one of the nation’s oldest medical colleges (founded in 1848), and across the university. Niemczura and his colleagues have been partnering with the Cambridge, England-based Linguamatics to engage in text mining that can support improved research and patient care delivery.

Recently, Niemczura spoke with Healthcare Informatics Editor-in-Chief Mark Hagland, regarding his team’s current efforts and activities in that area. Below are excerpts from that interview.

Is your initiative moving forward primarily on the clinical side or the research side, at your organization?

We’re making advances that are being utilized across the organization. The College of Medicine used to be a wholly owned subsidiary of Drexel University. About four years ago, we merged with the university, and two years ago we lost our CIO to the College of Medicine. And now the IT group reports to the CIO of the whole university. I had started here 12 years ago, in the College of Medicine.


And some of the applications of this technology are clinical and some are non-clinical, correct?

Yes, that’s correct. Our data repository is used for clinical and non-clinical research. Clinical: the College of Medicine, the College of Nursing, and the School of Public Health. And we’re working with the School of Biomedical Engineering, and the College of Arts and Sciences, mostly with the Psychology Department. But we’re using Linguamatics only on the clinical side, with our ambulatory care practices.

Overall, what are you doing?

If you look at our EHR [electronic health record], there are discrete fields that might have diagnosis codes, procedure codes and the like. Let’s break apart some of that. Take our HIV clinic—they might put down HIV as a diagnosis, but in the notes they might mention hepatitis B without putting it down as a co-diagnosis; it’s up to the provider how they document. So here’s a good example: HIV and hepatitis C have frequent comorbidity. Our organization asked a group of residents to go in and look at 5,700 patient charts of patients with HIV and hepatitis C. Anybody in IT could say, we have 677 patients with both codes. But doctors know there’s more to the story. It turns out another 443 had HIV in the code and hep C mentioned in the notes. Another 14 had hep C in the code and HIV in the notes.

So using Linguamatics, it’s not 5,700 charts that you need to look at, but roughly 1,150: the patients who had both codes, plus roughly 460 who had the comorbidity mentioned only in the notes. Before Linguamatics, residents had to look at all 5,700 charts in cases like this one.
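Drexel does this with Linguamatics; purely to illustrate the pattern Niemczura describes, and not the Linguamatics product itself, a crude version of “coded diagnosis plus a comorbidity mentioned only in the notes” might look like the following, with hypothetical table and column names and a naive regex standing in for real text mining.

    # Crude sketch of the pattern described above: patients with an HIV
    # diagnosis code whose hepatitis C appears only in free-text notes.
    # Tables, columns, and code prefixes are illustrative; a real NLP tool
    # handles negation, abbreviations, and context far better than a regex.
    import re
    import pandas as pd

    diagnoses = pd.read_csv("diagnoses.csv")  # columns: patient_id, icd10_code
    notes = pd.read_csv("notes.csv")          # columns: patient_id, note_text

    hiv_coded = set(diagnoses.loc[diagnoses["icd10_code"].str.startswith("B20", na=False), "patient_id"])
    hcv_coded = set(diagnoses.loc[diagnoses["icd10_code"].str.startswith("B18.2", na=False), "patient_id"])

    hcv_pattern = re.compile(r"\b(hep(atitis)?\s*C|HCV)\b", re.IGNORECASE)
    hcv_in_notes = set(notes.loc[notes["note_text"].str.contains(hcv_pattern, na=False), "patient_id"])

    # HIV coded, hep C mentioned in notes but never coded: charts worth reviewing.
    review_list = hiv_coded & (hcv_in_notes - hcv_coded)
    print(f"{len(review_list)} charts to review")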

So this was a huge time-saver?

Yes, it absolutely was a huge time-saver. When you’re looking at hundreds of thousands or millions of patient records, the value might be not the ones you have to look at, but the ones you don’t have to look at. And we’re looking at operationalizing this into day-to-day operations. While we’re billing, we can pull files from that day and say, here’s a common co-morbidity—HIV and hep C, with hep C mentioned in those notes—and is there a missed opportunity to get the discrete fields correct?

Essentially, then, you’re making things far more accurate in a far more efficient way?

Yes, this involves looking at patient trials on the research side, while on the clinical side, we can have better quality of care, and more updated billing, based on more accurate data management.

When did this initiative begin?

Well, we’ve been working with Linguamatics for six or seven years. Initially, our work was around discrete fields. The other type of data we look at is free text. Our rheumatology department wanted to find out which patients had had particular tests done—they’re looking for terms in notes… When a radiologist does a report on your x-ray, it’s not like a test for diabetes, where a blood sugar number comes out; x-rays are read and interpreted. The radiologists gave us key words to search for: sclerosis, erosions, bone edema—about 30 words in all. They’re looking for patients who had particular x-rays or MRIs done, and instead of reviewing everyone who had those studies, roughly 400 patients had these terms in their reports. That reduced the number of charts to review. The rheumatology department was looking to recruit patients who had had these x-rays done and had these kinds of findings.
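As a rough illustration of the keyword pass described here, and not the actual Linguamatics query, a simple term search over report text might look like this; the report table and column names are assumptions.

    # Rough sketch: flag radiology reports containing any of the rheumatology
    # terms of interest. This regex version ignores context and negation,
    # which is exactly where a tool like Linguamatics adds value.
    import re
    import pandas as pd

    TERMS = ["sclerosis", "erosion", "bone edema"]  # a few of the ~30 terms
    pattern = re.compile("|".join(re.escape(t) for t in TERMS), re.IGNORECASE)

    reports = pd.read_csv("radiology_reports.csv")  # columns: patient_id, report_text
    hits = reports[reports["report_text"].str.contains(pattern, na=False)]
    print(hits["patient_id"].nunique(), "patients with matching imaging reports")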

So the rheumatology people needed to identify certain types of patients, and you needed to help them do that?

Yes, that’s correct. Now, you might say we could do a word search in Microsoft Word; but the word “erosion” by itself might not help. You have to structure your query to be more accurate, and exclude certain appearances of words. And Linguamatics is very good at that. I use their ontology, and it helps us understand the appearance of words within structure. I used to be in telecommunications. When voice-over-IP came along, there was confusion: you’d hear “buy this stock” when the message was “don’t buy this stock.”
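To show the negation point in the simplest possible terms, far short of what an ontology-driven query does, a search can refuse to count a term when a negation cue appears just before it; the cue list and window size below are arbitrary choices for the sketch.

    # Toy illustration of negation handling: count a term only when no negation
    # cue appears within a few words before it. Real NLP tools do this far more
    # robustly, using sentence structure and ontologies rather than a word list.
    import re

    NEGATION_CUES = {"no", "not", "don't", "without", "denies", "negative"}

    def positive_mention(text: str, term: str, window: int = 3) -> bool:
        tokens = re.findall(r"[\w']+", text.lower())
        for i, tok in enumerate(tokens):
            if tok == term.lower():
                preceding = tokens[max(0, i - window):i]
                if not NEGATION_CUES & set(preceding):
                    return True
        return False

    print(positive_mention("Patient denies erosion on imaging", "erosion"))   # False
    print(positive_mention("Marginal erosion noted at MCP joint", "erosion")) # True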

So this makes identifying certain elements in text far more efficient, then, correct?

Yes—the big buzzword is unstructured data.

Have there been any particular challenges in doing this work?

One is that this involves an iterative process. For someone in IT, we’re used to writing queries and getting them right the first time. This is a different mindset. You start out with one query and want to get results back. You find ways to mature your query; at each pass, you get better and better at it; it’s an iterative process.

What have your biggest learnings been in all this, so far?

There’s so much promise—there’s a lot of data in the notes. And I use it now for all my preparatory research. And Drexel is part of a consortium here called Partnership In Educational Research—PIER.

What would you say to CIOs, CMIOs, CTOs, and other healthcare IT leaders, about this work?

My recommendation would be to dedicate resources to this effort. We use this not only for queries, but to interface with other systems. And we’re writing applications around this. You can get a data set out and start putting it into your work process. It shouldn’t be considered an ad hoc effort by some of your current people.

 

 

