
AI and Healthcare: Cure-All, Poison Pill, or Simply Smarter Medicine?

January 7, 2019
by Josh Gluck, adjunct professor, NYU’s Wagner School of Public Service, Industry Voice

Artificial intelligence (AI) is moving from hypothetical to business critical in healthcare. The global healthcare AI market is expected to reach $6.16 billion by 2022. Experts have estimated that AI applications could create $150 billion in annual savings for the U.S. healthcare economy by 2026, and that AI can address an estimated 20 percent of unmet clinical demand. As the statistics pile up, implementing AI may seem like a cure-all for every hospital woe, from data entry to population health to imaging, and it would be easy for health systems to feel overwhelmed.

Keeping all of this in mind, let’s take a step back and examine AI for what it is: a powerful technology that can play a role in improving individual and population health when implemented judiciously. In the wrong hands, AI tools can clearly be misused; but with the right strategies, and with use of AI aligned to an organization’s goals, the technology can surface insights from data and analytics that might otherwise be missed. AI has the potential to improve the quality of care and reduce cost by preventing unnecessary tests and procedures, while accelerating diagnoses and improving access through better use of resources. In the current healthcare climate, adding value while improving patient outcomes and access is not only a stated goal but an imperative for survival in the emerging value-based, integrated care environment.

Data-Centric Architecture Makes Real World AI Possible

So how do health systems get started with AI? Many start with small projects, using infrastructure that is on hand, but quickly identify limitations and outgrow this approach.

We’ve all heard that embracing data-centric architecture will help providers create a platform on which AI will thrive. But what does this mean? Building AI models requires data, a lot of data, on a scale most health systems have not previously explored in analytics environments. In many cases, the data exists in the health system or the community, but needs to be aggregated, cleansed and organized to support AI projects.
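As a rough illustration of what that aggregation and cleansing can look like in practice, the sketch below pulls a few hypothetical source extracts into one staged table; the file names, column names, and tooling are assumptions for the example, not a prescription.

```python
# Minimal sketch of an aggregation/cleansing pass, assuming hypothetical
# CSV extracts from an EHR, a lab system, and a claims feed.
import pandas as pd

SOURCES = ["ehr_encounters.csv", "lab_results.csv", "claims.csv"]  # assumed extracts

def load_and_clean(path: str) -> pd.DataFrame:
    df = pd.read_csv(path, dtype={"patient_id": str})
    df.columns = [c.strip().lower() for c in df.columns]  # normalize headers
    df = df.dropna(subset=["patient_id"])                 # drop unlinkable rows
    return df.drop_duplicates()                           # remove exact duplicates

# Aggregate everything on a shared patient identifier so downstream AI work
# starts from one organized table rather than scattered departmental files.
frames = [load_and_clean(path) for path in SOURCES]
staged = frames[0]
for frame in frames[1:]:
    staged = staged.merge(frame, on="patient_id", how="outer")

staged.to_parquet("staged_for_ai.parquet")  # hand off to the data hub
```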


For timely results, health systems may need to invest in a data hub that can be used to stage data for AI models, as well as a GPU-based compute environment that allows researchers to train and optimize AI systems. AI requires a departure from traditional architectures because of its scale and computational intensity, and it also demands agility and scalability as programs and usage grow. Forward-looking health systems that recognize the potential of AI will invest in agile, high-performing, and cost-effective AI platforms that allow researchers to thrive. By upgrading an organization’s physical architecture and infrastructure to support AI, teams will be better able to leverage AI and accelerate the pace of innovation. But technology isn’t the only variable needed to succeed with AI.
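To make the GPU point concrete, here is a minimal sketch of a training step that runs on an accelerator when one is available and falls back to CPU otherwise; the tiny model and random data are placeholders rather than a clinical workload, and PyTorch is assumed purely for illustration.

```python
# Sketch of a GPU-aware training loop, assuming PyTorch; the model and
# data are stand-ins, not a real clinical model.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

features = torch.randn(256, 64, device=device)       # stand-in feature batch
labels = torch.randint(0, 2, (256,), device=device)  # stand-in labels

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```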

Optimize Your Implementation: AI as a Culture Change

IT leaders know that technology cannot change in isolation; people and organizational processes also need to be brought along to support the change. Putting the appropriate AI infrastructure in place for researchers is a first step to getting started, but clinical staff and teams also need to be trained, so that they understand the models in use and are comfortable with them.

Most clinicians will, at least initially, express distrust of the “black box” aspects of an AI implementation. Training should cover how to use the new technology, how it works, when it does and does not replace current processes and procedures, how to discuss AI with patients, and when AI should be trusted and when it should be questioned. In addition, adding AI to treatment decision making makes IT a partner in delivering care, and clinicians will need to work more closely with IT as a result. IT should be prepared for this change in the working relationship.

Data is also an issue; the sources of the data used in AI, as well as how that data is handled, need to be transparent so that the clinical community is comfortable trusting the findings AI produces. Beyond the technical aspects, healthcare organizations should develop their processes to support the use of AI in a tangible way. From visualization, which presents information to clinicians clearly and succinctly, to integration of AI output into workflows, all the way to automated decisions that act on ever-evolving algorithms, analytics and AI are key to a practical and effective architecture.

A large concern among healthcare leaders around establishing an AI architecture is cost—all that data can come at a hefty price for organizations of all sizes, not to mention the costs associated with hiring the right experts and training team members. However, AI can cut costs by automating tasks that would previously have been done by clinicians or staff, freeing up their time for more crucial work. And despite the associated costs, AI is no longer just a flashy option for healthcare organizations—it can provide a distinct advantage in both quality of care and business performance, as early AI adopters have begun to see in their own organizations.

Optimize for AI: Making an Impact

We’ve covered the need for quality data and for integrating that data into a secure, data-centric architecture to make it ready for AI. Now it’s time to ensure AI makes a lasting impact, both within your organization and for the patients you serve. This requires a shift in mindset from all employees, and it starts from the top.

C-suite leaders should work to create a community around AI, sharing inspiration and forming multi-disciplinary teams that elevate the AI narrative and boost economies of scale. This connection with others, from all parts of the organization, is key to reshaping the current landscape in the short-term, and lays the foundation for AI as the new normal. AI is neither the poison pill nor the ultimate cure-all, but powerful medicine in the quest for better care and lower costs.

 

Josh Gluck is an adjunct professor of health policy and management at NYU’s Wagner School of Public Service and the Vice President of Healthcare Technology Strategy at Pure Storage. Gluck has over 20 years of experience directing information technology initiatives, managing complex IT projects, leading technical and professional teams, and providing critical business strategy support. His previous roles include Deputy CIO for Weill Cornell Medicine and Director of Information Technology at New York Presbyterian Hospital.

 



Machine Learning Survey: Many Organizations Several Years Away from Adoption, Citing Cost

January 10, 2019
by Heather Landi, Associate Editor

Radiologists and imaging leaders see an important role for machine learning in radiology going forward; however, most organizations are still two to three years away from adopting the technology, and a sizeable minority have no plans to adopt machine learning at all, according to a recent survey.

A recent study* by Reaction Data sought to examine the hype around artificial intelligence and machine learning, specifically in the area of radiology and imaging, to uncover where AI might be more useful and applicable and in what areas medical imaging professionals are looking to utilize machine learning.

Reaction Data, a market research firm, got feedback from imaging professionals, including directors of radiology, radiologists, chiefs of radiology, imaging techs, PACS administrators and managers of radiology, from 152 healthcare organizations to gauge the industry on machine learning. About 60 percent of respondents were from academic medical centers or community hospitals, while 15 percent were from integrated delivery networks and 12 percent were from imaging centers. The remaining respondents worked at critical access hospitals, specialty clinics, cancer hospitals or children’s hospitals.

Among the survey respondents, there was significant variation in the number of radiology studies performed annually: 17 percent performed 100,000 to 250,000 studies each year; 16 percent performed 1 million to 2 million; 15 percent performed 5,000 to 25,000; 13 percent performed 250,000 to 500,000; and 10 percent performed more than 2 million studies a year.

More than three quarters of imaging and radiology leaders (77 percent) view machine learning as being important in medical imaging, up from 65 percent in a 2017 survey. Only 11 percent view the technology as not important. However, only 59 percent say they understand machine learning, although that percentage is up from 52 percent in 2017. Twenty percent say they don’t understand the technology, and 20 percent have a partial understanding.

Looking at adoption, only 22 percent of respondents say they are currently using machine learning—either just adopted it or have been using it for some time. Eleven percent say they plan to adopt the technology in the next year.

Half of respondents (51 percent) say their organizations are one to two years away (28 percent) or even more than three years away (23 percent) from adoption. Sixteen percent say their organizations will most likely never utilize machine learning.

Reaction Data collected commentary from survey respondents as part of the survey and some respondents indicated that funding was an issue with regard to the lack of plans to adopt the technology. When asked why they don’t ever plan to utilize machine learning, one respondent, a chief of cardiology, said, “Our institution is a late adopter.” Another respondent, an imaging tech, responded: “No talk of machine learning in my facility. To be honest, I had to Google the definition a moment ago.”

Survey responses also indicated that imaging leaders want machine learning tools to be integrated into PACS (picture archiving and communication systems) software, and that cost is an issue.

“We'd like it to be integrated into PACS software so it's free, but we understand there is a cost for everything. We wouldn't want to pay more than $1 per study,” one PACS Administrator responded, according to the survey.

A radiologist who responded to the survey said, “The market has not matured yet since we are in the research phase of development and cost is unknown. I expect the initial cost to be on the high side.”

According to the survey, when asked how much they would be willing to pay for machine learning, one imaging director responded: “As little as possible...but I'm on the hospital administration side. Most radiologists are contracted and want us to buy all the toys. They take about 60 percent of the patient revenue and invest nothing into the hospital/ambulatory systems side.”

And, one director of radiology responded: “Included in PACS contract would be best... very hard to get money for this.”

The survey also indicates that, among organizations that are using machine learning in imaging, there is a shift in how organizations are applying machine learning in imaging. In the 2017 survey, the most common application for machine learning was breast imaging, cited by 36 percent of respondents, and only 12 percent cited lung imaging.

In the 2018 survey, only 22 percent of respondents said they were using machine learning for breast imaging, while other applications saw an increase. Among respondents who have adopted and use machine learning, the next most-cited applications were lung imaging (22 percent), cardiovascular imaging (13 percent), chest X-rays (11 percent), bone imaging (7 percent), liver imaging (7 percent), neural imaging (5 percent) and pulmonary imaging (4 percent).

When asked what kind of scans they plan to apply machine learning to once the technology is adopted, one radiologist cited quality control for radiography, CT (computed tomography) and MR (magnetic resonance) imaging.

The survey also examines which vendors are being used among respondents who have adopted machine learning, and the findings indicate some differences compared to the 2017 results. No single vendor dominates the space: 19 percent use GE Healthcare and about 16 percent use Hologic, down from the 25 percent of respondents who cited Hologic as their vendor in last year’s survey.

Looking at other vendors being used, 14 percent use Philips, 7 percent use Arterys, and 3 percent use Nvidia, while Zebra Medical Vision and iCAD were each cited by 5 percent of medical imaging professionals. The percentage of imaging leaders citing Google as their machine learning vendor dropped from 13 percent in 2017 to 3 percent in this latest survey. Interestingly, the number of respondents reporting the use of homegrown machine learning solutions increased to 14 percent, from 9 percent in 2017.

 

*Findings were compiled from Reaction Data’s Research Cloud. For additional information, please contact Erik Westerlind at ewesterlind@reactiondata.com.

 


Drexel University Moves Forward on Leveraging NLP to Improve Clinical and Research Processes

January 8, 2019
by Mark Hagland, Editor-in-Chief
At Drexel University, Walter Niemczura is helping to lead an ongoing initiative to improve research processes and clinical outcomes by leveraging NLP technology.

Increasingly, the leaders of patient care organizations are using natural language processing (NLP) technologies to leverage unstructured data, in order to improve patient outcomes and reduce costs. Healthcare IT and clinician leaders are still relatively early in the long journey toward full and robust success in this area, but they are moving forward in healthcare organizations nationwide.

One area in which learnings are accelerating is medical research—both basic and applied. Numerous medical colleges are moving forward in this area, with strong results. Drexel University in Philadelphia is among that group. There, Walter Niemczura, director of application development, has been helping to lead an initiative that supports research and patient care efforts at the Drexel University College of Medicine—one of the nation’s oldest medical colleges, founded in 1848—and across the university. Niemczura and his colleagues have been partnering with the Cambridge, England-based Linguamatics to engage in text mining that can support improved research and patient care delivery.

Recently, Niemczura spoke with Healthcare Informatics Editor-in-Chief Mark Hagland, regarding his team’s current efforts and activities in that area. Below are excerpts from that interview.

Is your initiative moving forward primarily on the clinical side or the research side, at your organization?

We’re making advances that are being utilized across the organization. The College of Medicine used to be a wholly owned subsidiary of Drexel University. About four years ago, we merged with the university, and two years ago we lost our CIO to the College of Medicine. And now the IT group reports to the CIO of the whole university. I had started here 12 years ago, in the College of Medicine.


And some of the applications of this technology are clinical and some are non-clinical, correct?

Yes, that’s correct. Our data repository is used for clinical and non-clinical research. On the clinical side: the College of Medicine, the College of Nursing and the School of Public Health. And we’re working with the School of Biomedical Engineering, and with the College of Arts and Sciences, mostly the Psychology Department. But we’re using Linguamatics only on the clinical side, with our ambulatory care practices.

Overall, what are you doing?

If you look at our EHR [electronic health record], there are discrete fields that might have diagnosis codes, procedure codes and the like. Let’s break that apart. Take our HIV clinic—they might put down HIV as a diagnosis, but in the notes mention hepatitis B without putting that down as a co-diagnosis; it’s up to the provider how they document. So here’s a good example: HIV and hepatitis C have frequent comorbidity. Our organization asked a group of residents to go in and look at 5,700 patient charts for patients with HIV and hepatitis C. Anybody in IT could say we have 677 patients coded with both. But doctors know there’s more to the story. It turns out another 443 had HIV in the code and hep C mentioned in the notes. Another 14 had hep C in the code and HIV in the notes.

So using Linguamatics, it’s not 5,700 charts that you need to look at, but about 1,150—the patients who had both codes, plus the roughly 460 we found with the comorbidity mentioned only in the notes. Before Linguamatics, residents had to look at all 5,700 charts in cases like this one.
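As a back-of-the-envelope sketch of the kind of set logic being described here, the snippet below combines coded diagnoses with mentions found in free-text notes; the patient IDs and note text are invented, and a real deployment would use a text-mining tool such as Linguamatics rather than a simple substring test.

```python
# Hypothetical sketch: combine coded diagnoses with note mentions to decide
# which charts actually need review. IDs and note text are made up.
hiv_code = {"p01", "p02", "p03", "p04"}   # HIV recorded in a discrete field
hepc_code = {"p02", "p05"}                # hep C recorded in a discrete field
notes = {
    "p01": "Long-standing HIV; hepatitis C antibody positive.",
    "p03": "HIV on ART, no other active issues.",
    "p05": "Hep C follow-up; HIV positive per outside records.",
}

def mentions(term: str) -> set[str]:
    """Return patient IDs whose notes contain the term (naive match)."""
    return {pid for pid, text in notes.items() if term in text.lower()}

both_coded = hiv_code & hepc_code                                           # like the 677 group
hiv_coded_hepc_in_notes = (hiv_code - hepc_code) & mentions("hepatitis c")  # like the 443
hepc_coded_hiv_in_notes = (hepc_code - hiv_code) & mentions("hiv")          # like the 14

needs_review = both_coded | hiv_coded_hepc_in_notes | hepc_coded_hiv_in_notes
print(sorted(needs_review))  # far fewer charts than the full population
```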

So this was a huge time-saver?

Yes, it absolutely was a huge time-saver. When you’re looking at hundreds of thousands or millions of patient records, the value may lie not in the charts you have to look at, but in the ones you don’t have to look at. And we’re looking at building this into day-to-day operations. While we’re billing, we can pull files from that day and say: here’s a common comorbidity—HIV and hep C, with hep C mentioned only in the notes—is there a missed opportunity to get the discrete fields correct?

Essentially, then, you’re making things far more accurate in a far more efficient way?

Yes, this involves looking at patient trials on the research side, while on the clinical side, we can have better quality of care, and more updated billing, based on more accurate data management.

When did this initiative begin?

Well, we’ve been working with Linguamatics for six or seven years. Initially, our work was around discrete fields. The other type of work we do has to do with free text. Our rheumatology department wanted to find out which patients had had particular tests done—they were looking for terms in the notes. When a radiologist reports on your x-ray, it’s not like a test for diabetes, where a blood sugar number comes out; x-rays are read and interpreted. The radiologists gave us key words to search for—sclerosis, erosions, bone edema; there are about 30 words. They were looking for patients who had had particular x-rays or MRIs done, and instead of reviewing everyone who had those studies, we found roughly 400 whose reports contained these terms. That reduced the number of charts to review. The rheumatology department was recruiting patients who had had x-rays done and had these kinds of findings.

So the rheumatology people needed to identify certain types of patients, and you needed to help them do that?

Yes, that’s correct. Now, you might say we could do a word search in Microsoft Word, but the word “erosion” by itself might not help. You have to structure your query to be more accurate and exclude certain appearances of words, and Linguamatics is very good at that. I use their ontology, and it helps us understand the appearance of words within structure. I used to be in telecommunications; when voice-over-IP came along, there was confusion—you’d hear “buy this stock” when the message was “don’t buy this stock.”
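A toy stand-in for the kind of context-aware term search being described—flag report sentences that mention a finding but skip explicitly negated ones—is sketched below; the term list and negation handling are deliberately simplistic and are not how Linguamatics itself works.

```python
# Toy stand-in for context-aware term search in radiology reports:
# flag sentences that mention a finding unless the sentence negates it.
# A real NLP tool handles far more linguistic variation than this.
import re

TERMS = ["sclerosis", "erosions", "bone edema"]
NEGATION = re.compile(r"\b(no|without|negative for)\b", re.IGNORECASE)

def positive_mentions(report: str) -> list[str]:
    hits = []
    for sentence in re.split(r"[.;]\s*", report):
        for term in TERMS:
            if term in sentence.lower() and not NEGATION.search(sentence):
                hits.append(term)
    return hits

print(positive_mentions("Mild erosions at the MCP joints. No bone edema."))
# ['erosions'] -- "bone edema" is excluded because its sentence is negated
```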

So this makes identifying certain elements in text far more efficient, then, correct?

Yes—the big buzzword is unstructured data.

Have there been any particular challenges in doing this work?

One is that this involves an iterative process. For someone in IT, we’re used to writing queries and getting them right the first time. This is a different mindset. You start out with one query and want to get results back. You find ways to mature your query; at each pass, you get better and better at it; it’s an iterative process.

What have your biggest learnings been in all this, so far?

There’s so much promise—there’s a lot of data in the notes. And I use it now for all my preparatory research. And Drexel is part of a consortium here called Partnership In Educational Research—PIER.

What would you say to CIOs, CMIOs, CTOs, and other healthcare IT leaders, about this work?

My recommendation would be to dedicate resources to this effort. We use this not only for queries, but to interface with other systems. And we’re writing applications around this. You can get a data set out and start putting it into your work process. It shouldn’t be considered an ad hoc effort by some of your current people.

 

 



Artificial Intelligence Helps Detect Early Heart Disease, Mayo Researchers Find

January 8, 2019
by Rajiv Leventhal, Managing Editor

A new Mayo Clinic study has found that applying artificial intelligence (AI) to an electrocardiogram (EKG) test results in an easy early indicator of asymptomatic left ventricular dysfunction, which is a precursor to heart failure.

The Mayo research team found that the AI/EKG test accuracy compares favorably with other common screening tests, such as mammography for breast cancer. The findings were published in the journal Nature Medicine.

Asymptomatic left ventricular dysfunction is characterized by the presence of a weak heart pump with a risk of overt heart failure. It affects 7 million Americans and is associated with reduced quality of life and longevity. Asymptomatic left ventricular dysfunction is treatable when identified, Mayo researchers explained, but they added that there is currently no inexpensive, noninvasive, painless screening tool for the condition available for diagnostic use.

In their study, Mayo Clinic researchers hypothesized that asymptomatic left ventricular dysfunction could be reliably detected in the EKG by a properly trained neural network. Using Mayo Clinic stored digital data, more than 625,000 paired EKG and transthoracic echocardiograms were screened to identify the population to be studied for analysis. To test their hypothesis, researchers created, trained, validated and then tested a neural network.
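The article does not spell out the network architecture, but the general shape of such a model is a 1-D convolutional network over the EKG waveform; the sketch below is purely illustrative, with invented dimensions and random data, and is not the Mayo Clinic model.

```python
# Illustrative 1-D CNN over an EKG-like signal, assuming PyTorch.
# Dimensions, data and labels are invented; this is not the Mayo model.
import torch
import torch.nn as nn

class EkgNet(nn.Module):
    def __init__(self, leads: int = 12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(leads, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, 2)  # dysfunction vs. no dysfunction

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).squeeze(-1))

model = EkgNet()
fake_batch = torch.randn(8, 12, 5000)  # 8 synthetic 12-lead EKG recordings
logits = model(fake_batch)
print(logits.shape)  # torch.Size([8, 2])
```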

The study concluded that AI applied to a standard EKG reliably detects asymptomatic left ventricular dysfunction. The accuracy of the AI/EKG test compares favorably with other common screening tests, such as prostate-specific antigen for prostate cancer, mammography for breast cancer and cervical cytology for cervical cancer.

“Congestive heart failure afflicts more than 5 million people and consumes more than $30 billion in health care expenditures in the U.S. alone,” said Paul Friedman, M.D., senior author and chair of the Midwest Department of Cardiovascular Medicine at Mayo Clinic. "The ability to acquire a ubiquitous, easily accessible, inexpensive recording in 10 seconds—the EKG—and to digitally process it with AI to extract new information about previously hidden heart disease holds great promise for saving lives and improving health.”

The study also revealed that in patients without ventricular dysfunction, those with a positive AI screen were at four times the risk of developing future ventricular dysfunction, compared with those with a negative screen. “In other words, the test not only identified asymptomatic disease, but also predicted risk of future disease, presumably by identifying very early, subtle EKG changes that occur before heart muscle weakness,” said Dr. Friedman.
