Attendance at the annual RSNA Conference, sponsored by the Oak Brook, Ill.-based Radiological Society of North America, and being held this week at Chicago’s vast McCormick Place Convention Center, remained steady this year, closely matching the attendance recorded at recent RSNA Conferences.
As of Monday, total advance registration for this conference, which brings together radiology professionals from all over the world, stood at 48,615, including 22,914 professional registrants. Those figures compare with totals of 48,445 in 2017 and 48,888 in 2016, and with 23,097 and 23,656 professional registrants in those two years, respectively. In other words, the figures this year are almost exactly the same as last year and the year before.
Meanwhile, 693 companies are exhibiting this year, compared to 667 last year, and 691 in 2016.
These advance registration figures represent the number of individuals who had planned to attend. Some attendees this year have been delayed because of an intense snowstorm that hit Chicago from the early morning hours into mid-morning, and that has caused travel chaos across a large swath of the central United States. On Sunday, more than 800 flights were cancelled at Chicago’s O’Hare Airport, and about 300 flights were cancelled at Chicago’s Midway Airport; a few hundred additional flights were cancelled at O’Hare Airport on Monday.
Meanwhile, those who were able to make it to Chicago had the opportunity on Sunday to hear a president’s address given by Vijay M. Rao, M.D., the current president of RSNA, one that focused strongly on artificial intelligence and machine learning. “No matter where I travel, I see the hype, the hope and the fear created by the rapid rise of technologies, such as artificial intelligence and machine learning,” Dr. Rao told the audience in Arie Crown Theater for Sunday’s opening session, according to Monday’s edition of the official Daily Bulletin of RSNA. “I believe more firmly than ever that AI has the potential to enhance our profession and transform the practice of radiology worldwide. It will allow radiologists to spend more time on initiatives that will benefit both patients and physicians.”
In her president’s address, Dr. Rao peered into the future to imagine how the growth in digital imaging and the overall explosion of data now available in medicine could address some of the current challenges faced in radiology. She called for a rebranding of reading rooms into digital diagnostic data hubs where clinical teams could gather, or even participate virtually, to make patient management decisions as a group. And she suggested that, at these diagnostic data hubs, radiologists could turn to AI to aggregate current imaging findings with those of prior images from other modalities, along with lab results, biopsy findings, and key aspects of the histories and physical exams of patients.
Nor was Dr. Rao speaking in the abstract; as the Daily Bulletin also reported, researchers at Stanford University have developed a model employing machine learning techniques to assess the efficacy of ultrasound surveillance of hepatocellular carcinoma (HCC) in high-risk patients. As Lynn Antonopolous reported, “Long-term, longitudinal data from the study may help validate and improve care recommendations and assess the clinical outcomes of HCC surveillance patients.” And she quoted Hailey Choi, M.D., Ph.D., an assistant professor of clinical radiology at the University of California, San Francisco (UCSF), who said of the research program, “The development of robust AI natural language processing techniques, and the introduction of structured reporting with the American College of Radiology’s (ACR) ultrasound Liver Imaging Reporting and Data System (LI-RADS) in recent years, presented an opportunity for us to review our own clinical experience with US screening for HCC on a large scale.”
Dr. Choi and her team assessed the free text in a selection of 13,860 ultrasound (US) screening and surveillance exams from 4,830 subjects, performed between 2007 and 2017, prior to the release of the US LI-RADS specifications. Then, using 1,744 more recent reports containing US LI-RADS specifications, they applied a scalable, ensemble machine learning approach to build a model that inferred US LI-RADS categories from neural word embedding analysis of the body text, a process that mathematically represents words and can gauge the relationships between them. “We created a lexicon of key terms used in ultrasound liver imaging to provide a framework for analysis and machine learning algorithms on the report text; we also labeled a subset of the unstructured reports for further training of the model,” she noted in a presentation.
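The workflow Dr. Choi describes, a lexicon of key imaging terms plus a model trained partly on labeled reports, can be sketched in miniature. The snippet below is purely illustrative and is not the research team’s code: the report text and training labels are invented for demonstration, and a tiny bag-of-words nearest-neighbor matcher stands in for the team’s neural word embeddings and ensemble model. The category labels loosely echo the ACR’s published US-1/US-2/US-3 scheme.

```python
# Illustrative sketch only: infer a coarse LI-RADS-like category from
# free-text ultrasound report bodies. All terms, reports, and labels
# below are invented for demonstration.
from collections import Counter
import math

# Hypothetical lexicon of key ultrasound liver-imaging terms.
LEXICON = ["nodule", "mass", "lesion", "echogenic", "cirrhosis",
           "thrombus", "negative", "benign", "unremarkable"]

def featurize(report: str) -> list:
    """Count lexicon-term occurrences in a lowercased report body."""
    counts = Counter(report.lower().split())
    return [counts[term] for term in LEXICON]

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two count vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# A handful of hand-labeled reports stands in for the labeled subset
# the team used for further training of the model.
TRAIN = [
    ("liver unremarkable negative exam", "US-1 negative"),
    ("benign cyst lesion stable", "US-2 subthreshold"),
    ("new echogenic mass suspicious nodule", "US-3 positive"),
]

def infer_category(report: str) -> str:
    """Match a report to the most similar labeled example."""
    vec = featurize(report)
    best = max(TRAIN, key=lambda ex: cosine(vec, featurize(ex[0])))
    return best[1]

print(infer_category("interval growth of echogenic nodule in right lobe"))
```

A real system would swap the count vectors for learned word embeddings and the nearest-neighbor lookup for an ensemble classifier, but the pipeline shape, lexicon, featurization, labeled examples, inference, is the same.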
None of the emphasis on AI and machine learning surprises James Whitfill, M.D., chief medical officer at Innovation Care Partners, a clinically integrated network based in Phoenix. “Certainly, it’s another year where machine learning is absolutely dominating the conversation,” Dr. Whitfill told Healthcare Informatics Monday at McCormick Place. “In radiology, we continue to be aware of how the hype of machine learning is giving way to the reality; that it’s not a wholesale replacement of physicians. Tremendous advances in, for example, interpreting chest x-rays; some of the work that Stanford’s done,” he said, referencing the work of Dr. Choi and fellow researchers. “They’ve got algorithms that can diagnose 15 different pathological findings. So there is true material advancement taking place. At the same time,” he cautioned, “people are realizing that coming up with the algorithm is one piece, but that there are surprising complications. So you develop an algorithm on Siemens equipment, but when you move to Fuji, the algorithm fails—it no longer reliably identifies pathology, because it turns out you have to train the algorithm not just on examples from one manufacturer, but from lots of manufacturers.”
Indeed, Dr. Whitfill said, “We continue to find that these algorithms are not as consistent as identifying yourself on Facebook, for example. It’s turning out that radiology is way more complex. We take images on lots of different machines. So huge strides are being made. But it’s very clear that human and machine learning together will create the breakthroughs. We talk about physician burnout, and even physicians leaving. I think that machine learning offers a good chance of removing a lot of the drudgery in healthcare. If we can automate some processes, then it will free up our time for quality judgment, and also to spend time talking to patients, not just staring at the screen.”
Meanwhile, Dr. Whitfill said, “Every booth and product at RSNA this year will talk about machine learning and artificial intelligence, just as has been the case with population health at HIMSS in the past couple of years,” referring to the annual HIMSS Conference, sponsored by the Chicago-based Healthcare Information and Management Systems Society. “But the dirty secret is that everybody’s still struggling with what really works. We haven’t seen commercial viability of much. And you have to pay for the machine learning out of your existing capital, and that’s a challenge.”