The Mayo Clinic Health System in Mankato is an 11-hospital, five-clinic health system under the broader umbrella of the national Mayo Health System. Mayo-Mankato, based in Mankato in southeastern Minnesota, has 17 staff radiologists.
The leaders at Mayo-Mankato have been moving forward in a variety of ways in the imaging informatics space, with a desire to move beyond the storage and image presentation capabilities of the PACS (picture archiving and communications systems) and RIS (radiology information systems) that virtually all radiology organizations already use.
Ernest Beaupain, PACS administrator at Mayo Clinic Health System in Mankato, has been helping to lead a process-improvement effort around critical test results reporting for diagnostic imaging tests. In that area, he and his colleagues have been partnering with the Sarasota, Fla.-based peerVue. Beaupain spoke during the RSNA Conference, being held this week at the McCormick Place Convention Center in Chicago, with HCI Editor-in-Chief Mark Hagland regarding the current initiative unfolding at Mayo-Mankato. Below are excerpts from that interview.
What was the original impetus for moving forward on your current critical results reporting initiative?
The original impetus, really, for purchasing peerVue, was a determination of non-compliance with the Joint Commission’s requirements on critical test results reporting.
When did you go live with the peerVue solution?
We became a customer of peerVue in 2010, and went live in the summer of 2011 with their critical test results management solution; we went live with their semi-urgent test results management solution that fall, along with two other modules from them: peer review, and technologist quality improvement. The technologist quality improvement module measures the skill with which radiologic technologists perform diagnostic exams, and helps us to assess trends that are developing. During 2011, we implemented the critical test results management solution in eight of the eleven hospitals; the other three went live in 2013.
Tell me more about your implementation of the critical test results management solution. How had the critical test results management process been executed prior to automation?
Before the implementation of the system, the degree of success with the communication of critical test results was largely unknown; the radiologists had pen and paper, but usually just called the clinicians to convey those results. But in any busy physician practice, getting hold of your colleague is nearly impossible. Now, if radiologists wish to call referring physicians to inform them of critical test results, they still can, and use peerVue to track that call; but they also have the ability to send the result to an internally staffed call center. What had been hindering them from moving on to the next case after making that initial call was the burden of having to continue making repeated attempts at contact.
Do you have any before-and-after metrics on this?
We don’t have any solid metrics about what went on before implementation, but it was taking at least an hour, on average; following implementation, we reduced that time period to under 30 minutes.
With regard to semi-urgent results communication, what has happened?
The degree of improvement was the same, and that measure was also something that had never been tracked before. They would make phone calls in some circumstances. But often, what would happen with, say, an incidental lung nodule they'd noticed when they were looking at the base of the lung, was that they would note it in the report, but no one would be called. And sometimes, the clinicians wouldn't even read the report; the process relied on the referring physicians seeing and reading those things. The driver there was patient safety, even though the Joint Commission doesn't address those situations specifically.
Tell me a bit more about the peer review process, as facilitated by automation.
In that, we are being driven by the radiologists' workflow. What they've chosen to do is this: each day, for the first five cases that have prior exams, they do the peer review on the prior exams.
Are you using analytics yet?
No, right now, the radiologists have chosen purely time-based selection in terms of which cases they’re looking at. We’re not using the engine to do prospective peer review.
How has the implementation of the critical test results solution influenced workflow, process, and culture in your organization?
Radiologists have seen a dramatic improvement in their workflow because of this. They can submit the case to peerVue, know that it’s being taken care of, and move on to the next case.
And the referring physicians have gotten used to having a call center call them?
That's where the culture change comes in. Two-and-a-half years out, there are still multiple clinicians who are saying, "I'm an orthopedic surgeon, I know that heel is fractured, so why are you bothering me?" So that remains a challenge.
Aren’t the referring physicians grateful, though, with regard to results that are life-threatening?