As data analytics becomes increasingly common in healthcare, it is being applied to ever more complex and “advanced” purposes, and is reaching into all sorts of innovative niches in patient care organizations. That is certainly the case these days at the Altamonte Springs, Fla.-based Adventist Health System, where Stephen Knych, M.D., the 45-hospital-campus health system’s chief quality and patient safety officer, is leading a pioneering effort to leverage analytics to help improve the clinical skills and performance of surgeons engaged in robotic surgeries across the health system.
Dr. Knych presented on this initiative in a workshop focused on the “Business Case for Safety” last May at the annual NPSF Safety Congress, sponsored by the National Patient Safety Foundation. He also participated in the IHI National Forum on Quality Improvement in Health Care, held in Orlando in December and sponsored by the Cambridge, Mass.-based Institute for Healthcare Improvement.
Dr. Knych and his colleagues at Adventist have been partnering with the Seattle-based healthcare technology company C-SATS, in order to leverage analytics to improve surgeons’ clinical and operational performance.
Using C-SATS’ analytics at Adventist facilities, Dr. Knych and his colleagues have seen scientifically measured and statistically significant improvements in quality measures in robotic surgeries, specifically reductions in cases that were converted to open surgery, and in blood loss. They have also seen significant reductions in procedural costs.
Among the advances documented by the collaborative performance initiative in this area so far:
> Conversions from robotic to open surgery dropped by more than half after a surgeon received 10 or more C-SATS assessments (5.3 percent to 1.6 percent)
> Incidents of blood loss greater than 500ml during robotic surgeries dropped (2.4 percent to 0.7 percent) after a surgeon underwent 10 or more C-SATS assessments
> Median surgery time reduced by 22-23 minutes in laparoscopic hernia repairs (ASA Class I-II and III-IV) after C-SATS assessments
Recently, Dr. Knych, along with Derek Streat, CEO of C-SATS, spoke with Healthcare Informatics Editor-in-Chief Mark Hagland, to discuss the progress being made in this initiative. Below are excerpts from that interview.
Tell me about the origins of this program.
Stephen Knych, M.D.: Our interest in the program began at Adventist when we had developed a robotic-assisted minimally invasive surgery guideline, and had had that guideline approved internally, and our medical executive committees had approved it in whole or in part. We had 15 hospitals at that time engaged in robotically assisted minimally invasive surgery. And because there’s no nationally recognized body, as there is in weight-loss surgery, these guidelines helped us structure how we managed the program. So we put that out and gave our facilities a year and a half to adopt it; and then we recognized a gap for surgeons around continuing medical education credits for robotic-assisted minimally invasive surgeries. Even the medical specialty societies hadn’t created programs around this.
So we started looking for information. Dr. Richard Satava, an independent surgeon based in Washington state and an expert in the field of simulation and training, knew about an innovative approach at the University of Washington and introduced us to this technology. He worked on a fundamentals of robotic surgery curriculum, and he served on our robotic surgeon task force as an external simulation and training expert.
What pieces were missing, for practicing surgeons?
The robotic surgery guidelines are contained in a 25-or-so-page consensus document that had been developed by our task force. What was asked for was a specific number of robotic surgery-specific CME [continuing medical education] credits for each privileging cycle that all doctors go through in hospitals. But surgeons weren’t able to obtain those credits from their specialty societies, so we had to create this.
What types of gaps were there? Technical, clinical, process gaps?
All of the above—education and training, privileging, and so on. The guidelines drew on what was in the literature in 2015, when they were offered, pertaining to establishing robotic surgery guidelines: clinical practice, education and training, and some of the nuances around privileging and credentialing.
What was the role of C-SATS in this?
Derek Streat: As Dr. Knych mentioned, C-SATS was spun out of the University of Washington in 2014, based on research by my co-founder, Dr. Thomas Lendvay. Dr. Lendvay is our co-founder and CMO, and still a practicing pediatric urologist at Seattle Children’s Hospital. The goal was to address this area of skill improvement, and to do it in a scalable and effective way. Even in his own practice, he was finding it difficult to get feedback as a robotic surgeon. You might get feedback from someone looking over your shoulder—a colleague at best, a competitor at worst—and most of all, people didn’t have the time. So he came up with the idea of using distributed reviewers, people with certain skills around surgery. And then he figured out how to take a complex task like a surgical case, break it up into smaller pieces, and assign those pieces to people on a panel.
For example, you’d consider a robotic prostatectomy case, and take a video of the case being performed. And you’d say, OK, let’s have somebody evaluate the left and right-hand movements of the surgeon to determine their level of bimanual dexterity. So we take many pieces of the surgery and evaluate them. And he figured out that if you broke these surgeries into pieces and sent them out to people around the world, and got all that data back and rolled it up, you actually got a very accurate representation of that surgeon’s skill, and you could identify specific ways to improve performance. So we’re up to about 50 clinical articles, and we turned it into a product. So C-SATS is now a technology offering that people can use, to provide methodology and feedback for surgeons’ individual performance.
This has been commercialized as a service?
Yes, it’s a software-as-a-service offering.
How many organizations are using it?
Right now, we’re approaching about 100 hospitals around the country.
What percentage of the universe of robotic surgery is using this?
I’m not sure; this really works for any type of procedure, even beyond surgery, that can be video-recorded. As long as you can take a video of something, it can be evaluated in our system. As of today, five of the ten largest non-profit health systems in the country are using C-SATS to evaluate their robotic surgery procedures, and Adventist is one of those.
What kinds of things are you learning from using this application of this technology, Dr. Knych?
Knych: That’s a good question, and it’s interesting. The edict came down years ago from the Office of Technology in Washington, D.C., about converting to electronic medical records in hospitals, and yet people are still asking the question, what are we getting from spending billions and billions of dollars on converting to electronic records? One of the criticisms has been the question, what are we getting out of technology that makes the investment worthwhile? And this is an example of where technology can be leveraged to provide some very meaningful impact. I would classify it in this way: I as a surgeon can go into the OR and upload a case to C-SATS; they will assess the case with 30 or so reviewers, using a carefully evaluated tool, and can come back to me in five to seven days, confidentially, with reviews that give me a quantitative score about my skills, as well as qualitative reviews of me and my case. And the pièce de résistance is that, from the comfort of my own home, looking at my own computer or mobile device, I can look at educational skills offered through the program to improve myself, and can take this into my case tomorrow.
So this shows how technology can speed up processes. In this case, technology is accelerating the learning process. And that’s what we’re doing, and that’s where I think the payoff might be. The other part of my job is performance improvement. And performance improvement science tells us that you need actionable feedback that is timely, in order to create improvement; and this gives the surgeon actionable feedback that can accelerate the learning curve.
How many surgeons in the Adventist system are using this feedback solution and process now?
We have somewhere between 140 and 160 surgeons actively practicing using robotic assistance on this platform.
How many have made use of this process? All of them?
The latest numbers are showing us right around 74 percent. We instituted this program on a voluntary basis at our institutions. The surgeons choose to participate, and around 73-74 percent have done so. We had a nine-month pilot period that went from March 2016 through December 2016. So it’s been fully implemented at Adventist since the end of February 2017.
How would you say that this fits into the broader performance improvement movement in healthcare in general? And how does this fit into the broader culture around performance improvement at Adventist, and what you’re trying to accomplish?
It fits very much with the culture we’ve been advancing here. We have six imperatives at Adventist Health System that we are pursuing, and this fits squarely into several of those. One of those is called “Improve the Product”; another is “Improve People Systems.” And also, we’re finding that we’re lowering the cost, because we’re finding that as our skills increase, our efficiency increases, and our costs go down. So it hits three of our six imperatives. With regard to your question about surgeons being resistant to performance improvement initiatives in the past, I think some of that is changing, with the shift from volume to value. Surgeons are becoming much more accustomed to seeing performance measures that drill down to them. We have hospital medicine measures, patient satisfaction measures; they’re getting quite used to working with measures that relate directly to them.
But I agree with your premise about culture and process: when you look at how surgical performance and quality have been measured in the past, through very distant measures, such as mortality statistics, and such—when you start at that level, it’s very difficult for surgeons to embrace what their performance has had to do with that outcome; but when you give them a measure, saying, this is your individual surgical skill, they embrace that very quickly. So if you can give them measures that really connect to what they do, and to clinical outcomes, they will embrace those measures, as long as they are objective and connect directly to their practice. We all think we’re superior in skill, but will adjust when we’re able to see the data.
Streat: The speed with which you’re able to receive and respond to the feedback is impressive. You may see that your tissue-handling score was 4.0 for a particular case, and then you’ll get specific feedback about how you can improve that specific aspect in your next case, and can see videos of better performance. And doctors have said, I got feedback and immediately changed my behaviors with the next case. So the data becomes addictive to people; they want to see how their scores improve.
What will happen at Adventist in the next year or two, in this area?
Knych: We’re investigating two opportunities with C-SATS: first, deploying this to our advanced laparoscopic surgeons; and second, developing similar assessments for robotic surgical first assistants, the people who assist in the surgery. There’s a whole industry that provides surgical assistants. Some are surgical residents and interns; but in community hospitals with no training programs, some are PAs, some are nurses, and so on. But they all go through training as surgical first assistants, and are privileged and credentialed according to those requirements.
What should the IT and data analytics professionals in other hospitals and health systems around the country know about this, especially with regard to its potential to change behaviors?
Yes, it will change behaviors. But it also has to fit within their workflow. It has to be presented to them in a meaningful, actionable, timely way; and with people running back and forth everywhere across the hospital, it has to reach them within their workflow—so being able to present it on mobile devices, or wherever it’s convenient for them, is very important, because knowledge has a very short half-life. If I see feedback on a case from a few weeks ago, it’s less meaningful than if I see it within a week. If I’m sitting in the physicians’ lounge and can quickly take in the information, I can act on it right away. So it’s very important for the technology to allow for the portability of this information, and for the workflow integration.
Streat: I think that two things are important here. One, this is very much a data story; in particular, it’s the creation of a new data set, connected to individual clinical performance, that either didn’t exist or has been hidden until recently. That needs to be unearthed. The other side of the story is that I’m a technologist, and the companies I’ve built in healthcare and other industries have typically been data mining or AI companies. That’s where this is going. Even now, when a surgeon gets feedback, much of that has been powered by information coming from preceding systems. We’re approaching 3 million assessments that have been run through our service; that’s a lot of data that’s been integrated into the system. And ultimately, that starts to allow you to predict the outcomes and the performance that will be experienced by surgeons and patients. Looking at past cases allows you to do prescriptive improvement—it helps drive the outcomes you want to drive. And that’s driven by having enough data, and by training the systems to produce the actionable recommendations you want; and that’s what we’re starting to do today.
Knych: The fact that I can get a turnaround this tight on my own information—using a standardized, validated research assessment tool, reviewed by 30 people in many locations plus one to three experts, and specifically targeted on key aspects of the procedure that are important to outcomes—that is important. And I can’t get all those elements pulled together in that short a time without technology.