
One-on-One With Mercy Medical Center CIO Jeff Cash, Part III

October 30, 2008
When it comes to accessing records and maintaining communication, there's no such thing as too much backup.

This past June, Iowa was hit by some of the worst flooding in the state's history, and right in the thick of it was the 370-bed Mercy Medical Center. More than 4,000 homes in the area had to be evacuated, and the hospital was forced to move 176 patients to nearby facilities. Yet even as water rose high enough that sandbags had to be piled up outside the doors, physicians were never left in the dark: the facility's network, EMR and communication systems stayed up through the entire ordeal. For Jeff Cash, Mercy's vice president and CIO, it was the ultimate test of his staff's preparedness.

Part I

Part II

KH: Going back to the electronic records, how were you able to ensure that clinicians had access even as staff and patients were being transported?

JC: Our EMR is Meditech, and on top of it we've layered a physician portal that we bought from Patient Keeper. Although we have redundancy with Patient Keeper here in the hospital, there was a period of time when we didn't have Internet service, and a period of time when we were concerned we would lose electrical distribution inside the facility because of the flooding. So Patient Keeper was able to reestablish our EMR off of one of our backups. They took a point-in-time snapshot and restored it at their facility in Boston, so we were able to let physicians continue to access our EMR, and they would have been able to do that even if we didn't have any facilities at all.

Bear in mind, we had 176 patients we had to transfer outside the facility, and although some paperwork and medical record information went with those patients, their full history was still in our system. We needed to give physicians access to those records, so we had Patient Keeper activate that portion for us from their facility in Boston. So a number of physicians were able to log in to that site and take care of our Iowa-based patients using our Web portal in Boston, interestingly enough.

That part of the backup and recovery worked extremely well. Those are the two areas we thought we were going to need, and as it turned out, that was correct. Going forward, we'll always have provisions in place to host a copy of our hospital patient EMR, as well as our consumer portal, offsite from our facilities, just in case we should go through something like this again.
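For readers who want a concrete picture of the failover Cash describes, the sketch below models the pattern in Python: pick the latest point-in-time snapshot, ship it to an offsite host, and repoint the physician portal at that copy. Every path, host name and helper here is hypothetical; this is an illustration of the approach, not Mercy's or Patient Keeper's actual tooling.

# Illustrative sketch only: models the offsite EMR failover described above.
# All paths, host names and commands are hypothetical.
import subprocess
from datetime import datetime, timezone
from pathlib import Path

SNAPSHOT_DIR = Path("/backups/emr")             # hypothetical local backup location
OFFSITE_HOST = "dr-portal.example-boston.net"   # hypothetical offsite (Boston) host
PORTAL_CONFIG = Path("/etc/portal/datasource")  # hypothetical portal config file

def latest_snapshot() -> Path:
    """Pick the most recent point-in-time snapshot from the backup set."""
    snapshots = sorted(SNAPSHOT_DIR.glob("emr-*.snap"))
    if not snapshots:
        raise RuntimeError("no EMR snapshots available for offsite restore")
    return snapshots[-1]

def ship_offsite(snapshot: Path) -> None:
    """Copy the snapshot to the offsite host (rsync over SSH, as one example)."""
    subprocess.run(["rsync", "-az", str(snapshot), f"{OFFSITE_HOST}:/restore/"], check=True)

def repoint_portal() -> None:
    """Switch the physician portal to read from the offsite copy."""
    stamp = datetime.now(timezone.utc).isoformat()
    PORTAL_CONFIG.write_text(
        f"# failover activated {stamp}\nemr_source = https://{OFFSITE_HOST}/emr\n"
    )

if __name__ == "__main__":
    snap = latest_snapshot()
    ship_offsite(snap)
    repoint_portal()
    print(f"Portal now serving records from offsite copy of {snap.name}")

In Mercy's case the restore was driven from the vendor's side, but the sequence (snapshot, offsite restore, repoint the portal) is the same.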

KH: I think it’s safe to say that the next time this happens, you guys will be ready.

JC: I think we will. The other thing we did, which we've gotten some press for, is that we are on the Qwest SONET ring here in the community. The SONET ring has entrances that come in at opposite ends of our hospital to bring in all of our data services: long distance and local telephone lines and Internet services. We have 20 outside clinics that we support as part of our hospital network, and of course the data that feeds their primary care EHR comes from the hospital. To stay connected with those communities, we thought it was important to be on the SONET ring so that if we had a fiber cut somewhere, we wouldn't lose access to the Qwest network.

Well, the one part of that you might call the Achilles' heel was that we had all the SONET equipment in the hospital at a below-ground level, and that ended up being down near our electrical switch distribution gear. So we started having the same problem with water coming in through the walls in that area as well. We were afraid we were going to lose that outside communication, so we called Qwest and told them we needed them to move their SONET gear and demarc from the basement up to the first-floor data center, which left us down for about an hour and a half, and they were a little shocked.

They were here in a matter of 30 minutes with a couple of their engineers, and they actually went over the sandbags and waded through the water all the way down to the basement. They were able to disconnect everything, reattach all the fiber, bring it up in one of our elevators, put it back in the data center and connect it back up, all within a matter of a couple of hours. That brought us back on the network, so we still had access to all of our outside communications. It was a tremendous effort by them to put it back together, and now that we've gone through that pain, we're going to continue to keep that gear hosted in an above-ground location.
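The value of the diverse SONET entrances Cash describes comes down to a simple rule: a single fiber cut or a flooded entrance should degrade the ring, not sever it. The sketch below, which uses hypothetical probe targets, shows how a monitoring check might distinguish a degraded ring (one entrance down) from a true loss of outside communications (both down).

# Illustrative sketch only: probe two diverse building entrances and alarm
# only when both are unreachable. Host names and ports are hypothetical.
import socket

PATHS = {
    "north entrance": ("gw-north.example.net", 443),
    "south entrance": ("gw-south.example.net", 443),
}

def path_is_up(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection through this path succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_ring() -> None:
    status = {name: path_is_up(host, port) for name, (host, port) in PATHS.items()}
    if not any(status.values()):
        print("ALERT: both entrances down; outside communications lost")
    elif not all(status.values()):
        down = ", ".join(name for name, up in status.items() if not up)
        print(f"WARNING: {down} down; ring carrying traffic on the surviving path")
    else:
        print("OK: both entrances reachable")

if __name__ == "__main__":
    check_ring()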

KH: How else were you able to maintain communications?