
One-on-One With Mercy Medical Center CIO Jeff Cash, Part II

October 14, 2008
Cash never expected to get hit with a flood — but he did have plans in place in the event of a disaster.

This past June, Iowa was hit by some of the worst flooding in the state’s history, and right in the thick of it was the 370-bed Mercy Medical Center. More than 4,000 homes in the area had to be evacuated, and the hospital was forced to move 176 patients to nearby facilities. However, despite water levels rising so high that sandbags had to be piled up outside the doors, physicians were never left in the dark, as the facility’s network, EMR and communication systems stayed up during the entire ordeal. For Jeff Cash, Mercy’s vice president and CIO, it was the ultimate test of his staff’s preparedness.

Part I

KH: As far as the patient records, were clinicians able to access EMRs the entire time?

JC: Absolutely. We really have two primary systems that house our EMR. We use the Meditech Magic system, and we have two of those as well: one running in our primary data center and a back-up available in our secondary data center. So in the primary data center, which wasn’t really affected by the flood at all, we continued to operate Meditech. We did decide to shut it down at one point, for a brief period of about 18 hours after we had evacuated our patients, and that was for the internal records we use. That was just a precautionary measure; we were concerned that if for some reason our generator power couldn’t be distributed, we’d have a big problem.

What we did to remediate that was, we had several contractors come in during the flood, and we pulled generator feeds directly through the first floor of the hospital and had a permanent installation done between our generators and our primary data center that completely bypassed all the electrical switches on the ground floor and below. So we were able to pipe generator power directly in, almost through an overhead system, into the primary data center. With that, we were comfortable we would not lose electrical power again in the foreseeable future, so then we started bringing all of those systems back up.

KH: Now that you’ve been through a disaster situation, how prepared would you say that Mercy was for such an event? Did you have the necessary steps in place?

JC: We have a business continuity plan that we used to drive most of the planning for what we built, so I think that helped us prepare for the vast majority of the relocations and the additional redundancy we had. We have tested some of that redundancy in the past, especially things like the communication systems and call manager; we’ve run tests to fail them over. We’ve brought up our secondary Meditech system in the past, so we’ve been able to test that as well, even though we didn’t have to use it this time.
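The kind of failover drill Cash describes can be reduced to a scheduled readiness probe against the secondary site. The sketch below is only an illustration, with hypothetical hostnames and ports rather than Mercy’s actual systems; Meditech and the call manager would normally be verified with their own vendor tooling.

```python
# Minimal sketch of a failover-readiness probe for a secondary data center.
# Hostnames and ports are hypothetical placeholders, not Mercy's systems.
import socket
from datetime import datetime

SECONDARY_ENDPOINTS = {
    "meditech-backup": ("meditech-dr.example.org", 443),
    "callmanager-backup": ("callmanager-dr.example.org", 2000),
}

def probe(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    stamp = datetime.now().isoformat(timespec="seconds")
    for name, (host, port) in SECONDARY_ENDPOINTS.items():
        status = "UP" if probe(host, port) else "DOWN"
        print(f"{stamp} {name} ({host}:{port}): {status}")
```

A reachable port is a much weaker guarantee than the full application-level failover Mercy tested, but a scheduled check like this is a common first tier of evidence that secondary systems are ready before a drill exercises them end to end.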

One of the things we’ve planned is to make our data centers essentially as portable as possible, with the intention that if something like this came up, or should we need to move a data center at some point in the future, we’d be able to do that. Our data centers have been built in a modular fashion, so we’ve tried to keep all of our cabinets self-contained, in the sense that we’ve moved to a blade server infrastructure. Over the last couple of years we’ve been replacing, at a very rapid rate, all of our traditional servers with blade servers, so that makes it a lot easier to have a smaller number of cabinets that we would have to move if we had to be portable.

We moved to a fiber-based architecture for all of our network switching, and we’ve put in a large storage area network that’s redundant between both of our data centers as well. We use two big HP EVA SANs, and we have Business Copy replication between them across our data centers. The idea is that if you had to move a cabinet, or an entire data center, it should be as easy as pulling the power off the cabinet, pulling the network connections off the cabinet, moving it to an alternate location, plugging it back in where you have a network connection, and being back online.
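The “pull the power, pull the network, plug it back in” approach Cash outlines implies that a relocated cabinet is considered back online once every server in it resolves and answers at its usual name from the new location. A minimal sketch of that post-move check, assuming hypothetical blade hostnames rather than anything from Mercy’s environment, might look like this:

```python
# Minimal sketch of a post-relocation check for one portable cabinet.
# Blade hostnames are hypothetical placeholders, not Mercy's inventory.
import socket

CABINET_HOSTS = ["blade01.example.org", "blade02.example.org", "blade03.example.org"]
MGMT_PORT = 22  # any well-known management port serves as a liveness check

def back_online(host: str, port: int = MGMT_PORT, timeout: float = 5.0) -> bool:
    """True if the host still resolves in DNS and accepts a TCP connection."""
    try:
        socket.gethostbyname(host)
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    offline = [h for h in CABINET_HOSTS if not back_online(h)]
    if offline:
        print("Still offline:", ", ".join(offline))
    else:
        print("All blades in the cabinet are reachable at the new location.")
```

The point of the sketch is only that, with the data held on the replicated SAN, confirming a moved cabinet can be as simple as confirming its servers answer again; the heavy lifting is done by the redundant storage and fiber network rather than by the servers themselves.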

I think that helped us with the ability to move out of the data center. The evacuation of our second data center lasted a bit longer, so what we ended up choosing to do was remain in our primary data center, and we took the opportunity to significantly expand, rebuild and update our second data center. We did end up doubling up on our primary data center for a period of time longer than you might traditionally have expected, due to the evacuation.

We were able to do that with those modular building blocks, essentially; not having a PBX to tie us down and not having all the traditional copper cabling to tie us down. By keeping a high concentration of servers in a single cabinet, it’s much easier to move them on a portable basis.

KH: Does it become a challenge to prioritize which systems will stay up during an emergency?