Three years ago, Hurricane Katrina wreaked havoc on New Orleans, and particularly on the city's hospitals, forcing patients to relocate to football stadiums and physicians to administer treatment without access to medical records. The destruction the city faced was well documented and left scars on those living in the area. But the storm also had one positive effect: it was a much-needed catalyst for local healthcare organizations to do better next time.
This past September, when Gustav, a Category 2 hurricane, barreled toward the city, Ochsner Health System was better prepared. The disaster recovery measures put in place over the intervening years played a significant role as patients were treated and EMRs stayed up and running throughout the storm.
“We had a real-life experience that taught us the value of getting our disaster plan in place,” says Lynn Witherspoon, M.D., vice president and CIO at Ochsner, a seven-hospital, academic health system based in New Orleans. “Aside from the one brief interruption of some services in our Elmwood Facility, things went on pretty normally,” he says. “The ability to evacuate two significantly sized hospitals and turn around and do it all over again in the course of about five days is pretty incredible. We're very pleased with the performance of our platforms and systems.”
Disaster preparedness and recovery became a big priority after Katrina. Since that time, Ochsner has created a formal disaster command center, adopting the approach established by the Washington-based Federal Emergency Management Agency. The health system also added several power generators and increased redundancy in its wide-area network, Witherspoon says.
This type of planning, says Jonathan Thompson, vice president of client services at Minneapolis-based Healthia Consulting, is critical and should be a model for all health systems. “I can't tell you how many times we've had an organization tell us they have a disaster plan, and it's a gigantic book created by a consultancy that came in and spent a lot of money to develop a deliverable that's really never been touched.”
Lately, however, the tide seems to be turning, Thompson says, as there is increased awareness around disaster preparedness and recovery (see chart). “It's not just an application or system or network infrastructure going down,” he says. “There's a greater risk of additional, bigger issues that can not only create system downtime, but affect where your people are, your building infrastructure, and the way you access the systems to recover.”
Two of the most serious issues CIOs must deal with during a natural disaster, loss of power and impaired access to IT systems, are closely intertwined: keeping IT systems up for as long as possible requires power. And while most hospitals have emergency power capabilities, sometimes that isn't enough.
After losing part of its internal power-generation capability during Katrina, which made it difficult to keep systems cool, Ochsner added several generators during its recovery process. Since then, the IT team has also remained vigilant about refueling to ensure the generators can keep running. This step, Witherspoon says, proved pivotal during Gustav, as Ochsner never lost any of the systems in its main campus data center.
Keeping the data center running and ensuring that users can access IT systems is critical, as was demonstrated again during the recent storm, Witherspoon says. Ochsner, which has three uninterruptible power supply (UPS) units that protect the data center's electrical platform, experienced several dozen intermittent losses of commercial power in the computer room and was forced to shift to generator power. The UPS units worked reliably the entire time, says Witherspoon, who advises keeping up maintenance on the units and having at least a half-dozen spot coolers or portable air conditioning units on hand.
However, Witherspoon notes, it takes more than just sufficient power to keep things running. “In terms of the integrity of the data center and keeping systems up, the other piece that's critical is that end users are able to get to those services,” he says. Prior to Katrina, Ochsner had not Web-enabled its core clinical platforms, and as a result, clinicians had to be physically connected to the network. In the ensuing years, enabling Web access to all of its clinical platforms became a top priority. “That proved to create a lot of versatility as both doctors and patients were evacuated and found themselves in unusual places where, otherwise, no one could have accessed medical records.
“Since Katrina, we have extensively increased the redundancy in our wide-area network,” he says, “which worked brilliantly during this storm.” Ochsner collaborated with the three Internet providers it uses to create a redundant backup network through which the IT staff was able to support the entire operation, including PACS.