
When Disaster Strikes: How Technology Drives Better Preparation

October 1, 2012
by John DeGaspari
How are providers using technology to improve their disaster recovery processes?


Three hospital systems provide details about how technology has influenced the way they prepare for disasters and what they have learned from their experiences.

Disasters can strike at any time, and there is really no way provider organizations can completely insulate themselves from unforeseen or large-scale natural events such as hurricanes, floods, and fires. Nonetheless, as hospitals continue on their steady march to becoming paperless organizations, many are following strategies that minimize their risk of unplanned downtime.

Key to any disaster recovery effort is the ability to protect electronic data, whether in the core clinical information systems or in ancillary systems such as imaging or business functions, according to experts interviewed for this article. Jeff White, a principal at the Pittsburgh-based Aspen Advisors, LLC, notes that disaster planning is typically a top-down process that includes the clinical and business units of a hospital organization. The IT department, he says, should play a central role as implementer, charged with enacting plans, making the investments in technologies, and architecting systems that meet the clinical and business requirements.

How is technology driving better preparedness? Healthcare provider organizations are following various strategies to prepare against unplanned downtime. White points to a few trends that explain progress in the disaster recovery arena. More hospital systems are moving toward multiple-data-center environments, purely for the sake of disaster recovery and business continuity.


Core electronic health record (EHR) systems have an architecture that lends itself to real-time or near real-time replication of data. The data exists in a single database, so it can be replicated in its entirety from a primary site to a secondary site. Replication can be done in near real time, resulting in minimal data loss in case of an interruption.
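That kind of replication can be monitored continuously. The sketch below shows one way to check how far a secondary-site database has fallen behind the primary; it assumes a PostgreSQL-style streaming replica, and the hostnames, credentials, and lag threshold are hypothetical stand-ins, not details from any of the organizations quoted here.

    # Minimal sketch of a replication-lag check, assuming a PostgreSQL-style
    # streaming replica. Hostnames, credentials, and the threshold are hypothetical.
    import psycopg2

    REPLICA_DSN = "host=dr-site.example.org dbname=ehr user=monitor"  # hypothetical
    MAX_LAG_SECONDS = 60  # example recovery-point objective, not a recommendation

    def replication_lag_seconds(dsn):
        """Ask the standby how many seconds behind the primary it is."""
        with psycopg2.connect(dsn) as conn:
            with conn.cursor() as cur:
                cur.execute(
                    "SELECT EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp()))"
                )
                (lag,) = cur.fetchone()
        return lag

    if __name__ == "__main__":
        lag = replication_lag_seconds(REPLICA_DSN)
        if lag is None or lag > MAX_LAG_SECONDS:
            print("WARNING: secondary site is lagging; data-loss window may exceed the RPO")
        else:
            print(f"Replica is {lag:.1f} seconds behind the primary")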

For ancillary systems, many providers are moving to a virtualized environment. Some providers, especially larger ones, have invested in storage area network (SAN) replication, with a duplicate SAN at a remote site. This can be an expensive set-up, and some mid-sized hospitals are still on that migration path. The advantage is that replication can occur very quickly.
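Whatever the replication mechanism, the replicated copies still have to be verified. The following sketch illustrates one simple, vendor-neutral spot check: comparing checksums of virtual machine images at the primary and secondary sites. The mount points are hypothetical, and a checksum sweep like this is a supplement to, not a substitute for, the SAN vendor's own replication-status tooling.

    # Minimal sketch of verifying that replicated ancillary-system images at a
    # secondary site match the primary copies. Paths are hypothetical examples.
    import hashlib
    from pathlib import Path

    PRIMARY_STORE = Path("/mnt/primary-san/vm-images")      # hypothetical mounts
    SECONDARY_STORE = Path("/mnt/secondary-san/vm-images")

    def sha256_of(path, chunk_size=1024 * 1024):
        """Stream the file so large VM images never have to fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def out_of_sync_images():
        """Return the names of images that are missing or differ at the DR site."""
        mismatches = []
        for primary_file in PRIMARY_STORE.glob("*.img"):
            replica = SECONDARY_STORE / primary_file.name
            if not replica.exists() or sha256_of(primary_file) != sha256_of(replica):
                mismatches.append(primary_file.name)
        return mismatches

    if __name__ == "__main__":
        bad = out_of_sync_images()
        print("All replicas verified" if not bad else f"Out-of-sync images: {bad}")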

In White’s view, most hospitals do a good job planning, configuring, and testing their disaster recovery capabilities, particularly with their core EHR systems. However, he adds that many organizations struggle with their ancillary systems, because they often lack the people, bandwidth, and time to test major changes adequately on an annual basis.

Disaster recovery system testing should happen annually, White says, adding that regular testing helps train the IT staff in proper procedures. In addition, disaster recovery plans should be revised whenever there is a change in the technology. “Once a disaster is declared, the staff may react differently because the technology has changed. If they don’t have that documented, then it becomes more difficult for them to react once the disaster has happened,” he says.
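One small, automatable piece of the regular testing White describes is confirming that the services expected at the secondary site actually answer, and logging the result so the drill is documented. The sketch below illustrates that idea; the service inventory, hostnames, and ports are hypothetical.

    # Minimal sketch of one step in a DR drill: confirm that failover targets at
    # the secondary site respond, and record the result. All names are hypothetical.
    import socket
    from datetime import datetime, timezone

    SECONDARY_SERVICES = {            # hypothetical inventory of failover targets
        "ehr-db": ("ehr-db.dr-site.example.org", 5432),
        "pacs": ("pacs.dr-site.example.org", 104),
        "interface-engine": ("iface.dr-site.example.org", 6661),
    }

    def check_service(host, port, timeout=5.0):
        """Return True if a TCP connection to the service succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        stamp = datetime.now(timezone.utc).isoformat()
        with open("dr_drill_log.txt", "a") as log:
            for name, (host, port) in SECONDARY_SERVICES.items():
                status = "UP" if check_service(host, port) else "DOWN"
                log.write(f"{stamp} {name} {host}:{port} {status}\n")
                print(name, status)

A log like this also gives the organization a dated record that the drill happened and what changed since the last one, which speaks to White's point about keeping documentation current as the technology changes.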

The following case studies discuss how three hospital systems prepare for potential disasters and the lessons learned from past experiences.

Lessons Learned in Joplin

In May 2011, an EF-5 tornado slammed into St. John’s Regional Medical Center in Joplin, Mo., part of the Mercy Health System, leaving a mile-wide path of destruction. Mike McCreary, chief of services at Mercy Technology Services in St. Louis, says the hospital’s disaster planning is an integrated effort: an IT infrastructure component at the corporate level provides redundancy and connectivity, while a local component operates at the community level. “We follow both hospital emergency and state command systems,” he says. Failover drills are done quarterly, and local disaster drills are done annually in conjunction with the city.

When the tornado struck, it destroyed Joplin’s communication infrastructure. Cell towers were knocked out, eliminating voice communication, though enough bandwidth remained for text messaging. To fill the gap, the hospital established a command center with a satellite link to provide phone and Internet connectivity, he says. As a result, Mercy now has a mobile communication center with satellite capability and satellite phones, and its plans now designate text messaging as the primary means of communication when a disaster happens.

McCreary says the hospital’s patient record systems fared well, partly the result of timing and partly due to the remote location of its data center and failover site.

At the time of the tornado, the hospital had been part of the Mercy system for about two and a half years. It was in the process of moving older equipment and a variety of systems, including nursing documentation and legacy accounts receivable, from Joplin to a data center in Washington, Mo., about 250 miles away. “Our model is to have a central suite of applications that is standard on the Mercy system; and the transition was complete at Joplin except for some clean-up,” he says.

The hospital was already live with its EHR (supplied by Epic Systems Corp., Verona, Wis.), which was fully functional when the tornado struck. Had it struck prior to the go-live, it would have been much worse from a data standpoint, McCreary says: “We would have lost all of the systems; and even though there were backups, once something like that happens you are restoring new equipment, and there are always complications.”