As Hurricane Sandy grew in the Atlantic Ocean and began its approach towards the mid-Atlantic U.S. coast, William Bifulco, the CIO of Southampton Hospital, a 125-bed community hospital in Suffolk County on the eastern end of New York’s Long Island, led his IT team and others at the hospital in storm preparations. Southampton Hospital is a member of the East End Health Alliance, a three-hospital alliance that also includes Eastern Long Island Hospital (Greenport) and Peconic Bay Medical Center (Riverhead).
Fortunately, the eastern end of Long Island was mostly spared serious damage from Sandy, and Southampton Hospital never suffered any major consequences; though the hospital lost regular power, its generators kicked in, and normal operations continued. In the wake of the storm, Will Bifulco spoke with HCI Editor-in-Chief Mark Hagland regarding what went right at his facility, and what might have been done even better. Below are excerpts from that interview.
How did you prepare for Hurricane Sandy?
We knew the storm was coming, and reviewed all of our emergency policies for failover processes. We tested each of our redundant ISPs/data carriers for availability, both from within our network and from outside it; we tested our redundant PRIs [primary rate interfaces], or telephone carriers; we have multiple phone carriers. In fact, we have two phone systems, and we tested the auto-routing, so that each phone system could go onto either PRI, and tested the failover for that. And we made sure everyone knew how to manually reroute and check that the reroute worked, as well. We checked in with Lightpath [the Jericho, N.Y.-based Optimum Lightpath, Inc.], which carries our main PRI, so that they could execute our manual failover if needed and make sure 30 of our main numbers would reroute.
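The kind of reachability test Bifulco describes, checking each redundant carrier link for availability, can be partially automated. The following is a minimal sketch, not the hospital's actual tooling; the gateway addresses and port are placeholders standing in for whatever endpoints each carrier actually exposes:

```python
import socket

# Hypothetical gateway addresses for each redundant carrier (placeholders,
# not real endpoints).
CARRIER_GATEWAYS = {
    "carrier_a": ("203.0.113.1", 443),
    "carrier_b": ("198.51.100.1", 443),
}

def link_is_up(host, port, timeout=3.0):
    """Return True if a TCP connection to the given gateway succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_all_links(gateways):
    """Test every redundant link and report which are reachable."""
    return {name: link_is_up(host, port)
            for name, (host, port) in gateways.items()}
```

A script like this run from both inside and outside the network, as the hospital's team did by hand, would confirm each path independently; testing from only one vantage point can miss an asymmetric failure.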
You have an EHR, right?
Yes, and we made sure all of our backups for the EHR were functioning, and we checked to make sure that the environment in the data center was acceptable; we have two independent, 12-ton air conditioners, and each is on a separate generator. We made sure those were functioning; and that the uninterruptible power supply in the data center was functioning properly. And we set up remote monitoring for all of those elements from our command center, and made sure we had remote capability for command as well.
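Remote monitoring of the data-center environment along these lines can be sketched as a simple threshold check. This is an illustration only: `read_temperature_f` is a stub standing in for whatever sensor interface is actually in place, and the 80°F limit is an assumed figure, not one from the interview:

```python
# Sketch of a data-center environment check. The sensor read is a stub
# standing in for a real probe (an assumption, not the hospital's
# actual monitoring stack).

TEMP_LIMIT_F = 80.0  # assumed acceptable upper bound, in Fahrenheit

def read_temperature_f():
    """Placeholder for a real sensor read (e.g., via SNMP or a probe API)."""
    return 68.0

def environment_ok(temp_f, limit_f=TEMP_LIMIT_F):
    """True when the data center is within the acceptable temperature range."""
    return temp_f <= limit_f

def check_environment():
    """Report current status in a form suitable for a command-center display."""
    temp = read_temperature_f()
    status = "OK" if environment_ok(temp) else "ALERT"
    return f"data center {temp:.1f}F: {status}"
```

In practice a check like this would run on a schedule and feed the command center Bifulco mentions, with each of the two air conditioners monitored independently since they sit on separate generators.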
None of those systems were affected, correct?
We did lose utility power, but our generators came on and carried the load; we also have an alert process that sends out e-mails every time the UPS system is engaged. But all the redundancies kicked in fine.
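The alert process described here, an e-mail fired whenever the UPS engages, can be roughly sketched as follows. The SMTP relay and addresses are placeholders, and a real deployment would be triggered by the UPS vendor's management interface rather than called by hand:

```python
import smtplib
from email.message import EmailMessage

# Placeholder addresses and relay; not the hospital's actual configuration.
ALERT_FROM = "ups-monitor@example.org"
ALERT_TO = ["it-oncall@example.org"]
SMTP_HOST = "smtp.example.org"

def build_ups_alert(event):
    """Compose the notification message for a UPS engagement event."""
    msg = EmailMessage()
    msg["Subject"] = f"UPS engaged: {event}"
    msg["From"] = ALERT_FROM
    msg["To"] = ", ".join(ALERT_TO)
    msg.set_content(f"The UPS system reported: {event}. Verify generator load.")
    return msg

def send_ups_alert(event, smtp_host=SMTP_HOST):
    """Send the alert via the configured SMTP relay."""
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(build_ups_alert(event))
```

Separating message construction from delivery keeps the alert content testable without a live mail relay.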
I’m sure that was a relief?
Oh, it was a huge relief.
You had no flooding, right?
You stayed the night?
Yes, my team and I, along with probably 40 other hospital staff, stayed the night; we pulled out 50 cots, and I slept in my office on one of them. That way, we could stay accessible. On Monday night, my on-site team was myself and two other members of our total IT staff of 25 people; the rest of the team was remotely connected. Some members of our IT team travel to work from a pretty significant distance, and I told them that unless I got into real trouble, they should just stay home and monitor remotely.
What lessons did you learn from the experience?
I think the takeaway for me is, I knew this was happening, so I had a ton of time to prepare, to dust off all my policies. But we got lucky in having that time, and we need to be more agile. So I would say: keep all your policies fresh, and keep your staff fresh on procedures. One example was contacting Lightpath in advance to prepare for a cutover if necessary; my staff would never have found that number on their own. The middle of an emergency is not a good time to be thinking up creative ideas, and doing it on the fly is not the way. So even though we have the policies, really making sure that you're prepared at any moment for sudden incidents is very important, I think. And I'm glad to share these insights with my fellow CIOs; I think we should all share our experiences with one another to become better prepared.