
Disaster Recovery and Business Continuity Strategies Evolve Forward

February 27, 2018
by Mark Hagland

When it comes to the key, interrelated subjects of disaster recovery and business continuity, there’s no learning like learning in the moment. That reality was underscored during a cybersecurity-focused panel held on February 2, the second day of the Health IT Summit in San Diego, sponsored by Healthcare Informatics, when Sri Bharadwaj, director of information services and CISO at UC Irvine Health (Irvine, Calif.), led a panel titled “Ransomware Risks: What We Learned From NotPetya and WannaCry.”

As has been widely noted, the May 2017 cybersecurity attack dubbed “WannaCry” made headlines internationally and across the healthcare landscape as tens of thousands of hospitals, organizations, and agencies across 153 countries had their data held hostage, while the June Petya/NotPetya attack unleashed further damage worldwide.

“I was actually at a Healthcare Informatics conference” when the global WannaCry attack hit last May, Bharadwaj noted as he opened the discussion on Feb. 2, referring to the Health IT Summit held in Chicago in May 2017. “I was speaking on a panel that morning, in Chicago, and this thing hit us. I got a frantic call, and I got on the phone. For the first ten minutes, I said, OK, I’ll try to figure that out. That became six hours. I almost missed my flight home that day. It was one call after the other, providing updates, communication, etc. But we did not shut down the Internet, our Outlook, or any feedback channels to the end users. We got hit the most through our medical devices. It was fairly easy to patch stuff and get stuff done, but we realized that our realm of exposure encompassed all sorts of things—who the heck knew that the parking system was running on Windows 98? Who knew that the cafeteria system was running a version of Windows so old that we had to figure out what it was?”
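Bharadwaj’s parking-system discovery illustrates why an asset inventory that flags end-of-life operating systems is a prerequisite for managing exposure. Below is a minimal sketch of that idea in Python; the asset names, owners, and end-of-life list are illustrative assumptions, not details from any panelist’s actual environment.

```python
# Hypothetical sketch: flag end-of-life operating systems in an asset
# inventory. Names and the EOL list are illustrative only.
from dataclasses import dataclass

# Operating systems past end-of-support (illustrative subset)
END_OF_LIFE = {"Windows 98", "Windows XP", "Windows Server 2003"}

@dataclass
class Asset:
    name: str
    os: str
    owner: str  # department responsible for the device

inventory = [
    Asset("parking-gate-01", "Windows 98", "Facilities"),
    Asset("cafeteria-pos-02", "Windows XP", "Food Services"),
    Asset("mri-suite-03", "Windows XP", "Radiology"),
]

# Surface the kind of unexpected exposure the panelists described
for asset in inventory:
    if asset.os in END_OF_LIFE:
        print(f"UNSUPPORTED OS: {asset.name} ({asset.os}) -> notify {asset.owner}")
```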

“The key questions,” fellow panelist Banash said in response to Bharadwaj’s opening statement, “are: are you managing your risk? Do you understand your attack surfaces? What vectors are you vulnerable to? When this started out, no one knew what was going on; it was crazy. If you had one of those maps in your security center, it was all lit up, and it looked like ‘War Games.’ Initially, we thought it was spreading via email, and we were chasing emails, but when we found out it was an SMB [Server Message Block] vulnerability, we were able to chase that down. We were hit, but there was no successful attack on us. But understanding what was in your environment—it never became more important than on that day. And those MRI machines running on Windows XP—those machines are million-dollar pieces of equipment; it’s hard to justify new purchases to the board. I would say we were lucky; I’d like to say we manage things well, but we did get lucky.”
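Once WannaCry’s SMB vector was identified, “chasing it down” in practice meant finding which hosts exposed SMB at all. Here is a hedged sketch of that triage in Python; the subnet is an illustrative RFC 1918 range, and a scan like this should only be run against networks you are authorized to probe.

```python
# Hypothetical sketch of post-WannaCry SMB triage: check which hosts on
# a subnet accept connections on TCP 445, the port used by the SMB
# protocol the worm exploited. The subnet below is illustrative.
import socket
from ipaddress import ip_network

def smb_port_open(host: str, timeout: float = 0.5) -> bool:
    """Return True if the host accepts a TCP connection on port 445."""
    try:
        with socket.create_connection((host, 445), timeout=timeout):
            return True
    except OSError:
        return False

for addr in ip_network("10.0.0.0/28").hosts():
    if smb_port_open(str(addr)):
        print(f"{addr}: port 445 open -- verify SMBv1 is disabled and MS17-010 is applied")
```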

Asked about connections with law enforcement, Christian Abou Jaoude, director of enterprise architecture at San Diego-based Scripps Health, said on the Feb. 2 panel, “We do have a direct contact with law enforcement; we also have a well-established protocol that we follow. We followed those procedures, but the same thing happened to us: there wasn’t much information available during the first couple of days” following the WannaCry attack. “So I went out and read as much as I could about it, read articles to see whether there was something different about this. So we enacted that process, sent out notifications, and then a few days later, everyone learned what had happened.”

“I think we got lucky,” said Chris Convey, vice president of IT risk management and CISO at Sharp HealthCare, also on that Feb. 2 panel, “because this started in other parts of the world. Here in the U.S., we got lucky. I was at Millennium Healthcare then. SMB was blocked, that was the first thing. And then, how are our backups protected? And then patching. And it turns out, basic security hygiene was what was needed. Look at what happened at the NHS (National Health Service) in the U.K. And to be honest, we hadn’t patched as well as we could have. It’s hard to do, especially in the healthcare space, because you’ve got to test, and you don’t want to bring down patient care.”
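Convey’s “how are our backups protected?” question can be made concrete as an automated check. The following is a minimal sketch, assuming a manifest of expected SHA-256 digests; the freshness threshold, paths, and manifest format are assumptions for illustration, not any organization’s actual procedure.

```python
# Hypothetical sketch: confirm each backup file is recent and matches a
# previously recorded SHA-256 digest before trusting it for recovery.
import hashlib
import time
from pathlib import Path

MAX_AGE_HOURS = 24  # illustrative freshness threshold

def sha256(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in 1 MB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(path: Path, expected_digest: str) -> bool:
    """A backup passes if it is fresh and its digest matches the manifest."""
    age_hours = (time.time() - path.stat().st_mtime) / 3600
    if age_hours > MAX_AGE_HOURS:
        print(f"STALE: {path} is {age_hours:.0f}h old")
        return False
    if sha256(path) != expected_digest:
        print(f"MISMATCH: {path} digest differs from manifest")
        return False
    return True
```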

Looking at the Bigger Picture

The kinds of experiences that the members of that Feb. 2 cybersecurity panel cited are exactly the kinds of issues that industry experts say need to be carefully worked out and strategized in advance. And some of what’s involved really is fundamentals, says Shefali Mookencherry, a principal advisor at the Naperville, Ill.-based consulting firm Impact Advisors. “First of all, you have to plan,” Mookencherry says. “Most folks will have [a broad disaster recovery strategy] in their heads. But if it’s not written down, you may forget. You need to look at all of your business processes and put together an actual disaster recovery team, and that requires an interdepartmental, indeed enterprise-wide, effort. And it’s related to project management. You have to look at the data, the documentation of the plan, and the processes you’re going to follow, and of course, policies and procedures. Enforcing and communicating those to everyone involved is key,” she emphasizes.
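One way to act on Mookencherry’s “write it down” advice is to capture the plan as structured, reviewable data rather than tribal knowledge. This is a minimal sketch only; the field names, roles, and example steps are illustrative assumptions, not Impact Advisors’ methodology.

```python
# Hypothetical sketch: a disaster recovery plan as structured data,
# so the team, steps, and governing policies are documented and
# versionable rather than living in someone's head.
from dataclasses import dataclass, field

@dataclass
class RecoveryStep:
    description: str
    owner: str  # a named role, not a person who may be unreachable
    depends_on: list[str] = field(default_factory=list)

@dataclass
class DisasterRecoveryPlan:
    process: str            # the business process this plan restores
    team: list[str]         # interdepartmental recovery team roles
    steps: list[RecoveryStep]
    policy_refs: list[str]  # pointers to the governing policies and procedures

ehr_plan = DisasterRecoveryPlan(
    process="EHR downtime",
    team=["CISO", "Network Ops", "Clinical Informatics", "Communications"],
    steps=[
        RecoveryStep("Declare downtime; switch to paper workflows", "Clinical Informatics"),
        RecoveryStep("Restore database from last verified backup", "Network Ops",
                     depends_on=["Declare downtime; switch to paper workflows"]),
    ],
    policy_refs=["DR-Policy-001"],
)
```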

In other words, prioritization. “You need to classify people, data and technology, in terms of what’s essential, what’s important, etc.,” Mookencherry says. “And if you have a plan, who’s going to execute it? What’s your first line of communication? What would an employee say if a disaster had occurred? How would the plan be carried out?”
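The classification and communication questions Mookencherry raises lend themselves to the same treatment: tier people, data, and technology by criticality, then derive the notification order from the tiers. The tiers and contacts below are illustrative assumptions, not a recommended configuration.

```python
# Hypothetical sketch: classify assets by criticality tier and derive
# the "first line of communication" from the highest tiers.
ASSETS = [
    {"name": "EHR database", "kind": "data", "tier": 1, "contact": "CISO on-call"},
    {"name": "Email", "kind": "technology", "tier": 2, "contact": "IT service desk"},
    {"name": "Parking system", "kind": "technology", "tier": 3, "contact": "Facilities"},
]

def call_order(assets):
    """Return contacts in the order they should be notified in a disaster."""
    return [a["contact"] for a in sorted(assets, key=lambda a: a["tier"])]

print(call_order(ASSETS))  # ['CISO on-call', 'IT service desk', 'Facilities']
```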


