
Monitoring Users’ Behaviors to Better Secure Data: One Health Plan’s Story

March 1, 2018
by Rajiv Leventhal
Aetna’s CSO discusses the importance of behavioral-based security measures

About three years ago at the Hartford, Conn.-headquartered Aetna, a health plan with more than 37 million consumers, organizational leaders set out to create new security measures for its mobile and web applications, aiming to transform the company’s existing controls.

At the core of the initiative was being able to monitor user behavior in real time, says Jim Routh, the chief security officer (CSO) and global information security function leader for Aetna. “As it turns out, security is evolving pretty quickly into a model-driven security realm,” he says in a recent interview. Routh explains that model-driven security centers around frontline security controls in which algorithmic models determine things such as: how much access to give to a consumer, an employee, or a privileged user; whether something running on an endpoint device is malware; or whether a phishing email is being sent through the email infrastructure.

In many cases, notes Routh, it’s the models that are driving security controls; at Aetna, there are 200 models in production today that are doing just that. “And we do a lot of manipulation of the models, which is evolving cybersecurity and physical security practices from conventional to unconventional controls,” he says.

In the interview, Routh spoke to Healthcare Informatics about how Aetna has been able to put these controls in place, why behavioral-based security is so important, and more. Below are excerpts of that discussion.

Tell me about your plan at Aetna to monitor users’ behaviors in real time. How did it all begin?


Three years ago we hired a chief data scientist to be dedicated to security, and this was someone with nine years of experience at the NSA (National Security Agency). We asked him to build a large data lake for security, enterprise-wide and to scale, that we could use to do better cyber hunting, figure out patterns and anomalous behaviors across a wide swath of data, and develop broad models for some of our payment business segments.

And he did that exceptionally well. In two-and-a-half years’ time, he had 110 models running in production and it was exactly what we asked him to do. The irony is that during that time we deployed eight other implementations of technology platforms that had embedded unsupervised machine learning to drive controls.


Can you give some examples of these models?

The first and most significant example from a consumer perspective is when we moved into what we call continuous behavioral-based authentication. And there are two parts to that, one being that it is continuous. Every time we have designed authentication for any kind of technology, it was an event at the front end of an electronic interaction. So in a web app, you provide the user ID and password, you are in and trusted, with full access to the system. And there is little monitoring thereafter since you’re a trusted entity. Remember, binary controls for authentication can be defeated since they are based on the assumption that only the end-user has the information. And that used to be the case with passwords, but it isn’t anymore.

In 2016, three billion credentials were harvested just based on public breach data. Shape Security [a Mountain View, Calif.-based security company] did the analysis based on public breach data, and they believe that the number is actually closer to 10 billion credentials. And there are only 320 million people in the U.S., so the assumption that you are the only one that has your password is no longer valid.

There is actually a tool called Sentry MBA that bad actors can use. So let’s say they harvest 10,000 email credentials from Yahoo, for example, take those credentials (user ID and password combinations) and try them out on any other domain they wish—and they can do it through a script in the Sentry MBA tool—they will get about a 2 percent hit rate, so they will own 200 accounts from the 10,000 credentials they attempted. And the reason for that 2 percent hit rate is that you and I, like everyone else, can’t remember passwords for 100 different sites and mobile apps. So we use the same password. So 2 percent of the time you will get a hit just by using the same user ID and password from one domain to the other.
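The arithmetic behind that hit rate is straightforward. As a minimal illustration (the credential count and reuse rate are simply the hypothetical figures from Routh’s example, not measured data):

```python
# Illustrative arithmetic only: expected account takeovers from a
# credential-stuffing run, using the hypothetical figures from Routh's example.

def expected_takeovers(harvested_credentials: int, reuse_hit_rate: float) -> int:
    """Estimate accounts compromised when stolen credentials are replayed
    against an unrelated domain."""
    return round(harvested_credentials * reuse_hit_rate)

# 10,000 harvested credentials at a ~2 percent reuse hit rate -> roughly 200 accounts.
print(expected_takeovers(10_000, 0.02))
```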

We are reaching the point where credential availability will be effectively unlimited, and when that happens, there is no friction stopping a criminal from doing this at scale. A growing percentage of authenticated log-ins in an enterprise are done by somebody using someone else’s credentials. So the obsolescence of passwords will continue to grow, and grow significantly. It will take a decade to swap out login credentials across all of the enterprises, since that doesn’t happen overnight; 99 percent of authentication today is done through passwords. But in order to solve this problem, we have to recognize that authentication moves from an event at the front end of an interaction to a continuous process. We are using behavioral attributes gathered electronically, we apply a risk score to that, and that risk score notifies the application how much access to provide throughout the interaction.

An example is that in a web app, we will have 30 attributes of an end-user’s device configuration, browser configuration, location, and electronic behavioral patterns, and for each one of those we create a numeric value. And we have one that identifies the pattern that’s the norm—so it’s capturing data over a period of time for that attribute, recognizing what the norm is, and representing that numerically. Then we take the actual data, turn that into a number, compare it, and it gives us a deviation score, or a risk score, essentially. We add that up across the 30 attributes, aggregate it into one risk score, and that tells the app what the trust level is for that user at that point in time. And that will change; there are thresholds. The app decides what to do with the risk score. Some apps will be highly sensitive and others will be indifferent. So authentication moves from an event to a continuous process, and it’s math that is driving it.
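Routh does not spell out the scoring math, but the scheme he outlines (per-attribute baselines, a deviation score for each attribute, and a single aggregated score checked against app-specific thresholds) can be sketched roughly as below. The attribute names, sample values, and z-score-style deviation measure are assumptions for illustration, not Aetna’s implementation.

```python
from statistics import mean, stdev

# Hypothetical sketch of deviation-based risk scoring as Routh describes it:
# each behavioral attribute has a learned baseline; the current observation is
# compared against that baseline, and per-attribute deviations roll up into one score.

def attribute_deviation(history: list[float], current: float) -> float:
    """Z-score-style deviation of the current value from the attribute's norm."""
    mu, sigma = mean(history), stdev(history)
    return abs(current - mu) / sigma if sigma else 0.0

def risk_score(observations: dict[str, tuple[list[float], float]]) -> float:
    """Aggregate per-attribute deviations into a single session risk score."""
    return sum(attribute_deviation(hist, cur) for hist, cur in observations.values())

# Example: two of the ~30 attributes (values are invented for illustration).
session = {
    "typing_cadence_ms": ([180, 175, 190, 185], 320),  # unusually slow typing
    "login_hour_utc":    ([13, 14, 13, 15], 14),        # normal time of day
}
score = risk_score(session)
ACCESS_THRESHOLD = 5.0  # each app chooses its own sensitivity
print(f"risk={score:.1f}", "step-up auth" if score > ACCESS_THRESHOLD else "allow")
```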

We have modified our web and mobile apps to allow us to not only do continuous authentication, but also change authentication controls any time we wish without writing a line of code. So we push a button from a policy and we can create an authentication control for a unique segment of the consumer market, treat them differently than others, and then adjust the authentication controls and the attributes that we take. So when Apple comes out with a picture biometric, we can use that just as much as we use Touch ID. And the consumer chooses the biometric. We take that as an attribute, score it along with the other attributes, and do continuous authentication.
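What a policy-driven control like this could look like is sketched below, purely as an assumption; the segment names, attribute lists, and thresholds are invented. The point is only that the controls live in data (a policy store) rather than in application code, which is what lets a new rule go out as a policy push rather than a release.

```python
# Hypothetical sketch of policy-driven authentication: controls are data, not code.
# Segment names, attribute lists, and thresholds below are invented for illustration.

AUTH_POLICIES = {
    "default": {
        "attributes": ["device_fingerprint", "geo_velocity", "typing_cadence"],
        "allowed_biometrics": ["touch_id", "face_id"],
        "step_up_threshold": 5.0,
    },
    "high_risk_segment": {
        "attributes": ["device_fingerprint", "geo_velocity", "typing_cadence", "ip_reputation"],
        "allowed_biometrics": ["face_id"],
        "step_up_threshold": 2.5,
    },
}

def policy_for(segment: str) -> dict:
    """Look up the authentication policy for a consumer segment at runtime."""
    return AUTH_POLICIES.get(segment, AUTH_POLICIES["default"])

# Supporting a new biometric or tightening a threshold is an edit to AUTH_POLICIES
# (or the external policy store it stands in for), not an application code change.
print(policy_for("high_risk_segment")["step_up_threshold"])
```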

This is in production for a million people today, so it’s not theoretical. Most of the top four banks have continuous behavioral-based authentication in production. They keep asking for the passwords because they don’t want to tell consumers that they no longer use the passwords, so they just do this in the background. The technology to do this has been around for quite some time. We took four early-stage companies that have best-of-breed capabilities to build this infrastructure. We started implementing this last year and hopefully by the end of [2018] all of our consumers will be using it. They will be choosing their biometric of choice and we will use all of the other attributes. And the nice thing is they won’t have to remember their password anymore.

What is Aetna’s expected outcome from this work as it relates to keeping bad actors away and protecting users’ data?

With most of the breaches you read about today, the criminals are after credentials. People were scratching their heads three years ago when they started hearing about email passwords being the source of an attack. As it turns out, because we all use the same passwords [for everything], email passwords are good proxies. Combine that with the demographic consumer information that is available to criminals today, and all of it can be used to bypass our controls for password reset. [Bad actors] will actually call a call center and try to convince the phone rep to give them the person’s credentials. And then they own the account, and that’s across any enterprise, regardless of industry. The use of passwords is increasingly problematic, because of those harvested credentials combined with the demographic information that is out in the public domain.

Our position at Aetna is that we are telling our consumers that we are moving to this model. We’re giving them choices; they can choose the biometric they wish, or they can choose to keep using a password or PIN. Most consumers like the convenience of not having to log into an app. It is a better consumer experience and it’s better from a risk management standpoint. Every time you add security to a consumer, you add friction. But in this case we are adding security and removing friction at the same time. So far, of the million folks using it on the mobile side, 90 percent choose their biometric of choice rather than the one-time password.

From a CISO/leadership level, was it challenging to convince your board to invest in a model like this?

This wasn’t always the case, but convincing leadership of the importance of security is the easiest thing in the world today. And that’s because security is front-page news. So the interesting thing is you can do security for less money, but it involves technique and unconventional controls. And in order to get there, it requires innovation and engineering time.

Aetna is willing to invest our engineering time with early-stage companies. Larger security companies, by default, have to cater to the broadest part of the market. And that’s a good business model for them, but the broadest part of the enterprise market doesn’t happen to have the most knowledge and maturity in security. So these larger companies cater to a dumbed-down version of a capability. We prefer at Aetna to push the innovation to design new controls that change the rules for the threat actors, and we are willing to take risk to do that.



CISOs, CIOs Not Confident in Their Medical Device Security Strategy, New KLAS Research Finds

October 9, 2018
by Heather Landi, Associate Editor
According to a survey of CIOs and CISOs, healthcare organizations have an average of 10,000 connected medical devices

The healthcare industry continues to be bombarded with security attacks, and these cyber attacks are continuously evolving and becoming more sophisticated over time. At the same time, the healthcare ecosystem has become more connected with the increasing use of Internet of Things (IoT) medical devices, and these devices introduce vulnerabilities into healthcare organizations.

Unsecured and poorly secured medical devices put patients at risk of great harm if those devices are hacked, while also posing a threat to the security and privacy of patients’ protected health information (PHI). A recent medical device security report, the result of a collaborative effort between the College of Healthcare Information Management Executives (CHIME), the Association for Executives in Healthcare Information Security (AEHIS), and the Orem, Utah-based KLAS Research, sheds light on the current state of the medical device security industry. For the report, KLAS interviewed 148 CIOs, chief information security officers (CISOs), chief technology officers (CTOs) and other professionals at provider organizations to gauge their level of confidence in their medical device security strategies, the most common challenges they face, their perceptions of the security and transparency of major medical device manufacturers, and the best practices they leverage to overcome medical device security challenges.

The author of the report, Dan Czech, director, market analysis, cybersecurity at KLAS Research, will provide an in-depth overview of this report and medical device security trends during Healthcare Informatics’ Seattle Health IT Summit Oct. 22-23 at the Grand Hyatt Seattle.

The sheer number of connected medical devices that the average healthcare provider is trying to manage speaks to the tremendous challenge IT security leaders face, says Czech. “We spoke to organizations ranging from small to mid-sized clinics all the way to large multi-hospital IDNs (integrated delivery networks), and everyone in between, and the average number of connected medical devices was just under 10,000 medical devices. You think of the enormity of that problem, for an organization to wrap their arms around the problem of managing 10,000 devices,” he says.

What’s more, respondents reported that, among the thousands of connected medical devices that their organizations are managing, about one-third (33 percent) of those devices are “unpatchable.”


According to the research, 18 percent of provider organizations had medical devices impacted by malware or ransomware in the last 18 months, although few of these incidents resulted in compromised PHI or an audit by the Office for Civil Rights, U.S. Department of Health and Human Services (HHS OCR).

Czech notes that there have not been any patient safety events, to date, as a result of a medical device security issue; however, respondents cite patient safety as a top concern. “Let’s take an infusion pump,” he says. “The ability for a bad actor to gain access to that pump and change the dosage of the medication that’s being injected into a human, that is the kind of patient safety issue that we are concerned about.”

Czech continues, “Another way medical device security affects patient safety is if a device is on Windows XP, and WannaCry ransomware hits; if something like that happens, that device is taken out of production. You may have an oncology patient who needs consistent treatment with a medical device, and if you take that out of production, it disrupts patient care and impacts patient safety.”

The report found that most respondents are either neutral about or not confident in their current medical device security strategy, with CISOs and CIOs more likely to report concern. Only 39 percent of respondents said they were very confident or confident that their current strategy protects patient safety and prevents disruptions in care. Thirty-one percent said they were unconfident or very unconfident, and another 30 percent were neutral. About one-fifth of respondents feel that the inherent risks of medical devices—several of which are outside of their control—will prevent them from ever feeling confident.

Those healthcare leaders who expressed confidence most often point to their security processes and policies, including access limitations, network segmentation and regular device monitoring and risk assessment, as the source of their confidence, followed by strong technology. To support these processes and policies, many leverage security technologies, such as access controls, asset tracking, firewalls, and medical device monitoring. Strong executive support (financial and organizational) and cross-department collaboration also drive confidence, as evidenced by the fact that large IDNs, who more commonly have greater financial resources, are more likely to be confident in their strategies, according to the report.

“Respondents who report they are more confident also are those that have a clear line of ownership, not a shared responsibility,” Czech notes.

Those respondents that lacked confidence in their medical device security cited lack of manufacturer support as the top reason. Almost as common are internal issues related to basic—but hard-to-master—security tasks, such as understanding what assets exist in their organization, which have been patched, which are connected to their network, and what systems those devices are talking to. “Asset and inventory visibility is the basic blocking and tackling of medical device security strategy—you can’t protect what you don’t know. They are looking for tools and processes that they can put in place that will help them understand all the devices they have, what’s connected to their networks, and in some cases, what software is on the devices,” Czech says.
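As a minimal sketch of the kind of inventory visibility Czech describes, the record below tracks what a device is, where it sits on the network, and whether it can be (and has been) patched. The fields and sample entries are invented for illustration, not drawn from the KLAS report.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a connected-device inventory record and a simple
# "what can't we protect" check. All fields and sample data are invented.

@dataclass
class MedicalDevice:
    name: str
    os: str
    network_segment: str
    last_patched: Optional[str]   # None means the device has never been patched
    patchable: bool

inventory = [
    MedicalDevice("infusion-pump-017", "Windows XP", "clinical-vlan", None, False),
    MedicalDevice("ct-scanner-002", "Windows 10", "imaging-vlan", "2018-06-01", True),
]

# Surface the devices behind the report's risk numbers: unpatchable,
# or patchable but never actually patched.
at_risk = [d for d in inventory if not d.patchable or d.last_patched is None]
for device in at_risk:
    print(f"{device.name}: os={device.os}, segment={device.network_segment}")
```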

What’s more, 76 percent of provider organizations report that their resources are insufficient or too strained to adequately secure their medical devices.

More Manufacturer Support and Collaboration Needed

Taking a deep dive into the root causes of medical device security struggles, the report finds that interviewed organizations are almost unanimous in citing manufacturer-related factors as a cause of their medical device security issues. Most provider organizations see this issue as one of shared responsibility. As one CISO explained in the report, “I think there needs to be a coordinated effort between the manufacturers, the provider sites, and the regulators. I wish there were some other way for us to address this issue, but without that three-way partnership, I just don’t see how things will work out.”

According to Czech, the research findings indicate there is a gap between how long organizations expect to be able to use a device and how long vendors feel they can keep a device up to date and secure. As a result, nearly all interviewed organizations (93 percent) have struggled with out-of-date operating systems or the inability to patch a device throughout its expected life cycle. Currently, many manufacturers do not allow customers to patch devices themselves, or void warranties if they do.

Insufficient security controls, insufficient encryption, and hardcoded passwords are each cited as manufacturer-caused issues by about half of respondents. Adding to provider organizations’ frustration, on average, almost one-third of medical device vendors decline to offer contract provisions favorable to security.

However, the industry is beginning to shift, Czech notes. "Many provider organizations have drawn a line in the sand to say all contracts now and going forward will include standardized security contract language," he says. "This trend has been led by forward-thinking provider organizations and it also has benefited smaller organizations that may not have the legal teams or the cybersecurity teams that bigger organizations have, but they can use that standardized language in their contracts as well."

What’s interesting, Czech notes, is that many respondents spontaneously brought up frustrations regarding the role of the U.S. Food and Drug Administration (FDA) in medical device security, though KLAS did not specifically ask respondents about it. “It gets back to shared responsibility,” he says. “Respondents feel that manufacturers have a stake in this, they have a stake in this, but so does the FDA. Predominantly, the concern that they shared was that their manufacturer would hide behind their perceptions of the FDA regulations."

Almost two-thirds of respondents said manufacturers blame FDA policies, claiming the policies prevent them from making devices more secure. About a third said FDA policies are unclear, giving manufacturers ways to skirt responsibility, and a third said that even when policies are clear, the FDA doesn’t hold manufacturers accountable, according to the report.

Cybersecurity Programs Advancing Forward

According to the research, organizations are increasingly adopting a number of best practices to strengthen medical device security. There are foundational best practices that organizations should implement, such as performing risk assessments, ensuring the inclusion of security provisions in their contracts, and ensuring they receive a software bill of materials, Czech notes. Organizations also report using the most common and basic defense techniques, such as network segmentation, antivirus software, and vulnerability scanning, to ameliorate security risk.

With regards to organizations’ patching strategies, many provider organizations have begun requesting that vendors use contract language that clearly outlines patching responsibilities and timelines.

Providers also are leveraging third-party solutions to improve medical device security, with nearly 75 percent of respondents currently using or planning to use third-party software or services, according to the report. Network access control (NAC) is most often used to segment networks and approve/deny access. To reduce costs and clearly define ownership, other organizations outsource their clinical engineering as well.
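A rough sketch of the NAC-style decision the report refers to, assuming invented device types, VLAN names, and rules: a connecting device is admitted onto a segment appropriate to its profile, or denied if it is unknown.

```python
# Hypothetical sketch of network access control (NAC) for medical devices.
# Device types, VLAN names, and rules below are invented for illustration.

SEGMENT_RULES = {
    "infusion_pump":  "clinical-device-vlan",
    "imaging_system": "imaging-vlan",
    "workstation":    "corporate-vlan",
}

def admit(device_type: str, is_known_asset: bool) -> str:
    """Return the VLAN assignment for a connecting device, or deny it."""
    if not is_known_asset or device_type not in SEGMENT_RULES:
        return "deny"  # unknown or unprofiled devices never reach clinical segments
    return SEGMENT_RULES[device_type]

print(admit("infusion_pump", True))   # -> clinical-device-vlan
print(admit("smart_tv", True))        # -> deny
```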

Looking at overall cybersecurity trends, the report indicates that organizations are investing more resources, both operationally and financially, in their cybersecurity programs. Almost 70 percent of organizations (68 percent) report having a VP or C-level leader in charge of the security program, up from only 42 percent in 2017, a 26-percentage-point increase.

“Large IDNs are definitely leading the way with CISO leadership, as about 80 percent of their organizations have a CISO in charge, whereas if you look at clinics and community hospitals, those would be hospitals under 200 beds, fewer than 10 percent have a CISO in charge,” Czech says. “Many of those smaller organizations have a CIO who wears two hats—an IT hat and a security hat.”

Organizations also reported improvements to security programs compared to a year ago. Twenty-seven percent considered their security programs to be fully functional and 47 percent said they were developed or starting to function in 2018, compared to 16 percent and 41 percent, respectively, in 2017.   

More than half of organizations (57 percent) report that security is an agenda item at board meetings monthly or quarterly. In addition, 83 percent of organizations have increased their security budget in the last two years, and, on average, budgets increased by 85 percent, according to the report.

 



Aspire Health Suffers Email Breach from Phishing Attack

September 28, 2018
by Heather Landi, Associate Editor

Aspire Health, a Nashville-based in-home healthcare provider, was hacked Sept. 3 as a result of a phishing attack and “lost” some protected health information (PHI), according to a report by the Tennessean.com.

The hack was disclosed for the first time in federal court records filed on Tuesday, according to the media report. The company suffered a phishing attack on Sept. 3 that gave the attacker access to Aspire’s internal email system. The Tennessean article cites information in the court records indicating the hacker then forwarded 124 emails to an external email account, including emails that contained “confidential and proprietary information and files” and “protected health information.”

“No other information about the contents of the hacked emails have been made public, so it is unclear how many patients have been exposed and what kind of information was leaked. Aspire has issued a statement saying it has already alerted a ‘small handful’ of patients who ‘may have been impacted’ by the email breach,” the article stated.

According to an email sent to the Tennessean from Cory Brown, a chief compliance officer for Aspire, the company immediately locked the compromised email account after discovering the phishing attack.

Brown added that it is unknown if the stolen emails were actually opened by the hacker.

In a statement to the local News4 station about the cyber attack, Aspire Health said: “Aspire takes the security of its data and the personal information of its patients very seriously. Aspire recently learned one of its employees was the victim of an international phishing attack. Aspire’s information security team quickly discovered the attack and immediately took action to lock the employee’s account. Aspire is now working through the legal process to determine if any Aspire information was ultimately accessed by a third-party. Out of an abundance of caution, Aspire has already alerted the small handful of customers who may have been impacted by this event.”

According to the article, Aspire Health was founded in 2013 by former Sen. Bill Frist and current CEO Brad Smith. The company’s house-call physicians offer palliative care for advanced cancer and other serious illnesses.

“In the court records filed on Tuesday, Aspire has said it has tried to identify the hacker but so far has been unable to do so. The phishing attack originated from a website with an IP address in Eastern Europe for which Google is the registrar,” the article stated.

Court records detail Aspire Health's effort to subpoena Google and identify the hacker, according to media reports. The hacking attack was revealed Tuesday as Aspire filed a federal court motion seeking to subpoena Google for more information on the unknown hacker. Aspire attorney James Haltom said in the court motion that Google’s internal records should be able to identify the culprit – currently known only as John Doe 1, the Tennessean reported.

Haltom wrote in court records that Aspire has requested the information from Google “informally,” but Google said Aspire would need to get a subpoena, the article stated.

“The proposed subpoena to Google should provide information showing who has accessed and/or maintains the phishing website and the subscriber of the e-mail account that John Doe 1 used in the phishing attack,” Haltom wrote. “This information will likely allow Aspire to uncover and locate John Doe 1.”

 


Research: Hackers Leveraging Error Messages from Connected Medical Devices

September 27, 2018
by Heather Landi, Associate Editor

Recent research has identified a new trend in cyber attacks targeting connected medical devices—by simply monitoring the network traffic for common error messages, hackers can gain valuable insight into the inner workings of a device’s application.

New research from Zingbox, a provider of a healthcare Internet of Things (IoT) analytics platform, identifies new trends in connected medical device hacking that could impact patients’ protected health information (PHI).

Hackers are leveraging error messages from connected medical devices, including radiology, X-ray and other imaging systems, to gain valuable insights. These insights are then used to refine the attacks, increasing the chance of a successful hack, according to the research.

The information gathering phase of a typical cyberattack is very time-intensive; hackers learn as much as they can about the target network and devices, according to researchers. By simply monitoring the network traffic for common error messages, hackers can gain valuable insight into the inner workings of a device’s application: the type of web server, framework and versions used; the manufacturer that developed it; the database engine in the back end; the protocols used; and even the line of code that is causing the error, researchers note.
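The defensive implication of the finding is to keep that detail off the wire. As a minimal sketch (the handler and logger names are assumptions, not drawn from the Zingbox research), an application can log full error detail locally while returning only a generic message to the caller:

```python
import logging
import traceback

logger = logging.getLogger("device-api")

# Hypothetical sketch of the mitigation implied by the research: keep stack traces,
# framework names, and versions out of responses that cross the network, where an
# eavesdropper could harvest them, and record the detail server-side instead.

def handle_request(process):
    """Wrap a request handler so failures never leak internals to the caller."""
    try:
        return 200, process()
    except Exception:
        # Full detail (code path, library names in the traceback) stays local.
        logger.error("request failed:\n%s", traceback.format_exc())
        # The client sees only a generic, uninformative error body.
        return 500, {"error": "internal error"}

status, body = handle_request(lambda: 1 / 0)
print(status, body)  # 500 {'error': 'internal error'}
```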

Hackers can also target specific devices to induce error messages. With this information, the information gathering phase is greatly shortened, and attackers can quickly tailor their attack to the target device.

Zingbox’s research discovered that information shared as part of common error messages can be leveraged by hackers to compromise target connected devices. What’s more, hackers can “trick” or induce medical devices into sharing detailed information about the device’s inner workings. Leveraging this information quickens a hacker’s access to a hospital’s network, the researchers found.

“Hackers are finding new and creative ways to target connected medical devices. We have to be in front of these trends and vulnerabilities before they can cause real harm,” Xu Zou, Zingbox CEO and co-founder, said in a statement.

“Imagine how much more effective hackers can be if they find out that a device is running on IIS Web Server, using Oracle as backend and even gathering usernames,” Daniel Regalado, principal security researcher at Zingbox and co-author of Gray Hat Hacking, said in a statement. “That will help them to focus their attack vectors towards the database where PHI data might be stored.”

The research also revealed that the healthcare industry has made great strides in collaborating across providers, vendors and manufacturers: three of the seven manufacturers whose devices were included in the study responded rapidly and were willing to generate patches for their medical devices. However, the researchers note, there is still work to be done to convey the urgency of these findings and to increase collaboration between security vendors and device manufacturers.

 
