
Guest Blog: When Business Masquerades As Social Conscience

October 25, 2016
by Mac McMillan, CEO, CynergisTek

Based on recent news and the headline of this article, you are likely expecting a discussion of the irresponsible actions of MedSec and Muddy Waters, the organizations that outed St. Jude Medical by disclosing vulnerabilities in the medical devices it makes.

Certainly this is not something I condone or support as the right path to an acceptable end, as it raised fears in the people using those devices, gave the criminal element harmful information, and quite possibly caused irreparable financial harm to St. Jude before the issues identified were even verified. I would argue, however, that the fault for this situation has a much broader cast than the characters represented in this one episode.

For years we have known about and debated what to do about the insecurity of medical devices. The government bears significant responsibility for this situation because, unlike the other actors involved, it alone has a mandate to take action to protect consumers when it becomes aware of a situation that places their safety at risk. The Department of Homeland Security has conducted tests and provided irrefutable proof that multiple security flaws plague medical devices. Others have hacked different devices and publicly disclosed their findings at well-known hacker events like DEF CON and Black Hat. Providers have repeatedly requested assistance from the Food and Drug Administration (FDA) and their vendor suppliers to help solve this problem before someone is harmed. But after all the appeals, all the evidence, and all the debate, nothing concrete has been done.

So what of MedSec’s and Muddy Waters’ actions? Was it a public service? Was it financially motivated? Was it somehow Robin Hood-esque in its intention to save the masses from the evil wrong-doers? It doesn’t matter. At the end of the day it was irresponsible, and it potentially put both providers and consumers at risk. What is ironic about this situation is that it is reminiscent of what we experienced during the ’90s with social activist hackers who would reverse engineer or attack systems, find flaws, and then publish them on the internet, creating havoc for anyone using those systems as they scrambled to find a fix.

All too often we hear, “we hacked the system and found X, we told the vendor who ignored us, so for the greater good we published it online to embarrass them and show that we care.” Whoa, you showed me you care by publishing it where? Even when you knew there was no fix available? This doesn’t add up. I can assure you that those of us who were engaged in the pursuit of protecting systems and data didn’t feel the warmth of their actions. So how does this happen?

The answer is simple, although many will tell you it is a complex problem. It has a name: ambivalence. The technologists and manufacturers who make the devices argue that fixing the problem could stifle innovation and increase the cost of development, so it is not in their self-interest to change. The providers complain (but are helpless to affect the market) because they don’t have choices; if caught between using an insecure device to save a person and not using it, they will always opt to use it, heal, and accept the risk. We want them to. So it is not in their or our self-interest to force change by refusing to use insecure devices. The government studies, tests, and debates, but doesn’t take action; too much legislation is bad, so acting is not in its self-interest, but sometimes the market needs to be regulated. So we have ambivalence, conflicting interests, and a lack of action, all of which lead to insecure medical devices and at-risk consumers. One of the negative side effects of ambivalence is that some become frustrated, or seek to take advantage of the situation and exploit fear, in order to bring about change.

What is most frustrating about this issue is that everyone involved knows the problem, understands the risk, and knows what needs to be done. I’ve had countless conversations with providers on how to manage the risk. They are spending time and money to provide whatever level of protection they can with imperfect solutions.

I was recently asked in two separate conversations with CIOs, who also happened to be medical doctors, whether we will ever see a solution. They had no faith that manufacturers would respond on their own, as it is not in their financial interest to do so. They restated their frustration with the lack of sufficient choices to effect change by selecting only those products that are secure.

As I reflected on our discussions, I realized we were just contributing to the ambivalence around this issue. We need to stop. Cybersecurity IS a patient safety issue, and cybersecurity controls should be required criteria for certification of medical devices that connect to a caregiver’s network or to a patient. Standards should be created for developing and implementing medical devices that assure consumers their safety is being addressed properly. Medical devices should be required to pass independent tests as part of a certification process before being approved for sale. Manufacturers should be required to provide ongoing support during the life of these devices to maintain their integrity, as other software vendors do. And the FDA should issue standards for testing and certifying medical devices prior to approval for sale.

Unfortunately, we cannot eliminate undesirable behavior, but maybe we can affect the environment so that incidents like what we saw with St. Jude Medical no longer seem reasonable. Responsible testing of systems and applications is a critical component of good security, and there is a responsible way to go about it and a responsible way to manage what we learn from it.





Florida Provider Pays $500K to Settle Potential HIPAA Violations

December 12, 2018
by Heather Landi, Associate Editor

Florida-based Advanced Care Hospitalists PL (ACH) has agreed to pay $500,000 to the Office for Civil Rights (OCR) of the U.S. Department of Health and Human Services (HHS) to settle potential HIPAA compliance failures, including sharing protected health information with an unknown vendor without a business associate agreement.

ACH provides contracted internal medicine physicians to hospitals and nursing homes in west central Florida. ACH provided services to more than 20,000 patients annually and employed between 39 and 46 individuals during the relevant timeframe, according to OCR officials.

Between November 2011 and June 2012, ACH engaged the services of an individual who claimed to be a representative of a company named Doctor’s First Choice Billings, Inc. (First Choice). The individual provided medical billing services to ACH using First Choice’s name and website, but allegedly without the knowledge or permission of First Choice’s owner, according to OCR officials in a press release published last week.

A local hospital contacted ACH on February 11, 2014 and notified the organization that patient information was viewable on the First Choice website, including names, dates of birth and social security numbers. In response, ACH was able to identify at least 400 affected individuals and asked First Choice to remove the protected health information from its website. ACH filed a breach notification report with OCR on April 11, 2014, stating that 400 individuals were affected; however, after further investigation, ACH filed a supplemental breach report stating that an additional 8,855 patients could have been affected.

According to OCR’s investigation, ACH never entered into a business associate agreement with the individual providing medical billing services to ACH, as required by the Health Insurance Portability and Accountability Act (HIPAA) Privacy and Security Rules, and failed to adopt any policy requiring business associate agreements until April 2014. 

“Although ACH had been in operation since 2005, it had not conducted a risk analysis or implemented security measures or any other written HIPAA policies or procedures before 2014. The HIPAA Rules require entities to perform an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of an entity’s electronic protected health information,” OCR officials stated in a press release.

In a statement, OCR Director Roger Severino said, “This case is especially troubling because the practice allowed the names and social security numbers of thousands of its patients to be exposed on the internet after it failed to follow basic security requirements under HIPAA.”

In addition to the monetary settlement, ACH will undertake a robust corrective action plan that includes the adoption of business associate agreements, a complete enterprise-wide risk analysis, and comprehensive policies and procedures to comply with the HIPAA Rules. 

In a separate case announced this week, a Colorado-based hospital, Pagosa Springs Medical Center, will pay OCR $111,400 to settle potential HIPAA violations after the hospital failed to terminate a former employee’s access to electronic protected health information (ePHI).

Pagosa Springs Medical Center (PSMC) is a critical access hospital that, at the time of OCR’s investigation, provided more than 17,000 hospital and clinic visits annually and employed more than 175 individuals.

The settlement resolves a complaint alleging that a former PSMC employee continued to have remote access to PSMC’s web-based scheduling calendar, which contained patients’ electronic protected health information (ePHI), after separation of employment, according to OCR.

OCR’s investigation revealed that PSMC impermissibly disclosed the ePHI of 557 individuals to its former employee and to the web-based scheduling calendar vendor without a HIPAA-required business associate agreement in place.

The hospital also agreed to adopt a substantial corrective action plan as part of the settlement; under that plan, PSMC will update its security management and business associate agreement policies and procedures and train its workforce members on them.

“It’s common sense that former employees should immediately lose access to protected patient information upon their separation from employment,” Severino said in a statement. “This case underscores the need for covered entities to always be aware of who has access to their ePHI and who doesn’t.”

Covered entities that do not have or follow procedures to terminate information access privileges upon employee separation risk a HIPAA enforcement action. Covered entities must also evaluate relationships with vendors to ensure that business associate agreements are in place with all business associates before disclosing protected health information. 

 

More From Healthcare Informatics


Eye Center in California Switches EHR Vendor Following Ransomware Incident

December 11, 2018
by Rajiv Leventhal, Managing Editor

Redwood Eye Center, an ophthalmology practice in Vallejo, Calif., has notified more than 16,000 patients that its EHR (electronic health record) hosting vendor experienced a ransomware attack in September.

In the notification to the impacted patients, the center’s officials explained that the third-party vendor that hosts and stores Redwood’s electronic patient records, Illinois-based IT Lighthouse, experienced a data security incident which affected records pertaining to Redwood patients. Officials also said that IT Lighthouse hired a computer forensics company to assist after the ransomware attack, and that Redwood worked with the vendor to restore access to its patient information.

Redwood’s investigation determined that the incident may have involved patient information, including patient names, addresses, dates of birth, health insurance information, and medical treatment information.

Notably, Redwood will be changing its EHR hosting vendor, according to its officials. Per the notice, “Redwood has taken affirmative steps to prevent a similar situation from arising in the future. These steps include changing medical records hosting vendors and enhancing the security of patient information.”

Ransomware attacks in the healthcare sector continue to be a problem, but at the same time, they have diminished substantially compared to the same time period last year, as cyber attackers move on to more profitable activities, such as cryptojacking, according to a recent report from cybersecurity firm Cryptonite.

Related Insights For: Cybersecurity


Report: 30 Percent of Healthcare Databases Exposed Online

December 10, 2018
by Heather Landi, Associate Editor

Hackers are using the Dark Web to buy and sell personally identifiable information (PII) stolen from healthcare organizations, and exposed databases are a vulnerable attack surface for healthcare organizations, according to a new cybersecurity research report.

A research report from IntSights, “Chronic [Cyber] Pain: Exposed & Misconfigured Databases in the Healthcare Industry,” gives an account of how hackers are tracking down healthcare personally identifiable information (PII) data on the Dark Web and where in the attack surface healthcare organizations are most vulnerable.

The report explores a key area of the healthcare attack surface, which is often the easiest to avoid—exposed databases. It’s not only old or outdated databases that get breached, but also newly established platforms that are vulnerable due to misconfiguration and/or open access, the report authors note.

Healthcare organizations have been increasingly targeted by threat actors over the past few years and their most sought-after asset is their data. As healthcare organizations attempt to move data online and increase accessibility for authorized users, they’ve dramatically increased their attack surface, providing cybercriminals with new vectors to steal personally identifiable information (PII), according to the report. Yet, these organizations have not prioritized investments in cybersecurity tools or procedures.

Healthcare budgets are tight, the report authors note, and if there’s an opportunity to purchase a new MRI machine versus make a new IT or cybersecurity hire, the new MRI machine often wins out. Healthcare organizations need to carefully balance accessibility and protection.

In this report, cyber researchers set out to show that the healthcare industry as a whole is vulnerable, not due to a specific product or system, but due to lack of process, training and cybersecurity best practices. “While many other industries suffer from similar deficiencies, healthcare organizations are particularly at risk because of the sensitivity of PII and medical data,” the report states.

The researchers chose several popular technologies for handling medical records, including known and widely used commercial databases, legacy services still in use today, and new sites or protocols that try to mitigate some of the vulnerabilities of past methods. The purpose of the research was to demonstrate that hackers can easily find access to sensitive data in each state: at rest, in transit, or in use.

The researchers note that the tactics used were pretty simple: Google searches, reading technical documentation of the aforementioned technologies, subdomain enumeration, and some educated guessing about the combination of sites, systems and data. “All of the examples presented here were freely accessible, and required no intrusive methods to obtain. Simply knowing where to look (like the IP address, name or protocol of the service used) was often enough to access the data,” the report authors wrote.

The researchers spent 90 hours researching and evaluated 50 databases. Among the findings outlined in the report, 15 databases were found exposed, so the researchers estimate that about 30 percent of databases are exposed. The researchers found 1.5 million patient records exposed, at a rate of about 16,687 medical records discovered per hour.

The estimated black-market price is $1 per medical record. The researchers concluded that hackers can find a large number of records in just a few hours of work, and this data can be used to make money in a variety of ways. If a hacker can find records at a rate of 16,687 per hour and works 40 hours a week, that hacker can make an annual salary of $33 million, according to the researchers.
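The arithmetic behind that figure is easy to check; the minimal sketch below uses the report's discovery rate and per-record price, plus an assumed 50 working weeks per year (the report does not state the weeks figure):

    # Back-of-the-envelope check of the report's earnings estimate.
    records_per_hour = 16_687    # discovery rate cited in the report
    hours_per_week = 40          # full-time effort, per the report's framing
    weeks_per_year = 50          # assumption; not stated in the report
    price_per_record = 1.00      # estimated black-market price, USD

    annual_records = records_per_hour * hours_per_week * weeks_per_year
    annual_revenue = annual_records * price_per_record
    print(f"~{annual_records:,} records and ~${annual_revenue / 1e6:.1f}M per year")
    # prints: ~33,374,000 records and ~$33.4M per year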

“It’s also important to note that PII and medical data is harder to make money with compared to other data, like credit card info. Cybercriminals tend to be lazy, and it’s much quicker to try using a stolen credit card to make a fraudulent purchase than to buy PII data and run a phishing or extortion campaign. This may lessen the value of PII data in the eyes of some cybercriminals; however, PII data has a longer shelf-life and can be used for more sophisticated and more successful campaigns,” IntSights security researcher and report author Ariel Ainhoren wrote.

The researchers used the example of a hospital using an FTP server. “FTP is a very old and known way to share files across the Internet. It is also a scarcely protected protocol that has no encryption built in, and only asks you for a username and password combination, which can be brute forced or sniffed by network scanners very easily,” Ainhoren wrote. “Here we found a hospital in the U.S. that has its FTP server exposed. FTP’s usually hold records and backup data, and are kept open to enable backup to a remote site. It could be a neglected backup procedure left open by IT that the hospital doesn’t even know exists.”
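For a provider that wants to confirm it is not exposed in the same way, a minimal sketch of an anonymous-login check against its own FTP server follows, using Python's standard ftplib; the hostname is a placeholder for illustration, not one from the report:

    from ftplib import FTP, error_perm

    def allows_anonymous(host: str, timeout: int = 10) -> bool:
        """Return True if the FTP server at `host` accepts an anonymous login."""
        try:
            with FTP(host, timeout=timeout) as ftp:
                ftp.login()  # ftplib defaults to the "anonymous" user
                return True
        except error_perm:
            return False     # login rejected
        except OSError:
            return False     # unreachable, timed out, etc.

    # Placeholder hostname; point this at a server you own and are authorized to test.
    print(allows_anonymous("ftp.example-hospital.org"))

A server that returns True here, or that accepts a guessable username and password, is exactly the kind of neglected backup endpoint the report describes.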

According to the report, hackers have three main motivations for targeting healthcare organizations and medical data:

  • State-Sponsored APTs Targeting Critical Infrastructure: APTs are more sophisticated and are usually more difficult to stop. They will attempt to infiltrate a network to test tools and techniques to set the stage for a larger, future attack, or to obtain information on a specific individual’s medical condition.
  • Attackers Seeking Personal Data: Attackers seeking personal data can use it in multiple ways. They can create and sell PII lists, they can blackmail individuals or organizations in exchange for the data, or they can use it as a basis for further fraud, like phishing, Smishing, or scam calls.
  • Attackers Taking Control of Medical Devices for Ransom: Attackers targeting vulnerable infrastructure won’t usually target healthcare databases, but will target medical IT equipment and infrastructure to spread malware that exploits specific vulnerabilities and demands a ransom to release the infected devices. Since medical devices tend to be updated infrequently (or not at all), this provides a relatively easy target for hackers to take control.

The report also offers a few general best practices for evaluating if a healthcare organization’s data is exposed and/or at risk:

  • Use Multi-Factor Authentication for Web Applications: If you’re using a system that only needs a username and password to log in, you’re making it significantly easier to access. Make sure you have MFA set up to reduce unauthorized access.
  • Tighter Access Control to Resources: Limit the number of credentials for each party accessing the database. Additionally, limit specific parties’ access to only the information they need. This will minimize your chance of being exploited through a third party and, if you are, will limit the damage of that breach.
  • Monitor for Big or Unusual Database Reads: These may be an indication that a hacker or unauthorized party is stealing information. It’s a good idea to set up limits on database reads and make sure requests for big database reads involve some sort of manual review or confirmation.
  • Limit Database Access to Specific IP Ranges: Mapping out the organizations that need access to your data is not an easy task, but it will give you tighter control over who is accessing your data and enable you to track and identify anomalous activity. You can even tie specific credentials to specific IP ranges to further limit access and track strange behavior more closely (a minimal sketch of this idea, combined with read monitoring, follows this list).
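The last two practices can be sketched in a few lines at the application layer. This is an illustration only, assuming the application mediates database access; the IP ranges and threshold are hypothetical, not drawn from the report:

    import ipaddress

    # Hypothetical approved client networks and read threshold.
    ALLOWED_RANGES = [ipaddress.ip_network(c) for c in ("10.20.0.0/16", "192.0.2.0/24")]
    LARGE_READ_THRESHOLD = 10_000  # rows; tune to the normal workload

    def ip_allowed(client_ip: str) -> bool:
        """Return True if the client address falls inside an approved range."""
        addr = ipaddress.ip_address(client_ip)
        return any(addr in net for net in ALLOWED_RANGES)

    def check_read(client_ip: str, rows_returned: int) -> None:
        """Reject out-of-range callers and flag unusually large reads for review."""
        if not ip_allowed(client_ip):
            raise PermissionError(f"database access from {client_ip} is not permitted")
        if rows_returned > LARGE_READ_THRESHOLD:
            print(f"ALERT: {client_ip} read {rows_returned:,} rows; manual review required")

    check_read("10.20.5.7", rows_returned=250)      # in range, normal read
    check_read("10.20.5.7", rows_returned=120_000)  # in range, triggers an alert

In practice the same controls are usually enforced closer to the database itself, through firewall rules, host-based access configuration, and query auditing, but the logic is the same.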

 

