July 2016 | SafeLogic

Archive for July, 2016

25 Jul 2016

#Winning

This morning, I had a nice surprise waiting in my inbox. SafeLogic won a Golden Bridge Award!

Awards have never been a priority for us, in large part due to our positioning… and the fact that we are focused on revenue and customers, not our own ego. We are the vendor to the vendors, a key component but rarely the feature. Award nominations always ask about end users, such as in the Fortune 500. “Symantec uses SafeLogic encryption. BlackBerry uses SafeLogic encryption,” I usually respond. “We have a great roster of customers, but it’s ultimately their end users, not ours.” Then we inevitably get sorted to the back of the list. I never worried about it because yes, I know, tech vendor awards are often only as valuable as the paper that they’re printed on, and we knew that we didn’t need to conform to a traditional category to be successful.

This time was different. The Golden Bridge Award team got us! They understood the importance of our role, the innovation behind our products, and recognized that while Joe Schmo wouldn’t go download a copy of our software directly, it’s pretty damn likely that Joe is already using it, and that merits recognition.

So with great pride, the SafeLogic team announces that we have won Silver in the category of Security Software Startups!
It feels good to be an award-winning company.

Click to Tweet: #Crypto startup @SafeLogic pulls down a trophy at #GoldenBridgeAwards! http://bit.ly/SLaward725

Kudos also to our customer Securonix on winning a variety of awards, including a Grand Trophy, and Tanuj Gulati, their Co-founder & CTO, for winning a Gold for Executive of the Year in Security Services and a Silver for Most Innovative Executive of the Year. Well done!

Now with all this talk of Golds and Silvers, I’m ready for the Olympics to open in Rio. U-S-A! U-S-A!


19 Jul 2016

OpenSSL 1.1’s Big, Bright, FIPS Validated Future

The OpenSSL project posted to their official blog today with some major news – OpenSSL 1.1 will be getting a FIPS 140-2 validated module! It’s a huge deal and the SafeLogic team is proud to be leading the effort.

In September, OpenSSL’s Steve Marquess explained in a blog post (FIPS 140-2: It’s Not Dead, It’s Resting) why the ubiquitous open source encryption provider would be hard-pressed to bring FIPS mode to the 1.1 release. With changes over the last few years at the CMVP, the viability of legacy OpenSSL FIPS module validations has been repeatedly threatened, and the crypto community simply cannot accept the possibility of being without a certificate. An open source module with a communal certificate is a crucial component that allows many start-up companies to test the waters in federal agencies and regulated industries before investing in a validation of their own. Likewise, many major corporations have relied upon OpenSSL FIPS modules over the years as a building block for extensive engineering efforts. Without this commitment, many would have been caught in a dilemma: use the FIPS 140 validated open source module compatible with a rapidly aging, often-maligned older version of OpenSSL, or the new, sleek, secure OpenSSL 1.1 without a FIPS validated module at its heart.

The choice will now be an obvious one, and the community can safely remove their heads from the sand and begin planning their future roadmap around a fully validated FIPS module for OpenSSL 1.1 and beyond.

As the OpenSSL team announced today, SafeLogic will sponsor the engineering work on the FIPS module and we will be handling the validation effort ourselves. (What, you expected us to hire an outside consultant? Surely you jest.) Acumen will be the testing laboratory, as they have been for many of our RapidCerts, and together we have high hopes for a smooth and relatively painless process.

Click to Tweet: Have you heard? @SafeLogic is leading #FIPS140 effort for new #OpenSSL #crypto module! https://www.SafeLogic.com/openssl-1-1-future/

One key element in the OpenSSL blog post that will surprise some folks:

“This is also an all-or-nothing proposition; no one – including SafeLogic – gets to use the new FIPS module until and if a new open source based validation is available for everyone.”

Why would we agree to that? For that matter, why would we take on this project at all, while other “leaders” in the community relished the idea of a world without validated open source options?

At SafeLogic, we are true believers in the importance of open source, in encryption and elsewhere. Past versions of OpenSSL have provided a basis for SafeLogic’s CryptoComply modules, so you may ask why we’re doing this – why we’re not just building it ourselves and letting the open source community fend for themselves.

Well, we thought about doing just that, but we decided against it for both altruistic and strategic reasons. We believe that SafeLogic has the chance to help not only the OpenSSL team, but the tech community at large. We realize that product vendors, government entities, education institutions, and other organizations need validated open source modules, and not all of them can or will implement SafeLogic solutions.

As a team, we believe that a rising tide lifts all boats, and we are putting that philosophy into action. The availability of an OpenSSL 1.1 FIPS module will provide greater security in regulated verticals and more opportunities for everyone working in this community. SafeLogic will be at the epicenter of the effort, of course, and I would be remiss if I didn’t mention that our success in this endeavor will push SafeLogic even further forward as the true leader in providing validated crypto!

Our central role in the effort will ensure that nobody has more expertise or knowledge in the design, operation and validation of OpenSSL 1.1 modules than SafeLogic, and future versions of CryptoComply will be the best yet. Trust me, our customers will reap the benefits. We are happy to put in the sweat equity on the open source communal validation, knowing that when product teams need a FIPS 140-2 certificate in their own name, custom work, integration assistance, comprehensive support or anything else related to OpenSSL 1.1 and FIPS 140-2, SafeLogic will be the obvious choice.

We’re very excited to work with Steve, the OpenSSL team, and Acumen, as we join forces to lead the OpenSSL 1.1 FIPS module through FIPS 140-2 validation. Stay tuned for updates!

For more information about the project, how to contribute, the future roadmap, or media inquiries, please contact us at OpenSSL@SafeLogic.com.


15 Jul 2016

Achieving Safe Harbor in Healthcare – Episode 4

Thanks for returning for the final installment in this blog series! If you need to catch up, please see Episode 1, Episode 2, and Episode 3, posted over each of the last three days.

Here in Episode 4, we are tackling HITECH Breach Notification for Unsecured Protected Health Information; Interim Final Rule part (a)(ii), which covers data in motion. This will be the longest section, so grab a cup of coffee and let’s rock! For your reference, here is the full passage again:

Protected health information (PHI) is rendered unusable, unreadable, or indecipherable to unauthorized individuals if one or more of the following applies:

(a) Electronic PHI has been encrypted as specified in the HIPAA Security Rule by ‘‘the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without use of a confidential process or key’’ and such confidential process or key that might enable decryption has not been breached. To avoid a breach of the confidential process or key, these decryption tools should be stored on a device or at a location separate from the data they are used to encrypt or decrypt. The encryption processes identified below have been tested by the National Institute of Standards and Technology (NIST) and judged to meet this standard.

(i) Valid encryption processes for data at rest are consistent with NIST Special Publication 800–111, Guide to Storage Encryption Technologies for End User Devices.

(ii) Valid encryption processes for data in motion are those which comply, as appropriate, with NIST Special Publications 800–52, Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations; 800–77, Guide to IPsec VPNs; or 800–113, Guide to SSL VPNs, or others which are Federal Information Processing Standards (FIPS) 140–2 validated.

Let’s go in order. First, for Transport Layer Security (TLS), we are directed to another NIST Special Publication, this time 800–52, Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations. Here’s a quote, directly from the document’s Minimum Requirements section:

The cryptographic module used by the server shall be a FIPS 140-validated cryptographic module. All cryptographic algorithms that are included in the configured cipher suites shall be within the scope of the validation, as well as the random number generator.

That’s pretty straightforward. NIST wants you to use NIST validated encryption. Makes sense; we expected that. Okay, we’re off to a great start!
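To make that TLS guidance a little more concrete, here is a minimal sketch in Python of a client context restricted to AES-based cipher suites on TLS 1.2 or later. The cipher string is my own illustrative example, and note the caveat in the comments: restricting suite selection does not by itself satisfy SP 800-52 – the underlying cryptographic module must hold its own FIPS 140-2 certificate.

```python
import ssl

def restricted_client_context() -> ssl.SSLContext:
    """Build a TLS client context limited to AES-based cipher suites.

    Illustrative sketch only: narrowing the cipher list does not make
    the underlying crypto module FIPS 140-2 validated -- the module
    itself must appear on the CMVP validation list.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    # SP 800-52 steers implementations toward current TLS versions.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # Permit only AES suites with ECDHE key exchange; exclude NULL
    # ciphers and anonymous key exchange outright.
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+AES:!aNULL:!eNULL")
    return ctx

ctx = restricted_client_context()
cipher_names = [c["name"] for c in ctx.get_ciphers()]
```

Auditing the result of `ctx.get_ciphers()` after configuration is a quick way to confirm that no unapproved suite slipped into the negotiation list.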

Let’s look at IPsec VPNs next, governed by NIST Special Publication 800–77, Guide to IPsec VPNs. Here’s an excerpt from the Executive Summary. (Italics below are mine.)

NIST’s requirements and recommendations for the configuration of IPsec VPNs are:

  • If any of the information that will traverse a VPN should not be seen by non-VPN users, then the VPN must provide confidentiality protection (encryption) for that information.
  • A VPN must use a FIPS-approved encryption algorithm. AES-CBC (AES in Cipher Block Chaining mode) with a 128-bit key is highly recommended; Triple DES (3DES-CBC) is also acceptable. The Data Encryption Standard (DES) is also an encryption algorithm; since it has been successfully attacked, it should not be used.
  • A VPN must always provide integrity protection.
  • A VPN must use a FIPS-approved integrity protection algorithm. HMAC-SHA-1 is highly recommended. HMAC-MD5 also provides integrity protection, but it is not a FIPS-approved algorithm.

That’s also pretty blunt, laid out right in the first few pages. The only discrepancy from previous guidance is that they call it “FIPS-approved” instead of the usual “FIPS validated”. It’s still a clear reference to the NIST program and to the only resource available to confirm approval – the public validation lists. Note that the CAVP is responsible for validating algorithms and maintains a public list for each one. That would be enough to satisfy the minimum requirements here, although deploying FIPS validated algorithms alone is not enough to claim full conformance to the standard. For comprehensive coverage, a FIPS validated module is still required.
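As a quick illustration of that integrity-protection requirement, here is a hedged Python sketch of HMAC-SHA-1 tagging and verification – the mechanism SP 800-77 calls out as FIPS-approved. In a real IPsec deployment this happens inside the VPN stack, not in application code, and the key and payload below are made up for the example.

```python
import hashlib
import hmac

def integrity_tag(key: bytes, message: bytes) -> bytes:
    # HMAC-SHA-1 is the FIPS-approved integrity algorithm that
    # SP 800-77 highly recommends; HMAC-MD5 works mechanically
    # but is not FIPS-approved.
    return hmac.new(key, message, hashlib.sha1).digest()

def verify_tag(key: bytes, message: bytes, tag: bytes) -> bool:
    # compare_digest performs a constant-time comparison, avoiding
    # timing side channels during verification.
    return hmac.compare_digest(integrity_tag(key, message), tag)

key = b"example-shared-secret"          # hypothetical key material
packet = b"encrypted ESP payload here"  # hypothetical payload
tag = integrity_tag(key, packet)
```

Any tampering with the payload changes the tag, so verification fails and the packet is discarded – which is exactly the property the “must always provide integrity protection” bullet is after.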

Let’s go to the last alternative category, SSL VPNs, which refers to NIST Special Publication 800–113, Guide to SSL VPNs. This one is probably the most interesting (and confusing). SP 800-113 clearly states in several places that Federal agencies require FIPS 140-2 encryption, but never explicitly extends the requirement beyond those government offices. It gets complicated because the Breach Notification Interim Final Rule clearly refers to this document… which dedicates probably 80% of its content to discussing how to meet the FIPS 140-2 standard. So the bulk of the document concerns the validation, even though it does not call it a mandate. It’s like handing someone a copy of Rosetta Stone but not actually asking them to learn the language. It’s a strong implication, so let’s chalk this one up to ‘strongly recommended’ even if not ‘required’.

Anything that didn’t fall into one of those categories (TLS, IPsec VPN, or SSL VPN) lands squarely in the “others which are Federal Information Processing Standards (FIPS) 140–2 validated” bucket.

So if you’re keeping score at home, part (a) lays out five scenarios:

  • Data at rest, referencing one NIST publication that points to two others, ultimately mapping all security controls to FIPS 140-2 validated encryption.
  • Data in motion with a TLS implementation, which is mandated to include FIPS 140-2 validated encryption.
  • Data in motion with an IPsec VPN, which must be handled with a FIPS-approved encryption algorithm and a FIPS-approved integrity protection algorithm.
  • Data in motion with an SSL VPN, which was not explicitly required to be FIPS validated, but the referenced publication is about how to ensure that it is FIPS validated, so deviate at your own risk.
  • All other active data, which must be encrypted with a FIPS 140-2 validated module.

All scenarios lead back to the same conclusion. HIPAA and HITECH are both pieces of federal legislation and enforced by a federal agency, which refers to another federal agency for judgment on encryption benchmarks. FIPS 140-2 is the only certification that unequivocally meets the demands of these government bodies.


Please use these links and excerpts to complete your own research, but be sure to ask yourself, “Do I really want to risk liability by using anything less than FIPS 140-2 validated encryption?”

The Breach Notification for Unsecured Protected Health Information; Interim Final Rule itself makes a strong point – you can choose to skip encryption and still potentially comply with the HIPAA Security Rule. But if you are hoping to avoid breach notification and penalties, you will be out of luck. Here is another excerpt from the Interim Final Rule, explaining the disconnect and solution. (Italics below are mine.)

Under 45 CFR 164.312(a)(2)(iv) and (e)(2)(ii), a covered entity must consider implementing encryption as a method for safeguarding electronic protected health information; however, because these are addressable implementation specifications, a covered entity may be in compliance with the Security Rule even if it reasonably decides not to encrypt electronic protected health information and instead uses a comparable method to safeguard the information.

Therefore, if a covered entity chooses to encrypt protected health information to comply with the Security Rule, does so pursuant to this guidance, and subsequently discovers a breach of that encrypted information, the covered entity will not be required to provide breach notification because the information is not considered ‘‘unsecured protected health information’’ as it has been rendered unusable, unreadable, or indecipherable to unauthorized individuals.

On the other hand, if a covered entity has decided to use a method other than encryption or an encryption algorithm that is not specified in this guidance to safeguard protected health information, then although that covered entity may be in compliance with the Security Rule, following a breach of this information, the covered entity would have to provide breach notification to affected individuals. For example, a covered entity that has a large database of protected health information may choose, based on their risk assessment under the Security Rule, to rely on firewalls and other access controls to make the information inaccessible, as opposed to encrypting the information. While the Security Rule permits the use of firewalls and access controls as reasonable and appropriate safeguards, a covered entity that seeks to ensure breach notification is not required in the event of a breach of the information in the database would need to encrypt the information pursuant to the guidance.

The Interim Final Rule can be very difficult to follow, but this much is clear: it consistently defers judgment to the appropriate government agency – NIST. As the National Institute of Standards and Technology, it is their experts who set the benchmarks, define the procedures for implementation, and decide what is approved and what is not. At the end of the day, NIST is very black and white with their program for testing and validating encryption modules. Everything that appears on the public validation list is approved, and everything else is not. For the needs of federal agencies and the military, NIST goes so far as to say that any unvalidated encryption is considered the equal of plaintext. Essentially, without validation, it cannot be trusted, not even a little bit. The privacy and security of citizens’ health information has rightfully been deemed a priority and should be treated with the same respect.

If you are a Covered Entity, you need to do a complete audit of the encryption in use throughout your organization. Every software solution from every vendor being used by every authorized individual and every Business Associate should contain a certified encryption module that appears on NIST’s public validation list. If it’s not there, start asking direct questions of the vendor. Start with “Why isn’t it validated?” and don’t let them dodge the question. The validation should clearly display the vendor’s name and the operating environment that you are using. If it’s not clear that they are FIPS 140-2 validated and have a certificate number to reference, they probably aren’t validated and you are in that gray area, subject to interpretation by the HHS. Nobody wants to be in that gray area when a device is lost, so get FIPS validated!

Thank you for reading my four episode opus. If you have any questions or feedback, please email me at Walt@SafeLogic.com or ping me on Twitter @SafeLogic_Walt.


14 Jul 2016

Achieving Safe Harbor in Healthcare – Episode 3

Welcome back! If you need to catch up, please see Episode 1 and Episode 2.

Yesterday, we established that our interest in the HITECH Breach Notification for Unsecured Protected Health Information; Interim Final Rule was limited to part (a), which refers to the cryptographic protection of actively-accessed PHI. We discarded part (b) for our purposes, because it only covers devices that have been decommissioned. For your reference, here is the passage again:

Protected health information (PHI) is rendered unusable, unreadable, or indecipherable to unauthorized individuals if one or more of the following applies:

(a) Electronic PHI has been encrypted as specified in the HIPAA Security Rule by ‘‘the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without use of a confidential process or key’’ and such confidential process or key that might enable decryption has not been breached. To avoid a breach of the confidential process or key, these decryption tools should be stored on a device or at a location separate from the data they are used to encrypt or decrypt. The encryption processes identified below have been tested by the National Institute of Standards and Technology (NIST) and judged to meet this standard.

(i) Valid encryption processes for data at rest are consistent with NIST Special Publication 800–111, Guide to Storage Encryption Technologies for End User Devices.

(ii) Valid encryption processes for data in motion are those which comply, as appropriate, with NIST Special Publications 800–52, Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations; 800–77, Guide to IPsec VPNs; or 800–113, Guide to SSL VPNs, or others which are Federal Information Processing Standards (FIPS) 140–2 validated.

Moving forward! Within part (a), the Interim Final Rule refers to NIST for judgment and testing in two categories: data at rest and data in motion.

First, part (i), covering data at rest, refers to NIST Special Publication 800–111, Guide to Storage Encryption Technologies for End User Devices. Yes, NIST governs this category (spoiler alert – they govern them all!), so expect more cross-referencing. In this case, to another Special Publication:

Organizations should select and deploy the necessary controls based on FIPS 199’s categories for the potential impact of a security breach involving a particular system and NIST Special Publication 800-53’s recommendations for minimum management, operational, and technical security controls.

FIPS Publication 199 dates back to 2004, but is still widely used. It’s a relatively short document and is a reference guide provided by NIST to assist with control classifications. 800-111 explains further.

Organizations should select and deploy the necessary security controls based on existing guidelines. Federal Information Processing Standards (FIPS) 199 establishes three security categories – low, moderate, and high – based on the potential impact of a security breach involving a particular system. NIST SP 800-53 provides recommendations for minimum management, operational, and technical security controls for information systems based on the FIPS 199 impact categories. The recommendations in NIST SP 800-53 should be helpful to organizations in identifying controls that are needed to protect end user devices, which should be used in addition to the specific recommendations for storage encryption listed in this document.

So depending on the FIPS 199 classifications, you should consult NIST Special Publication 800-53 and act accordingly. This is even more confusing, because 800-53 is a catalog-style document used to map controls from a variety of other Special Publications, so it does not leave breadcrumbs to lead us directly from the Interim Final Rule to Safe Harbor. Luckily, SafeLogic’s whitepaper on HIPAA security controls covers this exact topic. Rest assured, NIST connects every encryption requirement back to the standard that they themselves certify – FIPS 140-2. Go ahead and download the whitepaper and review it at your leisure. Regardless of the FIPS 199 classification, SP 800-53 is satisfied by deploying FIPS 140-2 encryption. In the interest of space and time, I will not rehash all of the controls, but it’s all in the whitepaper.

Part (ii) is for data in motion and is subdivided into four categories as applicable: TLS, IPsec VPN, SSL VPN, or else the catch-all “others”, which goes straight to – yes, you guessed it – FIPS 140-2. Have I already mentioned that NIST wants everyone to use FIPS 140-2 validated encryption? It’s almost like NIST is promoting the use of their own standard…

These categories will be covered tomorrow, with excerpts from the referenced NIST Special Publications, in the final episode. Kudos if you’re still with me!


13 Jul 2016

Achieving Safe Harbor in Healthcare – Episode 2

Yesterday, I posted Episode 1, discussing some of the terminology and background for this discussion. Between the reputation hit and the financial penalties, we’ve established that achieving Safe Harbor should be a priority for every healthcare provider. Enduring a PHI breach is just no fun and it’s not worth the risk. Now that Business Associates are included in the liability of a Covered Entity (see the CHCS breach), it’s more important than ever to know whether appropriate encryption is being used every time your patients’ data is accessed, so let’s cut to the chase. You need to confirm that every device that is authorized to access PHI is encrypting the data in full compliance with the Safe Harbor rule.

How do you know if the deployed encryption qualifies for Safe Harbor?

The HITECH Breach Notification for Unsecured Protected Health Information; Interim Final Rule was issued in August 2009 by HHS, stating that even in the event of device hardware being lost or stolen, it is not considered a breach if the data is fully obscured from intruding eyes. Here is the complete passage:

Protected health information (PHI) is rendered unusable, unreadable, or indecipherable to unauthorized individuals if one or more of the following applies:

(a) Electronic PHI has been encrypted as specified in the HIPAA Security Rule by ‘‘the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without use of a confidential process or key’’ and such confidential process or key that might enable decryption has not been breached. To avoid a breach of the confidential process or key, these decryption tools should be stored on a device or at a location separate from the data they are used to encrypt or decrypt. The encryption processes identified below have been tested by the National Institute of Standards and Technology (NIST) and judged to meet this standard.

(i) Valid encryption processes for data at rest are consistent with NIST Special Publication 800–111, Guide to Storage Encryption Technologies for End User Devices.

(ii) Valid encryption processes for data in motion are those which comply, as appropriate, with NIST Special Publications 800–52, Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations; 800–77, Guide to IPsec VPNs; or 800–113, Guide to SSL VPNs, or others which are Federal Information Processing Standards (FIPS) 140–2 validated.

(b) The media on which the PHI is stored or recorded have been destroyed in one of the following ways:

(i) Paper, film, or other hard copy media have been shredded or destroyed such that the PHI cannot be read or otherwise cannot be reconstructed. Redaction is specifically excluded as a means of data destruction.

(ii) Electronic media have been cleared, purged, or destroyed consistent with NIST Special Publication 800–88, Guidelines for Media Sanitization, such that the PHI cannot be retrieved.

Let’s break this down.

Safe Harbor applies if:
(a) Proper encryption is used, or
(b) Data has been destroyed properly.

We aren’t talking about old PHI that was shredded on a discarded hard drive; we’re talking about active data in use – the stuff that is being accessed and leveraged by healthcare workers on the devices that are being lost and stolen. So we can ignore part (b) for the purposes of this discussion, which will continue tomorrow when I post Episode 3, focusing on this active encryption. We will look at the actual verbiage used in each of the NIST Special Publications referenced in part (a) and exactly how they require you, your facility, your competitors, your vendors, and your BAs to deploy encryption in each scenario.

Note that the Interim Final Rule even says that “The encryption processes identified below have been tested by the National Institute of Standards and Technology (NIST) and judged to meet this standard.” It doesn’t mention any other sanctioning body or agency that is authorized to assess whether the standards have been met. The buck stops with NIST.

Stay tuned!


12 Jul 2016

Achieving Safe Harbor in Healthcare – Episode 1

After I cross-posted Ray Potter’s HealthITSecurity.com editorial to one of my LinkedIn groups (HIPAA Survival Guide), it spawned a great conversation about the lack of clarity on the topic of encryption in healthcare IT. Ray’s article and the subsequent debate raised new questions and new confusion, even among the most qualified and experienced folks in the group. It became obvious that additional guidance would be helpful, so here goes. This is Episode 1 of 4, to be posted each day this week.

First, let’s do a rundown on a few of the acronyms and terminology crucial to understanding encryption in healthcare.

NIST is the National Institute of Standards and Technology. They are responsible for setting benchmarks and providing guidance for technological implementations. They also set minimum requirements for federal government agencies to follow. Because of this role, private sector companies use NIST guidance as a template even when it’s not legally mandated.

FIPS 140-2 is the Federal Information Processing Standard 140 (Version 2). It governs cryptography, was written by NIST, and is possibly their best-known standard. NIST even established a department dedicated to the validation of encryption modules that meet the standard. FIPS 140-1 refers to the first version of the standard, while FIPS 140-2 is the current version. FIPS 140-3 is not an official version, although many use it to refer to a theoretical future revision.

FIPS 140-2 Level 1 is the appropriate validation level for software. Levels 2-4 are concerned with increasing levels of physical security for hardware, including tamper resistance and seals that show evidence of attempted access. Cool stuff, but irrelevant for a software solution like those being deployed on healthcare providers’ laptops and mobile devices.

CMVP is the Cryptographic Module Validation Program. It is the NIST department (mentioned above) that handles the testing and certification of encryption modules from the private sector. Their little sister department, CAVP, is the Cryptographic Algorithm Validation Program, which provides the certification of individual algorithms as a building block towards CMVP’s FIPS 140-2 validation.

The CMVP maintains a public list of all modules that have been validated. If it’s not listed, it didn’t get tested, didn’t qualify, or at least hasn’t completed the process. (It used to take 12-18 months, before SafeLogic cut it down to 8 weeks.) Any technology vendor that has received FIPS 140-2 validation will proudly provide their certificate number upon request, and you should absolutely cross-reference it with the public list to confirm.
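If you want to automate that cross-reference, a sketch like the following works against a locally saved export of the CMVP public list. The column names here (“Certificate Number”, “Vendor”, “Module Name”) are my own assumption for illustration – check the headers of whatever export you actually pull from the NIST site.

```python
import csv
import tempfile

def find_certificate(csv_path: str, cert_number: str):
    """Return the matching row from a saved CMVP list export, or None.

    The field names are hypothetical; adjust them to match the file
    you actually download from NIST.
    """
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row.get("Certificate Number") == cert_number:
                return row
    return None

# Build a tiny sample file standing in for the real export.
fields = ["Certificate Number", "Vendor", "Module Name"]
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False,
                                 newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=fields)
    writer.writeheader()
    writer.writerow({"Certificate Number": "1234",
                     "Vendor": "Example Vendor, Inc.",
                     "Module Name": "Example Crypto Module"})
    sample_path = fh.name

claimed = find_certificate(sample_path, "1234")
bogus = find_certificate(sample_path, "9999")
```

A vendor’s claimed certificate number either turns up a row naming them and their module, or it doesn’t – and if it doesn’t, you have your first question for the vendor.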

“FIPS validated” is the term for a certified algorithm or module. It will have a unique certificate number on the public list. “FIPS compliant” means that it is ready to be tested, but has not completed the process, so it cannot be confirmed. Likewise, “FIPS ready” or “designed for FIPS” or “pre-validated for FIPS” are not actual, verifiable claims.

So why do you care? Well, if you’re working in health, a little thing called Safe Harbor provides a big motivation. This is the legal avoidance of notifying patients that their PHI (Protected Health Information) was exposed. Let’s say that a physician’s laptop is stolen. If you qualify under Safe Harbor, you’re all good; you just need to procure a new laptop for the poor doctor. If you don’t qualify, you’re embarking on a pretty terrible journey: letting people know that their data was lost and their identity is at risk of theft, and waiting for the penalty assessment from the Department of Health and Human Services (HHS) Office for Civil Rights (OCR). The most recent announcement was a $650k settlement with a nursing home. [Correction 7/15/16 – the nursing home had to report the breach, but the settlement was paid by the Business Associate directly responsible for the lost iPhone.] Not good. Even worse, this one was due to a Business Associate (BA) decision not to encrypt a device that had access to patient data. Organizations are no longer insulated from liability by Business Associate Agreements (BAAs), so the stakes are very high. You need to know exactly what your vendors, partners, and Business Associates are deploying, since they must be included in your risk assessments now.

Tomorrow, I will post Episode 2, diving into the HITECH Breach Notification for Unsecured Protected Health Information; Interim Final Rule itself. Stay tuned!


1 Jul 2016

How Unvalidated Encryption Threatens Patient Data Security

Originally posted in its entirety at HealthITSecurity.com.

Proper healthcare encryption methods can be greatly beneficial to organizations as they work to improve patient data security.

Technology vendors building solutions for deployment in healthcare love to talk about encryption and how it can help patient data security. It’s the silver bullet that allows physicians and patients alike to embrace new apps and tools. Symptoms may include increased confidence, decreased stress, and a hearty belief in the power of technology.

But what if that encryption was creating a false sense of security? What if the technology wasn’t providing a shield for ePHI at all?

Say goodbye to privacy, say goodbye to HIPAA compliance… and say hello to breach notifications and financial penalties.

Safe Harbor, as outlined by the HITECH Act, provides for the good faith determination of whether ePHI has indeed been exposed when a device with access has been stolen or misplaced.

It is based on the concept that strong encryption, properly deployed, would thwart even a determined attacker with physical access to an authorized device. Thus, even when a laptop or mobile device or external hard drive is lost, the data is considered to be intact and uncompromised inside the device if the data was properly encrypted.

This is a key distinction, and it is the difference between a breach notification (causing a significant hit to the brand and future revenues as well as serious financial penalties) and Safe Harbor (causing a large exhale of relief and a flurry of high-fives).

Click to Tweet: #FIPS140 #encryption: the difference between breach notification & Safe Harbor #HIPAA #Healthcare #Privacy

Here’s the rub – how is strong encryption differentiated from weak encryption for the purposes of HIPAA compliance?

Keep reading at HealthITSecurity.com!
