Blog | SafeLogic

24 Aug 2016

How does the SWEET32 Issue (CVE-2016-2183) affect SafeLogic’s FIPS Modules?

Executive Summary:

A newly demonstrated attack, SWEET32: Birthday attacks on 64-bit block ciphers in TLS and OpenVPN, shows that a network attacker monitoring an HTTPS session secured by Triple-DES can recover sensitive information. The attack was performed in a lab setting in less than two days by capturing 785 GB of traffic over a single HTTPS connection.

Sounds scary at first.

The good news: No action is required by SafeLogic customers for the SWEET32 issue.

 

So My FIPS 140-2 Module Is Not Broken?

Correct. Triple-DES [1] is a FIPS Approved algorithm and is expected to remain one for the foreseeable future. However, Triple-DES uses a 64-bit block size, which is what makes it vulnerable to this attack. Cryptographers have long been aware of this type of vulnerability in ciphers designed with small block sizes.

The AES symmetric cipher (also a FIPS Approved algorithm) is not vulnerable to this attack.
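To see why block size matters, recall the birthday bound: after roughly 2^(n/2) blocks are encrypted under a single key, an n-bit block cipher is likely to emit two identical ciphertext blocks, and in CBC mode each collision leaks the XOR of two plaintext blocks. The short sketch below is ours (not part of the SWEET32 attack code) and simply computes that collision probability for Triple-DES versus AES:

```python
import math

def collision_probability(data_bytes: int, block_bits: int) -> float:
    """Birthday-bound approximation: probability that at least two
    ciphertext blocks collide when data_bytes are encrypted under
    one key with a block cipher whose block is block_bits wide."""
    blocks = data_bytes // (block_bits // 8)   # number of blocks encrypted
    space = 2 ** block_bits                    # size of the block space
    return 1.0 - math.exp(-blocks * (blocks - 1) / (2 * space))

GB = 10 ** 9
# Triple-DES (64-bit blocks): the ~785 GB captured in the SWEET32 demo
# is about 2^36.5 blocks, far past the 2^32 birthday bound.
print(collision_probability(785 * GB, 64))    # -> 1.0 (collisions certain)
# AES (128-bit blocks): the same traffic volume is nowhere near the 2^64 bound.
print(collision_probability(785 * GB, 128))   # -> ~0.0
```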

[1] Two-key Triple-DES may only be used for decryption purposes in the FIPS mode of operation. Three-key Triple-DES may be used for encryption and decryption purposes in the FIPS mode of operation.

What Might NIST Do?

Since a considerable amount of ciphertext must be captured to make this attack possible, it is a low-severity concern for nearly every use of TLS. We anticipate that the CMVP (NIST/CSE) may publish future guidance limiting the amount of plaintext encrypted under a single Triple-DES key, but we do not expect the CMVP to remove Triple-DES from the list of FIPS Approved algorithms due to this reported attack.

 

Should I Turn Off Triple-DES to be Safe?

That depends on your company’s security policy for addressing vulnerabilities. The SWEET32 issue does not make Triple-DES itself any less secure than it was yesterday, and the method of attack is not new. You may need to continue supporting Triple-DES in order to allow TLS connections with clients that are not able to negotiate the AES cipher. (Note that good security practice is to negotiate AES at a higher priority than Triple-DES, as illustrated below.) In short, there is no need to turn off the use of Triple-DES in your application.
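To illustrate that ordering, here is a minimal sketch using Python’s standard ssl module (which wraps OpenSSL). The cipher string is our own illustrative assumption, not a SafeLogic-recommended configuration; it simply ranks AES suites ahead of a Triple-DES fallback.

```python
import ssl

# Minimal sketch: a server-side TLS context that prefers AES-GCM,
# then other AES suites, and offers Triple-DES only as a last resort
# for peers that cannot negotiate AES.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.set_ciphers("ECDHE+AESGCM:ECDHE+AES:AES:3DES")
# context.load_cert_chain("server.crt", "server.key")  # hypothetical paths
```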

 

What If I Still Have Questions?

Please contact me. I am happy to be a resource to you.


18 Aug 2016

Encryption Concerns in the UK


This is a guest post from Amazing Support’s David Share as a special contribution to SafeLogic.

In the early days of 2015, the British Prime Minister at the time, David Cameron, put forth an idea to ban all forms of software encryption in the United Kingdom (UK), especially encryption embedded in messaging applications. This proposal to ban encryption followed Paris’ Charlie Hebdo massacre, in which the attackers were thought to have been communicating with each other using apps similar to WhatsApp and iMessage. Were this ban to be realized, a backdoor would have to be created into any and all apps, whether web or mobile-based, that utilise end-to-end encryption.

Encryption has become a battleground as of late. Government bodies, and those who fear that apps are being utilised for the propagation of terrorism, seem firmly entrenched in the idea of creating backdoors in these apps. Technology companies like Apple, and those trying to preserve what they perceive as the last vestiges of civil rights and privacy, are fighting to maintain encryption’s independence. Needless to say, both sides have their pros and cons.

Creating a backdoor, according to proponents like Cameron and current British Prime Minister Theresa May, would ensure that law enforcement and government agencies are able to monitor and act upon those who would cause harm to the UK. Using the Charlie Hebdo massacre as an example of how a ban on encryption could have helped, the argument does make a certain sense.

However, tech companies and cryptography experts fear that the creation of a backdoor does not ensure that it could only be used by the “good guys”. To them, a backdoor is a legitimate vulnerability that could be equally exploited by foreign spies and corrupt police, among others. Businesses are concerned that it may portend the end of ecommerce as we currently know it, since almost all credit card transactions online are done through encrypted channels. If that encryption had a backdoor, it could create a sense of distrust among the consumer base and scare off business. Finally, there is the matter of privacy. If the encryption walls did fall by government command, users would be left terribly exposed and would have to worry endlessly that what they say online could be misconstrued as dangerous or, worse, an act of terror.

UK Prime Minister Theresa May

The proposal has been legitimised under Theresa May’s leadership and is known as the Investigatory Powers Bill (IPB). According to May, the bill does not state that tech companies are forced to create backdoors in their encryption. However, it does require companies to provide decrypted messages upon the presentation of a warrant. This is a problem in and of itself, as messages from apps that utilise end-to-end encryption cannot be accessed by anyone without a proper password or code, and that includes the software publisher. So to comply with the IPB and present a decrypted message, some sort of backdoor will be needed. Through the use of sly wording, May and the IPB are effectively forcing tech companies to create backdoors after all, lest they face a potential ban from operating within the confines of the UK.

Already known as the Snooper’s Charter, the IPB will test the limits to which tech companies and citizens are willing to relinquish a portion of their privacy. If the IPB ever becomes law, the government or any law enforcement agency would simply need to find cause to issue a warrant to gain access to any citizen’s message history. May and her supporters insist that they will only do this to people who may pose a risk to the safety of the nation, but who is deemed a threat can take on many meanings. The opponents of the IPB are afraid that this could and would lead to breaches in privacy laws, even going so far as to say that it would go against portions of the European Convention on Human Rights. Those challenging the bill are asking Britons whether they want to join the ranks of countries such as China and Russia, which closely monitor and even dictate what sites can be browsed, what data can be accessed and what messages can be sent.

It seems that May and the current government are selling the IPB under the guise of improving national security. However, they have failed to answer opponents’ concerns about the negative effects of the bill – the potential invasion of privacy and the creation of a new vector of attack for malicious hackers. May says that the bill does not infringe on the rights and privacies of the citizens but experts on the matter believe otherwise. More frighteningly, May and her party have yet to come up with a rational solution to the security problems that the creation of a backdoor poses.

If Britons want to stand up and make their voices heard, they should do it sooner rather than later. The bill has already made it to the House of Lords and passed its second reading, and is now headed to the committee stage on the 5th of September. As it is, and without strong opposition from within the House or from the people, the IPB will almost surely be passed and become law.

2 Aug 2016

Why Should We Get Our Own FIPS Certificate?

After our big announcement with OpenSSL last week, we’ve had some interesting conversations with possible future SafeLogic clients. Several have asked pointed questions, like “Why should we get our own FIPS certificate, if OpenSSL will get one after all?” and “Why buy the cow when we can get the milk for free with open source?”

I love these questions. They tell me that our potential partners have a healthy dose of skepticism and really understand the need to extract value from their capital expenditures.

In a nutshell, the answer is: because your customers also have a healthy dose of skepticism and need to extract maximum value from their expenditures!

Let’s start at the beginning. Building early versions of your product with open source encryption, whether it’s OpenSSL or Bouncy Castle, is a smart move. Open source crypto provides functional, widely compatible, peer-reviewed cryptography and leaves your options open for future replacements. Locking into a proprietary module early in the development phase has proven to be problematic when it requires unique architecture. (RSA BSAFE is now defunct, of course.)

The problems begin when you leverage open source for FIPS 140-2. In order to properly deploy an open source FIPS module within conformance standards, you need to follow the exact recipe. That means following the 221-page User Guide for the OpenSSL FIPS Object Module v2.0, for example. That’s a lot of work, only to be questioned by your own prospective customers. “Where is your FIPS certificate? Don’t you have one with your name on it?”

Luckily, that’s exactly what SafeLogic provides. You’re not dealing with a DIY effort with directions from the worst Ikea bookshelf you’ve ever built. You get strong technical support from the SafeLogic team, standing behind our CryptoComply modules. (No, we don’t just send you a massive PDF of directions.) And that elusive FIPS 140-2 certificate? RapidCert delivers it in just 8 weeks, explicitly displaying your company name and operating environments. “Just trust me” doesn’t belong in your salespeople’s vocabulary.

So when you’re selling to the federal government, financial institutions, healthcare providers, or other regulated industries, expect your customers to be skeptical of your open source usage. You also need to be cognizant of the competitive landscape. You do not want to be cutting off your nose to spite your face, saving a few bucks by skating on FIPS validation, only to lose deals to rivals carrying certificates. Invest in your product and win those head-to-head opportunities! We even have a Top 10 list of reasons to choose SafeLogic over open source.

The comic below is a humorous dramatization of a sales call going wrong, but your target customers (in the green cube) really just want confirmation that your company carries a FIPS 140-2 validation. No tricks, no technicalities, just a certificate on the NIST website. Real, valid, honest-to-goodness, easy to cross-reference and confirm.

Open source FIPS validations are important for the community to have. It’s a good starting point, and for some small companies it’s the best that they can access. Maybe it’s enough for you right now. But customers can’t nitpick if you have your own certificate, and that’s where SafeLogic knocks it out of the park. You won’t find an easier or faster way to add that FIPS 140-2 validation to your salespeople’s arsenal. We’ll be ready when you are.

[Comic: Why Should We Get Our Own FIPS Certificate? Click to enlarge.]


25 Jul 2016

#Winning

This morning, I had a nice surprise waiting in my inbox. SafeLogic won a Golden Bridge Award!

Awards have never been a priority for us, in large part due to our positioning… and the fact that we are focused on revenue and customers, not our own ego. We are the vendor to the vendors, a key component but rarely the feature. Award nominations always ask about end users, such as in the Fortune 500. “Symantec uses SafeLogic encryption. BlackBerry uses SafeLogic encryption,” I usually respond. “We have a great roster of customers, but it’s ultimately their end users, not ours.” Then we inevitably get sorted to the back of the list. I never worried about it because yes, I know, tech vendor awards are often only as valuable as the paper that they’re printed on, and we knew that we didn’t need to conform to a traditional category to be successful.

This time was different. The Golden Bridge Award team got us! They understood the importance of our role, the innovation behind our products, and recognized that while Joe Schmo wouldn’t go download a copy of our software directly, it’s pretty damn likely that Joe is already using it, and that merits recognition.

So with great pride, the SafeLogic team announces that we have won Silver in the category of Security Software Startups!
It feels good to be an award-winning company.

Click to Tweet: #Crypto startup @SafeLogic pulls down a trophy at #GoldenWorldAwards! http://bit.ly/SLaward725

Kudos also to our customer Securonix on winning a variety of awards, including a Grand Trophy, and Tanuj Gulati, their Co-founder & CTO, for winning a Gold for Executive of the Year in Security Services and a Silver for Most Innovative Executive of the Year. Well done!

Now with all this talk of Golds and Silvers, I’m ready for the Olympics to open in Rio. U-S-A! U-S-A!


19 Jul 2016

OpenSSL 1.1’s Big, Bright, FIPS Validated Future

The OpenSSL project posted to their official blog today with some major news – OpenSSL 1.1 will be getting a FIPS 140-2 validated module! It’s a huge deal and the SafeLogic team is proud to be leading the effort.

In September, OpenSSL’s Steve Marquess explained in a blog post (FIPS 140-2: It’s Not Dead, It’s Resting) why the ubiquitous open source encryption provider would be hard-pressed to bring FIPS mode to the 1.1 release. With changes over the last few years at the CMVP, the viability of legacy OpenSSL FIPS module validations has been repeatedly threatened, and the crypto community simply cannot accept the possibility of being without a certificate. An open source module with a communal certificate is a crucial component that allows many start-up companies to test the waters in federal agencies and regulated industries before investing in a validation of their own. Likewise, many major corporations have relied upon OpenSSL FIPS modules over the years as a building block for extensive engineering efforts. Without this commitment, many would have been caught in a dilemma: use the FIPS 140 validated open source module, compatible only with a rapidly aging, often-maligned older version of OpenSSL, or the new, sleek, secure OpenSSL 1.1, but without a FIPS validated module at its heart.

The choice will now be an obvious one, and the community can safely remove their heads from the sand and begin planning their future roadmap around a fully validated FIPS module for OpenSSL 1.1 and beyond.

As the OpenSSL team announced today, SafeLogic will sponsor the engineering work on the FIPS module and we will be handling the validation effort ourselves. (What, you expected us to hire an outside consultant? Surely you jest.) Acumen will be the testing laboratory, as they have been for many of our RapidCerts, and together we have high hopes for a smooth and relatively painless process.

Click to Tweet: Have you heard? @SafeLogic is leading #FIPS140 effort for new #OpenSSL #crypto module! http://www.SafeLogic.com/openssl-1-1-future/

One key element in the OpenSSL blog post that will surprise some folks:

“This is also an all-or-nothing proposition; no one – including SafeLogic – gets to use the new FIPS module until and if a new open source based validation is available for everyone.”

Why would we agree to that? For that matter, why would we take on this project at all, while other “leaders” in the community relished the idea of a world without validated open source options?

At SafeLogic, we are true believers in the importance of open source, in encryption and elsewhere. Past versions of OpenSSL have provided a basis for SafeLogic’s CryptoComply modules, so you may ask why we’re doing this – why we’re not just building it ourselves and letting the open source community fend for themselves.

Well, we thought about doing just that, but we decided against it for both altruistic and strategic reasons. We believe that SafeLogic has the chance to help not only the OpenSSL team, but the tech community at large. We realize that product vendors, government entities, education institutions, and other organizations need validated open source modules, and not all of them can or will implement SafeLogic solutions.

As a team, we believe that a rising tide lifts all boats, and we are putting that philosophy into action. The availability of an OpenSSL 1.1 FIPS module will provide greater security in regulated verticals and more opportunities for everyone working in this community. SafeLogic will be at the epicenter of the effort, of course, and I would be remiss if I didn’t mention that our success in this endeavor will push SafeLogic even further forward as the true leader in providing validated crypto!

Our central role in the effort will ensure that nobody has more expertise or knowledge in the design, operation and validation of OpenSSL 1.1 modules than SafeLogic, and future versions of CryptoComply will be the best yet. Trust me, our customers will reap the benefits. We are happy to put in the sweat equity on the open source communal validation, knowing that when product teams need a FIPS 140-2 certificate in their own name, custom work, integration assistance, comprehensive support or anything else related to OpenSSL 1.1 and FIPS 140-2, SafeLogic will be the obvious choice.

We’re very excited to work with Steve, the OpenSSL team, and Acumen, as we join forces to lead the OpenSSL 1.1 FIPS module through FIPS 140-2 validation. Stay tuned for updates!

For more information about the project, how to contribute, the future roadmap, or media inquiries, please contact us at OpenSSL@SafeLogic.com.


15 Jul 2016

Achieving Safe Harbor in Healthcare – Episode 4

Thanks for returning for the final installment in this blog series! If you need to catch up, please see Episode 1, Episode 2, and Episode 3, posted over the last three days.

Here in Episode 4, we are tackling HITECH Breach Notification for Unsecured Protected Health Information; Interim Final Rule part (a)(ii), which covers data in motion. This will be the longest section, so grab a cup of coffee and let’s rock! For your reference, here is the full passage again:

Protected health information (PHI) is rendered unusable, unreadable, or indecipherable to unauthorized individuals if one or more of the following applies:

(a) Electronic PHI has been encrypted as specified in the HIPAA Security Rule by ‘‘the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without use of a confidential process or key’’ and such confidential process or key that might enable decryption has not been breached. To avoid a breach of the confidential process or key, these decryption tools should be stored on a device or at a location separate from the data they are used to encrypt or decrypt. The encryption processes identified below have been tested by the National Institute of Standards and Technology (NIST) and judged to meet this standard.

(i) Valid encryption processes for data at rest are consistent with NIST Special Publication 800–111, Guide to Storage Encryption Technologies for End User Devices.

(ii) Valid encryption processes for data in motion are those which comply, as appropriate, with NIST Special Publications 800–52, Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations; 800–77, Guide to IPsec VPNs; or 800–113, Guide to SSL VPNs, or others which are Federal Information Processing Standards (FIPS) 140–2 validated.

Let’s go in order. First, for Transport Layer Security (TLS), we are directed to another NIST Special Publication, this time 800–52, Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations. Here’s a quote, directly from the document’s Minimum Requirements section:

The cryptographic module used by the server shall be a FIPS 140-validated cryptographic module. All cryptographic algorithms that are included in the configured cipher suites shall be within the scope of the validation, as well as the random number generator.

That’s pretty straightforward. NIST wants you to use NIST validated encryption. Makes sense; we expected that. Okay, we’re off to a great start!

Let’s look at IPsec VPNs next, governed by NIST Special Publication 800–77, Guide to IPsec VPNs. Here’s an excerpt from the Executive Summary. (Italics below are mine.)

NIST’s requirements and recommendations for the configuration of IPsec VPNs are:

  • If any of the information that will traverse a VPN should not be seen by non-VPN users, then the VPN must provide confidentiality protection (encryption) for that information.
  • A VPN must use a FIPS-approved encryption algorithm. AES-CBC (AES in Cipher Block Chaining mode) with a 128-bit key is highly recommended; Triple DES (3DES-CBC) is also acceptable. The Data Encryption Standard (DES) is also an encryption algorithm; since it has been successfully attacked, it should not be used.
  • A VPN must always provide integrity protection.
  • A VPN must use a FIPS-approved integrity protection algorithm. HMAC-SHA-1 is highly recommended. HMAC-MD5 also provides integrity protection, but it is not a FIPS-approved algorithm.

That’s also pretty blunt, laid out right in the first few pages. The only discrepancy from previous guidance is that they say “FIPS-approved” instead of the usual “FIPS validated”. It’s still a clear reference to the NIST program and to the only resources available to confirm approval – the public validation lists. Note that the CAVP is responsible for validating algorithms and maintains a public list for each one. That would be enough to satisfy the minimum requirements here, although deploying FIPS validated algorithms alone is not enough to claim full conformance to the standard. So for comprehensive coverage, a FIPS validated module is still required.
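As a concrete illustration of the recommended algorithm (not of a validated module), here is a minimal sketch of AES-CBC with a 128-bit key using the third-party cryptography Python package. The library choice and the single-block sample message are our assumptions for the example; a deployment seeking FIPS 140-2 conformance would need these operations performed by a validated module.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)  # 128-bit AES key, per the SP 800-77 recommendation
iv = os.urandom(16)   # fresh, unpredictable IV for every message

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
plaintext = b"exactly 16 bytes"  # CBC needs full blocks; real code pads
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == plaintext
```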

Let’s go to the last alternative category, SSL VPNs, which refers to NIST Special Publication 800–113, Guide to SSL VPNs. This one is probably the most interesting (and confusing). SP 800-113 clearly states in several places that Federal agencies require FIPS 140-2 encryption, but never explicitly extends the requirement beyond those government offices. It gets complicated because the Breach Notification Interim Final Rule clearly refers to this document… which dedicates probably 80% of its content to discussing how to meet the FIPS 140-2 standard. So the bulk of the document concerns the validation, even though it does not call it a mandate. It’s like handing someone a copy of Rosetta Stone but not actually asking them to learn the language. It’s a strong implication, so let’s chalk this one up to ‘strongly recommended’ even if not ‘required’.

Anything that doesn’t fall into one of those categories (TLS, IPsec VPN or SSL VPN) lands squarely in the “others which are Federal Information Processing Standards (FIPS) 140–2 validated” bucket.

So if you’re keeping score at home, part (a) lays out five scenarios:

  • Data at rest, referencing one NIST publication that points to two others, ultimately mapping all security controls to FIPS 140-2 validated encryption.
  • Data in motion with a TLS implementation, which is mandated to include FIPS 140-2 validated encryption.
  • Data in motion with an IPsec VPN, which must be handled with a FIPS-approved encryption algorithm and a FIPS-approved integrity protection algorithm.
  • Data in motion with an SSL VPN, which was not explicitly required to be FIPS validated, but the referenced publication is about how to ensure that it is FIPS validated, so deviate at your own risk.
  • All other active data, which must be encrypted with a FIPS 140-2 validated module.

All scenarios lead back to the same conclusion. HIPAA and HITECH are both pieces of federal legislation and enforced by a federal agency, which refers to another federal agency for judgment on encryption benchmarks. FIPS 140-2 is the only certification that unequivocally meets the demands of these government bodies.


Please use these links and excerpts to complete your own research, but be sure to ask yourself, “Do I really want to risk liability by using anything less than FIPS 140-2 validated encryption?”

The Breach Notification for Unsecured Protected Health Information; Interim Final Rule itself makes a strong point – you can choose to skip encryption and still potentially comply with the HIPAA Security Rule. But if you are hoping to avoid breach notification and penalties, you will be out of luck. Here is another excerpt from the Interim Final Rule, explaining the disconnect and solution. (Italics below are mine.)

Under 45 CFR 164.312(a)(2)(iv) and (e)(2)(ii), a covered entity must consider implementing encryption as a method for safeguarding electronic protected health information; however, because these are addressable implementation specifications, a covered entity may be in compliance with the Security Rule even if it reasonably decides not to encrypt electronic protected health information and instead uses a comparable method to safeguard the information.

Therefore, if a covered entity chooses to encrypt protected health information to comply with the Security Rule, does so pursuant to this guidance, and subsequently discovers a breach of that encrypted information, the covered entity will not be required to provide breach notification because the information is not considered ‘‘unsecured protected health information’’ as it has been rendered unusable, unreadable, or indecipherable to unauthorized individuals.

On the other hand, if a covered entity has decided to use a method other than encryption or an encryption algorithm that is not specified in this guidance to safeguard protected health information, then although that covered entity may be in compliance with the Security Rule, following a breach of this information, the covered entity would have to provide breach notification to affected individuals. For example, a covered entity that has a large database of protected health information may choose, based on their risk assessment under the Security Rule, to rely on firewalls and other access controls to make the information inaccessible, as opposed to encrypting the information. While the Security Rule permits the use of firewalls and access controls as reasonable and appropriate safeguards, a covered entity that seeks to ensure breach notification is not required in the event of a breach of the information in the database would need to encrypt the information pursuant to the guidance.

The Interim Final Rule can be very difficult to follow, but this much is clear: it consistently defers judgment to the appropriate government agency – NIST. As the National Institute of Standards and Technology, its experts set the benchmarks and implementation procedures and decide what is approved and what is not. At the end of the day, NIST is very black and white with its program for testing and validating encryption modules. Everything that appears on the public validation list is approved, and everything else is not. For the needs of federal agencies and the military, NIST goes so far as to say that any unvalidated encryption is considered the equal of plaintext. Essentially, without validation, it cannot be trusted, not even a little bit. The privacy and security of citizens’ health information has rightfully been deemed a priority and should be treated with the same respect.

If you are a Covered Entity, you need to do a complete audit of the encryption in use throughout your organization. Every software solution from every vendor being used by every authorized individual and every Business Associate should contain a certified encryption module that appears on NIST’s public validation list. If it’s not there, start asking direct questions of the vendor. Start with “Why isn’t it validated?” and don’t let them dodge the question. The validation should clearly display the vendor’s name and the operating environment that you are using. If it’s not clear that they are FIPS 140-2 validated and have a certificate number to reference, they probably aren’t validated and you are in that gray area, subject to interpretation by HHS. Nobody wants to be in that gray area when a device is lost, so get FIPS validated!

Thank you for reading my four-episode opus. If you have any questions or feedback, please email me at Walt@SafeLogic.com or ping me on Twitter @SafeLogic_Walt.


14 Jul 2016

Achieving Safe Harbor in Healthcare – Episode 3

Welcome back! If you need to catch up, please see Episode 1 and Episode 2.

Yesterday, we established that our interest in the HITECH Breach Notification for Unsecured Protected Health Information; Interim Final Rule was limited to part (a), which refers to the cryptographic protection of actively accessed PHI. We discarded part (b) for our purposes, because it only covers devices that have been decommissioned. For your reference, here is the passage again:

Protected health information (PHI) is rendered unusable, unreadable, or indecipherable to unauthorized individuals if one or more of the following applies:

(a) Electronic PHI has been encrypted as specified in the HIPAA Security Rule by ‘‘the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without use of a confidential process or key’’ and such confidential process or key that might enable decryption has not been breached. To avoid a breach of the confidential process or key, these decryption tools should be stored on a device or at a location separate from the data they are used to encrypt or decrypt. The encryption processes identified below have been tested by the National Institute of Standards and Technology (NIST) and judged to meet this standard.

(i) Valid encryption processes for data at rest are consistent with NIST Special Publication 800–111, Guide to Storage Encryption Technologies for End User Devices.

(ii) Valid encryption processes for data in motion are those which comply, as appropriate, with NIST Special Publications 800–52, Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations; 800–77, Guide to IPsec VPNs; or 800–113, Guide to SSL VPNs, or others which are Federal Information Processing Standards (FIPS) 140–2 validated.

Moving forward! Within part (a), the Interim Final Rule refers to NIST for judgment and testing in two categories: data at rest and data in motion.

First, part (i), for data at rest, refers to NIST Special Publication 800–111, Guide to Storage Encryption Technologies for End User Devices. Yes, NIST governs this category (spoiler alert – they govern them all!), so expect more cross-referencing – in this case, to another Special Publication.

Organizations should select and deploy the necessary controls based on FIPS 199’s categories for the potential impact of a security breach involving a particular system and NIST Special Publication 800-53’s recommendations for minimum management, operational, and technical security controls.

FIPS Publication 199 dates back to 2004 but is still widely used; it’s a relatively short reference guide provided by NIST to assist with control classifications. 800-111 explains further.

Organizations should select and deploy the necessary security controls based on existing guidelines. Federal Information Processing Standards (FIPS) 199 establishes three security categories – low, moderate, and high – based on the potential impact of a security breach involving a particular system. NIST SP 800-53 provides recommendations for minimum management, operational, and technical security controls for information systems based on the FIPS 199 impact categories. The recommendations in NIST SP 800-53 should be helpful to organizations in identifying controls that are needed to protect end user devices, which should be used in addition to the specific recommendations for storage encryption listed in this document.

So depending on the FIPS 199 classifications, you should consult NIST Special Publication 800-53 and act accordingly. This is even more confusing, because 800-53 is a catalog-style document used to map controls from a variety of other Special Publications, so it does not leave breadcrumbs to lead us directly from the Interim Final Rule to Safe Harbor. Luckily, SafeLogic’s whitepaper on HIPAA security controls covers this exact topic. Rest assured, NIST connects every encryption requirement back to the standard it certifies – FIPS 140-2. Go ahead and download the whitepaper and review it at your leisure. Regardless of the FIPS 199 classification, SP 800-53 is satisfied by deploying FIPS 140-2 validated encryption. In the interest of space and time, I will not rehash all of the controls, but it’s all in the whitepaper.

Part (ii) is for data in motion and is subdivided into four categories as applicable: TLS, IPsec VPN, SSL VPN, or else the catch-all “others”, which goes straight to – yes, you guessed it – FIPS 140-2. Have I already mentioned that NIST wants everyone to use FIPS 140-2 validated encryption? It’s almost like NIST is promoting the use of their own standard…

These categories will be covered tomorrow, with excerpts from the referenced NIST Special Publications, in the final episode. Kudos if you’re still with me!


13 Jul 2016

Achieving Safe Harbor in Healthcare – Episode 2

Yesterday, I posted Episode 1, discussing some of the terminology and background for this discussion. Between the reputation hit and the financial penalties, we’ve established that achieving Safe Harbor should be a priority for every healthcare provider. Enduring a PHI breach is just no fun and it’s not worth the risk. Now that Business Associates are included in the liability of a Covered Entity (see the CHCS breach), it’s more important than ever to know whether appropriate encryption is always being used when your patients’ data is accessed, so let’s cut to the chase. You need to confirm that every device that is authorized to access PHI is encrypting the data in full compliance with the Safe Harbor rule.

How do you know if the deployed encryption qualifies for Safe Harbor?

The HITECH Breach Notification for Unsecured Protected Health Information; Interim Final Rule was issued by HHS in August 2009, stating that even when device hardware is lost or stolen, the incident is not considered a breach if the data is fully obscured from intruding eyes. Here is the complete passage:

Protected health information (PHI) is rendered unusable, unreadable, or indecipherable to unauthorized individuals if one or more of the following applies:

(a) Electronic PHI has been encrypted as specified in the HIPAA Security Rule by ‘‘the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without use of a confidential process or key’’ and such confidential process or key that might enable decryption has not been breached. To avoid a breach of the confidential process or key, these decryption tools should be stored on a device or at a location separate from the data they are used to encrypt or decrypt. The encryption processes identified below have been tested by the National Institute of Standards and Technology (NIST) and judged to meet this standard.

(i) Valid encryption processes for data at rest are consistent with NIST Special Publication 800–111, Guide to Storage Encryption Technologies for End User Devices.

(ii) Valid encryption processes for data in motion are those which comply, as appropriate, with NIST Special Publications 800–52, Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations; 800–77, Guide to IPsec VPNs; or 800–113, Guide to SSL VPNs, or others which are Federal Information Processing Standards (FIPS) 140–2 validated.

(b) The media on which the PHI is stored or recorded have been destroyed in one of the following ways:

(i) Paper, film, or other hard copy media have been shredded or destroyed such that the PHI cannot be read or otherwise cannot be reconstructed. Redaction is specifically excluded as a means of data destruction.

(ii) Electronic media have been cleared, purged, or destroyed consistent with NIST Special Publication 800–88, Guidelines for Media Sanitization, such that the PHI cannot be retrieved.

Let’s break this down.

Safe Harbor applies if:
(a) Proper encryption is used, or
(b) Data has been destroyed properly.

We aren’t talking about old PHI that was shredded on a discarded hard drive; we’re talking about active data in use – the stuff that is being actively accessed and leveraged by healthcare workers on the devices that are being lost and stolen. So we can ignore part (b) for the purposes of this discussion, which will continue tomorrow when I post Episode 3, focusing on this active encryption. We will look at the actual verbiage used in each of the NIST Special Publications referenced in part (a) and exactly how they require you, your facility, your competitors, your vendors, and your BAs to deploy encryption for each scenario.

Note that the Interim Final Rule even says that “The encryption processes identified below have been tested by the National Institute of Standards and Technology (NIST) and judged to meet this standard.” It doesn’t mention any other sanctioning body or agency that is authorized to assess whether the standards have been met. The buck stops with NIST.

Stay tuned!


12 Jul 2016

Achieving Safe Harbor in Healthcare – Episode 1

After I cross-posted Ray Potter’s HealthITSecurity.com editorial to one of my LinkedIn groups (HIPAA Survival Guide), it spawned a great conversation about the lack of clarity on the topic of encryption in healthcare IT. Ray’s article and the subsequent debate raised new questions and new confusion, even among the most qualified and experienced folks in the group. It became obvious that additional guidance would be helpful, so here goes. This is Episode 1 of 4, to be posted each day this week.

First, let’s do a rundown on a few of the acronyms and terminology crucial to understanding encryption in healthcare.

NIST is the National Institute of Standards and Technology. They are responsible for setting benchmarks and providing guidance for technological implementations. They also set minimum requirements for federal government agencies to follow. Because of this role, private sector companies use NIST guidance as a template even when it’s not legally mandated.

FIPS 140-2 is the Federal Information Processing Standard 140 (Version 2). It governs cryptography, was written by NIST and is possibly their most well-known standard. NIST even established a department dedicated to the validation of encryption modules that meet the standard. FIPS 140-1 refers to the first version of the standard, while FIPS 140-2 is the current version. FIPS 140-3 is not an official version, although many use it to refer to a theoretical future revision.

FIPS 140-2 Level 1 is the appropriate validation level for software. Levels 2-4 are concerned with increasing levels of physical security for hardware, including tamper resistance and seals that show evidence of attempted access. Cool stuff, but irrelevant for a software solution like those being deployed on healthcare providers’ laptops and mobile devices.

CMVP is the Cryptographic Module Validation Program. It is the NIST department (mentioned above) that handles the testing and certification of encryption modules from the private sector. Their little sister department, CAVP, is the Cryptographic Algorithm Validation Program, which provides the certification of individual algorithms as a building block towards CMVP’s FIPS 140-2 validation.

The CMVP maintains a public list of all modules that have been validated. If it’s not listed, it didn’t get tested, didn’t qualify, or at least hasn’t completed the process. (It used to take 12-18 months, before SafeLogic cut it down to 8 weeks.) Any technology vendor that has received FIPS 140-2 validation will proudly provide their certificate number upon request, and you should absolutely cross-reference it with the public list to confirm.

“FIPS validated” is the term for a certified algorithm or module. It will have a unique certificate number on the public list. “FIPS compliant” means that it is ready to be tested, but has not completed the process, so it cannot be confirmed. Likewise, “FIPS ready” or “designed for FIPS” or “pre-validated for FIPS” are not actual, verifiable claims.

So why do you care? Well, if you’re working in health, a little thing called Safe Harbor provides a big motivation. This is the legal avoidance of notifying patients that their PHI (Protected Health Information) was exposed. Let’s say that a physician’s laptop is stolen. If you qualify under Safe Harbor, you’re all good; you just need to procure a new laptop for the poor doctor. If you don’t qualify, you’re embarking on a pretty terrible journey: letting people know that their data was lost and their identity is at risk of theft, and waiting for the penalty assessment from the Department of Health and Human Services (HHS) Office for Civil Rights (OCR). The most recent announcement was a $650k settlement with a nursing home. [Correction 7/15/16 – the nursing home had to report the breach, but the settlement was paid by the Business Associate directly responsible for the lost iPhone.] Not good. Even worse, this one was due to a Business Associate (BA) decision not to encrypt a device that had access to patient data. Organizations are no longer insulated from liability by Business Associate Agreements (BAAs), so the stakes are very high. You need to know exactly what your vendors, partners, and BAs are deploying, since they must be included in your risk assessments now.

Tomorrow, I will post Episode 2, diving into the HITECH Breach Notification for Unsecured Protected Health Information; Interim Final Rule itself. Stay tuned!


1 Jul 2016

How Unvalidated Encryption Threatens Patient Data Security

Originally posted in its entirety at HealthITSecurity.com.

Proper healthcare encryption methods can be greatly beneficial to organizations as they work to improve patient data security.

Technology vendors building solutions for deployment in healthcare love to talk about encryption and how it can help patient data security. It’s the silver bullet that allows physicians and patients alike to embrace new apps and tools. Symptoms may include increased confidence, decreased stress, and a hearty belief in the power of technology.

But what if that encryption was creating a false sense of security? What if the technology wasn’t providing a shield for ePHI at all?

Say goodbye to privacy, say goodbye to HIPAA compliance… and say hello to breach notifications and financial penalties.

Safe Harbor, as outlined by the HITECH Act, provides for the good faith determination of whether ePHI has indeed been exposed when a device with access has been stolen or misplaced.

It is based on the concept that strong encryption, properly deployed, would thwart even a determined attacker with physical access to an authorized device. Thus, even when a laptop or mobile device or external hard drive is lost, the data is considered to be intact and uncompromised inside the device if the data was properly encrypted.

This is a key distinction, and it is the difference between a breach notification (causing a significant hit to the brand and future revenues as well as serious financial penalties) and Safe Harbor (causing a large exhale of relief and a flurry of high-fives).

Click to Tweet: #FIPS140 #encryption: the difference between breach notification & Safe Harbor #HIPAA #Healthcare #Privacy

Here’s the rub – how is strong encryption differentiated from weak encryption for the purposes of HIPAA compliance?

Keep reading at HealthITSecurity.com!
