Blog | SafeLogic

13 Sep 2016

Format Change for Modules In Process List at CMVP

There has been a fairly significant change in the way that the NIST website displays the status of encryption modules that are undergoing FIPS 140-2 testing and validation. The NIST Modules in Process List website now contains two separate reports, drawing a clear distinction between Implementation Under Test (IUT) and Modules in Process (MIP).

The FIPS 140-2 Implementation Under Test List (IUT List) contains cryptographic modules that are in the testing process with a FIPS Laboratory. The IUT Date indicates when the cryptographic module was first added to the list.

Sample IUT List entries:

[Screenshot: CMVP Modules In Process List]

Once a report package has been submitted to the CMVP by the FIPS Laboratory, a cryptographic module will be removed from the IUT List and then added to the MIP List.

The FIPS 140-2 Modules In Process List (MIP List) contains the cryptographic modules that are stepping through the following milestones:

  • Review Pending – The CMVP received a complete report package
  • In Review – Report Reviewers assigned at the CMVP
  • Coordination – CMVP comments returned to the FIPS Laboratory
  • Finalization – Administrative processing to post the certificate

Sample MIP List entries:

[Screenshot: CMVP Modules In Process List]

Both lists are updated daily and available as PDFs from the NIST website. Note that participation is optional; a vendor may elect not to be listed on either list.

What does this mean for my FIPS 140-2 strategy?

Essentially, the IUT status loses its luster. By drawing a clear differentiation between IUT and MIP, the former becomes simply a voluntary “We’re working on it!” claim, while the latter signifies actual progress. Federal procurement officers used to check the In Process List and would be encouraged by any company appearing there, but the IUT List will become less important, especially for module entries that are months old and whose progress has stagnated.

For SafeLogic customers, IUT status was never relevant in the first place. RapidCert catapults clients directly to MIP status because there is no delay between initiating the process and delivering documentation to CMVP. With our project management team and the processes already arranged with our preferred testing laboratories, SafeLogic customers will appear only on the MIP List during their brief waiting period for validation. Unless, of course, you prefer stealth mode. Imagine the looks on your rivals’ faces when you appear on the Validated List before they even make it off the IUT List!

As always, feel free to contact me with any questions. We’re ready when you are.

7 Sep 2016

How to Read a FIPS 140-2 Validation Listing

I’m pleased to provide a breakdown of exactly what you will find on the NIST website when reviewing a FIPS 140-2 validation listing. Whether you are a federal procurement officer, a technical consultant, a vendor representative, an end user, or really any role that may deal with FIPS 140-2, you should be able to interpret and verify the information on these certificates after reading this post. In fact, bookmark this page for future reference. If you have any further questions, please don’t hesitate to contact me directly. I’m here to help.

Here is a screen-captured example of a FIPS 140-2 validation listing, as shown on the NIST website. I will note where other validated modules may differ, but this is a good sample of a typical Software Level 1 certificate, the specialty of SafeLogic’s RapidCert program. (If you click anywhere on the image below, it will open full-size in a new tab.)

[Screenshot: FIPS 140-2 validation listing example]

  1. The unique FIPS 140-2 validation listing number assigned to this cryptographic module. This is the number that a vendor should reference when relevant. In this example, FinalCode would announce, “Our products use FIPS 140-2 validated cryptography, see certificate #2717.”
  2. This is the validation owner. Company names include an embedded link to their website, and the physical address is provided by the vendor. It may not always be headquarters – sometimes it is a development office or similar.
  3. Every validation listing includes contact information. Often it is the product manager, CTO or another development stakeholder. In this example, it is a general mailbox and central phone number, which is also acceptable. Note the embedded link for direct email.
  4. This is the independent third party testing laboratory. Every validation has one, and it’s not possible to earn your FIPS 140-2 validation without an accredited lab. This particular example was tested by Acumen Security, which has done a fantastic job on many of SafeLogic’s RapidCert efforts. Information on all of the accredited labs is available on the NIST website, and you can cross-reference the unique NVLAP (National Voluntary Laboratory Accreditation Program) code if you like.
  5. Every FIPS 140-2 validation listing has a name. They are usually pretty generic, just for simplicity, but federal agencies must verify that the specific version information matches the module version implemented by the product(s) that they are using.
  6. The caveat section contains information required by the CMVP for the cryptographic module. Common caveats describe “FIPS mode” and entropy, a hot button issue of late. CMVP also recently added a new reference, if another validated module has provided a basis for this certificate.
  7. A link to the consolidated validation certificate. CMVP realized that it was a real time suck to create individual certificates (and send them via snail mail), so instead, they publish a single certificate each calendar month, which lists the validations completed during that period. The PDF certificate includes signatures from both NIST and CSE and it looks pretty, but they are rarely referenced because the public website listing includes more information.
  8. Each validation includes a required Security Policy, which is linked as a PDF. This documentation includes technical parameters for the cryptographic operations in FIPS mode and represents a significant portion of the time and effort wasted by vendors who insist on handling their validation in-house. With RapidCert, this documentation is already prepared for CryptoComply modules and is updated for client needs. Much simpler than starting from scratch.
  9. This FIPS 140-2 validation listing example features a Software validation, but CMVP also validates Hardware, Firmware and Hybrid modules.
  10. This is the completion date of the validation. If multiple dates are listed, those represent approved updates. Note that beginning in 2017, CMVP will be removing validations that are not dated within the preceding 5 years. This is an important step to ensure that all validated crypto modules are being maintained for compliance with current standards and requirements.
  11. FIPS 140-2 validations can be completed for Level 1, 2, 3, or 4. While Level 1 is appropriate for Software, the advanced levels feature increasing amounts of physical security, including tamper-evident seals and tamper response. These are key facets for Hardware validations, in particular.
  12. This is an area for Security Levels that differ from the Overall Level (see 11) or additional information. These may include notes in the following categories:

– Roles, Services, and Authentication
– Physical Security
– Cryptographic Module Specification
– EMI/EMC (electromagnetic interference/electromagnetic compatibility)
– Design Assurance
– Mitigation of Other Attacks

  13. The Operational Environment is a crucial section for Software validations. This is where it becomes explicit which platforms were tested within the scope of the validation. This example includes both Android and Apple iOS mobile operating systems. Note that it may be permissible to operate FIPS mode on other operating environments that are not listed here (by vendor affirmation that the module did not require modification for the unlisted environment).
  14. The FIPS Approved algorithms section lists the specific cryptographic algorithms Approved for use in the FIPS mode of operation, as well as references (but not embedded hyperlinks, unfortunately) to the CAVP certificates for each. This is the evidence that each algorithm was successfully tested by the lab as a prerequisite for the module testing.
  15. Other algorithms are included on the FIPS 140-2 validation listing if they are implemented in the module but are not specifically listed as FIPS Approved algorithms (#14). This list includes algorithms allowed for use in the FIPS mode of operation as well as any algorithms contained in the module that are not to be used in the FIPS mode of operation. The latter category may be algorithms that have been phased out or are included for other strategic reasons.
  16. This is a categorization of the module: Multi-Chip Stand Alone, Multi-Chip Embedded, or Single Chip. Software modules are classified as Multi-Chip Stand Alone since they run on a general purpose computer or mobile device.
  17. This is a brief summary of the role of the cryptographic module. Some are extremely brief, while others were clearly written by zealous marketing folks, but the vendor always provides it to offer some context.
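
To pull these fields together, here is a minimal, hypothetical sketch (in Python) of how you might record the key data points from a listing while vetting a vendor's FIPS claim. The `ValidationListing` class and its field names are my own illustration keyed to the numbered callouts above, not an official NIST schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ValidationListing:
    """Illustrative record of a FIPS 140-2 validation listing (not an official NIST schema)."""
    certificate_number: int                # item 1, e.g. 2717
    vendor: str                            # item 2, the validation owner
    module_name: str                       # item 5, must match the deployed module version
    caveats: str                           # item 6, e.g. FIPS mode / entropy caveats
    module_type: str                       # item 9, Software, Hardware, Firmware, or Hybrid
    validation_dates: List[str]            # item 10, original date plus approved updates
    overall_level: int                     # item 11, 1 through 4
    operational_environments: List[str]    # item 13, tested platforms
    approved_algorithms: List[str]         # item 14, with CAVP certificate references
    other_algorithms: List[str] = field(default_factory=list)  # item 15

    def covers(self, environment: str) -> bool:
        """True if the environment was explicitly tested (vendor affirmation aside)."""
        return environment in self.operational_environments
```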



30 Aug 2016

Still Not Validated

AES 256 is a fantastic cryptographic algorithm. I highly recommend it. Be aware, however, that deploying an algorithm that is approved for use within a FIPS 140-2 validated crypto module is NOT the same as holding a validation. Likewise, just because you’re over 16 years old and know how to operate a vehicle does NOT mean that you have a driver’s license. You may be eligible, but there are steps that must be taken to prove that you meet all of the requirements before you are issued that certificate.

If you get pulled over on the freeway, you had better produce a valid and current driver’s license. (No, McLovin, a real license.) In technology, you had better be able to produce a valid and current listing on the NIST website, showing completion of the Cryptographic Module Validation Program (CMVP) and a confirmed FIPS 140-2 validation.

In order to do so, you must take your implementation of AES 256 (or another approved algorithm) and undergo thorough testing with the CAVP (NIST’s Cryptographic Algorithm Validation Program) as a prerequisite before your module can even enter the CMVP queue. Once that is complete, the entire module can be tested against the FIPS 140-2 benchmark. Without the independent third party laboratory, without NIST involvement, and without a posted validation, you do NOT have FIPS 140-2 validated encryption, you’re NOT eligible for federal procurement, and you’re NOT in compliance for HITECH Safe Harbor in healthcare.
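
To give a flavor of what algorithm testing involves, here is a minimal sketch of a known-answer test for AES-256, using the example vector from FIPS-197 Appendix C.3 and the third-party `cryptography` package. This is only an illustration of the concept; actual CAVP testing is performed by an accredited laboratory against NIST's own test sets, not by running a snippet like this.

```python
# A toy known-answer test (KAT) in the spirit of CAVP algorithm testing.
# Requires the third-party 'cryptography' package.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# FIPS-197 Appendix C.3 example vector for AES-256
key = bytes(range(32))                                         # 000102...1f
plaintext = bytes.fromhex("00112233445566778899aabbccddeeff")
expected = bytes.fromhex("8ea2b7ca516745bfeafc49904b496089")

encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

assert ciphertext == expected, "AES-256 known-answer test failed"
print("AES-256 known-answer test passed")
```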

If you are shopping for a solution and need FIPS 140-2, you need it to be validated and posted on the NIST website. Don’t be fooled by phrases like “FIPS compliant algorithms” or “conforming to FIPS standards”. Either it has been validated or it has not. Next time, we will tackle exactly what a FIPS 140-2 validation looks like and what it means, explaining each piece of the certificate listings publicly posted by NIST. Until then, enjoy a quick laugh about the difference between eligibility and certification.

[Comic: In the Car. Click to enlarge.]



24 Aug 2016

How does the SWEET32 Issue (CVE-2016-2183) affect SafeLogic’s FIPS Modules?

Executive Summary:

A newly demonstrated attack, SWEET32: Birthday attacks on 64-bit block ciphers in TLS and OpenVPN, shows that a network attacker monitoring an HTTPS session secured by Triple-DES can recover sensitive information. The attack was performed in a lab setting in less than two days by capturing 785 GB of traffic over a single HTTPS connection.

Sounds scary at first.

The good news: No action is required by SafeLogic customers for the SWEET32 issue.


My FIPS 140-2 Module is not Broken?

Correct. Triple-DES [1] is a FIPS Approved algorithm and is expected to remain one for the foreseeable future. Triple-DES uses a 64-bit block size, which makes it vulnerable to this attack. Cryptographers have long been aware of this type of vulnerability in ciphers designed with small block sizes.

The AES symmetric cipher (also a FIPS Approved algorithm) is not vulnerable to this attack.

[1] Two-key Triple-DES may only be used for decryption purposes in the FIPS mode of operation. Three-key Triple-DES may be used for encryption and decryption purposes in the FIPS mode of operation.
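
For intuition on why the 64-bit block size matters, here is a back-of-the-envelope sketch (rough approximations of my own, not figures from the SWEET32 paper): with an n-bit block cipher, block collisions become likely after roughly 2^(n/2) blocks, which works out to about 32 GiB of ciphertext under a single key for a 64-bit block cipher like Triple-DES, versus an astronomically large volume for 128-bit AES.

```python
import math

def birthday_bound_bytes(block_bits: int) -> float:
    """Approximate ciphertext volume (bytes) under one key at which block
    collisions become likely: ~2^(n/2) blocks of n/8 bytes each."""
    blocks = 2 ** (block_bits / 2)
    return blocks * (block_bits / 8)

print(f"64-bit blocks (Triple-DES): ~{birthday_bound_bytes(64) / 2**30:.0f} GiB")
print(f"128-bit blocks (AES):       ~{birthday_bound_bytes(128) / 2**60:.0f} EiB")

# The published SWEET32 demo captured about 785 GB over one connection,
# comfortably past the 64-bit birthday bound.
captured_bytes = 785e9
print(f"785 GB is roughly 2^{math.log2(captured_bytes / 8):.1f} Triple-DES blocks")
```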

What Might NIST Do?

Since a considerable amount of ciphertext needs to be captured to make this attack possible, this is a low security concern for nearly every use of TLS. We anticipate that CMVP (NIST/CSE) may publish future guidance limiting the amount of plaintext that is encrypted using a single Triple-DES key, but we do not expect the CMVP to remove Triple-DES from the list of FIPS Approved algorithms due to this reported attack.


Should I Turn Off Triple-DES to be Safe?

That depends on your company’s security policy for addressing vulnerabilities. The SWEET32 issue does not make Triple-DES itself any less secure than it was yesterday, and the method of attack is not new. You may need to continue supporting Triple-DES in order to allow TLS connections that are not able to negotiate use of the AES cipher. (Note that good security practice is to negotiate AES at a higher priority than Triple-DES.) In short, there is no need to turn off the use of Triple-DES in your application.
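
As a rough illustration of that prioritization, here is a minimal sketch using Python's ssl module (which wraps OpenSSL). The cipher string is only an example; your application's TLS stack and exact configuration knobs may differ.

```python
import ssl

# Prefer AES-based suites; keep Triple-DES only as a last-resort fallback
# for peers that cannot negotiate AES. (On builds where 3DES suites are
# disabled, that entry simply matches nothing and AES remains.)
context = ssl.create_default_context()
context.set_ciphers("ECDHE+AESGCM:ECDHE+AES:AES:3DES:!aNULL:!eNULL:!MD5")

# Inspect the resulting priority order (highest preference first).
for cipher in context.get_ciphers():
    print(cipher["name"])
```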


What If I Still Have Questions?

Please contact me. I am happy to be a resource to you.


18 Aug 2016

Encryption Concerns in the UK

This is a guest post from Amazing Support’s David Share as a special contribution to SafeLogic.

In the early days of 2015, the British Prime Minister at the time, David Cameron, put forth an idea to ban all forms of software encryption in the United Kingdom (UK), especially encryption embedded in messaging applications. The proposal followed the Charlie Hebdo massacre in Paris, in which the attackers were thought to have been communicating with each other using apps similar to WhatsApp and iMessage. Were this ban to be realized, a backdoor would have to be created into any and all apps, whether web or mobile-based, that utilise end-to-end encryption.

Encryption has become a battleground as of late. Government bodies, and those who fear that apps are being utilised for the propagation of terrorism, seem to be firmly entrenched in the idea of creating backdoors in these apps. Technology companies like Apple, and those who are trying to preserve what they perceive as the last vestiges of civil rights and privacy, are fighting to maintain encryption’s independence. Needless to say, both sides have their pros and cons.

Creating a backdoor, according to proponents like Cameron and current British Prime Minister Theresa May, would ensure that law enforcement and government agencies are able to monitor and act upon those who would cause harm to the UK. When the Charlie Hebdo massacre is used as an example of how a ban on encryption could have helped, the argument does make sense.

However, tech companies and cryptography experts fear that the creation of a backdoor does not ensure that it could only be used by the “good guys”. To them, a backdoor is a legitimate vulnerability that could be equally exploited by foreign spies and corrupt police, among others. Businesses are concerned that it may portend the end of ecommerce as we currently know it, since almost all credit card transactions online are done through encrypted channels. If that encryption had a backdoor, it may create a sense of distrust among the consumer base and scare off business. Finally, there is the matter of privacy. If the encryption walls did fall by government command, then users are left terribly exposed and would have to endlessly worry if what they say online can be misconstrued as dangerous or worse, an act of terror.

UK Prime Minister Theresa May

The proposal has been legitimised under Theresa May’s leadership and is now known as the Investigatory Powers Bill (IPB). According to May, the bill does not state that tech companies are forced to create backdoors in their encryption. However, it does require companies to provide decrypted messages upon the presentation of a warrant. This is a problem in and of itself, as the messages from apps that utilise end-to-end encryption cannot be accessed by anyone without a proper password or code, and that includes the software publisher. So to comply with the IPB and present a decrypted message, some sort of backdoor will be needed. Through the use of sly wording, May and the IPB are effectively forcing tech companies to create backdoors after all, lest they face a potential ban from operating within the confines of the UK.

Already known as the Snooper’s Charter, the IPB will test the extent to which tech companies and citizens are willing to relinquish a portion of their privacy. If the IPB ever becomes law, the government or any law enforcement agency need only find cause to issue a warrant to gain access to any citizen’s message history. May and her supporters insist that they will only do this to people who may pose a risk to the safety of the nation, but who is deemed to be a threat can take on many meanings. The opponents of the IPB are afraid that this could and would lead to breaches of privacy laws, even going so far as to say that it would go against portions of the European Convention on Human Rights. Those challenging the bill are asking Britons whether they want to join the ranks of countries such as China and Russia, which closely monitor and even dictate what sites can be browsed, what data can be accessed and what messages can be sent.

It seems that May and the current government are selling the IPB under the guise of improving national security. However, they have failed to answer opponents’ concerns about the negative effects of the bill – the potential invasion of privacy and the creation of a new vector of attack for malicious hackers. May says that the bill does not infringe on the rights and privacies of the citizens but experts on the matter believe otherwise. More frighteningly, May and her party have yet to come up with a rational solution to the security problems that the creation of a backdoor poses.

If Britons want to stand up and make their voices heard, they should do it sooner rather than later. The bill has already made it to the House of Lords and passed its second reading, and is now headed to the committee stage on the 5th of September. As it stands, and without strong opposition from within the House or from the people, the IPB will almost surely be passed and become law.

2 Aug 2016

Why Should We Get Our Own FIPS Certificate?

After our big announcement with OpenSSL last week, we’ve had some interesting conversations with possible future SafeLogic clients. Several have asked pointed questions, like “Why should we get our own FIPS certificate, if OpenSSL will get one after all?” and “Why buy the cow when we can get the milk for free with open source?”

I love these questions. They tell me that our potential partners have a healthy dose of skepticism and really understand the need to extract value from their capital expenditures.

In a nutshell, the answer is: because your customers also have a healthy dose of skepticism and need to extract maximum value from their expenditures!

Let’s start at the beginning. Building early versions of your product with open source encryption, whether it’s OpenSSL or Bouncy Castle, is a smart move. Open source crypto provides functional, widely compatible, peer-reviewed cryptography and leaves your options open for future replacements. Locking into a proprietary module early in the development phase has proven to be problematic when it requires unique architecture. (RSA BSAFE is now defunct, of course.)

The problems begin when you leverage open source for FIPS 140-2. In order to properly deploy an open source FIPS module within conformance standards, you need to follow the exact recipe. That means following the 221-page User Guide for the OpenSSL FIPS Object Module v2.0, for example. That’s a lot of work, only to be questioned by your own prospective customers: “Where is your FIPS certificate? Don’t you have one with your name on it?”

Luckily, that’s exactly what SafeLogic provides. You’re not dealing with a DIY effort with directions from the worst Ikea bookshelf you’ve ever built. You get strong technical support from the SafeLogic team, standing behind our CryptoComply modules. (No, we don’t just send you a massive PDF of directions.) And that elusive FIPS 140-2 certificate? RapidCert delivers it in just 8 weeks, explicitly displaying your company name and operating environments. “Just trust me” doesn’t belong in your salespeople’s vocabulary.

So when you’re selling to the federal government, financial institutions, healthcare providers, or other regulated industries, expect your customers to be skeptical of your open source usage. You also need to be cognizant of the competitive landscape. You do not want to be cutting off your nose to spite your face, saving a few bucks by skating on FIPS validation, only to lose deals to rivals carrying certificates. Invest in your product and win those head-to-head opportunities! We even have a Top 10 list of reasons to choose SafeLogic over open source.

The comic below is a humorous dramatization of a sales call going wrong, but your target customers (in the green cube) really just want confirmation that your company carries a FIPS 140-2 validation. No tricks, no technicalities, just a certificate on the NIST website. Real, valid, honest-to-goodness, easy to cross-reference and confirm.

Open source FIPS validations are important for the community to have. It’s a good starting point, and for some small companies it’s the best that they can access. Maybe it’s enough for you right now. But customers can’t nitpick if you have your own certificate, and that’s where SafeLogic knocks it out of the park. You won’t find an easier or faster way to add that FIPS 140-2 validation to your salespeople’s arsenal. We’ll be ready when you are.

[Comic: Why Should We Get Our Own FIPS Certificate? Click to enlarge.]


25 Jul 2016


This morning, I had a nice surprise waiting in my inbox. SafeLogic won a Golden Bridge Award!

Awards have never been a priority for us, in large part due to our positioning… and the fact that we are focused on revenue and customers, not our own ego. We are the vendor to the vendors, a key component but rarely the feature. Award nominations always ask about end users, such as in the Fortune 500. “Symantec uses SafeLogic encryption. BlackBerry uses SafeLogic encryption,” I usually respond. “We have a great roster of customers, but it’s ultimately their end users, not ours.” Then we inevitably get sorted to the back of the list. I never worried about it because yes, I know, tech vendor awards are often only as valuable as the paper that they’re printed on, and we knew that we didn’t need to conform to a traditional category to be successful.

This time was different. The Golden Bridge Award team got us! They understood the importance of our role, the innovation behind our products, and recognized that while Joe Schmo wouldn’t go download a copy of our software directly, it’s pretty damn likely that Joe is already using it, and that merits recognition.

So with great pride, the SafeLogic team announces that we have won Silver in the category of Security Software Startups!
It feels good to be an award-winning company.

Click to Tweet: #Crypto startup @SafeLogic pulls down a trophy at #GoldenWorldAwards!

Kudos also to our customer Securonix on winning a variety of awards, including a Grand Trophy, and Tanuj Gulati, their Co-founder & CTO, for winning a Gold for Executive of the Year in Security Services and a Silver for Most Innovative Executive of the Year. Well done!

Now with all this talk of Golds and Silvers, I’m ready for the Olympics to open in Rio. U-S-A! U-S-A!


19 Jul 2016

OpenSSL 1.1’s Big, Bright, FIPS Validated Future

The OpenSSL project posted to their official blog today with some major news – OpenSSL 1.1 will be getting a FIPS 140-2 validated module! It’s a huge deal and the SafeLogic team is proud to be leading the effort.

In September, OpenSSL’s Steve Marquess explained in a blog post (FIPS 140-2: It’s Not Dead, It’s Resting) why the ubiquitous open source encryption provider would be hard-pressed to bring FIPS mode to the 1.1 release. With changes over the last few years at the CMVP, the viability of legacy OpenSSL FIPS module validations has been repeatedly threatened, and the crypto community simply cannot accept the possibility of being without a certificate. An open source module with a communal certificate is a crucial component that allows many start-up companies to test the waters in federal agencies and regulated industries before investing in a validation of their own. Likewise, many major corporations have relied upon OpenSSL FIPS modules over the years as a building block for extensive engineering efforts. Without this commitment, many would have been caught in a dilemma: use the FIPS 140 validated open source module compatible with a rapidly aging, often-maligned older version of OpenSSL, or the new, sleek, secure OpenSSL 1.1, but without a FIPS validated module at its heart.

The choice will now be an obvious one, and the community can safely remove their heads from the sand and begin planning their future roadmap around a fully validated FIPS module for OpenSSL 1.1 and beyond.

As the OpenSSL team announced today, SafeLogic will sponsor the engineering work on the FIPS module and we will be handling the validation effort ourselves. (What, you expected us to hire an outside consultant? Surely you jest.) Acumen will be the testing laboratory, as they have been for many of our RapidCerts, and together we have high hopes for a smooth and relatively painless process.

Click to Tweet: Have you heard? @SafeLogic is leading #FIPS140 effort for new #OpenSSL #crypto module!

One key element in the OpenSSL blog post that will surprise some folks:

“This is also an all-or-nothing proposition; no one – including SafeLogic – gets to use the new FIPS module until and if a new open source based validation is available for everyone.”

Why would we agree to that? For that matter, why would we take on this project at all, while other “leaders” in the community relished the idea of a world without validated open source options?

At SafeLogic, we are true believers in the importance of open source, in encryption and elsewhere. Past versions of OpenSSL have provided a basis for SafeLogic’s CryptoComply modules, so you may ask why we’re doing this – why we’re not just building it ourselves and letting the open source community fend for themselves.

Well, we thought about doing just that, but we decided against it for both altruistic and strategic reasons. We believe that SafeLogic has the chance to help not only the OpenSSL team, but the tech community at large. We realize that product vendors, government entities, education institutions, and other organizations need validated open source modules, and not all of them can or will implement SafeLogic solutions.

As a team, we believe that a rising tide lifts all boats, and we are putting that philosophy into action. The availability of an OpenSSL 1.1 FIPS module will provide greater security in regulated verticals and more opportunities for everyone working in this community. SafeLogic will be at the epicenter of the effort, of course, and I would be remiss if I didn’t mention that our success in this endeavor will push SafeLogic even further forward as the true leader in providing validated crypto!

Our central role in the effort will ensure that nobody has more expertise or knowledge in the design, operation and validation of OpenSSL 1.1 modules than SafeLogic, and future versions of CryptoComply will be the best yet. Trust me, our customers will reap the benefits. We are happy to put in the sweat equity on the open source communal validation, knowing that when product teams need a FIPS 140-2 certificate in their own name, custom work, integration assistance, comprehensive support or anything else related to OpenSSL 1.1 and FIPS 140-2, SafeLogic will be the obvious choice.

We’re very excited to work with Steve, the OpenSSL team, and Acumen, as we join forces to lead the OpenSSL 1.1 FIPS module through FIPS 140-2 validation. Stay tuned for updates!

For more information about the project, how to contribute, the future roadmap, or media inquiries, please contact us.


15 Jul 2016

Achieving Safe Harbor in Healthcare – Episode 4

Thanks for returning for the final installment in this blog series! If you need to catch up, please see Episode 1, Episode 2, and Episode 3, posted each of the last three days.

Here in Episode 4, we are tackling HITECH Breach Notification for Unsecured Protected Health Information; Interim Final Rule part (a)(ii), which covers data in motion. This will be the longest section, so grab a cup of coffee and let’s rock! For your reference, here is the full passage again:

Protected health information (PHI) is rendered unusable, unreadable, or indecipherable to unauthorized individuals if one or more of the following applies:

(a) Electronic PHI has been encrypted as specified in the HIPAA Security Rule by ‘‘the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without use of a confidential process or key’’ and such confidential process or key that might enable decryption has not been breached. To avoid a breach of the confidential process or key, these decryption tools should be stored on a device or at a location separate from the data they are used to encrypt or decrypt. The encryption processes identified below have been tested by the National Institute of Standards and Technology (NIST) and judged to meet this standard.

(i) Valid encryption processes for data at rest are consistent with NIST Special Publication 800–111, Guide to Storage Encryption Technologies for End User Devices.

(ii) Valid encryption processes for data in motion are those which comply, as appropriate, with NIST Special Publications 800–52, Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations; 800–77, Guide to IPsec VPNs; or 800–113, Guide to SSL VPNs, or others which are Federal Information Processing Standards (FIPS) 140–2 validated.

Let’s go in order. First, for Transport Layer Security (TLS), we are directed to another NIST Special Publication, this time 800–52, Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations. Here’s a quote, directly from the document’s Minimum Requirements section:

The cryptographic module used by the server shall be a FIPS 140-validated cryptographic module. All cryptographic algorithms that are included in the configured cipher suites shall be within the scope of the validation, as well as the random number generator.

That’s pretty straightforward. NIST wants you to use NIST validated encryption. Makes sense; we expected that. Okay, we’re off to a great start!
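
As a concrete illustration of that requirement, here is a minimal sketch of a TLS server context (using Python's ssl module) restricted to AES-based cipher suites. The certificate and key paths are placeholders, and note that configuration alone does not satisfy SP 800-52: the underlying cryptographic module providing these algorithms must itself be FIPS 140-2 validated and operating in FIPS mode.

```python
import ssl

# Restrict the server to TLS 1.2+ and AES-based cipher suites.
# (This illustrates the cipher-suite side only; SP 800-52 additionally
# requires that the crypto module itself be FIPS 140 validated.)
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.set_ciphers("ECDHE+AESGCM:DHE+AESGCM:ECDHE+AES:!aNULL:!eNULL:!MD5:!RC4:!3DES")

# Placeholder certificate and key paths for illustration.
context.load_cert_chain(certfile="server.crt", keyfile="server.key")
```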

Let’s look at IPsec VPNs next, governed by NIST Special Publication 800–77, Guide to IPsec VPNs. Here’s an excerpt from the Executive Summary. (Italics below are mine.)

NIST’s requirements and recommendations for the configuration of IPsec VPNs are:

  • If any of the information that will traverse a VPN should not be seen by non-VPN users, then the VPN must provide confidentiality protection (encryption) for that information.
  • A VPN must use a FIPS-approved encryption algorithm. AES-CBC (AES in Cipher Block Chaining mode) with a 128-bit key is highly recommended; Triple DES (3DES-CBC) is also acceptable. The Data Encryption Standard (DES) is also an encryption algorithm; since it has been successfully attacked, it should not be used.
  • A VPN must always provide integrity protection.
  • A VPN must use a FIPS-approved integrity protection algorithm. HMAC-SHA-1 is highly recommended. HMAC-MD5 also provides integrity protection, but it is not a FIPS-approved algorithm.

That’s also pretty blunt, laid out right in the first few pages. The only discrepancy here from previous guidance is that they call it “FIPS-approved” instead of the usual “FIPS validated”. It’s still a clear reference to the NIST program and to the only resource available to confirm approval – the public validation lists. Note that the CAVP is responsible for validating algorithms and maintains a public list for each algorithm. That would be enough to satisfy the minimum requirements here, although deploying FIPS validated algorithms alone is not enough to claim full conformance to the FIPS 140-2 standard. So for comprehensive coverage, a FIPS validated module is still required.

Let’s go to the last alternative category, SSL VPNs, which refers to NIST Special Publication 800–113, Guide to SSL VPNs. This one is probably the most interesting (and confusing). SP 800-113 clearly states in several places that Federal agencies require FIPS 140-2 encryption, but never explicitly extends the requirement beyond those government offices. It gets complicated because the Breach Notification Interim Final Rule clearly refers to this document… which dedicates probably 80% of its content to discussing how to meet the FIPS 140-2 standard. So the bulk of the document concerns the validation, even though it does not call it a mandate. It’s like handing someone a copy of Rosetta Stone but not actually asking them to learn the language. It’s a strong implication, so let’s chalk this one up to ‘strongly recommended’ even if not ‘required’.

Anything that didn’t fall into one of those categories (TLS, IPsec VPN or SSL VPN) lands squarely in the “others which are Federal Information Processing Standards (FIPS) 140–2 validated” bucket.

So if you’re keeping score at home, part (a) lays out five scenarios:

  • Data at rest, referencing one NIST publication that points to two others, ultimately mapping all security controls to FIPS 140-2 validated encryption.
  • Data in motion with a TLS implementation, which is mandated to include FIPS 140-2 validated encryption.
  • Data in motion with an IPsec VPN, which must be handled with a FIPS-approved algorithm and a FIPS-approved integrity protection algorithm.
  • Data in motion with an SSL VPN, which was not explicitly required to be FIPS validated, but the referenced publication is about how to ensure that it is FIPS validated, so deviate at your own risk.
  • All other active data, which must be encrypted with a FIPS 140-2 validated module.

All scenarios lead back to the same conclusion. HIPAA and HITECH are both pieces of federal legislation and enforced by a federal agency, which refers to another federal agency for judgment on encryption benchmarks. FIPS 140-2 is the only certification that unequivocally meets the demands of these government bodies.

[Image: HIPAA Safe Harbor]

Please use these links and excerpts to complete your own research, but be sure to ask yourself, “Do I really want to risk liability by using anything less than FIPS 140-2 validated encryption?”

The Breach Notification for Unsecured Protected Health Information; Interim Final Rule itself makes a strong point – you can choose to skip encryption and still potentially comply with the HIPAA Security Rule. But if you are hoping to avoid breach notification and penalties, you will be out of luck. Here is another excerpt from the Interim Final Rule, explaining the disconnect and solution. (Italics below are mine.)

Under 45 CFR 164.312(a)(2)(iv) and (e)(2)(ii), a covered entity must consider implementing encryption as a method for safeguarding electronic protected health information; however, because these are addressable implementation specifications, a covered entity may be in compliance with the Security Rule even if it reasonably decides not to encrypt electronic protected health information and instead uses a comparable method to safeguard the information.

Therefore, if a covered entity chooses to encrypt protected health information to comply with the Security Rule, does so pursuant to this guidance, and subsequently discovers a breach of that encrypted information, the covered entity will not be required to provide breach notification because the information is not considered ‘‘unsecured protected health information’’ as it has been rendered unusable, unreadable, or indecipherable to unauthorized individuals.

On the other hand, if a covered entity has decided to use a method other than encryption or an encryption algorithm that is not specified in this guidance to safeguard protected health information, then although that covered entity may be in compliance with the Security Rule, following a breach of this information, the covered entity would have to provide breach notification to affected individuals. For example, a covered entity that has a large database of protected health information may choose, based on their risk assessment under the Security Rule, to rely on firewalls and other access controls to make the information inaccessible, as opposed to encrypting the information. While the Security Rule permits the use of firewalls and access controls as reasonable and appropriate safeguards, a covered entity that seeks to ensure breach notification is not required in the event of a breach of the information in the database would need to encrypt the information pursuant to the guidance.

The Interim Final Rule can be very difficult to follow, but this much is clear: it consistently defers judgment to the appropriate government agency – NIST. As the National Institute of Standards and Technology, it is their experts who set the benchmarks and implementation procedures and decide what is approved and what is not. At the end of the day, NIST is very black and white with their program for testing and validating encryption modules. Everything that appears on the public validation list is approved, and everything else is not. For the needs of federal agencies and the military, NIST goes so far as to say that any unvalidated encryption is considered equivalent to plaintext. Essentially, without validation, it cannot be trusted, not even a little bit. The privacy and security of citizens’ health information has rightfully been deemed a priority and should be treated with the same respect.

If you are a Covered Entity, you need to do a complete audit of the encryption in use throughout your organization. Every software solution from every vendor being used by every authorized individual and every Business Associate should contain a certified encryption module that appears on NIST’s public validation list. If it’s not there, start asking direct questions of the vendor. Start with “Why isn’t it validated?” and don’t let them dodge the question. The validation should clearly display the vendor’s name and the operating environment that you are using. If it’s not clear that they are FIPS 140-2 validated and have a certificate number to reference, they probably aren’t validated and you are in that gray area, subject to interpretation by the HHS. Nobody wants to be in that gray area when a device is lost, so get FIPS validated!

Thank you for reading my four-episode opus. If you have any questions or feedback, please email me or ping me on Twitter @SafeLogic_Walt.


14 Jul 2016

Achieving Safe Harbor in Healthcare – Episode 3

Welcome back! If you need to catch up, please see Episode 1 and Episode 2.

Yesterday, we established that our interest in the HITECH Breach Notification for Unsecured Protected Health Information; Interim Final Rule was limited to part (a), which refers to the cryptographic protection of actively-accessed PHI. We discarded part (b) for our purposes, because it only covers devices that have been decommissioned. For your reference, here is the passage again:

Protected health information (PHI) is rendered unusable, unreadable, or indecipherable to unauthorized individuals if one or more of the following applies:

(a) Electronic PHI has been encrypted as specified in the HIPAA Security Rule by ‘‘the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without use of a confidential process or key’’ and such confidential process or key that might enable decryption has not been breached. To avoid a breach of the confidential process or key, these decryption tools should be stored on a device or at a location separate from the data they are used to encrypt or decrypt. The encryption processes identified below have been tested by the National Institute of Standards and Technology (NIST) and judged to meet this standard.

(i) Valid encryption processes for data at rest are consistent with NIST Special Publication 800–111, Guide to Storage Encryption Technologies for End User Devices.

(ii) Valid encryption processes for data in motion are those which comply, as appropriate, with NIST Special Publications 800–52, Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations; 800–77, Guide to IPsec VPNs; or 800–113, Guide to SSL VPNs, or others which are Federal Information Processing Standards (FIPS) 140–2 validated.

Moving forward! Within part (a), the Interim Final Rule refers to NIST for judgment and testing in two categories: data at rest and data in motion.

First, part (i), covering data at rest, refers to NIST Special Publication 800–111, Guide to Storage Encryption Technologies for End User Devices. Yes, NIST governs this category (spoiler alert – they govern them all!), so expect more cross-referencing. In this case, to another Special Publication.

Organizations should select and deploy the necessary controls based on FIPS 199’s categories for the potential impact of a security breach involving a particular system and NIST Special Publication 800-53’s recommendations for minimum management, operational, and technical security controls.

FIPS Publication 199 dates back to 2004, but is still widely used. It’s a relatively short document and is a reference guide provided by NIST to assist with control classifications. 800-111 explains further.

Organizations should select and deploy the necessary security controls based on existing guidelines. Federal Information Processing Standards (FIPS) 199 establishes three security categories – low, moderate, and high – based on the potential impact of a security breach involving a particular system. NIST SP 800-53 provides recommendations for minimum management, operational, and technical security controls for information systems based on the FIPS 199 impact categories. The recommendations in NIST SP 800-53 should be helpful to organizations in identifying controls that are needed to protect end user devices, which should be used in addition to the specific recommendations for storage encryption listed in this document.

So depending on the FIPS 199 classifications, you should consult NIST Special Publication 800-53 and act accordingly. This is even more confusing, because 800-53 is a catalog-style document used to map controls from a variety of other Special Publications, so it does not offer breadcrumbs to lead us directly from the Interim Final Rule to Safe Harbor. Luckily, SafeLogic’s whitepaper on HIPAA security controls covers this exact topic. Rest assured, NIST connects every encryption requirement back to the standard that they themselves certify – FIPS 140-2. Go ahead and download the whitepaper and review it at your leisure. Regardless of the FIPS 199 classification, SP 800-53 is satisfied by deploying FIPS 140-2 validated encryption. In the interest of space and time, I will not rehash all of the controls, but it’s all in the whitepaper.

Part (ii) is for data in motion and is subdivided into four categories as applicable: TLS, IPsec VPN, SSL VPN, or else the catch-all “others”, which goes straight to – yes, you guessed it – FIPS 140-2. Have I already mentioned that NIST wants everyone to use FIPS 140-2 validated encryption? It’s almost like NIST is promoting the use of their own standard…

These categories will be covered tomorrow, with excerpts from the referenced NIST Special Publications, in the final episode. Kudos if you’re still with me!