Well, it’s that time of year. You know, the annual, happy-go-lucky, turn-the-page-on-the-calendar, celebrate-the-new-year, use-too-many-hyphens blog post.
I’ve been reflecting on the beginnings of SafeLogic – how we got here, where we’ve been, and where we are headed next. Most of those reflections have been pleasant, but certainly not all. There’s no need to put lipstick on it. The nearly two years that I went without salary weren’t exactly “fun” and I’m glad that’s in the past. Or the times I felt like an inadequate leader because we weren’t living up to the ridiculously overblown expectations of Silicon Valley society. Or the times we invested in new ideas only to find failure (which is not a bad word, by the way).
I’m still thankful for all of those things because they put SafeLogic on a path that almost leaves me (yes, even me!) speechless. Those sacrifices were made with the future in mind, and we are now reaping the benefits. We’ve had so many positives this year that bullet points hardly do justice to the significant effort behind them, but here are some quick highlights:
– We added a dozen new customers and strengthened relationships with existing customers.
– Revenue doubled from last year. (That’s good, right?)
– The number of support tickets decreased by more than 50%, signaling that the growth of our self-serve knowledge base is paying off.
– Average Time to Resolution on those support tickets is a fraction of what it was last year, a testament to the increased effectiveness of our technical team.
– 100% of support contracts were renewed. Always a good sign of customer satisfaction!
– Strategic additions to the team fueled these successes, which in turn will make further expansion possible. A very positive cycle.
On a personal note, I left the corporate world nearly 12 years ago to work for myself and at this very instant, I’m the happiest I’ve ever been. This is a journey that I could not undertake alone, and this team is the real deal. We have great products that customers want and need, and we help them solve real problems in innovative ways. Internally, we’ve grown and matured to the point that we are able to handle roadmap items and customer requests much more aggressively and proactively (and in some ways, automatically, which is extra cool).
So does all this reflection mean that we’re hitting pause because the CEO is happy? Oh hell no. We are just hitting our stride! Being content is nice, but never complacent. 2017 will be the year of more business innovation, of more new capabilities, of more milestones. Of, well, more.
This leads me to the mushy part:
Thank you, SafeLogic customers. Thank you for believing in us, and we promise not to let you down as we continue to grow. Thank you, SafeLogic team. Your hard work and commitment are appreciated more than I can express. Thank you, SafeLogic partners, friends, and allies for your support, advice, and contributions.
Here’s to a stellar 2016 and to keeping the momentum going in 2017!
The OpenSSL project posted to their official blog today with some major news – OpenSSL 1.1 will be getting a FIPS 140-2 validated module! It’s a huge deal and the SafeLogic team is proud to be leading the effort.
In September, OpenSSL’s Steve Marquess explained in a blog post (FIPS 140-2: It’s Not Dead, It’s Resting) why the ubiquitous open source encryption provider would be hard-pressed to bring FIPS mode to the 1.1 release. With changes over the last few years at the CMVP, the viability of legacy OpenSSL FIPS module validations has been repeatedly threatened, and the crypto community simply cannot accept the possibility of being without a certificate. An open source module with a communal certificate is a crucial component that allows many start-up companies to test the waters in federal agencies and regulated industries before investing in a validation of their own. Likewise, many major corporations have relied upon OpenSSL FIPS modules over the years as a building block for extensive engineering efforts. Without this commitment, many would have been caught in a dilemma: use the FIPS 140 validated open source module compatible with a rapidly aging, often-maligned older version of OpenSSL, or use the new, sleek, secure OpenSSL 1.1 without a FIPS validated module at its heart.
The choice will now be an obvious one, and the community can safely remove their heads from the sand and begin planning their future roadmap around a fully validated FIPS module for OpenSSL 1.1 and beyond.
As the OpenSSL team announced today, SafeLogic will sponsor the engineering work on the FIPS module and we will be handling the validation effort ourselves. (What, you expected us to hire an outside consultant? Surely you jest.) Acumen will be the testing laboratory, as they have been for many of our RapidCerts, and together we have high hopes for a smooth and relatively painless process.
One key element in the OpenSSL blog post that will surprise some folks:
“This is also an all-or-nothing proposition; no one – including SafeLogic – gets to use the new FIPS module until and if a new open source based validation is available for everyone.”
Why would we agree to that? For that matter, why would we take on this project at all, while other “leaders” in the community relished the idea of a world without validated open source options?
At SafeLogic, we are true believers in the importance of open source, in encryption and elsewhere. Past versions of OpenSSL have provided a basis for SafeLogic’s CryptoComply modules, so you may ask why we’re doing this – why we’re not just building it ourselves and letting the open source community fend for themselves.
Well, we thought about doing just that, but we decided against it for both altruistic and strategic reasons. We believe that SafeLogic has the chance to help not only the OpenSSL team, but the tech community at large. We realize that product vendors, government entities, education institutions, and other organizations need validated open source modules, and not all of them can or will implement SafeLogic solutions.
As a team, we believe that a rising tide lifts all boats, and we are putting that philosophy into action. The availability of an OpenSSL 1.1 FIPS module will provide greater security in regulated verticals and more opportunities for everyone working in this community. SafeLogic will be at the epicenter of the effort, of course, and I would be remiss if I didn’t mention that our success in this endeavor will push SafeLogic even further forward as the true leader in providing validated crypto!
Our central role in the effort will ensure that nobody has more expertise or knowledge in the design, operation and validation of OpenSSL 1.1 modules than SafeLogic, and future versions of CryptoComply will be the best yet. Trust me, our customers will reap the benefits. We are happy to put in the sweat equity on the open source communal validation, knowing that when product teams need a FIPS 140-2 certificate in their own name, custom work, integration assistance, comprehensive support or anything else related to OpenSSL 1.1 and FIPS 140-2, SafeLogic will be the obvious choice.
We’re very excited to work with Steve, the OpenSSL team, and Acumen, as we join forces to lead the OpenSSL 1.1 FIPS module through FIPS 140-2 validation. Stay tuned for updates!
For more information about the project, how to contribute, the future roadmap, or media inquiries, please contact us at OpenSSL@SafeLogic.com.
Proper healthcare encryption methods can be greatly beneficial to organizations as they work to improve patient data security.
Technology vendors building solutions for deployment in healthcare love to talk about encryption and how it can help patient data security. It’s the silver bullet that allows physicians and patients alike to embrace new apps and tools. Symptoms may include increased confidence, decreased stress, and a hearty belief in the power of technology.
But what if that encryption was creating a false sense of security? What if the technology wasn’t providing a shield for ePHI at all?
Say goodbye to privacy, say goodbye to HIPAA compliance… and say hello to breach notifications and financial penalties.
Safe Harbor, as outlined by the HITECH Act, provides for the good faith determination of whether ePHI has indeed been exposed when a device with access has been stolen or misplaced.
It is based on the concept that strong encryption, properly deployed, would thwart even a determined attacker with physical access to an authorized device. Thus, even when a laptop or mobile device or external hard drive is lost, the data is considered to be intact and uncompromised inside the device if the data was properly encrypted.
This is a key distinction, and it is the difference between a breach notification (causing a significant hit to the brand and future revenues as well as serious financial penalties) and Safe Harbor (causing a large exhale of relief and a flurry of high-fives).
Last week, our blog featured information about the RNG issue identified for removal by NIST. It was written by Mark Minnoch, our new Technical Account Manager, and I’m totally pumped he’s joined the SafeLogic team. If his name is familiar, it’s because he used to lead the lab at Infogard and he’s a regular at the International Cryptographic Module Conference (ICMC) and other industry events. He also contributes to our company quota for follicle-challenged white guys over 6’5”, which is a severely under-represented demographic for us.
This week, I’d like to talk a bit about the other category of FIPS 140-2 certificates that have been slated for relocation to the archive list. These validations are doomed to begin expiring in January of 2017 and annually going forward for the most grave of offenses. Has a backdoor been discovered? No… Improper entropy seeding? Use of a non-approved algorithm? No, not those either. It’s because they hadn’t received an update within the last five years.
That’s right. The CMVP is now taking action, and their plan is to simply chuck every certificate that doesn’t carry a recent enough validation date. For reference, “quinquennial” is the official term, meaning “every five years.” I’m adding it to my list of relevant jargon for 2016.
This is the part where I remind you that SafeLogic doesn’t just provide a fantastic crypto module. We don’t just complete FIPS 140-2 validations in 8 weeks with RapidCert. We stick around! We offer free support for the first year, which includes integration, strategy and marketing assistance. Then we encourage customers to renew their support on an annual basis to take advantage of the patches that we provide upstream of our modules. Even better, smart clients opt for RapidCert Premium, which adds annual certificate updates. These reflect the newest release of iOS, for example, so that the validation is always in full compliance for the current version.
Now comes the part where I explain why this matters. FIPS 140-2 validation has always been a pain in the ass. The queue length spiked a few years ago due to increased demand, furloughs, agency shutdowns, lack of funding… pretty much everything that could go wrong, did go wrong. The queue has softened somewhat recently, thanks to renewed effort and a few Shark Weeks (you know… act like a predator, take no prisoners…) but it is still pretty diabolical and requires significant effort to survive the process. Now they are tightening the requirements and requiring updates on a five year interval, whether they’re actually necessary or not. The overhead needed to achieve validation has always been high, but now the maintenance needs are rising as well and revalidation is a real and ugly possibility.
It’s time to re-examine the costs associated with handling FIPS 140 validations in-house. Hiring a consultant once to push through the initial certificate has one set of calculations, but the days of “set it and forget it” validations are a thing of the past. Keeping those consultants on retainer for updates every five years (and likely much more often than that, to complete the now-frequent NIST changes) has the potential to destroy a budget. SafeLogic brings significant value to the table as we simply take care of it. We usher the original certificate through the CMVP, we maintain it for full perpetual compliance, and we guarantee that you won’t get removed from the validated list. It’s all part of your contract.
Whether your certificate is headed to the Legacy List or you’re planning a first foray into FIPS 140-2, contact our team immediately. The game has changed and SafeLogic has the answers you need. Whether you want to call it Validation-as-a-Service or Managed Certifications or something else… we call it RapidCert and it will save you time, money, stress and effort. I promise.
Ahh, the year-end crunch time is here. Closing and reconciling the books. Working with our customers to get in (or delay, when strategic, of course) last minute invoices and accruals. Making sure contracts are executed before the calendar flips over. Catching up. Projecting out. Forward planning. Requisite CEO year-end blog posts like this one. Check it off the list, Marketing Team!
To say that our 2015 was dynamic at SafeLogic is an understatement. As I’m recapping and reviewing our goals for 2015, I see areas where we “crushed it” (in the Silicon Valley lexicon), areas for improvement (yes, it’s a nice way to say that we dropped the ball on a few initiatives and no, I’m not too proud to admit it), and areas for new growth and development. I’m glad this year is behind us, because I’m just so damn ready for 2016.
SafeLogic’s 2016 campaign will be about growth, balance, and clarity. Almost like the plans of current Presidential candidates but without the lunacy and grandstanding, and a lot less spend on TV commercials (sorry, Marketing Team). So how will these elements unfold?
Well, we added some very high profile customers to our wall this year, and we’ll grow our share in the market. We’ll increase our team and improve our infrastructure to support those new clients. We will balance delivery, professional development, budgets, customer requirements, and every other moving part that defines a software company. We’ll move quickly but carefully. We’ll work on the right things for our customers and for the industry, while having clear communication internally and externally. We’ll have a lot of fun while delivering on very serious business-driven goals.
It’s going to be an exciting time. We’re launching some of our Skunk Works projects this year, and we’ve got new projects bidding to be added to the docket. It isn’t always easy to bring innovative and progressive new ideas to a field that is historically stagnant, challenging, and sometimes nonsensical (I’m talking to you, encryption, and you, regulatory compliance). But it’s what we do. And while I think we always have room for improvement, I think we do it pretty damn well, so expect more of the same next year, in higher dosages and more frequently.
I’m thrilled about the new year. We have the right priorities, the right team, the right solutions, and the right processes in place at SafeLogic. Now will someone please turn the calendar over to January? We’re ready to rock!
The Anthem breach encouraged me to wrap up this blog series and talk about key management in a genuine security context. When the Anthem breach first became public, it looked as if patient records were accessed because of a lack of data encryption. Then Anthem stated the real reason for the breach: they only encrypt data in flight to/from the database(s) and rely on user credentials for access to data in the database. Why didn’t they encrypt the data in the database? Well, per Health Insurance Portability and Accountability Act (HIPAA) requirements, they don’t have to, as long as they protect the data via other means. Like elevated credentials.
That worked well, didn’t it?
They were compliant, but obviously not secure. To add more security to compliance programs like HIPAA, there have been some cries for enterprises to implement encryption. So how do you encrypt data properly? Well, it all depends on your environment, the sensitivity of the data, the threat models, and any tangible requirements for regulatory compliance. Here are some general guidelines:
Use validated encryption.
Use strong, well-generated keys.
Manage the keys properly.
Use validated encryption. Federal Information Processing Standard (FIPS) 140 is the gold standard. The Advanced Encryption Standard (AES) is one of the FIPS-approved algorithms for data encryption, and it is a better encryption algorithm than what Joe the Computer Science Intern presented in his thesis project. It just is. Plus, part of the FIPS 140 process involves strenuous black box testing of the algorithms to ensure they’re implemented properly. This is crucial for interoperability, and proper implementation of the AES standard also provides a measure of confidence that there aren’t leaks, faults, etc. Always look for the FIPS 140 certificate for your encryption solution.
Use well-generated keys. A password-based key (PBK) is crap. Here, a key is derived from a password by hashing it with a message digest function. PBKs are crap because most passwords are crap. They’re subject to brute-force attack and just should not be used. Password-Based Key Derivation Function v2 (PBKDF2) makes password-based keys a bit stronger by conditioning the digest with random data (called a salt) and many hash iterations to slow down brute force. But the threat is still there.
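To make the salt-and-iterations conditioning concrete, here is a minimal sketch using Python’s standard library. The password, iteration count, and key length are illustrative choices for this sketch, not recommendations from this post:

```python
import hashlib
import os

def derive_key(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """PBKDF2-HMAC-SHA256: condition the password with a random salt
    and many iterations to slow down brute-force guessing."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                               salt, iterations, dklen=32)

salt = os.urandom(16)                # fresh random salt for each password
key = derive_key("hunter2", salt)    # 32-byte derived key
```

Even with PBKDF2, the derived key is only as unpredictable as the password behind it, which is exactly the point above.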
Keys should be as unpredictable and “random” as possible. Unfortunately in software environments it’s difficult to obtain truly random data because computers are designed to function predictably (if I do X, then Y happens). But let’s say you can get provable random data from your mobile device or your appliance. Use that to feed a conditioning algorithm and/or pseudorandom number generator. Then use that output for your key.
Use strong keys. The strength of a key depends on how it’s generated (see above) and how long the key is. For example, the AES algorithm can accommodate key sizes of 128-bits, 192-bits, or 256-bits. Consider using a key size that correlates to the overall sensitivity of your data. In Suite B, 256-bit keys can be used to protect classified data at the Top Secret level. Is your data tantamount to what the government would consider Top Secret?
Also consider the environment. Constrained and embedded environments (think wearables) may not have the processing power to handle bulk encryption with 256-bit keys. Or maybe data is ephemeral and wiped after a few seconds and therefore doesn’t need “top secret level” encryption. Or maybe there’s just not enough space for a 256-bit key.
Use a key that is strong enough to protect the data within the constraints of the environment and one that can counter the threats to that environment.
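As a hedged sketch of what “well-generated” looks like in practice, the snippet below draws key material for each AES key size from the operating system’s CSPRNG via Python’s `secrets` module, rather than from anything predictable:

```python
import secrets

# The OS CSPRNG is the right source for key material in most software environments.
key_128 = secrets.token_bytes(16)   # AES-128: fine for most data
key_192 = secrets.token_bytes(24)   # AES-192
key_256 = secrets.token_bytes(32)   # AES-256: the Suite B choice for Top Secret

print(len(key_128) * 8, len(key_192) * 8, len(key_256) * 8)  # 128 192 256
```

On a constrained device without a trustworthy entropy source, the same call may be seeded poorly, which is why the hardware/environment caveats above still apply.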
Manage your keys properly. You wouldn’t leave the key to your front door taped to the door itself. Hopefully you don’t put it under the doormat either. What would be the point of the lock? The same applies to information security. Don’t encrypt your data with a strong, properly generated data encryption key (DEK) then leave that key under the doormat.
Consider a key vault and use key encryption keys (KEK) to encrypt the data encryption keys. Access to this key vault or key manager should also be suitably locked down and tightly controlled (again, many different ways to do this). Otherwise you might as well just not encrypt your data.
While we’re at it: rotate your keys, especially your KEKs. Key rotation essentially means “key replacement” … and it’s a good idea in case the key or system is compromised. When you replace a key, be sure to overwrite the old key material with 0xFF or 0x00 bytes to reduce any chance of recovery.
Store those DEKs encrypted with KEKs and protect those KEKs with tools and processes. And remember to balance security with usability: rotating your KEK every 2 seconds might be secure, but is your system usable?
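The DEK/KEK layering can be sketched with Python’s standard library. The toy HMAC-counter “wrap” below exists only to show the structure (wrap, store, unwrap, rotate); a real system must wrap keys with an authenticated, validated primitive such as AES-GCM or AES Key Wrap, never a hand-rolled XOR stream like this one:

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy HMAC-SHA256 counter keystream -- structural demo only, not real crypto.
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def wrap(kek: bytes, dek: bytes) -> tuple[bytes, bytes]:
    # Encrypt (wrap) the DEK under the KEK; only the wrapped form is stored.
    nonce = secrets.token_bytes(16)
    return nonce, bytes(a ^ b for a, b in zip(dek, _keystream(kek, nonce, len(dek))))

def unwrap(kek: bytes, nonce: bytes, wrapped: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(wrapped, _keystream(kek, nonce, len(wrapped))))

kek = secrets.token_bytes(32)     # key-encryption key: lives in the key vault
dek = secrets.token_bytes(32)     # data-encryption key: encrypts the actual records
nonce, wrapped = wrap(kek, dek)   # persist only (nonce, wrapped), never the raw DEK

# Key rotation: unwrap with the old KEK, re-wrap with a new one.
new_kek = secrets.token_bytes(32)
new_nonce, new_wrapped = wrap(new_kek, unwrap(kek, nonce, wrapped))
```

Notice that rotating the KEK only requires re-wrapping the DEK; the bulk data encrypted under the DEK is untouched, which is what keeps rotation cheap enough to do regularly.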
Anthem wanted the data to be useful, which is why it wasn’t encrypted at the database. But that usability came at a high cost. The good news is that it is possible to encrypt data and have it be usable.
Encryption is a critical, necessary piece of a system’s overall security posture. But it’s not the sole answer. In Anthem’s case, records were accessed via those “elevated user credentials” … which means that malicious hackers were able to get into the authentication server and raise the privilege levels of user credentials (usernames/passwords) that they either knew or gleaned from the auth server. So in this case, it’s irrelevant whether the breached data was encrypted; the hackers had authenticated and authorized access to it.
So what’s the answer?
Defense in depth means providing security controls to address all aspects of the system: people, process, and technology. Technology is the most difficult pillar to lock down because there are so many layers and threats, hence so many products such as firewalls, IDP, APT, IDS, SIEM, 2FA, AV, smart cards, cloud gateways, etc.
Encryption is a fundamental element for security of data at rest and data in motion (control plane and data plane). Even the strongest encryption with proper key management won’t protect data that is accessed by an authorized user, because it has to be usable. However, encrypted data and tight management of keys provides a critical, necessary piece to a robust security posture.
I hope this provides some guidance on how to think about encryption and key management in your organization.
Last week I met with a customer to help solve, among other things, some challenges around key management and key lifecycles. I thought I’d kick off a blog series on keys: what they are, their generation, use, recommended strength, etc.
First, let’s briefly address what a key is: a key is what protects your data. It’s a (hopefully!) secret parameter fed into an encryption algorithm to obfuscate data in a way that only someone with the same key can decrypt the data and read it as intended.*
Here’s how I explained it to my 10-year-old daughter:
Think about the door to our house. When the door is locked, only someone with a key can get inside. (Ok, that sounds more like authorization, but stay with me.) When inserted and turned, the key aligns the pins that trigger the locking mechanism and unlocks the door. That key is the only key that can lock and unlock our door.
While quite elementary in my mind, it’s a relatively good example of the value and importance of the key lifecycle, which I briefly discussed with my daughter after she asked the following questions:
What if someone copies the key?
What if our neighbors lose our spare key?
How do we know if someone else used our key?
Does someone else’s key work in our lock?
All are relevant questions in relation to cryptography as well. Over the next couple of weeks, we’ll talk about how keys should be generated, ideal key sizes, and general key management issues and best practices.
Fair warning: there is no single, correct answer. We’ll use this series to address dependencies and variables such as environments, data sensitivity, and threat models.
*This is known as symmetric encryption, where one key encrypts and decrypts data. In asymmetric encryption a public key is used to encrypt data and only its associated private key can decrypt the data.
Let’s face it – 2014 was pretty bad from an information security perspective, and I believe we will see a rise in the frequency, severity, and publicity of malicious hacks and breaches in 2015.
I’m worried that as a community, hell, as a society, we won’t see enough progress in this uphill battle of infosec. I’m not blaming anyone or pointing fingers. Security is hard because every organization is different: different people, different policies, different network topologies, different vendors, different missions, etc. (and that is why there is no silver bullet for security). In general, I’m worried about some SMBs that lack the resources to set up a proactive security posture. I’m concerned about some large enterprises that will continue to lag and not fully embrace security.
But… I’m optimistic. Security is at the tip of everyone’s tongue now. It’s “cool” … and cool is good.
SMBs have options for cloud productivity and storage solutions with security built in – at the very least, better security than what they could do themselves. Larger organizations can integrate many different solutions to enable their security posture.
Security is about defense-in-depth, which is to say having security at all layers, from policy and training to two-factor auth and encryption. Aggregate organizational differences can be met with the right technologies in the right place.
I’m optimistic because there are so many good and talented people working very hard to stay ahead of the bad guys. There are new technologies and new ways of thinking. There are VCs willing to fund such companies. There is more adoption and acceptance of security in the marketplace. There are companies with an assigned CISO to keep their business focused on security and out of the news.
So how do we make 2015 better to ease my worrying and reinforce my optimism?
Everyone: keep evangelizing. We have to keep talking about security and encouraging it. We need to think about security in new and emerging markets like wearables and IoT. I think after all the news in 2014, stakeholders are starting to get it. Perhaps we need better / tighter regulations. We need to talk about what’s real, what’s viable, and what’s manageable.
Product vendors: build security into your lifecycle. It’s doable. Microsoft initiated the Security Development Lifecycle with impressive if not astounding results. Cisco is doing it, along with many others. Security is a process. Bake it in to your software development. It’s good for you and your customers.
Buyers: check for the right encryption. Not all encryption is equal. Is your vendor using homegrown encryption written by Joe the Intern? Or is it standards-based? Just because a vendor says they implement AES doesn’t mean they do it correctly. Encryption needs to be correct to be true and interoperable. Look for FIPS 140 validation on your preferred vendor’s encryption library or ask for the certificate number.
All businesses: Assess the value of your data and where it resides. Then deploy the right products. Security is a process. Organizational security starts with security risk management, which guides the organization in protecting its assets. Before selecting security controls, the organization must know what data it needs to protect, the value of that data, and the lifecycle of that data. Whether protecting credit card numbers, user files, intellectual property, internal emails, provocative Mardi Gras photos, product roadmaps, money… all of that needs to be protected in an organized and actionable way.
Over time, we’ll explore more in each of these areas. In the meantime, this worrier is optimistic that we will stay focused, deliver, and do our best to make 2015 better.
Several times this year we’ve heard about hacks and compromised systems (more so than I can remember in recent history), and I have to say I’m truly amazed at all the press on the Sony hack. But why is this garnering so much attention?
Simply put, its effects are felt by a wider audience.
Sony cares because of loss of revenue and tarnished reputation.
Movie stakeholders (the producers, actors, etc.) care because it could impact them financially. I have never read the relevant agreements for this industry, but I’m sure there is a force majeure clause that will now be subject to an unprecedented interpretation and set a great deal of legal precedent going forward.
Theater owners / workers care because of supposed threats against their establishment, loss of revenue, and the inconvenience of replacing a movie in their lineup.
Consumers care because they can’t see a movie with some very funny comedians.
Banks or retailers get hacked and it makes the news for a couple of days and fades. Maybe it’s not serious enough? The Home Depot, Target, and Staples attacks don’t really take anything away from the consumer. They can still shop at those places, albeit with new credit card numbers. So they don’t really feel the effects. An entertainment company is hacked and it’s an act of war… er, “cyber-vandalism.” So much so that the President has weighed in and vowed a response. I guess compromising a retailer is just a nuisance.
Finally, there is a breach that consumers actually care about. The JPMorgan breach didn’t directly affect the average family. We are, sadly, getting accustomed to being issued new credit cards and putting band-aids on breaches in that industry. We can tolerate the Fortune 50 losing money, but don’t mess with our entertainment. That is intrinsically American.
Perhaps I should rethink this title, as now attackers may have found an avenue that will encourage even more attacks. And let’s face it: we have thoughts of actual war dancing through our heads. This isn’t script kiddies and folks just looking to make a quick buck. These are hackers with nukes.
At SafeLogic we’ve done a fair bit of evangelizing this year, trying to get makers of IoT devices and health wearables to build security in as opposed to treating it as a cost center and a reactive initiative. So with that in mind, let’s think about this:
If halting the release of a movie gets this much attention and buzz, what happens if critical infrastructure is compromised? What if people can’t get water? Or they get only contaminated water? What if the power grid is blacked out? What happens when connected “things” are compromised? These are the absolute scariest scenarios, the effects of which are far more impactful than what you’ve been reading about this week. These effects are real.
Let’s not discover what happens in these “what if” scenarios. We need awareness and we need plans and we need action. I’m hoping that everyone takes the Sony hacks to heart and thinks about what truly matters… Especially this time of year.
Oh, and encrypt your data with SafeLogic’s validated and widely-deployed encryption solutions.
You may have seen the news about POODLE recently. The good news is that it’s not as severe as Heartbleed, which affected server-side SSL implementations and had repercussions across most web traffic. The bad news is that it’s still seriously nasty.
POODLE is an acronym for Padding Oracle On Downgraded Legacy Encryption and essentially allows an attacker to decrypt SSL v3.0 browser sessions. This man-in-the-middle attack has one major constraint: the attacker has to be on the same wireless network.
That renders POODLE irrelevant because everyone locks down their wireless networks, right? Oh yeah, except those customer-friendly coffee shops with public wifi. In places like Palo Alto, you can bet there is a *lot* of interesting information going over the air there. Or at conferences, where diligent employees handle pressing business and aggressive stock traders log in to their account to buy the stock of the keynote speaker (or short it if his presentation lacks luster). The threat is real – session hijacking and identity theft are just the tip of the iceberg.
It’s worth noting that this is a protocol-specific vulnerability and not tied to vendor implementation (such as Heartbleed with OpenSSL and the default Dual_EC_DRBG fiasco with RSA). That makes it a mixed bag. The issue affects a wide variety of browsers and servers (Twitter, for example, scrambled to disable SSLv3 altogether), but users do have some control. This is because SSLv3 can also be disabled in the client within some browser configurations, so check your current settings for vulnerability at PoodleTest.com and install any patches when available for your browser.
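As an illustration of the client-side knob described above, Python’s standard `ssl` module lets an application refuse the SSLv3 downgrade explicitly (modern runtimes disable SSLv3 by default, so this sketch just makes the intent visible):

```python
import ssl

# Build a client context that will never negotiate SSLv3.
ctx = ssl.create_default_context()
ctx.options |= ssl.OP_NO_SSLv3  # explicit, even where it is already the default

# Going further, pin the floor so a downgrade attack has nothing to fall back to.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

Any socket wrapped with this context will fail the handshake rather than accept a legacy protocol, which is exactly the behavior POODLE exploits the absence of.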
Some browser vendors have already made moves to patch against this threat and permanently disable SSLv3. Meanwhile, others have dubbed the server-side vulnerability “Poodlebleed” and offer a diagnostic tool to assess server exposure.
From a government and compliance perspective, Federal agencies should be using TLS 1.1 according to Special Publication 800-52 Rev 1. TLS 1.1 is not susceptible to POODLE. FIPS 140 validations and SafeLogic customers are not affected.