Ray Potter, Author at SafeLogic

All posts by author

7 Mar 2017

The CMVP Historical Validation List Is Here with a Vengeance

Editor’s note: This post was updated on March 14, 2017 to reflect a distinction that came to light in dialogue with CMVP – validations moved to the Historical List have not been revoked outright. The validations still exist, but Federal Agencies may not include them in new procurements. Agencies are advised to conduct a risk determination on whether to continue using existing deployments that include modules on the Historical List.

Over a year ago, our blog featured posts about the RNG issue that was leading to certain FIPS 140-2 validations moving to the Historical List and the 5 year sunset policy that CMVP was adopting. [Geez, was that really more than a year ago? Crazy.]

Now the hammer has dropped, and the industry is seeing modules routinely relegated to the Historical List each month. The sunset policy created a waterfall in January 2017, and as of today, there are 1,914 modules on the Historical List, representing approximately two-thirds of the total validations completed in the history of the CMVP.

Let me repeat that for emphasis. 1,914 modules.
Approximately two-thirds of all modules ever validated by NIST to meet the FIPS 140 standard are no longer on the active validation list.

This includes some modules that were updated in 2016, and a few were even just revised in 2017! Many of these are hardware, so they are often more static and harder to update, but certainly not all. Check out the entire Historical Validation List for yourself. It’s a veritable “Who’s Who” of once-proudly validated companies. Big names, hot startups, none are immune. Between the sunset timeline and the active removal of modules that are no longer compliant, the herd has been severely thinned.

The takeaway? Maintaining FIPS 140 validation is really hard! It’s not “just one big push” to get on the list anymore. It requires constant vigilance to stay on top of the updates and to keep up with NIST’s reinvigorated policies. A more active CMVP can seem like a pain in the ass at first glance, but it is ultimately better for the industry. Nobody (except for lazy vendors) benefited from old, insecure, ‘grandfathered’ modules remaining on the active validation list. A stringent, active CMVP has embraced its role as a clearinghouse, and that increases the value of the modules that do satisfy current standards. And I think they’re doing a great job.

This underscores the strategic significance of relying upon SafeLogic to complete and maintain FIPS 140-2 validation. As I tell folks every day, this is our focus. Our business is based upon the proper production of FIPS-compliant modules and their subsequent validation. Our customers reap the benefits of our work, and we succeed by scaling it, replicating our effort and leveraging our niche expertise for each client. CryptoComply is smooth and RapidCert accelerates the initial validation, but our customers have really appreciated the value of offloaded maintenance for the certificate. We talk a lot about the time, money, and effort with the traditional process, and the savings realized when using SafeLogic are growing. The delta is getting wider.

I scratch my head when a product manager boasts that they plan to roll their own crypto and get it validated. There are no bonus points for suffering in-house or for reinventing the wheel. When you hire consultants to complete a validation, you’re paying a premium for a single push, when the maintenance really is a constant effort. Consider those costs in time, money, and effort to complete your initial validation – and then add a multiplier for every revalidation you anticipate. It will be at minimum a quinquennial (every five years) project, and that’s if you’re lucky enough to avoid any other pitfall. The math doesn’t lie – the traditional path to FIPS 140-2 validation has become cost prohibitive. And if you’re pursuing Level 2 or Level 3, you still need a solid crypto module at the heart of the product. Using CryptoComply ensures that component meets the necessary requirements, again saving time, money, and effort.

CryptoComply is proven, again and again, to continually meet standards and retain its validated status with NIST. This is one of those situations where you don’t need to be creative. Choose SafeLogic, let us take care of the crypto, and you can get back to doing what you do best.

Written by Ray Potter

29 Dec 2016

Happy New Year from SafeLogic

Well, it’s that time of year. You know, the annual, happy-go-lucky, turn-the-page-on-the-calendar, celebrate-the-new-year, use-too-many-hyphens blog post.

I’ve been reflecting on the beginnings of SafeLogic – how we got here, where we’ve been, and where we are headed next. Most of those reflections have been pleasant, but certainly not all. There’s no need to put lipstick on it. The nearly two years that I went without salary weren’t exactly “fun” and I’m glad that’s in the past. Or the times I felt like an inadequate leader because it felt like we weren’t living up to the ridiculously overblown expectations of Silicon Valley society. Or the times we invested in new ideas only to find failure (which is not a bad word, by the way).

I’m still thankful for all of those things because they put SafeLogic on a path that almost leaves me (yes, even me!) speechless. Those sacrifices were made with the future in mind, and we are now reaping the benefits. We’ve had so many positives this year that bullet points hardly seem to do justice to the significant effort behind them, but here are some quick highlights:

– We added a dozen new customers and strengthened relationships with existing customers.

– Revenue doubled from last year. (That’s good, right?)

– The number of support tickets decreased by over 50%, signaling that the growth of our self-serve knowledge base is paying off.

– Average Time to Resolution on those support tickets is a fraction of what it was last year, a testament to the increased effectiveness of our technical team.

– 100% of support contracts were renewed. Always a good sign of customer satisfaction!

– Strategic additions to the team fueled these successes, which of course will then make it possible for more expansion. A very positive cycle.

On a personal note, I left the corporate world nearly 12 years ago to work for myself and at this very instant, I’m the happiest I’ve ever been. This is a journey that I could not undertake alone, and this team is the real deal. We have great products that customers want and need, and we help them solve real problems in innovative ways. Internally, we’ve grown and matured to the point that we are able to handle roadmap items and customer requests much more aggressively and proactively (and in some ways, automatically, which is extra cool).

So does all this reflection mean that we’re hitting pause because the CEO is happy? Oh hell no. We are just hitting our stride! Being content is nice, but never complacent. 2017 will be the year of more business innovation, of more new capabilities, of more milestones. Of, well, more.

This leads me to the mushy part:

Thank you, SafeLogic customers. Thank you for believing in us and we promise not to let you down as we continue to grow. Thank you, SafeLogic team. Your hard work and commitment are appreciated more than I can express. Thank you, SafeLogic partners, friends, and allies for your support, advice, and contributions.

Here’s to a stellar 2016 and to keeping the momentum going in 2017!


 


19 Jul 2016

OpenSSL 1.1’s Big, Bright, FIPS Validated Future

The OpenSSL project posted to their official blog today with some major news – OpenSSL 1.1 will be getting a FIPS 140-2 validated module! It’s a huge deal and the SafeLogic team is proud to be leading the effort.

In September, OpenSSL’s Steve Marquess explained in a blog post (FIPS 140-2: It’s Not Dead, It’s Resting) why the ubiquitous open source encryption provider would be hard-pressed to bring FIPS mode to the 1.1 release. With changes over the last few years at the CMVP, the viability of legacy OpenSSL FIPS module validations has been repeatedly threatened, and the crypto community simply cannot accept the possibility of being without a certificate. An open source module with a communal certificate available is a crucial component that allows many start-up companies to test the waters in federal agencies and regulated industries before investing in a validation for themselves. Likewise, many major corporations have relied upon OpenSSL FIPS modules over the years as a building block for extensive engineering efforts. Without this commitment, many would have been caught in the dilemma of whether to use the FIPS 140 validated open source module compatible with a rapidly aging, often-maligned older version of OpenSSL, or the new, sleek, secure OpenSSL 1.1, but without a FIPS validated module at its heart.

The choice will now be an obvious one, and the community can safely remove their heads from the sand and begin planning their future roadmap around a fully validated FIPS module for OpenSSL 1.1 and beyond.

As the OpenSSL team announced today, SafeLogic will sponsor the engineering work on the FIPS module and we will be handling the validation effort ourselves. (What, you expected us to hire an outside consultant? Surely you jest.) Acumen will be the testing laboratory, as they have been for many of our RapidCerts, and together we have high hopes for a smooth and relatively painless process.

Click to Tweet: Have you heard? @SafeLogic is leading #FIPS140 effort for new #OpenSSL #crypto module! https://www.SafeLogic.com/openssl-1-1-future/

One key element in the OpenSSL blog post that will surprise some folks:

“This is also an all-or-nothing proposition; no one – including SafeLogic – gets to use the new FIPS module until and if a new open source based validation is available for everyone.”

Why would we agree to that? For that matter, why would we take on this project at all, while other “leaders” in the community relished the idea of a world without validated open source options?

At SafeLogic, we are true believers in the importance of open source, in encryption and elsewhere. Past versions of OpenSSL have provided a basis for SafeLogic’s CryptoComply modules, so you may ask why we’re doing this – why we’re not just building it ourselves and letting the open source community fend for themselves.

Well, we thought about doing just that, but we decided against it for both altruistic and strategic reasons. We believe that SafeLogic has the chance to help not only the OpenSSL team, but the tech community at large. We realize that product vendors, government entities, education institutions, and other organizations need validated open source modules, and not all of them can or will implement SafeLogic solutions.

As a team, we believe that a rising tide lifts all boats, and we are putting that philosophy into action. The availability of an OpenSSL 1.1 FIPS module will provide greater security in regulated verticals and more opportunities for everyone working in this community. SafeLogic will be at the epicenter of the effort, of course, and I would be remiss if I didn’t mention that our success in this endeavor will push SafeLogic even further forward as the true leader in providing validated crypto!

Our central role in the effort will ensure that nobody has more expertise or knowledge in the design, operation and validation of OpenSSL 1.1 modules than SafeLogic, and future versions of CryptoComply will be the best yet. Trust me, our customers will reap the benefits. We are happy to put in the sweat equity on the open source communal validation, knowing that when product teams need a FIPS 140-2 certificate in their own name, custom work, integration assistance, comprehensive support or anything else related to OpenSSL 1.1 and FIPS 140-2, SafeLogic will be the obvious choice.

We’re very excited to work with Steve, the OpenSSL team, and Acumen, as we join forces to lead the OpenSSL 1.1 FIPS module through FIPS 140-2 validation. Stay tuned for updates!

For more information about the project, how to contribute, the future roadmap, or media inquiries, please contact us at OpenSSL@SafeLogic.com.


1 Jul 2016

How Unvalidated Encryption Threatens Patient Data Security

Originally posted in its entirety at HealthITSecurity.com.

Proper healthcare encryption methods can be greatly beneficial to organizations as they work to improve patient data security.

Technology vendors building solutions for deployment in healthcare love to talk about encryption and how it can help patient data security. It’s the silver bullet that allows physicians and patients alike to embrace new apps and tools. Symptoms may include increased confidence, decreased stress, and a hearty belief in the power of technology.

But what if that encryption was creating a false sense of security? What if the technology wasn’t providing a shield for ePHI at all?

Say goodbye to privacy, say goodbye to HIPAA compliance… and say hello to breach notifications and financial penalties.

Safe Harbor, as outlined by the HITECH Act, provides for the good faith determination of whether ePHI has indeed been exposed when a device with access has been stolen or misplaced.

It is based on the concept that strong encryption, properly deployed, would thwart even a determined attacker with physical access to an authorized device. Thus, even when a laptop or mobile device or external hard drive is lost, the data is considered to be intact and uncompromised inside the device if the data was properly encrypted.

This is a key distinction, and it is the difference between a breach notification (causing a significant hit to the brand and future revenues as well as serious financial penalties) and Safe Harbor (causing a large exhale of relief and a flurry of high-fives).

Click to Tweet: #FIPS140 #encryption: the difference between breach notification & Safe Harbor #HIPAA #Healthcare #Privacy

Here’s the rub – how is strong encryption differentiated from weak encryption for the purposes of HIPAA compliance?

Keep reading at HealthITSecurity.com!


19 Jan 2016

The CMVP Legacy List Returns

Last week, our blog featured information about the RNG issue identified for removal by NIST. It was written by Mark Minnoch, our new Technical Account Manager, and I’m totally pumped he’s joined the SafeLogic team. If his name is familiar, it’s because he used to lead the lab at Infogard and he’s a regular at the International Cryptographic Module Conference (ICMC) and other industry events. He also contributes to our company quota for follicle-challenged white guys over 6’5”, which is a severely under-represented demographic for us.

This week, I’d like to talk a bit about the other category of FIPS 140-2 certificates that have been slated for relocation to the archive list. These validations are doomed to begin expiring in January of 2017, with more to follow each year, all for the gravest of offenses. Has a backdoor been discovered? No… Improper entropy seeding? Use of a non-approved algorithm? No, not those either. It’s because they hadn’t received an update within the last five years.

That’s right. The CMVP is now taking action, and their plan is to simply chuck every certificate that doesn’t carry a recent enough validation date. For reference, “quinquennial” is the official term, meaning “every five years”. I’m adding it to my list of relevant jargon for 2016.

This is the part where I remind you that SafeLogic doesn’t just provide a fantastic crypto module. We don’t just complete FIPS 140-2 validations in 8 weeks with RapidCert. We stick around! We offer free support for the first year, which includes integration, strategy and marketing assistance. Then we encourage customers to renew their support on an annual basis to take advantage of the patches that we provide upstream of our modules. Even better, smart clients opt for RapidCert Premium, which adds annual certificate updates. These reflect the newest release of iOS, for example, so that the validation is always in full compliance for the current version.

Now comes the part where I explain why this matters. FIPS 140-2 validation has always been a pain in the ass. The queue length spiked a few years ago due to increased demand, furloughs, agency shutdowns, lack of funding… pretty much everything that could go wrong, did go wrong. The queue has softened somewhat recently, thanks to renewed effort and a few Shark Weeks (you know… act like a predator, take no prisoners…) but it is still pretty diabolical and requires significant effort to survive the process. Now they are tightening the requirements and requiring updates on a five year interval, whether they’re actually necessary or not. The overhead needed to achieve validation has always been high, but now the maintenance needs are rising as well and revalidation is a real and ugly possibility.

It’s time to re-examine the costs associated with handling FIPS 140 validations in-house. Hiring a consultant once to push through the initial certificate has one set of calculations, but the days of “set it and forget it” validations are a thing of the past. Keeping those consultants on retainer for updates every five years (and likely much more often than that, to complete the now-frequent NIST changes) has the potential to destroy a budget. SafeLogic brings significant value to the table as we simply take care of it. We usher the original certificate through the CMVP, we maintain it for full perpetual compliance, and we guarantee that you won’t get removed from the validated list. It’s all part of your contract.

Whether your certificate is headed to the Legacy List or you’re planning a first foray into FIPS 140-2, contact our team immediately. The game has changed and SafeLogic has the answers you need. Whether you want to call it Validation-as-a-Service or Managed Certifications or something else… we call it RapidCert and it will save you time, money, stress and effort. I promise.


30 Dec 2015

Bring on 2016!

Ahh, the year-end crunch time is here. Closing and reconciling the books. Working with our customers to get in (or delay, when strategic, of course) last minute invoices and accruals. Making sure contracts are executed before the calendar flips over. Catching up. Projecting out. Forward planning. Requisite CEO year-end blog posts like this one. Check it off the list, Marketing Team!

To say that our 2015 was dynamic at SafeLogic is an understatement. As I’m recapping and reviewing our goals for 2015, I see areas where we “crushed it” (in the Silicon Valley lexicon), areas for improvement (yes, it’s a nice way to say that we dropped the ball on a few initiatives and no, I’m not too proud to admit it), and areas for new growth and development. I’m glad this year is behind us, because I’m just so damn ready for 2016.

SafeLogic’s 2016 campaign will be about growth, balance, and clarity. Almost like the plans of current Presidential candidates but without the lunacy and grandstanding, and a lot less spend on TV commercials (sorry, Marketing Team). So how will these elements unfold?

Well, we added some very high profile customers to our wall this year, and we’ll grow our share in the market. We’ll increase our team and improve our infrastructure to support those new clients. We will balance delivery, professional development, budgets, customer requirements, and every other moving part that defines a software company. We’ll move quickly but carefully. We’ll work on the right things for our customers and for the industry, while having clear communication internally and externally.  We’ll have a lot of fun while delivering on very serious business-driven goals.

It’s going to be an exciting time. We’re launching some of our Skunk Works projects this year, and we’ve got new projects bidding to be added to the docket. It isn’t always easy to bring innovative and progressive new ideas to a field that is historically stagnant, challenging, and sometimes nonsensical (I’m talking to you, encryption, and you, regulatory compliance). But it’s what we do. And while I think we always have room for improvement, I think we do it pretty damn well, so expect more of the same next year, in higher dosages and more frequently.

I’m thrilled about the new year. We have the right priorities, the right team, the right solutions, and the right processes in place at SafeLogic. Now will someone please turn the calendar over to January? We’re ready to rock!


8 Feb 2015

On Encryption Keys (and Anthem) – Part 2 of 2

The Anthem breach encouraged me to wrap up this blog series and talk about key management in a genuine security context. When the Anthem breach first became public, it looked as if patient records were accessed because of a lack of data encryption. Then Anthem stated the real reason for the breach: they only encrypt data in flight to/from the database(s) and rely on user credentials for access to data in the database. Why didn’t they encrypt the data in the database? Well, per Health Insurance Portability and Accountability Act (HIPAA) requirements, they don’t have to, as long as they provide protection of the data via other means. Like elevated credentials.

That worked well, didn’t it?

They were compliant, but obviously not secure. To add more security to compliance programs like HIPAA, there have been some cries for enterprises to implement encryption. So how do you encrypt data properly? Well, it all depends on your environment, the sensitivity of the data, the threat models, and any tangible requirements for regulatory compliance. Here are some general guidelines:

  • Use validated encryption.
  • Use strong, well-generated keys.
  • Manage the keys properly.

Use validated encryption. Federal Information Processing Standard (FIPS) 140 is the gold standard. The Advanced Encryption Standard (AES) is one of the FIPS-approved algorithms for data encryption, and it is a better encryption algorithm than what Joe the Computer Science Intern presented in his thesis project. It just is. Plus, part of the FIPS 140 process involves strenuous black box testing of the algorithms to ensure they’re implemented properly. This is crucial for interoperability, and proper implementation of the AES standard also provides a measure of confidence that there aren’t leaks, faults, etc. Always look for the FIPS 140 certificate for your encryption solution.
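To make the “black box testing” idea concrete, here is a rough sketch of a known-answer test, the kind of check the validation labs run at far greater scale. It uses the published AES-128 example vector from FIPS-197 Appendix C.1 and the third-party Python cryptography package; this is an illustration of the concept, not the official test harness.

```python
# Known-answer test (KAT) sketch: encrypt a published test vector and confirm
# the implementation produces the expected ciphertext. Vector is the AES-128
# example from FIPS-197 Appendix C.1. ECB is used only because the vector is a
# single raw block; it is not a recommendation for encrypting real data.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key       = bytes.fromhex("000102030405060708090a0b0c0d0e0f")
plaintext = bytes.fromhex("00112233445566778899aabbccddeeff")
expected  = bytes.fromhex("69c4e0d86a7b0430d8cdb78070b4c55a")

encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

assert ciphertext == expected, "AES known-answer test failed"
print("AES-128 known-answer test passed")
```

If an implementation can’t reproduce vectors like that one, it isn’t really AES, no matter what the marketing sheet says.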

Use well-generated keys. A password-based key (PBK) is crap. Here a key is derived from a password after it’s hashed with a message digest function. PBKs are crap because most passwords are crap. They’re subject to brute-force attack and just should not be used. Password-Based Key Derivation Function v2 (PBKDF2) makes password-based keys a bit stronger by conditioning the digest with random elements (called salt) to decrease the threat of brute force. But the threat is still there.
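For illustration, here is a minimal sketch of PBKDF2 using nothing but Python’s standard library. The password, salt size, and iteration count are arbitrary examples for this post, not a recommendation.

```python
import hashlib
import secrets

password = b"correct horse battery staple"  # example only; real passwords are usually far weaker
salt = secrets.token_bytes(16)              # random salt, stored alongside the derived key

# PBKDF2 conditions the password with the salt and many hash iterations,
# which slows brute force -- but a weak password is still the weak link.
key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)
print(key.hex())
```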

Keys should be as unpredictable and “random” as possible. Unfortunately in software environments it’s difficult to obtain truly random data because computers are designed to function predictably (if I do X, then Y happens). But let’s say you can get provable random data from your mobile device or your appliance. Use that to feed a conditioning algorithm and/or pseudorandom number generator. Then use that output for your key.
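As a sketch of that last step: most modern operating systems already expose a conditioned CSPRNG, and in Python the secrets module reads from it directly, so the platform’s generator produces the key rather than anything homegrown.

```python
import secrets

# Pull a 256-bit key straight from the operating system's CSPRNG,
# which is seeded and reseeded from entropy the platform collects.
key = secrets.token_bytes(32)
print(key.hex())
```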

Use strong keys. The strength of a key depends on how it’s generated (see above) and how long the key is. For example, the AES algorithm can accommodate key sizes of 128-bits, 192-bits, or 256-bits. Consider using a key size that correlates to the overall sensitivity of your data. In Suite B, 256-bit keys can be used to protect classified data at the Top Secret level. Is your data tantamount to what the government would consider Top Secret?

Also consider the environment. Constrained and embedded environments (think wearables) may not have the processing power to handle bulk encryption with 256-bit keys. Or maybe data is ephemeral and wiped after a few seconds and therefore doesn’t need “top secret level” encryption. Or maybe there’s just not enough space for a 256-bit key.

Use a key that is strong enough to protect the data within the constraints of the environment and one that can counter the threats to that environment.

Manage your keys properly. You wouldn’t leave the key to your front door taped to the door itself. Hopefully you don’t put it under the doormat either. What would be the point of the lock? The same applies to information security. Don’t encrypt your data with a strong, properly generated data encryption key (DEK) then leave that key under the doormat.

Consider a key vault and use key encryption keys (KEK) to encrypt the data encryption keys. Access to this key vault or key manager should also be suitably locked down and tightly controlled (again, many different ways to do this). Otherwise you might as well just not encrypt your data.

While we’re at it: rotate your keys, especially your KEKs. Key rotation essentially means “key replacement” … and it’s a good idea in case the key or system is compromised. When you replace a key, be sure to overwrite the old one with Fs or 0s to reduce any chance of traceability.

Store those DEKs encrypted with KEKs and protect those KEKs with tools and processes. And remember to balance security with usability: rotating your KEK every 2 seconds might be secure, but is your system usable?
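Here is a rough sketch of the DEK/KEK pattern in Python using the third-party cryptography package. The variable names and the in-memory “vault” are hypothetical stand-ins; in practice the KEK would live in a key manager or HSM.

```python
import secrets
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = secrets.token_bytes(32)         # key encryption key -- would live in a key vault / HSM
dek = secrets.token_bytes(32)         # data encryption key -- protects the actual records

wrapped_dek = aes_key_wrap(kek, dek)  # store the wrapped DEK next to the data, never the raw DEK

# Encrypt a record with the DEK; AES-GCM provides confidentiality and integrity.
nonce = secrets.token_bytes(12)
ciphertext = AESGCM(dek).encrypt(nonce, b"example patient record", None)

# Rotating the KEK: unwrap the DEK with the old KEK, re-wrap with the new one.
# The bulk data itself does not have to be re-encrypted.
new_kek = secrets.token_bytes(32)
rewrapped_dek = aes_key_wrap(new_kek, aes_key_unwrap(kek, wrapped_dek))
```

Notice that rotation only touches the wrapped key, which is a big part of why the DEK/KEK split is worth the extra moving parts.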

Anthem wanted the data to be useful, which is why it wasn’t encrypted at the database. But that usability came at a high cost. The good news is that it is possible to encrypt data and have it be usable.

 


Encryption is a critical, necessary piece of a system’s overall security posture. But it’s not the sole answer. In Anthem’s case, records were accessed via those “elevated user credentials” … which means that malicious hackers were able to get into the authentication server and raise privilege levels of user credentials (usernames/passwords) that they either knew or gleaned from the auth server. So in this case, it’s irrelevant if the breached data was encrypted; the hackers had authenticated and authorized access to it.

So what’s the answer?

When this was first reported, I tweeted about it.

Defense in depth means providing security controls to address all aspects of the system: people, process, and technology. Technology is the most difficult pillar to lock down because there are so many layers and threats, hence so many products such as firewalls, IDP, APT, IDS, SIEM, 2FA, AV, smart cards, cloud gateways, etc.

Encryption is a fundamental element for security of data at rest and data in motion (control plane and data plane). Even the strongest encryption with proper key management won’t protect data that is accessed by an authorized user, because it has to be usable. However, encrypted data and tight management of keys provides a critical, necessary piece to a robust security posture.

I hope this provides some guidance on how to think about encryption and key management in your organization.

 


24 Jan 2015

On Encryption Keys – Part 1 – What Is a Key?

Last week I met with a customer to help solve, among other things, some challenges around key management and key lifecycles. I thought I’d kick off a blog series on keys: what they are, their generation, use, recommended strength, etc.

First, let’s briefly address what a key is: a key is what protects your data. It’s a (hopefully!) secret parameter fed into an encryption algorithm to obfuscate data in a way that only someone with the same key can decrypt the data and read it as intended.*

Here’s how I explained it to my 10-year-old daughter:

Think about the door to our house. When the door is locked, only someone with a key can get inside. (OK, sounds more like authorization, but stay with me.) When inserted and turned, the key hits the pins that trigger the locking mechanism and unlocks the door. That key is the only key that can lock and unlock our door.

While quite elementary in my mind, it’s a relatively good example of the value and importance of the key lifecycle, which I briefly discussed with my daughter after she asked the following questions:

  • What if someone copies the key?
  • What if our neighbors lose our spare key?
  • How do we know if someone else used our key?
  • Does someone else’s key work in our lock?

All are relevant questions in relation to cryptography as well. Over the next couple of weeks, we’ll talk about how keys should be generated, ideal key sizes, and general key management issues and best practices.

Fair warning: there is no single, correct answer. We’ll use this series to address dependencies and variables such as environments, data sensitivity, and threat models.

*This is known as symmetric encryption, where one key encrypts and decrypts data. In asymmetric encryption a public key is used to encrypt data and only its associated private key can decrypt the data.
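For the curious, here is a tiny Python sketch of that symmetric case, using AES-GCM from the third-party cryptography package. The message and key are throwaway examples; the point is simply that the same secret both locks and unlocks the data.

```python
import secrets
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = secrets.token_bytes(32)    # the shared secret -- the "house key"
nonce = secrets.token_bytes(12)  # unique per message

ciphertext = AESGCM(key).encrypt(nonce, b"meet me at noon", None)
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)  # only the same key unlocks it
assert plaintext == b"meet me at noon"
```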

 


5 Jan 2015

My Worry and Optimism for Cybersecurity in 2015


Let’s face it – 2014 was pretty bad from an information security perspective, and I believe we will see a rise in the frequency, severity, and publicity of malicious hacks and breaches in 2015.

I’m worried that as a community, hell, as a society, we won’t see enough progress in this uphill battle of infosec. I’m not blaming anyone or pointing fingers. Security is hard because every organization is different: different people, different policies, different network topologies, different vendors, different missions, etc. (and that is why there is no silver bullet for security). In general, I’m worried about some SMBs that lack the resources to set up a proactive security posture. I’m concerned about some large enterprises that will continue to lag and not fully embrace security.

But… I’m optimistic. Security is at the tip of everyone’s tongue now. It’s “cool” … and cool is good.

SMBs have options for cloud productivity and storage solutions with security built in – at the very least, better security than what they could do themselves. Larger organizations can integrate many different solutions to enable their security posture.

Security is about defense-in-depth, which is to say having security at all layers, from policy and training to two-factor auth and encryption. Aggregate organizational differences can be met with the right technologies in the right place.

I’m optimistic because there are so many good and talented people working very hard to stay ahead of the bad guys. There are new technologies and new ways of thinking. There are VCs willing to fund such companies. There is more adoption and acceptance of security in the marketplace. There are companies with an assigned CISO to keep their business focused on security and out of the news.

So how do we make 2015 better to ease my worrying and reinforce my optimism?

Everyone: keep evangelizing. We have to keep talking about security and encouraging it. We need to think about security in new and emerging markets like wearables and IoT. I think after all the news in 2014, stakeholders are starting to get it. Perhaps we need better / tighter regulations. We need to talk about what’s real, what’s viable, and what’s manageable.

Product vendors: build security into your lifecycle. It’s doable. Microsoft initiated the Security Development Lifecycle with impressive if not astounding results. Cisco is doing it, along with many others. Security is a process. Bake it in to your software development. It’s good for you and your customers.

Buyers: check for the right encryption. Not all encryption is equal. Is your vendor using homegrown encryption written by Joe the Intern? Or is it standards-based? Just because a vendor says they implement AES doesn’t mean they do it correctly. Encryption needs to be implemented correctly to be trustworthy and interoperable. Look for FIPS 140 validation on your preferred vendor’s encryption library or ask for the certificate number.

All businesses: Assess the value of your data and where it resides. Then deploy the right products. Security is a process. Organizational security starts with security risk management, which guides the organization in protecting its assets. Before selecting security controls, the organization must know what data it needs to protect, the value of that data, and the lifecycle of that data. Whether protecting credit card numbers, user files, intellectual property, internal emails, provocative Mardi Gras photos, product roadmaps, money… all of that needs to be protected in an organized and actionable way.


Over time, we’ll explore more in each of these areas. In the meantime, this worrier is optimistic that we will stay focused, deliver, and do our best to make 2015 better.

 


22 Dec 2014

The Sony Hack Just Does Not Matter

Several times this year we’ve heard about hacks and compromised systems (more so than I can remember in recent history), and I have to say I’m truly amazed at all the press on the Sony hack. But why is this garnering so much attention?

Simply put, its effects are felt by a wider audience.

  • Sony cares because of loss of revenue and tarnished reputation.
  • Movie stakeholders (the producers, actors, etc.) care because it could impact them financially. I have never read the relevant agreements for this industry, but I’m sure there is a force majeure clause that will now be subject to an unprecedented interpretation and will set a great deal of legal precedent going forward.
  • Theater owners / workers care because of supposed threats against their establishment, loss of revenue, and the inconvenience of replacing a movie in their lineup.
  • Consumers care because they can’t see a movie with some very funny comedians.

Banks or retailers get hacked and it makes the news for a couple of days and fades. Maybe it’s not serious enough? The Home Depot, Target, and Staples attacks don’t really take anything away from the consumer. They can still shop at those places, albeit with new credit card numbers. So they don’t really feel the effects. An entertainment company is hacked and it’s an act of war… sorry, cyber-vandalism. So much so that the President has weighed in and vowed a response. I guess compromising a retailer is just a nuisance.

Finally, there is a breach that consumers actually care about. The JPMorgan breach doesn’t directly affect the average family. We are, sadly, getting accustomed to being issued new credit cards and putting band-aids on breaches in that industry. We can tolerate the Fortune 50 losing money, but don’t mess with our entertainment. That is intrinsically American.

Perhaps I should rethink this title, as now attackers may have found an avenue that will encourage even more attacks. And let’s face it: we have thoughts of actual war dancing through our heads. This isn’t script kiddies and folks just looking to make a quick buck. These are hackers with nukes.

At SafeLogic we’ve done a fair bit of evangelizing this year, trying to get makers of IoT devices and health wearables to build security in as opposed to treating it as a cost center and a reactive initiative. So with that in mind, let’s think about this:

If halting the release of a movie gets this much attention and buzz, what happens if critical infrastructure is compromised? What if people can’t get water? Or they get only contaminated water? What if the power grid is blacked out? What happens when connected “things” are compromised? These are the absolute scariest scenarios, the effects of which are far more impactful than what you’ve been reading about this week. These effects are real.

Let’s not discover what happens in these “what if” scenarios. We need awareness and we need plans and we need action. I’m hoping that everyone takes the Sony hacks to heart and thinks about what truly matters… Especially this time of year.

Oh, and encrypt your data with SafeLogic’s validated and widely-deployed encryption solutions.
