May 2014 | SafeLogic

Archive for May, 2014

28 May 2014

The Upside of the Heartbleed Bug

Heartbleed was huge.  Massive.  A giant, gaping hole that could be exploited in several ways and somehow went unnoticed for over two years.  It was an embarrassment, a black eye for the OpenSSL Foundation and really for all who use OpenSSL for encryption… which is the majority of the Internet, and most of the world’s internal sites and apps as well.

The first confirmed data losses due to the Heartbleed Bug were on April 14th, when the Canada Revenue Agency lost 900 social insurance numbers (the equivalent of a Social Security Number) in six hours to a determined college student.  Bad?  Yes.  But destructive on the worldwide scale we had believed possible?  Not even close.

So here’s my point.  Heartbleed had a big, fat, silver lining.  In the span of a few days, millions of administrators reset their private keys and reissued their SSL certificates.  We have confirmed very little actual harm caused by the vulnerability, and we have documented millions of websites and apps applying patches, updating their software, resetting their private keys and reissuing certificates.  If only we could inspire this type of prophylactic activity on a regular basis.  It’s like pulling teeth to get users to reset passwords, but one well-publicized breach and folks are clamoring for it.  Many consumers are being proactive and using tools to specifically avoid unpatched websites.  These are steps in the right direction.
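For the curious, that post-Heartbleed scramble can even be spot-checked programmatically.  Here’s a minimal Python sketch (the function name and cutoff handling are mine, not part of any real tool) that flags whether a site’s certificate was issued after the public disclosure on April 7, 2014… a rough hint that its operators rotated keys and reissued:

```python
from datetime import datetime, timezone

# Heartbleed was publicly disclosed on April 7, 2014.
HEARTBLEED_DISCLOSURE = datetime(2014, 4, 7, tzinfo=timezone.utc)

def reissued_after_heartbleed(not_before: str) -> bool:
    """Return True if a certificate's validity start date postdates the
    Heartbleed disclosure -- a rough hint that the site rotated its keys
    and reissued its certificate after patching.

    `not_before` uses the format Python's ssl module reports for a peer
    certificate's notBefore field, e.g. 'Apr 14 00:00:00 2014 GMT'.
    """
    issued = datetime.strptime(not_before, "%b %d %H:%M:%S %Y %Z")
    issued = issued.replace(tzinfo=timezone.utc)
    return issued > HEARTBLEED_DISCLOSURE
```

Feed it the notBefore string from a server’s certificate and a pre-disclosure date is your cue to be suspicious.  Crude, but it is exactly the kind of check those consumer tools can automate.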

Don’t get me wrong.  I won’t be wishing for another Heartbleed.  We have our hands full as it is with the eBays and Targets of the world.  But I’m absolutely certain that there will be another bug… probably worse/bigger/more widespread/more exploited/etc than Heartbleed, and it will be exposed in the fairly near future.  Such is life in this industry.  The ‘next big thing’ always includes the raised stakes inherent in our bigger Big Data, our faster connectivity, and our multiplying endpoints.  Luckily, we are making leaps forward every time we are faced with these threats, and we have very very very smart folks on our side.

My bigger concern had been that we would become jaded and tuned out to the dangers.  Target and eBay dropped the ball on their crisis responses, but banks and credit card companies responded swiftly and effectively.  Anecdotally, I have talked to a lot of people who were prompt to reset personal passwords and treat their identity protection with the proper level of respect and attention that it deserves.  The strong performance of site administrators and product architects worldwide in their response to Heartbleed has shown me that we have many reasons to be optimistic.  Here at SafeLogic, we had patches rolling out within hours of the announcement, and we were not alone.  As we approach the tipping point toward the Internet of Things, our vigilance must remain strong, and the industry’s unified response to Heartbleed has actually helped me sleep better at night.


21 May 2014

IoT: The Internet of Toilets?!

I recently read a humorous but forward-thinking post on Wired, espousing the potential use cases for an internet-connected toilet, complete with various sensors and capabilities.  The writer, Giles Crouch, nailed a few awesome scenarios, such as pregnancy detection, stool analysis, and hangover cures.  Yes, I’m a sucker for technology and I already want an iToilet, Giles… but only if they build it with security in mind.  The alternative brings to mind the 1937 Donald Duck cartoon, ‘Modern Inventions’.  You know how it ends… one disaster after another.


For example, early pregnancy detection is brilliant!  Until you leave your pregnant wife home while on a business trip, and some criminal genius figures out that he can scan the neighborhood for homes in which the only urine collected belongs to a pregnant woman.  That would be valuable information for someone with ill intentions and should be encrypted and guarded like your better half herself.  [Note: The same hormone levels could indicate testicular cancer in a man as well, but it would be a statistical long shot.  Not enough to discourage a criminal from playing the odds.]

The automatic stool sample is an excellent feature.  It’s the hypochondriac’s dream.  Every sample submitted would be analyzed and advisories would be offered regularly.  Well, as regularly as the patient, at least.  The rate of car accidents may rise, as Mr. John Doe rushes home at lunchtime to make sure his contribution wouldn’t be wasted on the traditional ‘dumb’ toilet at the office.  But potentially more dangerous: when humans take medical advice from a machine, you had better be sure that the machine can’t be hacked.

“Mr. Doe, your sample shows a few deficiencies.  Please drink one quart of Drano to rebalance your system.”
Hey, if my iToilet told me, it must be accurate.  Drano… whodathunkit.
That’s a mistake you can’t make twice.
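Joking aside, there is a well-understood first step here: the device should refuse to display any advisory it can’t authenticate.  A minimal sketch using Python’s standard hmac module (the message format and shared-key setup are purely illustrative, not any vendor’s actual protocol) might look like this:

```python
import hashlib
import hmac

def sign_advisory(key: bytes, advisory: bytes) -> bytes:
    """Server side: tag each advisory with an HMAC-SHA-256 so the
    device can detect tampering in transit."""
    return hmac.new(key, advisory, hashlib.sha256).digest()

def verify_advisory(key: bytes, advisory: bytes, tag: bytes) -> bool:
    """Device side: display the advisory only if the tag checks out.
    compare_digest avoids leaking timing information."""
    expected = hmac.new(key, advisory, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

An attacker who swaps “drink more water” for something toxic can’t forge a valid tag without the key, so the tampered message is simply dropped.  Note that HMAC gives integrity, not secrecy; the health data itself still needs to be encrypted.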

Further, if that smart toilet is connected to both your calendar and your doctor’s appointment book, just imagine the sh!t show (pun intended) if that appointment data were intercepted in plain text by a malicious third party.  You might spend all day in the waiting room of a doctor who does not have you on the calendar, while your house is raided because your door lock app was compromised as well.

Ah, yes.  The future holds a great deal of creature comforts in automation… if we can just get the security dialed in first.

Now without further ado (or toilet jokes), here’s the one and only Donald Duck in ‘Modern Inventions’.  Cheers!



14 May 2014

The Real Truth About Wearables

I keep reading about wearable tech’s ‘Dirty Little Secret’… the fact that most wearable devices are shelved within three months of initial use.

Does this shock you?  No?  Good.  Me neither.
And I’m not worried about it.


If you’re reading this post, you’re no stranger to the phenomenon of the Consumerization of IT, or CoIT.  (It almost looks naked without the hashtag!  #CoIT.  That’s better.)  It’s also referred to as the ITization of Consumers, which doesn’t have the same ring to it, but is actually more accurate when describing the shift towards more sophisticated and savvy users.  Today’s enterprise employees don’t need a designated geek to configure and deploy a piece of equipment.  In fact, they usually prefer to set it up themselves, since nobody knows their needs and preferences better.  Some blame the millennials, but that’s just not the full picture.  This trend had been manifesting as Shadow IT since before the millennials went to prom.

I bring up CoIT because it is the embodiment of today’s tech culture.  Everyone wants to use the newest, hottest devices, and they prove it every day, with or without IT’s help or blessing.  Everyone wants to be an early adopter now.  Everyone wants to try the latest and greatest, which is absolutely stellar.  Not every device is going to be a hit, but we are okay with that.  At this point, a wearable device with strong universal adoption would be the exception to the rule.  So in this period of ‘fail fast’ versions, who better to beta test new wearables and subject them to real world conditions than us?

The same research that reports the three-month abandonment interval also estimates that over 10% of adult Americans have purchased at least one of these devices.  If we included Bluetooth devices, you better believe that number would skyrocket.  Subtract the population that is – sorry, I’ll just say it – too damn old to mess with these new-fangled doohickeys, and we are approaching an impressive market penetration for wearables without any delusions that it is a matured technology.  As a culture, we have demonstrated our appetite for wearables by continuing to buy and try them.  There is a certain sense of pride associated with being an Explorer, Pilot, or Kickstarter participant.

Bottom line – I’m not surprised by, or discouraged by, this report.  Wearables are still nascent, like a recent graduate backpacking through Europe, searching for motivation and identity in an existential haze.  We should embrace it as it is formed, molding it to our vision.  We shouldn’t push it away and complain that it is undeveloped.  We need to try every device that we can get our hands on.  We need to speak up and give strong feedback.  Offer opinions publicly, so that others can echo or debate, in the plain view of the innovators who will give us exceptional, can’t-live-without-them wearables one day soon.

And of course, don’t forget to demand strong security in every piece of technology that we carry on our bodies.  Don’t forget how crucial it is to protect ourselves, and that includes our personal data.

We can make a difference in wearables.  Try, test, and critique.  Rinse and repeat.


6 May 2014

Securing the Internet of Things

Today’s blog entry is from our partners at Weaved.

Weaved is a cloud services company that provides nearly 4 million IoT device connections per month over the Internet.  We published a joint press release in April, announcing the partnership between SafeLogic and Weaved, and describing how we are working together to make the IoT secure.


The Internet of Things holds tremendous promise for driving the next wave of economic growth in Internet-connected devices and applications.  Our smartphones have become the remote control for our lives and give us access to the Internet and our networked devices 24/7.  It’s easy to see that soon nearly every industrial and consumer electronics product will require some kind of app control as a standard feature.  Unfortunately, the Internet remains a publicly accessible and insecure environment for devices, and every network is only as secure as its weakest link.

Right now, IoT devices are notorious for being that weakest link.  They have earned this reputation by ignoring security best practices and focusing only on local connectivity.  As a result, malicious tools have been developed, like search engines on the public Internet that scan and search for open ports on devices.  So for mass-market consumer adoption of IoT, device makers must really step up their efforts to apply some well-established security best practices and win back public trust.
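Those scanning engines aren’t magic; at bottom they simply attempt TCP connections and record which ones succeed.  A hedged sketch of that basic primitive in Python (function name mine, and real scanners add fingerprinting on top):

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection; a completed connect means something is
    listening.  This is the primitive underneath every port scanner --
    connect_ex returns 0 on success and an errno on failure."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0
```

Run that across address ranges and port lists and you have the core of the search engines described above, which is exactly why a device with no open ports has nothing for them to find.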

At SafeLogic and Weaved, we believe that a common sense approach to security in IoT must include:

1.  No Port Forwarding and No Open Ports on Devices

Port forwarding allows remote computers on the Internet to connect to a specific device within a private local-area network (LAN).  It’s an open door to your LAN from the outside and there is a surprisingly large installed base of devices that use this technique.  Weaved has developed a proprietary method of addressing and securely accessing any TCP service (Port) over the Internet without the use of port forwarding.  With Weaved’s technology, ports can even be shut down and appear as invisible to malicious “port-sniffers” and search engines.
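Weaved’s actual mechanism is proprietary, but the general outbound-only pattern is easy to illustrate.  In the sketch below (all names hypothetical, with localhost standing in for the cloud), the relay is the only listener anywhere; the device dials out and answers requests over its own outbound connection, so it never exposes an open port:

```python
import socket
import threading

def relay(srv):
    """The (hypothetical) cloud relay -- the ONLY listener in the system.
    The device registers first, then a user connects; the relay just
    shuttles bytes between the two inbound connections."""
    device_conn, _ = srv.accept()
    user_conn, _ = srv.accept()
    device_conn.sendall(user_conn.recv(1024))  # forward request to device
    user_conn.sendall(device_conn.recv(1024))  # forward reply to user
    device_conn.close()
    user_conn.close()

def device(addr, registered):
    """The IoT device: no listening socket, no open port, no port
    forwarding.  It dials OUT to the relay and serves requests over
    that single established connection."""
    conn = socket.create_connection(addr)
    registered.set()  # registered with the relay; users may connect now
    request = conn.recv(1024)
    conn.sendall(b"OK: " + request)
    conn.close()

def demo():
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # stand-in for the public relay address
    srv.listen(2)
    addr = srv.getsockname()
    registered = threading.Event()
    threads = [threading.Thread(target=relay, args=(srv,)),
               threading.Thread(target=device, args=(addr, registered))]
    for t in threads:
        t.start()
    registered.wait()
    user = socket.create_connection(addr)  # the user also dials out
    user.sendall(b"STATUS")
    reply = user.recv(1024)
    user.close()
    for t in threads:
        t.join()
    srv.close()
    return reply
```

The design choice worth noticing: a port scanner probing the device’s address finds nothing to connect to, because every socket the device owns was opened outbound.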

2.  Trusted and Validated Encryption End-to-End

A lot of IoT devices today are storing or sending data across the Internet with weak encryption or even in the clear.  Even trusted companies like Skype have been criticized for allowing unencrypted media in their data path.  Weaved’s cloud services are already using unique, encrypted session keys per connection.   Going forward, Weaved and SafeLogic will collaborate to bring SafeLogic’s trusted and verified encryption engines to the platform for applications that demand that level of security.
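The details of Weaved’s key handling aren’t public, but the underlying idea – derive a fresh key per connection from a long-term secret plus a random nonce, so no two sessions ever share a key – can be sketched with Python’s standard library.  The labels and lengths here are illustrative, not Weaved’s actual scheme:

```python
import hashlib
import hmac
import os

def derive_session_key(master_secret: bytes, session_nonce: bytes) -> bytes:
    """HKDF-style extract-then-expand with HMAC-SHA-256.  A fresh nonce
    per connection yields a unique key per session, so compromising one
    recorded session does not expose any other."""
    prk = hmac.new(session_nonce, master_secret, hashlib.sha256).digest()  # extract
    return hmac.new(prk, b"session-key\x01", hashlib.sha256).digest()      # expand

def new_session(master_secret: bytes):
    """Per connection: a random nonce (which may travel in the clear)
    and the derived session key (which never does)."""
    nonce = os.urandom(16)
    return nonce, derive_session_key(master_secret, nonce)
```

Both ends can compute the same key from the shared secret and the exchanged nonce, while an eavesdropper who sees only nonces learns nothing useful.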

These are just a couple of measures needed to protect your local network from being compromised.  There’s much more to cover on this topic, so expect to hear more from Weaved and SafeLogic in the near future, as we define and deploy our joint roadmap and services.


2 May 2014

Warning: Plan Your Validation Carefully

I’m always interested in the comments of engineers who recently completed a FIPS 140-2 evaluation.  It’s like the entire team had a meeting and played ‘Not It’, sealing the poor bastard’s fate for the next year and a half or so.  Seems fair, right?



It really isn’t their fault.  Maybe they contributed to a NIST evaluation early in their career and made the mistake of putting it on their resumé.  Maybe they were cocky and volunteered, figuring that it couldn’t be ‘that hard’ or ‘that time consuming’.  Or maybe they simply had the misfortune of being late that day.  Regardless, they became responsible for a process that doesn’t always make logical sense to an engineer, one in which seemingly small early decisions have major ramifications for the entire lifespan of the product in question.

In some cases, veteran engineers with a pedigree in cryptography still get aggravated and befuddled by the inner workings at the CMVP.  The inspiration for this blog entry came from our friends at Oracle.  Darren Moffat, a Senior Principal Software Engineer based in the UK, vented about his experience in a post titled ‘Is FIPS 140-2 actively harmful to software?’.

Before we go any further, the answer is no.  FIPS 140-2 is definitely not harmful.

Darren’s frustration centers around the establishment of validation boundaries.

Why does the FIPS 140-2 boundary matter?  Well, unlike in Common Criteria with flaw remediation, in the FIPS 140-2 validation world you can’t make any changes to the compiled binaries that make up the boundary without potentially invalidating the existing validation.  Which means having to go through some or all of the process again and, importantly, this costs real money and a significant amount of elapsed time.

He’s absolutely on point.  The boundary is a crucial strategic decision for every validation, and as a vendor pursuing a FIPS certificate, you want to set it carefully.  There’s no sense validating and locking in features that will require future updates.  From a user standpoint, this is exactly as it should be.  By ensuring that any changes within the boundary require re-testing, buyers can be confident that a product’s encryption module has been fully vetted in its current form.

Moffat goes on, asserting that engineers ought to be able to issue patches and bug fixes without invalidating the FIPS certificate.  I agree completely!  This is precisely why SafeLogic’s CryptoComply family of validated cryptographic modules maintains a tight boundary.  The core crypto libraries are tested and validated, then left intact while the rest of the vendor’s product can be updated as needed.  Users know that the encryption within is of the highest quality, and there are no negative side effects of active updates from the provider.  This is a win-win, and it all stems from establishing the correct boundary for the CMVP.
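That boundary discipline is enforced in practice by the power-up integrity test FIPS 140-2 requires: the module verifies a keyed digest of its own binary before entering FIPS mode, so anything inside the boundary is effectively frozen while everything outside it can change freely.  A toy Python sketch of the idea (the key and the “module bytes” are obviously illustrative):

```python
import hashlib
import hmac

# Hypothetical integrity-check key, embedded when the module inside
# the validation boundary was finalized.
INTEGRITY_KEY = b"not-a-real-key"

def module_fingerprint(module_bytes: bytes) -> str:
    """HMAC-SHA-256 over the module binary, in the spirit of the
    power-up integrity test FIPS 140-2 mandates."""
    return hmac.new(INTEGRITY_KEY, module_bytes, hashlib.sha256).hexdigest()

def power_up_self_test(module_bytes: bytes, expected: str) -> bool:
    """Refuse to enter FIPS mode if the bytes inside the boundary have
    changed.  Code OUTSIDE the boundary never enters this check, which
    is why a tight boundary lets the rest of the product update freely."""
    return hmac.compare_digest(module_fingerprint(module_bytes), expected)
```

Patch anything outside the boundary and the fingerprint, and therefore the certificate’s relevance, is untouched; patch the validated core and the self-test fails, which is exactly the guarantee buyers are paying for.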

I don’t agree with Moffat that most customers don’t care about FIPS 140-2, and I don’t agree that the customers who do care are merely checking a box, unconcerned with whether the certificate is still valid.  Oracle’s commitment to updating and patching their software is fantastic, but it should not come at that cost.  They invested a great deal in getting that certificate, and it should not be pushed aside so easily.  Earning a FIPS 140-2 validation requires time, money and commitment.  (Significantly less of all three if you use SafeLogic’s RapidCert, but still enough to be relevant.)  If done correctly, there should not be a choice between validated crypto and properly updated software.  That is a toxic ultimatum for both the provider and the user, and it should be avoided.

To share your thoughts and stories from the trenches, tweet at us @SafeLogic!