Category Archives: Uncategorized

ResortQuest, Wyndham, Home Away, 2 inches of water, vomit stench and bad management.

We recently had a family vacation in Miramar Beach, Florida, where we stayed at the SurfSide in unit #502. The experience was, shall we say, less than we expected. The management company was incompetent, did not meet their legal requirements when responding to an emergency, and left us in a bad situation despite a ton of attempts on our side to work with them.

To top things off HomeAway refused to provide even the most minimal levels of assistance when the management company failed to live up to their obligations.

Below is my unedited review of that experience; hopefully it will help someone in the future.

This is our second stop while in the Destin area. Our first was as described and a great experience provided by Southern Vacation Rentals; this stay, however, left us, quite literally, with a bad taste in our mouths.

Pulling up to the building, it was obvious it had seen better days, but on other vacations we have had similar thoughts and been pleasantly surprised when we got to the room.

This time, however, when we opened the door a sour smell reminiscent of dried vomit🤮 engulfed us. A quick inspection revealed that it originated from the couches and the rugs. They, like everything in this unit, are well used and poorly taken care of. It's clear this is a rental unit where only tenants or service people visit, as I have to believe an owner would take care of the things we had to deal with.

We contacted the management firm and they said they would send house cleaning to take care of the smell. House cleaning did not show up, so to manage the smell we had to put the couch cushions and slipcovers on the deck, as it would not be possible to stay in here with that odor. We called again that night and were told someone would be here in the morning. They did not show.

The furniture is ready to be replaced, the refrigerator was dirty inside and out, and, to top it off, the wires that power the light inside it are hanging out of a cracked housing.

The unit appears to have been remodeled over a decade ago but only minimal maintenance has been done since then.

Though it is clear the room was lightly cleaned prior to our visit, I doubt it's had a real deep cleaning in a long time: there was splattered food on the wall and the drinking glasses were sticky, so we washed them all on arrival.

To top off the above we tried to do a load of laundry and the washer flooded the apartment bathroom, hallway, master bedroom and the hallway leading to the unit.

We called the management firm and they said they could not get ahold of maintenance and asked us to spend our evening cleaning up the water as best we could, which we did. They did offer to have someone come in the morning (sound familiar?) to take care of what was left.

It was clear the management firm was concerned about the potential damage, but they did not seem to care about our situation at all. Since they couldn't reach maintenance, we also could not get replacement towels, so no showers in the morning.

I should note that it’s clear this flooding has happened before because the trim in the hallway of the unit shows clear signs of past water damage.

We asked the management firm to move us to a different unit; after all, if vomit stench and a flood were not enough to justify that, what would be? Unfortunately, the best they could offer was a single night at another unit 30 minutes away, with a return the next day. Since it was already midnight and it would have only been for that night, we passed.

If you recall, they said someone would come in the morning to take care of the flooding; you guessed it — they never showed up.

We called again in the evening and spoke to the manager for the site. He apologized for the lack of response to our prior calls and promised someone would be here that night to clean up and to provide towels, since we have six people and no towels. Of course, no one showed up with towels.

We also tried to warm milk in the microwave today, but it too doesn't work; yet another work order has been filed.

On our last day, the manager contacted us asking if we had gotten the towels he had sent; we had not. A few minutes later towels did show up, so we now have enough towels for a week, but we leave in the morning.

The manager did finally offer a concession for this ridiculousness: $150 for the inconvenience. It took three of us three hours to clean up the water alone, so it looks like they are valuing our time at a little over $15 an hour. They couldn't care less about the inconvenience (no towels, no clean clothes, no microwave, disgusting odor, time lost doing basic housekeeping, time wasted trying to get them to do their jobs, etc.), and then there is the intangible damage they did to our vacation, on which they place no value.

The reality is that, beyond the wasted time and inconvenience, we were able to use less than half of this unit. The living room was largely unusable due to the lack of a place to sit, the deck was at least 1/4 unusable as it held the stinky couch bits, and there was no laundry, no cooking, and no towels.

I guess it's only fair to share the good stuff too. While dealing with vomit stench, cleaning up a massive amount of water and failing to use the appliances, we had an opportunity to take in the location's fantastic view. If you choose to stay here despite what I've shared, rest assured you will have an amazing view of the gulf and a nice deck to enjoy it from while you're not cleaning up a mess.

 

CAs and SSL and Phishing Oh My!

NOTE: This post reflects my personal beliefs and not necessarily those of my employer, Google, or of Let's Encrypt, where I am a member of their Technical Advisory Board.

Introduction

Recently Vincent from The SSL Store published a blog post calling out Let's Encrypt for issuing certificates to domains that contain the word PayPal.

The TL;DR of his post is that he believes Let's Encrypt is enabling phishers by issuing them SSL certificates that contain the word "PayPal" and then refusing to revoke those certificates when arbitrary third parties ask them to.

As a result of his post, several news sources have decided to write articles about how “Let’s Encrypt” is acting as an enabler of these Phishers [1] [2].

Unfortunately, Vincent's post and the associated articles don't cover this in the most complete and balanced way, so over my morning coffee today I decided to write this post to discuss the other side of the argument.

If this is a topic that interests you please also check out the Let’s Encrypt blog post where they talk about why they have taken this position.

Exploration

Let’s explore the opportunities CAs have to check for phishing, the tools they have available to them, the effectiveness of those tools, the consequences of this approach, how complete a solution based on the tools available to them would be and what the resulting experience would be for users.

Opportunities

Historically, the role of a CA in the WebPKI has been that of a passport office: you present proof that you control a domain, and possibly that you are an authorized member of an organization, and you get a digital certificate that attests to that.

This certificate could be valid for up to 1095 days. Once the certificate is issued the CA, largely speaking, has no natural opportunity to verify this information again. It is worth noting that this month the CABForum voted to shorten this period to 825 days.

Tools

In the event a CA determines it made a mistake in the issuance of a certificate or has been notified by the subscriber they would like to see a certificate marked invalid, the tool they have available to them is called “revocation”.

The two revocation mechanisms under the control of a Certificate Authority are Certificate Revocation Lists (CRLs) and OCSP responses. The first is like a phone book of all known "revoked" certificates, while the second is more like a lookup service that enables user agents to ask for the status of a particular certificate.
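
To make this concrete, here is a rough sketch of how either mechanism can be exercised with OpenSSL; the file names and responder URL are placeholders, and the real distribution points are listed inside the certificate itself:

# Find where the CA publishes its CRL and OCSP responder for a given certificate
openssl x509 -in cert.pem -noout -text | grep -A 2 -E "CRL Distribution|Authority Information"

# Download and inspect the "phone book" of revoked serial numbers
openssl crl -inform DER -in ca.crl -noout -text

# Ask the CA's OCSP responder about this one certificate
openssl ocsp -issuer issuer.pem -cert cert.pem -url http://ocsp.ca.example -resp_text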

Effectiveness

Earlier we discussed the lifetime of certificates. This is important to understand because the large majority of phishing sites do not start out as phishing sites; as such, issuance-time checks seldom net positive results.

After issuance, this leaves you with periodic checks of the site, third-party reports of phishing, and relying on revocation checking as an enforcement mechanism. This is a recipe for failure; there are a few reasons for this, but one of the more significant is the general ineffectiveness of revocation checking.

Revocation checking is the most taxing thing a CA does. This is because the revocation mechanisms available to them result in every relying party contacting them to download an OCSP response or CRL covering that certificate.

As a result, OCSP has a tendency to be both slow and unreliable. This forced browsers to implement this check as a "soft fail": in other words, if the connection times out or fails for some reason, they assume the certificate is good.

To give that some context, about 8% of all revocation checks done by Firefox fail, and the median response time is over 200 ms.

As a result, in 2012 Chrome, which is used by about 50% of all users, more or less disabled revocation checking except in exceptional circumstances.

What this means is that revocation checking, even for its intended purpose, is far from an effective tool. Expanding its use to include protecting users from phishers would not improve its effectiveness and arguably it would (due to the infrastructure implications) make it even less reliable.

It is also important to note that every wildcard certificate can be used for a hostname containing "PayPal" without the CA ever being made aware; a good example is https://paypal.github.io/, which is protected by a wildcard certificate issued to GitHub.
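
You can verify this yourself; assuming a standard OpenSSL install, the following sketch pulls the certificate served for paypal.github.io and prints the names it covers, which show a wildcard issued to GitHub rather than anything PayPal-specific:

# Fetch the served certificate and list the subject and the names it covers
openssl s_client -connect paypal.github.io:443 -servername paypal.github.io </dev/null 2>/dev/null |
  openssl x509 -noout -text | grep -E "Subject:|DNS:"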

Consequences

To understand the consequences of expanding the CA's role to include protecting us from phishing, we first need to understand what a certificate represents, or more importantly, what it does not represent. It does not represent the content; it represents the host that is serving the content, and it is the content that "phishes".

Today, in the age of cloud services, there is a good chance the host that is serving the content is a service operated by WordPress, or maybe Amazon’s S3. These services allow users to sign up and post arbitrary content for free or very little money.

If we decide that revocation checking is the right tool to get phishing content off the web, we would be saying a CA should revoke WordPress's certificate if one of its users posted something someone reported as phishing content. That would, for the situations where revocation checking takes place and happens to work, take WordPress off the Internet. Is that what we want to happen?

If so, who is it we are asking to perform this task? There are well over 400 CAs in the Microsoft Root Program; do we believe these are the right organizations to be policing the internet for the appropriateness of content?

If so what criteria should they use to do so and what do we do if they abuse this censorship role?

Completeness

It is easy to say that a CA should not issue a certificate if it contains the word "PayPal". I could even see an argument that those who would be hurt by such a rule, for example http://www.PayPalSucks.com and (a theoretical) PayPalantir.com, are an acceptable loss.

This would, however, not catch homoglyphs, such as when a Cyrillic "а" is used instead of the Latin "a". Catching those would very likely require a manual review of the name and content to determine the intent of the domain owner, which is near impossible to do with any level of accuracy or fairness.

Even with that, what about ING? As one of the world's largest banks, they too are commonly phished. Should a CA be able to issue a certificate to https://www.fishing.com? And if it does, and the issuing CA receives a complaint that the site is phishing ING, what should it do?

And what about global markets and languages? In Romania there is a cleaning company called Amazon; should anyone be able to request that its certificate be revoked because its name contains the word Amazon?

If we promote the CA to content police, how do we do so in a complete way?

User Experience

With CAs acting as the content police, what would a user see when they encounter a revoked site? While it varies from browser to browser, the experience is almost always a blocking "interstitial", for example:

[Screenshots: the revoked-certificate interstitials shown by Chrome and Firefox]

If you look closely you will see these are not screens that you can bypass; revoked sites are effectively removed from the internet.

This is in contrast to Safe Browsing and SmartScreen, which were designed for this particular problem set and therefore provide the user a chance to visit the site after a contextually relevant warning:

[Screenshots: the Safe Browsing and SmartScreen warnings]

Conclusion

I hope you see from the above that relying on Certificate Authorities as content police as a means to protect users from phishers is a bad idea; at a minimum, it would be:

  • Ineffective,
  • Incomplete,
  • Unmanageable,
  • and Duplicative.

But more importantly, it would establish a large, loosely managed group as the de facto content censors on the internet, and as Steven Spielberg said, there is a fine line between censorship, good taste, and moral responsibility.

So what should CAs do about phishing then? It is my position that they should check the Google Safe Browsing API prior to issuance (which, by the way, Let's Encrypt does), and they should report phishers to the Safe Browsing service if they encounter any.
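
For the curious, here is a minimal sketch of what such a check looks like against the Safe Browsing Lookup API (v4); the API key, client identifier and URL are placeholders, and a CA would obviously batch and automate this:

curl -s -X POST "https://safebrowsing.googleapis.com/v4/threatMatches:find?key=YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "client": {"clientId": "example-ca", "clientVersion": "1.0"},
        "threatInfo": {
          "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
          "platformTypes": ["ANY_PLATFORM"],
          "threatEntryTypes": ["URL"],
          "threatEntries": [{"url": "http://example.com/"}]
        }
      }'
# An empty {} response means no match; a response containing "matches" means the URL is flagged.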

It is also important to answer the question of what users should do to protect themselves from phishing. I understand the desire to say there is only one indicator they need to worry about; it's just not realistic.

When I talk to regular users I tell them to do three things. First, use an up-to-date, modern browser that uses SmartScreen or Safe Browsing. Second, only provide data to sites you know, and only over SSL. And finally, try to provide sites information only when it was you who initiated the exchange.

P.S.

Thanks To Vincent Lynch and the others who were kind enough to proof this post before publishing.

How do you show someone they are important to you?

Showing someone they are important to you is not always straightforward. Hackers, on the other hand, seem to have figured this out.

Over the years I have designed, built and operated a number of WebPKI Certificate Authorities. In these roles, one thing that has been common is that hackers are always willing to use their finest exploits for a chance at the keys that would let them impersonate any site on the Internet.

That is why it’s not uncommon for an attacker to spend a zero-day on a WebPKI CA, and when they do I would argue that is their way of showing you they think you are important.

The black market for high-quality remote exploits that have never seen the light of day is not exactly small. Exploits like this are essentially a rapidly depreciating asset, once they become known the clock starts ticking and their value decreases until it hits zero.

In a past role, over a period of about three years, we had at least three "zero-days" spent on us, and I was flattered.

Attackers are no different than everyone else: they prefer the easy path, so they often go after the oldest and cruftiest stuff you are dependent on, and Adobe Acrobat was one such component for us.

Acrobat has had nearly 800 CVEs in the last 18 years. Each CVE often covers many related vectors, but if we are generous and say one CVE equals one exploit, that averages out to over 40 a year. Now, not all of those issues meet the same bar, but it's one of the most attacked pieces of software out there.

What got me thinking about this was a great post on mitigation vs isolation and the different strategies used by Chrome and Edge in their security program.

While you clearly need to employ both strategies, perfection is impossible to achieve, which means isolation and least privilege end up being among the most powerful tools in your mitigation toolbox.

This is one of the reasons I like projects like PDF.js, Electron and Progressive Web Applications: they inherently get the benefit of browser isolation and mitigation.

The trend toward using web applications like these makes it harder on attackers, and that's a good thing.

Do PDFs have to be so frustrating?

Did you know the PDF format is almost as old as the web itself? Despite its age, it is still the only true cross-platform "paper like" experience available on the web. Unfortunately, it is also one of the most frustrating file formats on the web.

The primary reason for this frustration is rooted in the format itself. Its maintainers, like most maintainers of old technology still in use, have made a half-hearted attempt to keep pace with the internet as it evolved. Those efforts are typically rooted in the philosophy of wanting to have your cake and eat it too: specifically, they have tried to build a mega-format that retains 20+ years of backward compatibility while bolting more features on.

This combination has resulted in a very complicated format with more options than you can shake a stick at. Many of these features are either not documented or, worse, are documented incorrectly. ISO has been making an effort to address the documentation issue, and the latest documents are much better, but the fact remains that PDF is both old and complicated.

Generally, this means application developers that need to support PDF have to choose between defining some minimal profile of the standard that fits their needs and building their own libraries, or licensing a more complete library with onerous terms and fees attached.

To complicate things further these commercial libraries are usually written in languages like Java, C and PHP which don’t exactly represent the most modern development platforms for the web.

As a result, almost all solutions that work with PDFs, where the PDF itself is not the "product", do the natural thing: create an image and wrap it in a PDF file. They call this approach "flattening" the document; I can only assume this is to make it sound less hacky.

While there are legitimate cases to flatten a document in this way it causes a few problems, for example:
– Accessibility tools like screen readers no longer work,
– You can no longer select text,
– You can no longer extract the field data entered into forms,
– The document can no longer reflow to be readable on smaller screens,
– And more…

An interesting observation is that nearly every document signing solution I have seen seems to flatten the document as well. From an engineering standpoint I understand why they make this decision: it's much easier to do, and there is an argument to be made that, long term, images are easier to handle. With that said, the downsides of this approach are significant.
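
A quick way to see the cost of flattening for yourself is to try extracting text from such a file; assuming poppler's pdftotext is installed and flattened.pdf is a placeholder for one of these image-wrapped documents:

# A properly authored PDF prints its text; a flattened one prints essentially nothing
pdftotext flattened.pdf - | wc -c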

At Microsoft we used to talk about being "authentically digital", the idea being that you want to embrace the good things about the physical world's way of doing things, but you also need to be true to the technology.

The technical baggage of the PDF format and the lack of freely available SDKs basically put developers in a no-win situation forcing them to give up the best parts of the digital medium if they want to work with these documents.

Are these non-flattened documents a replacement for the more modern file formats? No, they are not; but equally, at least when you consider cross-platform needs, those formats are not a replacement for PDF either.

In short, I think PDFs do not need to be so frustrating! However, if we are going to keep using the format we need to go the extra mile to retain their digital goodness.

The PKCS#12 standard needs another update

PKCS#12 is the de facto file format for moving private keys and certificates around. It was defined by RSA and Microsoft in the late 90s and is used extensively by Windows. It was also recently added to KMIP as a means to export key material.

As an older format, it was designed with support for algorithms like MD2, MD5, SHA1, RC2, RC4, DES and 3DES. It was recently standardized by the IETF as RFC 7292, and the IETF took this opportunity to add support for SHA2 but did not make an accommodation for any mode of AES.

Thankfully PKCS #12 is based on CMS, which does support AES (see RFC 3565 and RFC 5959). In theory, even though RFC 7292 doesn't specify a need to support AES, there is enough information to use it in an interoperable way.

Another standard used by PKCS#12 is PKCS #5 (RFC 2898), which specifies the mechanics of password-based encryption. It provides two ways to do this, PBES1 and PBES2; more on this later.
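
If you want to see which of the two schemes a given file actually uses, the algorithm OIDs are visible in an ASN.1 dump; a rough sketch, where key.pem is a placeholder for any encrypted PKCS#8 key (add -inform DER for a binary .p12 file):

# The password-based encryption OIDs appear in the dump
openssl asn1parse -in key.pem
# PBES2 files show the PBES2 and PBKDF2 OIDs followed by the cipher (e.g. aes-256-cbc),
# while PBES1-style files show a single OID such as pbeWithSHA1And3-KeyTripleDES-CBC.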

Despite these complexities and constraints, we wanted to see if we could provide a secure and interoperable implementation of PKCS#12 in PKIjs since it is one of the most requested features. This post documents our findings.

Observations

Lots of unused options

PKCS#12 is the swiss army knife of certificate and key transport. It can carry keys, certificates, CRLs and other metadata. The specification offers both password and certificate-based schemes for privacy and integrity protection.

This is accomplished by having an outer “integrity envelope” that may contain many “privacy envelopes”.

When using certificates to protect the integrity of its contents, the specification uses the CMS SignedData message to represent this envelope.

When using passwords it uses an HMAC placed into its own data structure.

The use of an outer envelope containing many different inner envelopes enables the implementor to mix and match various types of protection approaches into a single file. It also enables implementers to use different secrets for integrity and privacy protection.

Despite this flexibility, most implementations only support the following combination:

  1. Optionally using the HMAC mechanism for integrity protection of certificates
  2. Using password-based privacy protection for keys
  3. Using the same password for both the integrity and the privacy protection

NOTE: OpenSSL was the only implementation we found that supports the ability to use a different password for the "integrity envelope" and the "privacy envelope". This is done using the "twopass" option of the pkcs12 command.
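
For reference, a sketch of how that option is used when creating a file (key.pem and test.cer are placeholders; OpenSSL prompts for the two passwords separately):

# Use one password for the privacy (encryption) envelope and another for the integrity MAC
openssl pkcs12 -export -twopass -inkey key.pem -in test.cer -out twopass.p12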

The format's flexibility is great. We can envision a few different types of scenarios one might be able to create with it, but there appears to be no documented profile making it clear what is necessary to interoperate with other implementations.

It would be ideal if a future incarnation of the specification provided this information for implementers.

Consequences of legacy

As mentioned earlier, there are two approaches for password-based encryption defined in PKCS#5 (RFC 2898).

The first is called PBES1. According to the RFC:

PBES1 is recommended only for compatibility with existing applications, since it supports only two underlying encryption schemes, each of which has a key size (56 or 64 bits) that may not be large enough for some applications.

The PKCS#12 RFC reinforces this by saying:

The procedures and algorithms defined in PKCS #5 v2.1 should be used instead. Specifically, PBES2 should be used as encryption scheme, with PBKDF2 as the key derivation function.

With that said, it seems there is no indication in the format of which scheme is used, which leaves an implementer forced to try both options and "just see which one works".

Additionally, it seems none of the implementations we encountered follow this advice; they only support the PBES1 approach.

Cryptographic strength

When choosing a cryptographic algorithm, one of the things you need to be mindful of is its minimum effective strength. There are differing opinions on what each algorithm's minimum effective strength is at different key lengths, but normally the difference is not significant. These opinions also change as computing power increases along with our ability to break different algorithms.

When choosing which algorithms to use together, you want to make sure all the algorithms in the construction offer similar security properties. If you do not do this, the weakest algorithm is the weakest link in the construction. It seems there is no guidance in the standard on this topic; as a result we have found:

  1. Files that use weak algorithms to protect strong keys,
  2. Files that use a weaker algorithm for integrity than they do for privacy.

This first point is particularly important. The most recently available guidance for minimum effective key length comes from the German Federal Office for Information Security, the BSI. These guidelines, when interpreted in the context of PKCS #12, recommend that a 2048-bit RSA key be protected using SHA2-256 and a 128-bit symmetric key.

The strongest symmetric algorithm specified in the PKCS #12 standard is 3DES, which offers an effective strength of only 112 bits. This means, if you believe the BSI, it is not possible to create a standards-compliant PKCS#12 that offers the effective security necessary to protect a 2048-bit RSA key.

ANSI, on the other hand, currently recommends that a 100-bit symmetric key is an acceptable minimum strength to protect a 2048-bit RSA key (though this recommendation was made a year earlier than the BSI recommendation).

So if you believe ANSI, the strongest suite offered by RFC 7292 is strong enough to “adequately” protect such a key, just not a larger one.

The unfortunate situation we are left with is that it is not possible to create a "standards compliant" PKCS#12 that supports modern cryptographic recommendations.

Implementations

OpenSSL

A while ago OpenSSL was updated to support AES-CBC in PKCS#8, which is the format that PKCS#12 uses to represent keys. In an ideal world we would be using AES-GCM as our interoperability target, but we will take what we can get.

To create such a file you would use a command similar to this:

rmh$ openssl genrsa 2048 | openssl pkcs8 -topk8 -v2 aes-256-cbc -out key.pem
Generating RSA private key, 2048 bit long modulus
..........+++
..........+++
e is 65537 (0x10001)
Enter Encryption Password:
Verifying - Enter Encryption Password:

rmh$ cat key.pem
-----BEGIN ENCRYPTED PRIVATE KEY-----
...
-----END ENCRYPTED PRIVATE KEY-----

If you look at the resulting file with an ASN.1 parser you will see the file says the Key Encryption Key (KEK) is aes-256-cbc.

It seems the latest OpenSSL (1.0.2d) will even let us do this with a PKCS#12; those commands would look something like this:

openssl genrsa 2048 | openssl pkcs8 -topk8 -v2 aes-256-cbc -out key.pem
openssl req -new -key key.pem -out test.csr
openssl x509 -req -days 365 -in test.csr -signkey key.pem -out test.cer
openssl pkcs12 -export -inkey key.pem -in test.cer -out test.p12 -certpbe AES-256-CBC -keypbe AES-256-CBC

NOTE: If you do not explicitly specify the certpbe and keypbe algorithms, this version defaults to using pbeWithSHAAnd40BitRC2-CBC to protect the certificate and pbeWithSHAAnd3-KeyTripleDES-CBC to protect the key.

RC2 was designed in 1987 and has been considered weak for a very long time. 3DES is still considered by many to offer 112 bits of security, though in 2015 it is clearly not an algorithm that should still be in use.

Since it supports it, OpenSSL should really be updated to use aes-256-cbc by default, and it would be nice if support for AES-GCM were also added.

NOTE: We also noticed that if you specify "-certpbe NONE" and "-keypbe NONE" (which we would not recommend), OpenSSL will create a PKCS#12 that uses password-based integrity protection and no privacy protection.

Another unfortunate realization is that OpenSSL uses an iteration count of 2048 when deriving a key from a password, which by today's standards is far too small.
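
If you want to check this on your own files, the iteration counts are visible in the same kind of ASN.1 dump used earlier; as a rough sketch (test.p12 being a placeholder), 2048 shows up as the hex integer 0800:

# PBE/PBKDF2 iteration counts appear as plain INTEGERs in the structure
openssl asn1parse -inform DER -in test.p12 | grep "INTEGER"
# Look for values like ":0800" (2048); a modern file should use a much larger count.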

We also noticed the OpenSSL output of the pkcs12 command does not indicate what algorithms were used to protect the key or the certificate; this may be one reason why the defaults were never changed — users simply did not notice:

rmh$ openssl pkcs12 -in windows10_test.pfx
Enter Import Password:
MAC verified OK
Bag Attributes
    localKeyID: 01 00 00 00
    friendlyName: {4BC68C1A-28E3-41DA-BFDF-07EB52C5D72E}
    Microsoft CSP Name: Microsoft Base Cryptographic Provider v1.0
Key Attributes
    X509v3 Key Usage: 10
Enter PEM pass phrase:
Bag Attributes
    localKeyID: 01 00 00 00
subject=/CN=Test/O=2/OU=1/[email protected]/C=<\xD0\xBE
issuer=/CN=Test/O=2/OU=1/[email protected]/C=<\xD0\xBE
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----

Windows

Unfortunately, it seems that all versions of Windows (even Windows 10) still produce PKCS #12s using pbeWithSHAAnd3-KeyTripleDES-CBC for the "privacy" of keys; for the privacy of certificates it uses pbeWithSHAAnd40BitRC2-CBC. It then relies on the HMAC scheme for integrity.

Additionally it seems it only supports PBES1 and not PBES2.

Windows also uses an iteration count of 2048 when deriving keys from passwords which is far too small.

It also seems that, unlike OpenSSL, Windows is not able to work with files produced with more secure encryption schemes and ciphers.

PKIjs

PKIjs has two, arguably conflicting, goals. The first is to enable modern web applications to interoperate with traditional X.509-based applications. The second is to use modern and secure options when doing so.

WebCrypto has a similar set of guiding principles; this is why it does not support weaker algorithms like RC2, RC4 and 3DES.

Instead of bringing JavaScript-based implementations of these weak algorithms into PKIjs, we have decided to support only the algorithms supported by WebCrypto (aes-256-cbc and aes-256-gcm, with SHA1 or SHA2, using PBES2).

This represents a tradeoff. The keys and certificates exported by PKIjs will be protected with the most interoperable and secure pairing of algorithms available, but the resulting files will still not work in any version of Windows (even the latest Windows 10 1511 build).

The PKCS#12 files PKIjs creates will work with OpenSSL, but only if you use the -nomacver option:

openssl pkcs12 -in pkijs_pkcs12.pfx -nomacver


This is because OpenSSL uses the older PBKDF1 for integrity protection while PKIjs uses the newer PBKDF2; as a result of this option, the integrity of the PKCS#12 will not be checked.

With that caveat, here is an example of how one would generate a PKCS#12 with PKIjs.

Summary

Despite its rocky start, PKCS#12 is still arguably one of the most important cryptographic message formats. An attempt has been made to modernize it somewhat, but it did not go far enough.

It also seems OpenSSL has made an attempt to work around this gap by adding support for AES-CBC to their implementation.

Windows, on the other hand, still appears to support only the older encryption construct with the weaker ciphers.

That said, even when strong, modern cryptography is in use, we must remember that any password-based encryption scheme is only as secure as the password used with it. As a proof point, consider these tools for PKCS#12 and PKCS#8 that make password cracking trivial for these formats.

We believe the storage and transport of encrypted private keys is an important enough topic that it deserves a modern and secure standard. With that in mind, we recommend the following changes to RFC 7292:

  • Deprecate the use of all weaker algorithms,
  • Make it clear both AES-CBC and AES-GCM should be supported,
  • Make it clear what the minimal profile of this fairly complex standard is,
  • Require the support of PBES2,
  • Be explicit and provide modern key stretching guidance for use with PBKDF2,
  • Clarify how one uses PBMAC1 for integrity protection,
  • Require that the certificates and the keys are both protected with the same or equally secure mechanisms.

As for users, there are a few things you can do to protect yourself from the associated issues discussed here, some of which include:

  • Do not use passwords to protect your private keys; instead use generated symmetric keys or generated passwords of an appropriate length (e.g. "openssl rand -base64 32"),
  • When using OpenSSL, always specify which algorithms are being used when creating your PKCS#12 files and double check that those are actually the algorithms being used (see the sketch after this list),
  • Ensure that the algorithms you are choosing to protect your keys offer a minimum effective key length equal to or greater than that of the keys you will protect,
  • Securely delete any intermediate copies of keys or inputs to the key generation or export process.
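
To tie the first two suggestions together, here is a rough sketch using OpenSSL; the file names are placeholders and the exact cipher names accepted may vary by OpenSSL version:

# Generate a strong random password and keep it out of your shell history
openssl rand -base64 32 > p12pass.txt

# Export, explicitly choosing AES for both the key and the certificate
openssl pkcs12 -export -inkey key.pem -in test.cer -out test.p12 \
  -keypbe AES-256-CBC -certpbe AES-256-CBC -passout file:p12pass.txt

# Double check which algorithms actually ended up in the file
openssl asn1parse -inform DER -in test.p12 | grep -i -E "aes|rc2|des"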

Ryan & Yury

 

PKCS #11, Javascript and Nodejs

Javascript has become the most popular language on the Internet. Until now there has not been a way to directly use cryptographic devices that provide PKCS#11 interfaces natively within NodeJS based applications.

The best you could do was to use Node's ability to call OpenSSL and OpenSSL's ability to use the OpenSC PKCS#11 engine, which would then wrap the vendor-provided PKCS#11 library. That is clearly a convoluted mess.
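
To make the "convoluted mess" concrete, here is a rough sketch of that indirect path, assuming the libp11 engine_pkcs11 shim and the OpenSC module are installed; the library paths and the key reference are placeholders and their exact form varies by system and engine version:

# openssl.cnf fragment: OpenSSL loads engine_pkcs11, which in turn loads the PKCS#11 module
[engine_section]
pkcs11 = pkcs11_section

[pkcs11_section]
engine_id = pkcs11
dynamic_path = /usr/lib/engines/engine_pkcs11.so
MODULE_PATH = /usr/lib/opensc-pkcs11.so

# ...and every operation from Node then has to shell out to something like:
openssl req -new -engine pkcs11 -keyform engine -key "pkcs11:object=my-key;type=private" -out request.csr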

We wanted to let Node developers use these devices directly. With that in mind we created Graphene, which uses the node-ffi module to call into these libraries directly.

Our goal was to expose all of PKCS#11 while adopting the NodeJS “style” as appropriate. There is still work to do but we think it is now to the point where others may find value in it so we have made it public as of today.

Ryan

Uniform Electronic Legal Material Act and Digital Signatures

Apparently one of the reasons states have been reluctant to publish legal material online is that there is a concern over how relying parties can tell if the material is authentic and has not been tampered with.

In an attempt to address this concern, a law called the Uniform Electronic Legal Material Act (UELMA) has been proposed; at a high level, its text states that this must be addressed:

“An official publisher of legal material in an electronic record that is designated as official under Section 4 shall authenticate the record. To authenticate an electronic record, the publisher shall provide a method for a user to determine that the record received by the user from the publisher is unaltered from the official record published by the publisher.”

UELMA, which was proposed in 2011, has been enacted into law in 12 states (including California, Colorado, Connecticut, Delaware, Hawaii, Idaho, Illinois, Minnesota, Nevada, North Dakota, Oregon, and Pennsylvania). With that said, it looks like it may be yet another unfunded mandate, in that there doesn't appear to be much activity in the way of publishing signed data.

As with most US laws, UELMA doesn't specify how one would meet this requirement, but the most obvious way would be to publish these documents as PDF files and sign them using PAdES. In many cases (especially legal text) this would be the ideal solution given how easy it is to both apply and verify signatures thanks to the broad support of the standard.

But why is there such broad support for this standard? It's simple: the EU takes a totally different approach than we do to specifying what makes a "legal" electronic signature. The US basically doesn't specify any format or requirements for signatures, while the EU specifies four allowable formats (each with different use cases), of which PAdES is one.

But why did they choose four formats and not just one? That is easy. While a signed PDF may be a great way to make content accessible and verifiable to people, it is not a good solution for structured data that will be parsed by machines. In these machine-readable cases the Europeans rely on CAdES, XAdES and ASiC, which are better signature formats for machine-readable data.

Since the US doesn't specify how one should address this problem, a non-profit called US Open Data is advocating a solution they helped develop called Data Seal, a web application that sits on top of PGP to verify files, to be used for all of the above cases.

In my opinion this is a bad approach, here are just a few reasons:

  • PGP is 24 years old and has a wonderful mix of usability and interoperability issues that have not been solved in a meaningful way (though there are many who are trying [like Data Seal], even many of its supporters now see it as a lost cause).
  • Dependency on what is today, in essence, a single-vendor commercial solution, even if it is based on an open standard and open sourced, means that if these tiny vendors go out of business there is no practical way for "real users" to verify the authenticity of the documents/data.
  • Legal documents need to be verifiable long into the future, and this approach does not consider the concept of long-term signature verification (time-stamping, crypto-periods, etc.).
  • Pushing for the adoption of a single machine-readable signature format (PGP) across the board, at the expense of providing an easy-to-use and easy-to-verify human-readable solution, is a short-sighted and bad tradeoff.
  • The world is getting smaller, and interoperability is more important today than ever. If we're going to adopt a different way of solving the same problem than a large majority of the globe, it should provide sufficient material benefits to offset the interoperability and accessibility impacts such a decision carries with it.

I could even argue that, as architected, Data Seal doesn't meet the UELMA requirements: UELMA requires that the solution preserve the data and make it permanently available, but the solution does not provide a way for the signatures themselves to be verified long term.

Anyway, all of this is an interesting side effect of the US approach to legislation. We try to allow innovation by not overly specifying how the market solves a problem, while the EU tends to be overly specific and restrictive, which tends to hurt innovation. I am almost always a fan of the US approach, as governments move much, much slower than the market and tend to create structural barriers to innovation. With that said, I think interoperability is a case where standards are needed, and when it comes to how governments publish and authenticate documents there should be a standard.

Help Wanted: Apprentice to learn trade

I have taken the “non-traditional path” in both my education and career. At age eight my parents discovered my aptitude and (more importantly) interest in programming. My mother was always learning new things and as a result when she got our first computer and started to learn to program it gave me access to everything I needed to teach myself.

I remember vividly when she purchased our first modem; it was a 150 bits-per-second acoustic coupler. To put this in perspective, COMCAST's lower tier is roughly 10⁶ times faster than my first network connection. Even then it was painfully slow, but it opened an entire new world to me – one I never knew existed.

At some point that year I decided I wanted to host a Bulletin Board System of my own (a BBS is very similar to a forum website today), so I asked my parents to buy me the software and a telephone line to do this — they of course laughed and said no; after all, it would cost close to $1,000 just for the software.

I had read enough of my mom's programming books that I realized I didn't need to buy the software; I could just make it myself. As a child my mother would always tell me, "No does not mean no. It means find another way." So that's what I did. I completed every exercise in every programming book she had, along with a few others from the local library, and set off to make my own BBS.

I made very quick progress. I implemented forums, chat, multi-line support, a download library, ZModem, XModem and more. I remember printing out the source on reams of continuous-feed paper using our dot-matrix printer. My father heard the printer going for quite a while, so he came in to stop me because he thought I was wasting ink and paper. He was an aeronautical engineer by training and a former Air Force officer, and even though he was not a "computer guy", after a few minutes of looking at what I was printing he recognized what I had accomplished, and he and my mother immediately began the process of getting me into programming classes at the local colleges.

This moment was probably the most significant contributor to where I am today. It was possible because I was lucky enough to find myself in a situation where I was given:

  1. Access;
  2. Direction;
  3. Challenges;
  4. Support.

This set me up for what I now think of as a series of unpaid internships and apprenticeships. I helped my professors and teachers teach their classes, grade homework, help students and create courseware. I also helped a few small businesses create automation to help with inventory management and invoicing — all for free.

The system of apprenticeships has been around since the middle ages. A cobbler might teach their children, or someone else's (in exchange for pay), their trade. In essence these experiences allowed me to learn my trade.

My parents wanted nothing more than for me to go to University and get a degree. The problem was that the independence of the path I was on made it hard for me to give up control and go that route. I also wanted to learn everything I could about computers, programming, applied cryptography and security, and realistically not even the most prestigious schools had much to offer in these areas at the time.

This resulted in me dropping out of high school and college where I was taking classes that interested me. My parents didn’t exactly approve and I was a bit rebellious at this point in my life so I got a job in technology and moved out.

This choice came with a set of unique challenges; for example some who looked at my resume would ask “Where did you get your graduate degree?” and when they heard I didn’t even have a diploma many would essentially look the other way. Fortunately computers were still relatively new and I was able to demonstrate my raw abilities which meant I still had plenty of opportunities I just had to look a little harder.

Two years after I moved out my first son came along. At this point I understood the benefits and challenges of the path I had chosen for myself but like all parents I wanted more for my children. I remember watching a television show called Gilmore Girls which was about a single mom who had her own realization along the same lines. She was also a drop-out but decided her daughter would go to University so she could have the benefits that path represented but still wanted her daughter to embrace the benefits of her personal approach to life.

I had decided this is what I wanted for my own children. But as they say, "the best-laid plans of mice and men often go awry," and my oldest is on a path much closer to my own. He finished high school and moved on to being a software developer in Silicon Valley.

As a parent, if my goal was to "get him into University," I made a fundamental mistake. By exposing him to an extensive computer science education at home, by the time he was ready for college the only schools that looked challenging in computer science were out of reach due to admission requirements. It wasn't that he wasn't capable of the better scores and grades necessary to get into these schools; instead, we got him unpaid internships where he could hone his skills, and his grades suffered as a result.

Is this a failure in parenting? A failure in the school system? Probably a little of both, but a parent's goal should not be to "get their children into university". There are lots of ways to find success; what is important is that we help them have choices in life and find happiness. The path he is on gives him that, and while I still hold out hope that he goes to university, the reality is he has the job that most Computer Science graduates dream of after four years of university, and he doesn't have the associated debt.

Don’t get me wrong — there are many merits to University (which is why I think he should still go) but the reality is it is not the only path to success.

I bring all of this up because the other day Bill Gates, someone I really admire, blogged about the abysmal college completion rates. In this post there is a quote that stands out:

By 2025, two thirds of all jobs in the US will require education beyond high school.

As a hiring manager in technology, I know how hard it is today to find people with the right skills and experience to build the products and services the market demands (don't get me started on our visa system!). As a parent I also know the school system is still failing our kids, so this talent drain is surely going to get worse.

With that said, I think we are not looking at the problem holistically. There are lots of ways to get the skills necessary to have options in life — Universities do not have a monopoly on success. That's not to say University isn't a good option, or that there are not careers where a degree is useful and/or necessary. It is just that there are lots of ways to give our children choices, and we should be embracing them as well.

In my mind the apprenticeship is still one of the best ways to get a practical education. It works exceedingly well in technology. I also know a number of lawyers who have passed the bar without having gone to law school as well as a number of small business owners who essentially got their start as apprentices.

Unfortunately, the unpaid apprenticeship is under attack, and when combined with recent living-wage initiatives it becomes hard for those with the interest and skills to offer these apprenticeships. The most damning element of this attack is that a court has ruled that an employer can derive no immediate advantage from the relationship.

Now to be clear I am not arguing the path I went on is right for everyone and I am a believer in formal education (my great grandmother and wife were teachers) but we have to look at this problem more holistically than we have been if we want to help our children and grandchildren to have choices.

Removing Friction From Online Signatures

Today there are, broadly, two different types of signatures done online: electronic signatures and digital signatures. Electronic signatures are a synthetic version of the wet signatures we use in the physical world, and digital signatures are a re-envisioning of the idea of signatures that leverages strong cryptography to make an even stronger signature.

But if electronic signatures are the lesser form of the two why do they exist at all? The answer to that question is friction.

In many respects that friction is a self-inflicted wound, the result of the industry not looking at the problem it is solving holistically. For example, today in Adobe Reader it is possible to do both electronic signatures and digital signatures. They have gone out of their way to make these electronic signatures as easy to apply as possible, while taking what they likely argued was a principled position and reserving the use of digital signatures for what they considered the "ideal" case, where the signer's private key is on a FIPS 140-2 Level 3 certified key management device.

As a result, the large majority of "digital signatures" do not actually contain the identity of the signer and are instead simply notarizations of a synthetic wet signature. This is because the user experience available for creating these synthetic wet signatures is better than the one made available to those doing digital signatures.

I am sure they would argue this is an artifact of the limitations of the technologies but I would argue that is not the case. It is totally possible to apply digital signatures in such a way that it is no more burdensome to a user than a synthetic wet signature.

In prior posts I have discussed the example of key protection: by mandating that key compromise can only be mitigated by using FIPS 140-2 Level 3 certified devices, they created a structural barrier that keeps vendors from creating solutions that use alternative approaches, such as limiting the validity of keys to just a few minutes.

The same holds true of identity: by saying only legal identity can be used in the credentials used for digital signatures, they prevented alternate approaches, such as the issuance of an email-only credential that is later validated to a higher level, or even a pseudonymous credential that is later authenticated to a higher level.

Digital signatures can be as usable as the synthetic wet signatures in use today. With the recent changes in the EU around eIDAS we are seeing some of these structural limitations being removed, and we can only hope that Adobe follows suit and revises their policies to remove the structural barriers that hold back these alternative approaches.

Has identity verification on the web become a glass ceiling?

As of 2013 there are 7.125 billion people in the world (World Bank), 39% of whom are using the Internet (ITU). 318.9 million of these people live in the United States, where as many as 74% use the Internet (Census).

Increasingly these people are accessing services that require them to prove their identity over the internet. This manifests itself in many ways; commonly, in the United States this is done through the use of Knowledge Based Authentication (KBA), where knowledge of details from users' credit reports is leveraged to authenticate them. This approach has several serious problems:

  • In the United States alone 29% of people have no credit history at all (Gallup), making this approach inaccessible to these users,
  • A number likely much larger than this have such limited credit histories that this approach to authentication is ineffective for them,
  • Numerous studies show the usability characteristics of these solutions are poor and result in user abandonment,
  • The limited data available in these credit reports and the way KBA is integrated into these services reduce both security and privacy each time the information is used.

As a result, services oftentimes attempt to leverage a person's pre-existing relationships with other services, such as banks. This approach also has serious failings:

  • In the United States 7.7% of people are unbanked (FDIC) and 20% are underbanked, while world-wide the number of unbanked is 35%,
  • For liability and business-interest reasons almost no financial services organizations offer federated identity services for their customers,
  • When banks are used, a "penny test" is often employed, requiring the disclosure of enough information to potentially draw electronic checks from the person's account,
  • The infrequent nature of this transaction and the inherent complexity of the task again result in poor usability and transaction abandonment.

This leaves services attempting to rely on binding multiple social "identities" together to authenticate the user. Unfortunately these social "identities" are often no more than pseudonyms, which do not meet the regulatory obligations that many businesses and agencies must meet. Additionally, binding these identities together significantly reduces the user's privacy, in that it becomes trivial to track that user's activities across services.

This situation creates a socioeconomic glass ceiling where those who cannot participate in these authentication systems do not have access to the lower-cost and generally higher-value services available on the Internet.

Additionally there is still a class of transactions where the existing mechanisms do not work (such as a person establishing their first bank account) and others that require the disclosure of more information than necessary to meet the authentication requirements (for example age verification).

Outside the United States the situation is even more grim, where the numbers of the unbanked are significantly higher and privacy regulations often prevent the use of many of the above approaches. As a result many services cannot be brought online, and those that can commonly rely on the lowest common denominator – proof of control of a simple email address.

This problem is made even more complicated when services need to verify professional accreditations or roles within an organization.

What do you think? Is this a real problem?

I think it is. I also think this is a solvable problem (for some value of solvable) but as of yet I do not see anyone building solutions that address this problem of initial identity verification effectively.