Monthly Archives: April 2017

Revocation reasons don’t make sense for the WebPKI of today

As the old saying goes, to err is human but to forgive is divine. In the WebPKI, you deal with errors through the concept of revocation.

In the event a CA determines it made a mistake in the issuance of a certificate, or is notified by the subscriber that they would like to see a certificate marked invalid, it revokes the corresponding certificate.

The two mechanisms available to them to do this are called Certificate Revocation Lists (CRLs) and OCSP responses. The first is like a phonebook of all known “revoked” certificates, while the second is more like a lookup service that enables User Agents to ask for the status of a particular certificate.
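To make the OCSP side of this concrete, here is a minimal sketch of a status lookup using the Python “cryptography” and “requests” libraries; the file names and responder URL are placeholders, and in practice the responder URL comes from the certificate’s Authority Information Access extension.

```python
import requests
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.serialization import Encoding
from cryptography.x509 import ocsp

with open("leaf.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("issuer.pem", "rb") as f:
    issuer = x509.load_pem_x509_certificate(f.read())

# Ask the CA's responder for the status of this one certificate.
request = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1()).build()

http_response = requests.post(
    "http://ocsp.example-ca.test",  # placeholder responder URL
    data=request.public_bytes(Encoding.DER),
    headers={"Content-Type": "application/ocsp-request"},
)

response = ocsp.load_der_ocsp_response(http_response.content)
print(response.certificate_status)  # GOOD, REVOKED, or UNKNOWN
print(response.revocation_reason)   # a ReasonFlags value, or None
```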

In both cases they have the option to indicate why the certificate was revoked, the available options include:

  • Key Compromise: We believe the key to have been stolen or can no longer be sure it has not been.
  • CA Compromise: The issuing CA itself has been compromised (think DigiNotar).
  • Affiliation Changed: The certificate contained affiliation information; for example, it may have been an EV certificate and the associated business is no longer owned by the same entity.
  • Privilege Withdrawn: The right to represent the given host was revoked for some reason.

There are a few other reasons, but they are not materially relevant to WebPKI use cases, so I left them out.
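For the curious, here is a rough sketch of how you might pull the (optional) reason code out of each CRL entry with the Python “cryptography” library, along with the full set of reasons RFC 5280 defines; the CRL file name is a placeholder.

```python
from cryptography import x509

with open("ca.crl", "rb") as f:
    crl = x509.load_der_x509_crl(f.read())

for entry in crl:  # each entry is one revoked certificate
    try:
        reason = entry.extensions.get_extension_for_class(x509.CRLReason).value.reason
    except x509.ExtensionNotFound:
        reason = None  # the reason code is optional, and frequently omitted
    print(entry.serial_number, reason)

# The full set RFC 5280 defines, most of which rarely apply to the WebPKI:
print([flag.name for flag in x509.ReasonFlags])
```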

CA Compromise is clearly a special and exceptional case, so much so that browsers have developed their own mechanism for dealing with it (see OneCRL and CRLSets) so it’s not particularly relevant to the options available to CAs anymore, at least in the WebPKI.

Another thing to consider when looking at revocation reasons is that about 90% of all certificates used in the WebPKI (for TLS) are considered to be Domain Validated (DV) certificates, which limits the applicability of the remaining reasons substantially.

  Reason               DV     OV   EV
  Key Compromise       Yes    Yes  Yes
  Affiliation Changed  Maybe  Yes  Yes
  Privilege Withdrawn  Maybe  Yes  Yes


As you can see, the available “reasons” for DV certificates are largely reduced to just “Key Compromise”. There is an argument that both “Privilege Withdrawn” and “Affiliation Changed” could be applicable to DV, but in my opinion they are a bit of a stretch. If nothing else, the difference between these two reasons in the case of DV is so subtle that you will almost never see either used on a DV certificate.

So what about Key Compromise? How useful is it? Clearly, it is an important case; your trust in an SSL certificate is frankly only as good as your confidence in how well the key is protected!

The reality is that most key compromise cases cannot be detected, and worse yet, it is far too easy to compromise an SSL key. Take for example the Heartbleed (http://heartbleed.com/) bug from a few years back: SSL keys are often kept in-process, so relatively minor coding mistakes can expose that key, and users won’t necessarily ever know about it.

Some platforms go out of their way to prevent such attacks at the expense of performance, like we did at Microsoft with SCHANNEL where we kept the key out of process, but this is the exception and not the rule.

Also, popping a web server via some third-party code vulnerability and getting a shell in the context of the server is unfortunately far too easy. As a result, when an attacker does get a shell on a web server, one of the first things they will do is take that key so they can use it later. But again, how is this detected?

That is not to say “Key Compromise” as a revocation reason is not important when there is a large-scale vulnerability like Heartbleed, but this is not exactly a common case.

The other cases for the “Key Compromise” revocation reason are notably dependent on the subscriber:

  • Knowing the compromise happened,
  • Being mindful enough to notify the CA,
  • Being willing to let the world know there was a compromise.

This last point, public signaling of “compromise”, combined with the fact that user agents treat all revocations the same (if they check at all), means that we seldom see subscribers even specify a reason when they ask for a certificate to be revoked.

As I look at why a certificate gets revoked today, I think there are a few cases. These include:

  • CA has decided the certificate should be revoked,
  • The subscriber has decided the certificate should be revoked,
  • A root program has decided a certificate should be revoked.

These are far more meaningful signals to relying parties than the ones listed above, and they are not so specific that there is any negative signaling in their use.

Today it is not possible to specify these things, but maybe we should take a second look at revocation reasons and, if we continue to use them, make them more relevant to the way the WebPKI works today.
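To make that concrete, here is a purely hypothetical sketch of what such reasons might look like; nothing like this exists in RFC 5280 or any other standard today.

```python
from enum import Enum

class HypotheticalWebPKIRevocationReason(Enum):
    """Reasons aligned with who made the decision, rather than why."""
    CA_DECIDED = 1            # the CA decided the certificate should be revoked
    SUBSCRIBER_DECIDED = 2    # the subscriber asked for the revocation
    ROOT_PROGRAM_DECIDED = 3  # a root program required the revocation
```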

Ryan

Secondary effects when encrypting the web

Secondary effects

Effects are often classified as having primary and secondary impacts. As we evolve the Internet we can see numerous examples of secondary effects; this is especially true as we look at how we evolve the way we secure it.

To put this in perspective, consider the amazing milestone we have just hit: as of 2017, over half of the web is encrypted. Browsers will now be able to gradually switch from showing the “lock” as an exceptional and positive indicator to showing an error when a session is not encrypted. In fact, Chrome has already started this transition, and as soon as October 2017 we will start to see it roll out.

I cannot overstate how huge a change this is. For the last 23 years, SSL has been the exception and not the norm; as a result, the best we could do as software developers was throw site developers a few pixels (the lock) for going out of their way to encrypt their sites. Now, we can do what we always wanted: warn the user when there is something to be concerned about.

It took a lot to get us to this point, some of which included:

  • Competition driving the cost of a certificate from thousands of dollars to as little as $0,
  • Evolution and standardization of certificate formats and of the TLS protocol,
  • Maturation and improving performance of TLS implementations,
  • Increase in average computational power so the overhead was negligible,
  • The definition and evolution of standardized practices and audits for certificate authorities,
  • The definition and standardization of a certificate enrollment protocol suitable for TLS use cases,
  • Formalization of user agent root programs and the criteria to which CAs are managed,
  • Establishment of technologies to effectively monitor and manage the CA ecosystem,
  • Migration to SaaS and cloud services where management of deployments can be handled in a seamless way,
  • And of course, education and tooling to help operators deploy TLS correctly.

While we are not done with this journey, we have come a long way and the end is in sight.

Each of these changes had secondary effects, even if small, but let’s look at the meta-change to get a better understanding of the consequences of change on the TLS ecosystem.

Nobody ever did, or ever will escape the consequences of his choices. – Alfred A. Montapert

One obvious one that immediately comes to mind is that we have a generation of non-technical people who believe “the lock means it is secure” when it never really meant that.

As they say, correlation is not the same as causation, and I would argue this is how the “lock” myth came to be. In the beginning it literally did cost thousands of dollars to get an SSL certificate, and if you were committed enough to sign up for that, there was a good chance you were a big company that was at least thinking about security. This was especially true because those companies were typically existing brick-and-mortar companies with reputations to protect.

From this myth evolved another belief: that phishers don’t use SSL. This was true in the beginning, and there were a few reasons for it. Cost was one, of course, but the process was also cumbersome, and phishing is a numbers game; if users don’t notice the lock, why bother?

A few things have changed. Through education, more users today are aware of encryption, and the absence of the “lock” means more now than it did then. Additionally, the process of getting a certificate has been reduced to a task that can literally take about a minute, and of course the cost has dropped to the point it is inconsequential.

These things basically invalidate the once-true assumption that phishers don’t bother to use SSL. Thankfully, after the creation of SSL we got technologies like Safe Browsing and SmartScreen, which are actually designed for, and effective at, addressing this issue. Also, with the lock disappearing over the next few years, the “credibility” boost of the SSL lock will go away, diminishing its value to the phisher. But is this true for other secondary effects?

It is worth noting that another secondary effect we will surely see is confusion from users as this change rolls out, even though it is unquestionably better to warn users when they are at risk.

Let’s look at a larger example of a secondary effect of the move to encryption; specifically, I believe BGP is a good example. The other day Dan Goodin over at Ars Technica did a great article on a potentially malicious BGP rerouting event. In this article, he talked about how the attacker could have used the Logjam or DROWN attacks to attempt a MITM, and that is true, but it is not the only way. It no longer takes hours to get a certificate; you can get a Domain Validation (DV) SSL certificate from many different providers in a matter of minutes now, where a decade ago, or even just five years ago, it could have taken hours.

This is important because to get an SSL certificate all you have to do is prove you control the associated host, and one of the ways you can do that is by dropping a file with a pre-shared value in it on the host. If you can get the network traffic redirected to you for seven minutes, as happened in this BGP rerouting case, you can also get an SSL certificate for a host in the associated network segments.
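To see how simple that file-based proof of control is, here is a toy sketch in the style of ACME’s http-01 challenge; the token and key-authorization values are placeholders, and binding port 80 typically requires elevated privileges.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder values; a real CA supplies the token and expects a
# key authorization derived from your account key.
CHALLENGES = {
    "/.well-known/acme-challenge/example-token": b"example-token.example-thumbprint",
}

class ChallengeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CHALLENGES.get(self.path)
        if body is None:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 80), ChallengeHandler).serve_forever()
```

The point is that this validation only proves who is receiving the traffic for the host at that moment, which is exactly the property a BGP hijack subverts.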

In the past, the cumbersome and slow process involved in getting an SSL certificate made that practically impossible, but today it is not. This means the broad use of SSL has, as a secondary effect, pushed attackers down a level, to BGP, as a means to achieve their goal.

As a result, we will surely see BGP get more attention from attackers and eventually shore up its security so it is not as susceptible to these rerouting attacks. I also wouldn’t be surprised if we see the WebPKI evolve to make it slightly harder for this sort of attack to take place (by prohibiting certain types of validations) but that’s the game of evolution.

My friend Cem has a saying: security is often an exercise in rearranging deck chairs on a sinking ship.

As we move closer to an encrypted web, I am sometimes reminded of this. No matter what, I am sure we will find more of these secondary effects, and they will give us an opportunity to shore up these other systems, just as we have with TLS.

Ryan

P.S. Thanks to Josh at ISRG and Andrew Whalley in the Chrome team at Google for some related conversations that helped me solidify my thoughts on this topic. And also thanks to Vincent Lynch for his review of the post.

Digital Signatures and the fallacy of Good, Cheap and Fast.

It is common to hear that you can choose only two of the following: “Good”, “Cheap”, or “Fast”. While there is clearly some truth to this, it is not an absolute truth.

This is especially true in the context of Digital Signatures in the United States, where the law allows parties to agree to rely on virtually any mechanism to capture an agreement. The rationale behind this is that in the event of a dispute it is the claimant who will need to prove consent; as such, it is the parties to an agreement who must assume the burden of choice.

This introduces economic incentives to the decision-making process, specifically, it encourages you to consider which digital signature solution to apply to a given problem. This is an inherently good goal because over-specification in regulations can create market forces that result in the wrong solution being applied to a given business problem.

This flexibility also has downsides: when people look to build a product to enable digital signatures, it diminishes the economic incentive to build the most secure solution. Instead, their goal becomes building the solution they think they can sell with minimal investment.

This, in theory, should still encourage providers to build a strong solution in that, if documents signed with their product end up being thrown out in court, their business will ultimately fail. The problem with this motivation is that it can take a decade or more for it to become a concern that can hurt their business. As such, they double down on creating a solution that can be taken to market quickly.

It is easy to see this play out in the US digital signature market: the large majority of solutions do little more than produce an image that looks like “a signature” and embed it into a document. In the European Union, they call this a “Simple Signature”. To produce a fraudulent signature with such a scheme you typically need only select the signature with your mouse, right-click, copy, and then paste the image into the fraudulent document. These solutions are typically very easy to use and cheap to build, but this is clearly not a technique you would want to rely on for an agreement that might need to be enforced in a court of law.

Some solutions try to make these Simple Signatures stronger by taking a hash of the document with the signature in it and placing it in a database on their servers. This is done so that in the event of a dispute you can ask the service provider to go to court with you, act as an expert, stand by your side, and say “the hash is the same one we saw when they signed the document”. This is better than the pure image approach, but not by much, if for no other reason than that they are unlikely to assign legal support to every contract dispute involving contracts signed with their service. Additionally, if the legitimacy of the contract is put into question and the site has had any security incidents at all since it was signed, the claims they make would be questionable at best.
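A minimal sketch of this hash-registry idea, with an in-memory dictionary standing in for the provider’s database:

```python
import datetime
import hashlib

registry = {}  # digest -> when we first saw the signed document

def register(signed_document: bytes) -> str:
    digest = hashlib.sha256(signed_document).hexdigest()
    registry[digest] = datetime.datetime.now(datetime.timezone.utc)
    return digest

def verify(signed_document: bytes) -> bool:
    # True only if the exact bytes were registered; any tampering changes the hash.
    return hashlib.sha256(signed_document).hexdigest() in registry
```

Note that everything here rests on the integrity of the provider’s database, which is precisely the weakness described above.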

Other Simple Signature solutions try to mitigate these issues by applying a cryptographic notarization to the document after the basic Simple Signature is applied. This adds a trustworthy notarization of when the document was signed and, to some degree, makes the document capable of standing on its own. Now, if the original electronic document was produced with standards-based software, anyone can confirm it has not been tampered with since it was produced. This reduces the legal exposure relating to a compromise of the signing service, which certainly helps with enforceability, but it does not eliminate the risk; a slightly more difficult compromise of the service could still result in a bad actor producing a false document.

The next step up in enforceability is generally referred to as an “Advanced Signature”. These Advanced Signatures almost always start as a “Simple Signature” but they will also include signing the document with a cryptographic key under the sole control of the signer. This provides assurances that it was the user, and not the digital signature service, that signed the document.
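Here is a sketch of the cryptographic core of that idea using the Python “cryptography” library; a real product would tie the key to an authenticated user and protect it far more carefully than this.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# A key generated and held by the signer, not the service.
signer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

document = b"...the final agreement bytes..."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

signature = signer_key.sign(document, pss, hashes.SHA256())

# Anyone with the signer's public key can verify; this raises on any mismatch.
signer_key.public_key().verify(signature, document, pss, hashes.SHA256())
```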

Practically speaking, the final step in enforceability when it comes to digital signatures is referred to as a “Qualified Signature”. This builds on the previous two types of signatures and adds two requirements. The first is that the key be managed on a specialized cryptographic appliance such as a smart card or hardware security module. The rationale for this is that signing keys are typically long-lived, and keys stored in software are exposed to theft. This is important because if an attacker steals one of these keys, they can produce fraudulently signed documents with it. The second requirement is that the legal identity of the signer be verified and included in the digital signature. This is in contrast to the other types of signatures, which in essence only require the signer to be uniquely identified; in other words, something as basic as an email ping could be considered sufficient.

What has happened in the US is that, since the law does not require anything stronger than a Simple Signature outside of areas like healthcare and finance (where there are either regulatory requirements or business risks that justify a better solution), we see only Simple Signatures in use.

The vendors of these products would probably tell you they don’t build these more enforceable solutions because you have to choose two of the following: “Fast”, “Cheap”, or “Secure”.

In the European Union, they have had the opposite problem: the economic incentives of the vendors and the associated regulatory frameworks have sent their market down the path of trying to apply Qualified Signatures to nearly all transactions. This has hampered the adoption of electronic signatures, limiting their use to the highest-value transactions due to the onboarding and usage-time friction this approach typically entails.

I believe this is an example of market failure and that it is possible to build a solution that scales across a business’s enforceability needs. One of the key reasons the market has not yet delivered such a solution is that the economic incentives for the vendors are not well aligned with those of their customers.

Tides are changing in both the US and the European Union, and I believe we will see a convergence of the best of both approaches in the coming years, but as of yet such a solution has not found its way to market.

CAs and SSL and Phishing Oh My!

NOTE: This post reflects my personal beliefs and is not necessarily those of my employer Google, or Let’s Encrypt where I am a member of their Technical Advisory Board.

Introduction

Recently Vincent from The SSL Store published a blog post calling out Let’s Encrypt for issuing certificates to domains that contain the word PayPal.

The TL;DR for his post is he believes that Let’s Encrypt is enabling phishers by issuing them SSL certificates that contain the word “PayPal” and then refusing to revoke them when arbitrary third parties ask it to.

As a result of his post, several news sources have decided to write articles about how “Let’s Encrypt” is acting as an enabler of these Phishers [1] [2].

Unfortunately, Vincent’s post and the associated articles don’t cover this in the most complete and balanced way, so over my morning coffee today I decided to write this post to discuss the other side of the argument.

If this is a topic that interests you please also check out the Let’s Encrypt blog post where they talk about why they have taken this position.

Exploration

Let’s explore the opportunities CAs have to check for phishing, the tools they have available to them, the effectiveness of those tools, the consequences of this approach, how complete a solution based on those tools would be, and what the resulting experience would be for users.

Opportunities

Historically, the role of a CA in the WebPKI has been that of a passport office: you present proof that you control a domain, and possibly that you are an authorized member of an organization, and you get a digital certificate that attests to that.

This certificate could be valid for up to 1095 days. Once the certificate is issued, the CA, largely speaking, has no natural opportunity to verify this information again. It is worth noting that this month the CABForum voted to shorten this period to 825 days.

Tools

In the event a CA determines it made a mistake in the issuance of a certificate or has been notified by the subscriber they would like to see a certificate marked invalid, the tool they have available to them is called “revocation”.

The two types of revocation that are under the control of a Certificate Authority are called Certificate Revocation Lists and OCSP responses. The first is like a phonebook of all known “revoked” certificates, while the second is more like a lookup service that enables User Agents to ask for the status of a particular certificate.

Effectiveness

Earlier we discussed the lifetime of certificates. This is important to understand because the large majority of phishing sites do not start out as phishing sites; as such, issuance-time checks seldom net positive results.

After issuance, this leaves you with periodic checks of the site, third-party reports of phishing, and revocation checking as an enforcement mechanism. This is a recipe for failure. There are a few reasons for this, but one of the more significant is the general ineffectiveness of revocation checking.

Revocation checking is the most taxing thing a CA does. This is because the revocation mechanisms available to them will result in every relying party contacting them to download an OCSP response or CRL covering that certificate.

As a result, OCSP has a tendency to be both slow and unreliable. This forced browsers to implement this check as a “soft fail”; in other words, if the connection times out or fails for some reason, they assume the certificate is good.
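Here is a sketch of what “soft fail” means in practice; note that an attacker who can block or slow the responder gets the same outcome as a genuine “good” answer.

```python
import requests
from cryptography.x509 import ocsp

def certificate_seems_ok(responder_url: str, der_ocsp_request: bytes) -> bool:
    """Soft-fail OCSP check: any failure is treated as 'not revoked'."""
    try:
        resp = requests.post(
            responder_url,
            data=der_ocsp_request,
            headers={"Content-Type": "application/ocsp-request"},
            timeout=2,  # a browser cannot afford to wait long
        )
        status = ocsp.load_der_ocsp_response(resp.content).certificate_status
    except Exception:
        return True  # timeouts, network errors, and bad responses all pass
    return status != ocsp.OCSPCertStatus.REVOKED
```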

To give that some context, about 8% of all revocation checks done by Firefox fail, and the median response time is over 200 ms.

As a result, in 2012 Chrome, which is used by about 50% of all users, more or less disabled revocation checking except in exceptional circumstances.

What this means is that revocation checking, even for its intended purpose, is far from an effective tool. Expanding its use to include protecting users from phishers would not improve its effectiveness and arguably it would (due to the infrastructure implications) make it even less reliable.

It is also important to note that every wildcard certificate can be used for a hostname containing “PayPal” without the CA ever being made aware; a good example is https://paypal.github.io/, which is protected by a wildcard certificate issued to GitHub.

Consequences

To understand the consequences of expanding the CA’s role to include protecting us from phishing, we first need to understand what a certificate represents, or more importantly, what it does not. It does not represent the content; it represents the host that is serving the content, and it is the content that “phishes”.

Today, in the age of cloud services, there is a good chance the host that is serving the content is a service operated by WordPress, or maybe Amazon’s S3. These services allow users to sign up and post arbitrary content for free or very little money.

If we decide that revocation checking is the right tool to get phishing content off the web, we would be saying a CA should revoke WordPress’s certificate if one of its users posted something someone reported as phishing content. That would, for the situations where revocation checking takes place and happens to work, take WordPress off the Internet. Is that what we want to happen?

If so, who is it we are asking to perform this task? There are well over 400 CAs in the Microsoft Root Program; do we believe these are the right organizations to be policing the internet for the appropriateness of content?

If so, what criteria should they use to do so, and what do we do if they abuse this censorship role?

Completeness

It is easy to say that a CA should not issue a certificate if it contains the word “PayPal”. I could even see an argument that those who would be hurt by such a rule, for example http://www.PayPalSucks.com and (a theoretical) PayPalantir.com, are an acceptable loss.

This would, however, not catch homoglyphs, like when a Cyrillic “а” is used instead of the Latin “a”. Catching those would very likely require a manual review of the name and content to determine the intent of the domain owner, which is near impossible to do with any level of accuracy or fairness.
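To illustrate how convincing a homoglyph can be, here is a small example; the second string swaps the Cyrillic “а” (U+0430) in for the Latin “a”.

```python
import unicodedata

latin = "paypal.com"
spoof = "p\u0430yp\u0430l.com"  # visually identical in most fonts

print(latin == spoof)  # False: these are entirely different domain names
for ch in sorted(set(spoof)):
    print(repr(ch), unicodedata.name(ch))
```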

Even with that, what about ING? As one of the world’s largest banks, they too are commonly phished. Should a CA be able to issue a certificate to https://www.fishing.com? And if it does, and the issuing CA receives a complaint that the site is phishing ING, what should it do?

And what about global markets and languages? In Romania there is a cleaning company called Amazon; should anyone be able to request that its certificate be revoked because its name contains the word Amazon?

If we promote the CA to content police, how do we do so in a complete way?

User Experience

With CAs acting as the content police, what would a user see when they encounter a revoked site? While it varies from browser to browser, the experience is almost always a blocking “interstitial”, for example:

[Screenshots: the revoked-certificate interstitials shown by Chrome and Firefox]


If you look closely you will see these are not screens you can bypass; revoked sites are effectively removed from the internet.

This is in contrast to Safe Browsing and SmartScreen, which were designed for this particular problem set and therefore give the user a chance to visit the site after a contextually relevant warning:

[Screenshots: the Safe Browsing and SmartScreen warning pages]

Conclusion

I hope you see from the above that relying on Certificate Authorities to act as content police as a means to protect users from phishers is a bad idea. At a minimum, it would be:

  • Ineffective,
  • Incomplete,
  • Unmanageable,
  • and Duplicative.

But more importantly, it would establish a large, loosely managed group as the de facto content censors on the internet, and as Steven Spielberg said, there is a fine line between censorship, good taste, and moral responsibility.

So what should CAs do about phishing then? It is my position that they should check the Google Safe Browsing API prior to issuance (which, by the way, Let’s Encrypt does), and they should report phishers to the Safe Browsing service if they encounter any.
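For illustration, here is a rough sketch of such an issuance-time check against the Safe Browsing v4 Lookup API; the API key and client identifiers are placeholders you would supply.

```python
import requests

def flagged_by_safe_browsing(url: str, api_key: str) -> bool:
    body = {
        "client": {"clientId": "example-ca", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }
    resp = requests.post(
        "https://safebrowsing.googleapis.com/v4/threatMatches:find",
        params={"key": api_key},
        json=body,
    )
    return bool(resp.json().get("matches"))
```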

It is also important to answer the question of what users should do to protect themselves from phishing. I understand the desire to say there is only one indicator they need to worry about; it’s just not realistic.

When I talk to regular users I tell them to do three things. First, use an up-to-date, modern browser that uses SmartScreen or Safe Browsing. Second, only provide data to sites you know, and only over SSL. And finally, try to only provide sites with information when you are the one who initiated the exchange.

P.S.

Thanks to Vincent Lynch and the others who were kind enough to proof this post before publishing.