Why bother with short-lived certificates and keys in TLS?

There seems to be a lot of confusion and misinformation about the idea of short-lived certificates and keys, so I thought I would pen some thoughts on the topic in the hope of providing some clarification.

I have seen some argue that the rationale behind short-lived certificates is to address shortcomings in the CA and browser revocation infrastructure; I would argue this is not the case at all.

In my mind, the main reason for them is to address the issue of key compromise. Long-lived keys have a long period in which they are exposed to theft (see Heartbleed) and are therefore of higher value to an attacker, since a stolen key enables the attacker to impersonate the associated website for that entire period.

The most important thing to keep in mind is that the nature of key compromise is such that you almost never know it has happened until it is too late (consider DigiNotar as an extreme example).

The importance of protecting the SSL private key is why in the 90s, when SSL deployment started in earnest, large companies tried to deploy SSL in their environments using SSL “accelerators” and “security modules” that protected the keys. This was the right “technical” thing to do but not the right “practical” thing, because it significantly reduced the scalability of SSL-protected services while massively increasing the cost to deploy SSL.

By the mid-2000s we recognized this was not workable; the use of accelerators basically stopped outside a few edge cases and software keys were used instead. Some implementations, like the Microsoft SCHANNEL implementation, tried to protect the keys by moving them to a separate process, mitigating the risk of theft to some degree. Others simply loaded the keys into the web server’s process and as a result exposed them to Heartbleed-like attacks.

What’s important here is that when this shift happened, no one pushed meaningful changes to the maximum validity period of certificates and, by association, their private keys. This meant we had software keys exposed to theft for five years (the maximum validity period at the time) with no reliable way to detect they were stolen, and in the unlikely event we did learn of a theft, we would have to rely on the unreliable CA revocation infrastructure to communicate that issue to User Agents. Clearly this was not an ideal plan.

In short, by significantly reducing the validity period of certificates and keys, we also significantly reduce their value to an attacker.

Another issue that short-lived certificates help mitigate is the evolution of the WebPKI: with a long-lived certificate you get virtually none of the security benefits of policy and technology improvements until the old certificates and keys are expunged from the ecosystem. Today the rollout of new policy is largely accomplished via “natural expiration”, which means you have to wait until the last certificate issued under the old policies expires before absolute enforcement is possible.

So what is the ideal certificate validity period then? I don’t think there is a one-size-fits-all answer to that question. The best I can offer is:

As short as possible but no shorter.

With some systems it is not possible to deploy automation, and until it is, one needs to pick a validity period that is short enough to mitigate the key compromise and policy risks discussed above while long enough to make management practical.

It is probably easier to answer the question of what the shortest validity period we can reliably use is. The answer is buried in the clock skew of the relying parties. Chrome recently released a new clock synchronization feature that significantly reduces errors related to certificate validity periods, but until that is fully deployed and other UAs adopt similar solutions, you are probably best off keeping certificate validity periods at around 30 days to accommodate skew and potential renewal failures.
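To make the operational side concrete, here is a minimal Node sketch (the hostname is just an example) of the kind of expiry check an automated renewal pipeline runs constantly when certificate lifetimes are measured in days:

// check-expiry.js - minimal sketch: connect to a host, read its
// certificate, and report how many days of validity remain. An
// automated pipeline would renew well before this hits zero.
var tls = require("tls");

var host = "example.com"; // example hostname

var socket = tls.connect({ host: host, port: 443, servername: host }, function () {
    var cert = socket.getPeerCertificate();
    // valid_to is a date string like "Nov 10 23:59:59 2025 GMT".
    var daysLeft = (new Date(cert.valid_to) - Date.now()) / 86400000;
    console.log(host + " certificate expires in " + daysLeft.toFixed(1) + " days");
    // With ~30-day certificates you would renew with roughly half the
    // lifetime remaining, leaving headroom for failures and clock skew.
    if (daysLeft < 15) {
        console.log("time to renew");
    }
    socket.end();
});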

In short, long-lived keys for SSL exist because we never revisited the threat model of key compromise when we stopped using hardware-protected keys in our SSL deployments, and short-lived keys help deal with the modern reality of SSL deployments.

Ryan


Revocation reasons don’t make sense for the WebPKI of today

As the old saying goes, to err is human but to forgive is divine. In the WebPKI, you deal with errors through the concept of revocation.

In the event a CA determines it made a mistake in the issuance of a certificate, or is notified by the subscriber that they would like a certificate marked invalid, it revokes the corresponding certificate.

The two mechanisms they have available to them to do this are called Certificate Revocation Lists and OCSP responses. The first is like a phonebook of all known “revoked” certificates, while the second is more like a lookup service that enables User Agents to ask for the status of a particular certificate.

In both cases they have the option to indicate why the certificate was revoked; the available options include:

  • Key Compromise: We believe the key has been stolen, or we can no longer be sure it has not been.
  • CA Compromise: The issuing CA itself has been compromised (think DigiNotar).
  • Affiliation Changed: The certificate contained affiliation information; for example, it may have been an EV certificate where the associated business is no longer owned by the same entity.
  • Privilege Withdrawn: The right to represent the given host was revoked for some reason.

There are a few other reasons but they are not materially relevant to WebPKI use cases so I left them out.
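For reference, the full set of reason codes comes from RFC 5280; a quick sketch of that mapping in JavaScript (the values are taken straight from the RFC):

// RFC 5280 CRLReason codes. Note there is no value 7, and
// removeFromCRL (8) is only meaningful in delta CRLs.
var crlReasons = {
    0: "unspecified",
    1: "keyCompromise",
    2: "cACompromise",
    3: "affiliationChanged",
    4: "superseded",
    5: "cessationOfOperation",
    6: "certificateHold",
    8: "removeFromCRL",
    9: "privilegeWithdrawn",
    10: "aACompromise"
};

console.log(crlReasons[1]); // "keyCompromise"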

CA Compromise is clearly a special and exceptional case, so much so that browsers have developed their own mechanism for dealing with it (see OneCRL and CRLSets) so it’s not particularly relevant to the options available to CAs anymore, at least in the WebPKI.

Another thing to consider when looking at revocation reasons is that about 90% of all certificates in use in the WebPKI (for TLS) are considered to be Domain Validated (DV) certificates. This limits the applicability of the remaining reasons substantially:

Reason               DV     OV   EV
Key Compromise       Yes    Yes  Yes
Affiliation Changed  Maybe  Yes  Yes
Privilege Withdrawn  Maybe  Yes  Yes


As you can see, the available “reasons” for DV certificates are largely reduced to just “Key Compromise”. There is an argument that both “Privilege Withdrawn” and “Affiliation Changed” could be applicable to DV, but in my opinion they are a bit of a stretch. If nothing else, the difference between these two reasons in the case of DV is so subtle that you will almost never see them used on a DV certificate.

So what about Key Compromise? How useful is it? Clearly it is an important use case; your trust in an SSL certificate is frankly only as good as your confidence in how well the key is protected!

The reality is that most key compromises cannot be detected, and worse yet, it is far too easy to compromise an SSL key. Take for example the Heartbleed (http://heartbleed.com/) bug from last year: SSL keys are often kept in the web server’s process, and as a result relatively minor coding mistakes can expose that key without users ever knowing about it.

Some platforms go out of their way to prevent such attacks at the expense of performance, like we did at Microsoft with SCHANNEL where we kept the key out of process, but this is the exception and not the rule.

Also, popping a web server via some third-party code vulnerability and getting a shell in the context of the server is unfortunately far too easy. As a result, when an attacker does get a shell on a web server, one of the first things they will do is take that key so they can use it later. But again, how is this detected?

That is not to say “Key Compromise” as a revocation reason is not important when there is a large-scale vulnerability like Heartbleed, but this is not exactly a common case.

The other cases for the “Key Compromise” revocation reason are notably dependent on the subscriber:

  • Knowing the compromise happened,
  • Being mindful enough to notify the CA,
  • Being willing to let the world know there was a compromise.

This last point of public signaling of “compromise”, combined with the fact that user agents treat all revocations the same (if they check at all), means that we seldom see subscribers even specify a reason when they ask for a certificate to be revoked.

As I look at why a certificate gets revoked today, I think there are a few cases; these include:

  • CA has decided the certificate should be revoked,
  • The subscriber has decided the certificate should be revoked,
  • A root program has decided a certificate should be revoked.

These are far more meaningful signals to relying parties than the ones listed above, and they are not so specific that there is any negative signaling in their use.

Today it is not possible to specify these things, but maybe we should take a second look at revocation reasons and, if we continue to use them, make them more relevant to the way the WebPKI works today.

Ryan

Secondary effects when encrypting the web

Secondary effects

Effects are often classified as having primary and secondary impacts. As we evolve the Internet we can see numerous examples of secondary effects; this is especially true as we look at how we evolve the way we secure it.

To put this in perspective, consider the amazing milestone we have just hit: as of 2017, over half of the web is encrypted. Browsers will now be able to gradually switch from showing the “lock” as an exceptional and positive indicator to showing an error when a session is not encrypted. In fact, Chrome has already started this transition, and as soon as October 2017 we will start to see it roll out.

I cannot overstate how huge a change this is. For the last 23 years, SSL has been the exception and not the norm; as a result, the best we could do as software developers was throw site developers a few pixels (the lock) for going out of their way to encrypt their sites. Now we can do what we always wanted: warn the user when there is something to be concerned about.

It took a lot to get us to this point, some of which included:

  • Competition driving the cost of a certificate from thousands of dollars to as little as $0,
  • Evolution and standardization of certificate formats and of the TLS protocol,
  • Maturation and improving performance of TLS implementations,
  • Increase in average computational power so the overhead was negligible,
  • The definition and evolution of standardized practices and audits for certificate authorities,
  • The definition and standardization of a certificate enrollment protocol suitable for TLS use cases,
  • Formalization of user agent root programs and the criteria to which CAs are managed,
  • Establishment of technologies to effectively monitor and manage the CA ecosystem,
  • Migration to SaaS and the Cloud services where management of deployments can be handled in a seamless way,
  • And of course, education and tooling to help operators deploy TLS correctly.

While we are not done with this journey we have made it a long way and the end is in sight.

Each of these changes had secondary effects, even if small, but let’s look at the meta-change to get a better understanding of the consequences of change on the TLS ecosystem.

Nobody ever did, or ever will escape the consequences of his choices. – Alfred A. Montapert

One obvious one that immediately comes to mind is that we have a generation of non-technical people who believe “the lock means it is secure” when it never really meant that.

As they say, correlation is not the same as causation, and I would argue this is how the “lock” myth came to be. In the beginning it literally did cost thousands of dollars to get an SSL certificate, and if you were committed enough to sign up for that, there was a good chance you were a big company that was at least thinking about security. This was especially true because those companies were typically existing brick-and-mortar companies with reputations to protect.

From this myth evolved another belief: that phishers don’t use SSL. This was true in the beginning, and there were a few reasons for it. Cost was of course one, but the process was also cumbersome, and phishing is a numbers game: if users don’t notice the lock, why bother?

A few things have changed. Through education, more users today are aware of encryption, and the absence of the “lock” means more today than it did then. Additionally, the process of getting a certificate has been reduced to a task that can literally take about a minute, and the cost has dropped to the point it is inconsequential.

These things basically invalidate the once-true assumption that phishers don’t bother to use SSL. Thankfully, after the creation of SSL we got technologies like Safe Browsing and SmartScreen, which are actually designed for, and are effective at, addressing this issue. Also, with the lock disappearing over the next few years, the “credibility” boost of the SSL lock will go away, diminishing that value to the phisher. But is this true for other secondary effects?

It is worth noting that another secondary effect we will surely see is confusion from users as this change rolls out, even though it is unquestionably better to warn users when they are at risk.

Let’s look at a larger example of a secondary effect of the move to encryption; specifically, I believe BGP is a good example. The other day Dan Goodin over at Ars did a great article on a potentially malicious BGP rerouting event. In the article he talked about how the attacker could have used the Logjam or DROWN attacks to attempt an MITM, and that is true, but it is not the only way. It no longer takes hours to get a certificate; you can get a Domain Validated (DV) SSL certificate from many providers in a matter of minutes now, whereas a decade ago, or even just five years ago, it could have taken hours.

This is important because to get an SSL certificate all you have to do is prove you control the associated host, and one of the ways you can do that is by dropping a file with a pre-shared value in it on the host (a sketch of what that looks like follows). If you can get the network traffic redirected to you for seven minutes, as happened in this BGP rerouting case, you can also get an SSL certificate for a host in the associated network segments.
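To make this concrete, below is a minimal Node sketch of the kind of challenge response an HTTP-based domain-control check looks for (the token values are made up for illustration, in the style of the ACME HTTP-01 challenge). Whoever receives the host’s traffic, legitimate owner or BGP hijacker, can serve it:

// challenge-server.js - minimal sketch of answering an HTTP-based
// domain-control challenge. The token and path mimic the ACME
// HTTP-01 style check; the values here are made up for illustration.
var http = require("http");

var token = "made-up-challenge-token"; // value the CA asked us to publish
var keyAuthorization = token + ".account-key-thumbprint"; // hypothetical response body

http.createServer(function (req, res) {
    if (req.url === "/.well-known/acme-challenge/" + token) {
        // Whoever controls traffic to port 80 of this host can answer
        // this, which is exactly why a BGP hijack is enough to get a
        // certificate issued for the host.
        res.writeHead(200, { "Content-Type": "text/plain" });
        res.end(keyAuthorization);
    } else {
        res.writeHead(404);
        res.end();
    }
}).listen(80); // binding port 80 typically requires elevated privileges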

In the past, the cumbersome and slow process involved in getting an SSL certificate made that practically impossible, but today it is not. This means that the broad use of SSL has, as a secondary effect, pushed the attacker down a level, to BGP, as a means to achieve their goal.

As a result, we will surely see BGP get more attention from attackers and eventually shore up its security so it is not as susceptible to these rerouting attacks. I also wouldn’t be surprised if we see the WebPKI evolve to make it slightly harder for this sort of attack to take place (by prohibiting certain types of validations) but that’s the game of evolution.

My friend Cem has a saying: security is often an exercise in rearranging deck chairs on a sinking ship.

As we move closer to an encrypted web, I am sometimes reminded of this. No matter what, I am sure we will find more of these secondary effects, which will give us an opportunity to shore up these other systems, just as we have with TLS.

Ryan

P.S. Thanks to Josh at ISRG and Andrew Whalley in the Chrome team at Google for some related conversations that helped me solidify my thoughts on this topic. And also thanks to Vincent Lynch for his review of the post.

Digital Signatures and the fallacy of Good, Cheap and Fast.

It is common to hear that you can choose only two of the following: “Good”, “Cheap” or “Fast”. While there is clearly some truth to this, it is not an absolute truth.

This is especially true in the context of Digital Signatures in the United States, where the law allows parties to agree to rely on virtually any mechanism to capture an agreement. The rationale behind this is that in the event of a dispute it is the claimant who will need to prove consent; as such, it is the parties to an agreement who must assume the burden of choice.

This introduces economic incentives to the decision-making process, specifically, it encourages you to consider which digital signature solution to apply to a given problem. This is an inherently good goal because over-specification in regulations can create market forces that result in the wrong solution being applied to a given business problem.

This flexibility also has downsides, specifically when people look to build a product to enable digital signatures this flexibility diminishes the economic incentives for them to build the most secure solution. Instead, their goal becomes to build the solution they think they can sell with minimal investment.

This, in theory, should still encourage providers to build a strong solution, in that if documents signed with their product end up being thrown out in court, their business will ultimately fail. The problem with this motivation is that it can take a decade or more for this to become a concern that can hurt their business. As such, they double down on the creation of a solution that can be taken to market quickly.

It is easy to see this play out in the US Digital Signature market: the large majority of solutions do little more than produce an image that looks like “a signature” and embed it into a document. In the European Union, they call this a “Simple Signature”. To produce a fraudulent signature with such a scheme you typically need only select the signature with your mouse, right-click, select copy, and then paste the image into the fraudulent document. These solutions are typically very easy to use and cheap to build, but this is clearly not a technique you would want to use on an agreement that might need to be enforced in a court of law.

Some solutions try to make these Simple Signatures stronger by taking a hash of the document with the signature in it and placing it in a database on their servers. This is done so that in the event of a dispute you can ask the service provider to go to court with you, act as an expert, stand by your side, and say “the hash is the same one we saw when they signed the document”. This is better than the pure image approach, but not by much, if for no other reason than that they are unlikely to assign legal support to every contract dispute involving contracts signed with their service. Additionally, if the legitimacy of the contract is put into question and the site has had any security incidents at all since it was signed, the claims they make would be questionable at best.
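Mechanically, the “hash on file” approach is trivial; here is a minimal Node sketch of the record such a service might keep (the file path is just an example):

// hash-record.js - sketch of the "hash on file" approach some Simple
// Signature services use: compute a digest of the signed document and
// store it server-side so it can be compared later in a dispute.
var crypto = require("crypto");
var fs = require("fs");

var document = fs.readFileSync("./signed-agreement.pdf"); // example path
var digest = crypto.createHash("sha256").update(document).digest("hex");

// In a real service this record would go into a database; the weakness
// is that the database, not the document, is now what you must trust.
console.log("record:", { sha256: digest, signedAt: new Date().toISOString() });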

Other Simple Signature solutions try to mitigate these issues by applying a cryptographic notarization to the document after the basic Simple Signature is applied. This adds a trustworthy notarization of when the document was signed and, to some degree, makes the document capable of standing on its own. Now, if the original electronic document was produced with standards-based software, anyone can confirm it has not been tampered with since it was produced. This reduces the legal exposure relating to a compromise of the signing service, which certainly helps with enforceability, but it does not eliminate the risk, because a slightly more difficult compromise of the service could still result in a bad actor producing a false document.

The next step up in enforceability is generally referred to as an “Advanced Signature”. These Advanced Signatures almost always start as a “Simple Signature” but they will also include signing the document with a cryptographic key under the sole control of the signer. This provides assurances that it was the user, and not the digital signature service, that signed the document.

Practically speaking, the final step in enforceability when it comes to digital signatures is referred to as a “Qualified Signature”. This builds on the previous two types of signatures and adds two requirements. The first is that the key be managed on a specialized cryptographic appliance such as a smart card or hardware security module. The rationale for this is that signing keys are typically long-lived, and keys stored in software are exposed to theft. This is important because if an attacker steals one of these keys they can produce fraudulently signed documents with it. The second requirement is that the legal identity of the signer be verified and included in the digital signature. This is in contrast to the other types of signatures, which in essence only require the signer to be uniquely identified; in other words, something as basic as an email ping could be considered sufficient.

What has happened in the US is that, since the law does not require anything stronger than a Simple Signature outside of areas like healthcare and finance (where there are either regulatory requirements or business risks that justify a better solution), we see only Simple Signatures in use.

The vendors of these products would probably tell you they don’t build these more enforceable solutions because you have to choose two of the following: Fast, Cheap or Secure.

In the European Union, you have had the opposite problem: the economic incentives of the vendors and the associated regulatory frameworks have sent their market down the path of trying to apply Qualified Signatures to nearly all transactions. This reality has hampered the adoption of electronic signatures, limiting their use to the highest-value transactions due to the onboarding and usage friction this approach typically entails.

I believe this is an example of market failure and that it is possible to build a solution that scales across a business’s enforceability needs. One of the key reasons the market has not yet delivered such a solution is that the economic incentives of the vendors are not well aligned with those of their customers.

Tides are changing in both the US and in the European Union and I believe we will see a convergence of the best of both solutions in the coming years but as of yet, such a solution has not found its way to the market.

CAs and SSL and Phishing Oh My!

NOTE: This post reflects my personal beliefs and is not necessarily those of my employer Google, or Let’s Encrypt where I am a member of their Technical Advisory Board.

Introduction

Recently Vincent from The SSL Store published a blog post calling out Let’s Encrypt for issuing certificates to domains that contain the word PayPal.

The TL;DR for his post is he believes that Let’s Encrypt is enabling phishers by issuing them SSL certificates that contain the word “PayPal” and then refusing to revoke them when arbitrary third-parties ask them to.

As a result of his post, several news sources have decided to write articles about how “Let’s Encrypt” is acting as an enabler of these Phishers [1] [2].

Unfortunately, Vincent’s post and the associated articles don’t cover this in the most complete and balanced way, so over my morning coffee today I decided to write this post to discuss the other side of the argument.

If this is a topic that interests you please also check out the Let’s Encrypt blog post where they talk about why they have taken this position.

Exploration

Let’s explore the opportunities CAs have to check for phishing, the tools they have available to them, the effectiveness of those tools, the consequences of this approach, how complete a solution based on the tools available to them would be and what the resulting experience would be for users.

Opportunities

The role of the CA in the WebPKI has historically been that of a passport office: you present proof that you control a domain, and possibly that you are an authorized member of an organization, and you get a digital certificate that attests to that.

This certificate could be valid for up to 1095 days. Once the certificate is issued the CA, largely speaking, has no natural opportunity to verify this information again. It is worth noting that this month the CABForum voted to shorten this period to 825 days.

Tools

In the event a CA determines it made a mistake in the issuance of a certificate or has been notified by the subscriber they would like to see a certificate marked invalid, the tool they have available to them is called “revocation”.

The two types of revocation that are under the control of a Certificate Authority are called Certificate Revocation Lists and OCSP responses. The first is like a phonebook of all known “revoked” certificates, while the second is more like a lookup service that enables User Agents to ask for the status of a particular certificate.

Effectiveness

Earlier we discussed the lifetime of certificates. This is important to understand because the large majority of phishing sites do not start out as phishing sites; as such, issuance-time checks seldom net positive results.

After issuance, this leaves you with periodic checks of the site, third-party reports of phishing, and revocation checking as an enforcement mechanism. This is a recipe for failure; there are a few reasons for this, but one of the more significant is the general ineffectiveness of revocation checking.

Revocation checking is the most taxing thing a CA does. This is because the revocation mechanisms available to them result in every relying party contacting them to download an OCSP response or a CRL covering that certificate.

As a result, OCSP has a tendency to be both slow and unreliable. This forced browsers to implement the check as a “soft fail”: in other words, if the connection times out or fails for some reason, they assume the certificate is good (see the sketch below).
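To illustrate what “soft fail” means in practice, here is a small sketch; the ocspCheck parameter is a hypothetical helper that performs the actual network request, and the point is the error handling around it:

// Sketch of soft-fail revocation semantics: if the status check
// errors or times out, the client proceeds as if the certificate
// were good. ocspCheck is a hypothetical helper, not a real API.
function withTimeout(promise, ms) {
    return Promise.race([
        promise,
        new Promise(function (resolve, reject) {
            setTimeout(function () { reject(new Error("timeout")); }, ms);
        })
    ]);
}

function isRevokedSoftFail(cert, ocspCheck) {
    return withTimeout(ocspCheck(cert), 3000)
        .then(function (status) { return status === "revoked"; })
        .catch(function () {
            // Timeout, network failure, malformed response: assume good.
            // This is exactly why revocation is unreliable as a control.
            return false;
        });
}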

To give that some context, about 8% of all revocation checks done by Firefox fail, and the median response time is over 200ms.

As a result, in 2012 Chrome, which is used by about 50% of all users, more or less disabled revocation checking except in exceptional circumstances.

What this means is that revocation checking, even for its intended purpose, is far from an effective tool. Expanding its use to include protecting users from phishers would not improve its effectiveness and arguably it would (due to the infrastructure implications) make it even less reliable.

It is also important to note that every wildcard certificate can be used for a hostname containing “PayPal” without the CA ever being made aware; a good example is https://paypal.github.io/, which is protected by a wildcard certificate issued to GitHub, as the sketch below illustrates.
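A simplified sketch of the usual leftmost-label wildcard matching rule shows why: *.github.io was issued long before any “paypal” label existed beneath it, yet it matches.

// wildcard-match.js - simplified sketch of how clients match a
// wildcard certificate name against a hostname (per the usual
// leftmost-label rule); real implementations handle more edge cases.
function matchesWildcard(certName, host) {
    if (certName.indexOf("*.") !== 0) {
        return certName.toLowerCase() === host.toLowerCase();
    }
    var suffix = certName.slice(1).toLowerCase(); // ".github.io"
    var dot = host.indexOf(".");
    // The wildcard covers exactly one label, so "paypal.github.io"
    // matches "*.github.io" even though the CA never saw "paypal".
    return dot > 0 && host.slice(dot).toLowerCase() === suffix;
}

console.log(matchesWildcard("*.github.io", "paypal.github.io")); // true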

Consequences

To understand the consequences of expanding the CA’s role to include protecting us from phishing, we first need to understand what a certificate represents, or more importantly, what it does not represent. It does not represent the content; it represents the host that is serving the content, and it is the content that “phishes”.

Today, in the age of cloud services, there is a good chance the host that is serving the content is a service operated by WordPress, or maybe Amazon’s S3. These services allow users to sign up and post arbitrary content for free or very little money.

If we decide that revocation checking is the right tool to get phishing content off the web, we would be saying a CA should revoke WordPress’s certificate if one of its users posted something someone reported as phishing content. That would, in the situations where revocation checking takes place and happens to work, take WordPress off the Internet. Is that what we want to happen?

If so, who is it we are asking to perform this task? There are well over 400 CAs in the Microsoft Root Program; do we believe these are the right organizations to be policing the internet for the appropriateness of content?

If so, what criteria should they use to do so, and what do we do if they abuse this censorship role?

Completeness

It is easy to say that a CA should not issue a certificate if it contains the word “PayPal”. I could even see an argument that those who would be hurt by such a rule, for example http://www.PayPalSucks.com and (a theoretical) PayPalantir.com, are an acceptable loss.

This would, however, not catch homoglyphs, like when a Cyrillic “а” is used instead of the Latin “a”. Catching those would very likely require a manual review of the name and content to determine the intent of the domain owner, which is nearly impossible to do with any level of accuracy or fairness.

Even with that, what about ING? As one of the world’s largest banks, they too are commonly phished. Should a CA be able to issue a certificate to https://www.fishing.com? And if they do, and the issuing CA receives a complaint that it is phishing ING, what should they do?

And what about global markets and languages? In Romania there is a cleaning company called Amazon; should anyone be able to have its certificate revoked because the domain contains the word Amazon?

If we promote the CA to content police, how do we do so in a complete way?

User Experience

With CAs acting as the content police, what would a user see when they encounter a revoked site? While it varies from browser to browser, the experience is almost always a blocking “interstitial”, for example:

[Screenshots: the revoked-certificate interstitials shown by Chrome and Firefox]


If you look closely you will see these are not screens you can bypass; revoked sites are effectively removed from the internet.

This is in contrast to Safe Browsing and SmartScreen, which were designed for this particular problem set and therefore give the user a chance to visit the site after a contextually relevant warning:

[Screenshots: the Safe Browsing and SmartScreen warning pages]

Conclusion

I hope you see from the above that relying on Certificate Authorities as content police as a means to protect users from phishers is a bad idea. At a minimum, it would be:

  • Ineffective,
  • Incomplete,
  • Unmanageable,
  • and Duplicative.

But more importantly, it would establish a large, loosely managed group as the de facto content censors on the internet, and as Steven Spielberg said, there is a fine line between censorship, good taste, and moral responsibility.

So what should CAs do about phishing then? It is my position they should check the Google Safe Browsing API prior to issuance (which, by the way, Let’s Encrypt does), and they should report phishers to the Safe Browsing service if they encounter any. A sketch of such a check follows.
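For the curious, a pre-issuance check against the Safe Browsing v4 Lookup API looks roughly like the sketch below. This assumes Node 18+ (for the global fetch) and a valid API key, and the exact request shape should be verified against Google’s current documentation:

// safebrowsing-check.js - sketch of a pre-issuance phishing check
// against the Google Safe Browsing v4 Lookup API. Assumes Node 18+
// (global fetch) and a valid API key; treat the request shape as a
// starting point and verify it against the current API docs.
var API_KEY = "YOUR_API_KEY"; // placeholder

function checkUrl(url) {
    return fetch("https://safebrowsing.googleapis.com/v4/threatMatches:find?key=" + API_KEY, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
            client: { clientId: "example-ca", clientVersion: "1.0" },
            threatInfo: {
                threatTypes: ["MALWARE", "SOCIAL_ENGINEERING"],
                platformTypes: ["ANY_PLATFORM"],
                threatEntryTypes: ["URL"],
                threatEntries: [{ url: url }]
            }
        })
    })
        .then(function (res) { return res.json(); })
        .then(function (body) {
            // An empty response body means no matches were found.
            return body.matches ? body.matches : [];
        });
}

checkUrl("http://example.com/").then(function (matches) {
    console.log(matches.length ? "flagged" : "clean");
});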

It is also important to answer the question of what users should do to protect themselves from phishing. I understand the desire to say there is only one indicator they need to worry about; it’s just not realistic.

When I talk to regular users I tell them to do three things. First, use an up-to-date and modern browser that uses SmartScreen or Safe Browsing. Second, only provide data to sites you know, and only over SSL. And finally, try to only provide sites information when it was you who initiated the exchange.

P.S.

Thanks To Vincent Lynch and the others who were kind enough to proof this post before publishing.

How do you show someone they are important to you?

Showing someone they are important to you is not always straightforward. Hackers, on the other hand, seem to have figured this out.

Over the years I have designed, built and operated a number of WebPKI Certificate Authorities. In these roles, one thing that has been common is that hackers are always willing to use their finest exploits for a chance at the keys that would let them impersonate any site on the Internet.

That is why it’s not uncommon for an attacker to spend a zero-day on a WebPKI CA, and when they do I would argue that is their way of showing you they think you are important.

The black market for high-quality remote exploits that have never seen the light of day is not exactly small. Exploits like this are essentially a rapidly depreciating asset, once they become known the clock starts ticking and their value decreases until it hits zero.

In a past role, over a period of about three years, we had at least three zero-days spent on us, and I was flattered.

Attackers are no different than everyone else: they prefer the easy path, so they often go after the oldest and cruftiest stuff you depend on, and Adobe Acrobat was one such component for us.

Acrobat has had nearly 800 CVEs in the last 18 years. Each CVE often covers many related vectors, but if we are generous and say one CVE equals one exploit, that averages over 40 a year. Not all of those issues meet the same bar, but it is one of the most attacked pieces of software out there.

What got me thinking about this was a great post on mitigation vs isolation and the different strategies used by Chrome and Edge in their security program.

While you clearly need to employ both strategies, perfection is impossible to achieve, which means isolation and least privilege end up being among the most powerful tools in your mitigation toolbox.

This is one of the reasons I like projects like PDF.js, Electron and Progressive Web Applications; they inherently get the benefit of browser isolation and mitigation.

The trend toward web applications like these makes it harder on attackers, and that’s a good thing.

Do PDFs have to be so frustrating?

Did you know the PDF format is almost as old as the internet itself? Despite its age, it is still the only true cross-platform “paper-like” experience available on the web. Unfortunately, it is also one of the most frustrating file formats on the web.

The primary reason for this frustration is rooted in the format itself. Its maintainers, like most maintainers of old technology still in use, have made a half-hearted attempt to keep pace with the internet as it evolved. The goal of those efforts has typically been rooted in the philosophy of wanting to have your cake and eat it too; specifically, they have tried to build a mega-format that retains 20+ years of backward compatibility while bolting more features on.

This combination has resulted in a very complicated format with more options than you can shake a stick at. Many of these features are either not documented or, worse, are documented incorrectly. ISO has been making an effort to address the documentation issue, and the latest documents are much better, but the fact remains that PDF is both old and complicated.

Generally, this means application developers that need to support PDF have to choose between defining some minimal profile of the standard that fits their needs and building their own libraries, or licensing a more complete library with onerous terms and fees.

To complicate things further these commercial libraries are usually written in languages like Java, C and PHP which don’t exactly represent the most modern development platforms for the web.

As a result, almost all solutions that work with PDFs, where the PDF itself is not the “product”, do the natural thing: create an image and wrap it in a PDF file. They call this approach “flattening” the document; I can only assume this is to make it sound less hacky.

While there are legitimate cases to flatten a document in this way it causes a few problems, for example:
– Accessibility tools like screen readers no longer work,
– You can no longer select text,
– You can no longer extract the field data entered into forms,
– The document can no longer reflow to be readable on smaller screens,
– And more…

An interesting observation is that nearly every document signing solution I have seen seems to flatten the document also. From an engineering standpoint I understand why they make this decision: it is much easier to do, and there is an argument to be made that, long term, images are easier to handle. With that said, the downsides of this approach are significant.

At Microsoft we used to talk about being “authentically digital”, the idea being that you want to embrace the good things about the physical world’s way of doing things, but you also need to be true to the technology.

The technical baggage of the PDF format and the lack of freely available SDKs basically put developers in a no-win situation, forcing them to give up the best parts of the digital medium if they want to work with these documents.

Are these non-flattened documents a replacement for more modern file formats? No, they are not; but equally, at least when you consider cross-platform needs, neither are those formats a replacement for PDF.

In short, I think PDFs do not need to be so frustrating! However, if we are going to keep using the format we need to go the extra mile to retain their digital goodness.

XMLDSIG and XAdES in the browser and Node

It is great that we have a language like Javascript that we can use in both client and server applications. With that said one of the challenges both the browser and server environments have compared to say Java or .NET is a fairly weak set of class libraries to build upon.

This is especially true when it comes to security technologies; addressing this is the foundation of most of our OSS work at Peculiar Ventures.

Our most recent project is an XMLDSIG library we call XAdESjs. It is based on WebCrypto and, as a result, with an appropriate polyfill it works on Node as well as it does in the browser.

With it and a few lines of code you can now sign and verify XMLDSIG and XAdES-BES signatures on these platforms, for example a verify on Node might look like this:

var xadesjs = require("./xadesjs/built/xades.js");
var DOMParser = require("xmldom").DOMParser;
var WebCrypto = require("./node-webcrypto-ossl").default;

// Tell XAdESjs which WebCrypto implementation to use on Node.
xadesjs.Application.setEngine("OpenSSL", new WebCrypto());

var fs = require("fs");
var xmlString = fs.readFileSync("./xadesjs/test/static/valid_signature.xml", "utf8");

// Parse the signed document and locate the XMLDSIG Signature element.
var signedDocument = new DOMParser().parseFromString(xmlString, "application/xml");
var xmlSignature = signedDocument.getElementsByTagNameNS("http://www.w3.org/2000/09/xmldsig#", "Signature");

// Load the signature and verify it against the document.
var signedXml = new xadesjs.SignedXml(signedDocument);
signedXml.LoadXml(xmlSignature[0]);
signedXml.CheckSignature()
    .then(function () {
        console.log("Successfully Verified");
    })
    .catch(function (e) {
        console.error(e);
    });

Though we only support the most basic forms of XAdES at this time it is enough to verify the EU Trust Service Provider List (EUTSL).

We intend to add more of XAdES to the library in the future, though we have no schedule for that at this time.

There is still lots to do so please consider contributing.

Ryan

Escaping the Same-origin Policy for PKIjs

PKIjs enables you to build rich PKI-aware applications inside the browser; that said, the browser implements a security policy called the Same-origin Policy that restricts the code running on a page from accessing resources served from other locations.

There is also a related capability that allows a remote server to state which origins can interact with it; this capability is called Cross-Origin Resource Sharing (CORS).

This background is important since the associated standards for PKI presume the client will be able to reach the Certificate Authority that issued the certificate to check revocation status (OCSP and CRL) as well as potentially fetch issuer certificates.

Unfortunately, the WebPKI CRL and OCSP responders do not set these headers; as such, for a web page to fetch these resources, an intermediate proxy is needed to enable your web application to access these servers.

There are lots of great resources on how to configure Nginx as a forward proxy, and it is straightforward to do; the problem is you do not want to run an open forward proxy that can be abused. What we want is a virtual host that will proxy requests on behalf of the client to the appropriate origin server; for example’s sake, let’s say we have a host proxy.example.com. Since we do not want to create an open proxy, we want some constraints:

  1. Only forward proxy to a whitelisted set of hosts,
  2. Only proxy specific methods (POST and GET),
  3. Rate limit the client to make it more difficult to abuse.

To enable this we want to be able to pass a query string to Nginx that contains the URL we want the request forwarded to, for example, https://proxy.example.com?url={urlencoded url}

A configuration that does this might look something like the following:

map "$request_method:$uri" $whitelist {
default "deny";
~^POST:/&url=http:/timestamp.globalsign.com "allow";
~^GET:/&url=http:/ocsp2.globalsign.com "allow";
}

limit_req_zone $binary_remote_addr zone=proxy:10m rate=5r/m;

server {
 resolver 8.8.8.8;
 listen 80 default_server;
 location / {
 limit_req zone=proxy burst=5;
 if ($whitelist = deny) { return 403; break; }
 if ($uri ~ \&url\=(https?):/([^\/]*)(.+)) { set $myproto $1; set $myhost $2; set $myuri $3; }
 if ($myproto = http) { proxy_pass http://$myhost$myuri; break; }
 if ($myproto = https) { proxy_pass https://$myhost$myuri; break; }
 }
}

NOTE: It seems that Nginx has a bug where, when tokenizing, it mangles the URL-encoded value, stripping out some characters (including the /). To work around this, the above configuration matches on a single slash; this still works because the stripping appears consistent and does not affect the $uri value which is passed to the origin server.

NOTE: Unfortunately to accomplish the above we needed to use the if statement which has some downsides. If you know of how to accomplish the above without the use of them, or how to use less of them please let me know.

With this, the browser is now able to make requests to a limited set of hosts once every 12 seconds (the 5r/m rate limit). A sketch of what the client side looks like follows.
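From the application’s perspective, using the proxy is then just a matter of URL-encoding the target. A rough sketch (the hostnames are the example values used above, and the request body is a placeholder for a real DER-encoded OCSP request, e.g. one produced with PKIjs):

// Sketch of a PKIjs-based page fetching an OCSP response through the
// proxy described above. ocspRequestBody is a placeholder; a real
// DER-encoded OCSP request (e.g. built with PKIjs) goes there.
var ocspRequestBody = new Uint8Array(0); // placeholder bytes
var target = encodeURIComponent("http://ocsp2.globalsign.com/");

fetch("https://proxy.example.com/?url=" + target, {
    method: "POST",
    headers: { "Content-Type": "application/ocsp-request" },
    body: ocspRequestBody
})
    .then(function (res) { return res.arrayBuffer(); })
    .then(function (der) {
        console.log("got OCSP response, " + der.byteLength + " bytes");
    });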

But what if you want this forward proxy (proxy.example.com) to exist in a different domain from the application (pkijsapp.example.com)? By default, the browser will not allow this, but thankfully we can use Cross-Origin Resource Sharing (CORS) to tell clients they can send requests to our proxy. You do this by returning two additional headers to the client:

Access-Control-Allow-Origin: http://pkijsapp.example.com
Access-Control-Allow-Methods: POST, GET

With these two headers set, the browser will allow the pkijsapp.example.com application to send both POST and GETs to our proxy.

But what if we want to communicate with multiple servers? Unfortunately, the CORS specification only allows the server to set a single origin in this header. It recommends that cases that require access to multiple origins set the header dynamically. This, of course, requires the server to have knowledge of what the client needs and if you are building a Single Page Application it’s quite likely the server doesn’t have that context.

Thankfully it does support a wildcard, so if you have multiple application domains that need access to your proxy you can specify *:

Access-Control-Allow-Origin: *

This approach works around the multiple origin limitations of CORS but has its own issues, for example:

  1. Your server is now an unauthenticated network proxy that can be abused,
  2. If the servers you include in the whitelist can also serve active content they become useful to an attacker,
  3. Your server now has knowledge of which certificates are being validated by the client.

Neither of these approaches is perfect but they both allow you to get the information necessary to validate certificates using PKIjs within the browser.

Ryan

Understanding risks and avoiding FUD

Disclaimer: This post represents personal opinions and thoughts, and does not represent the views or positions of my employer Google, or Let’s Encrypt where I am a member of their advisory board.

The first step in building a complex system or protocol that has any security needs at all is to model the threats that the system is exposed to. Without a deep understanding of the system and the threats it is exposed to it is not possible to effectively secure it from a motivated attacker.

A key part of this process is understanding the abilities, motivations and the perspective of the attacker. This not only helps you understand what it is they are looking for, but what they will do with the surface area they have access to.

I bring this up because today TrendMicro published a blog post titled “Let’s Encrypt Now Being Abused By Malvertisers”, and after reading it, I am convinced the authors are not looking at this topic with that frame of mind. Additionally, it seems they either intentionally or unintentionally omit and/or misrepresent details that are imperative to understanding the incident they discuss.

In this post, I try to look more critically at what happened and identify what the core issues at play actually were.

Types of certificates

Before I get to the details I want to provide some background. First it is important to understand that there are, broadly speaking, three types of SSL certificates that Certificate Authorities issue, each with increasing levels of “identity assurance”.

The first is what is referred to as a “Domain Validated” or DV certificate. To obtain a DV certificate, one must only prove that they control the host or domain to be certified.

The next is referred to as an “Organization Validated” or OV certificate. To obtain an OV certificate, one must prove control of the domain using the same mechanisms allowed for DV, but must also prove two additional things: first, who the legal entity that controls the domain is; second, that the requester is acting on behalf of that entity. Browsers do not treat these certificates any differently than a DV certificate, and as such they offer the relying party no additional value.

Finally, there are “Extended Validation” or EV certificates. These are logically the same as OV except that the requirements which must be met when verifying the legal entity and requestor affiliation are higher. Since knowing the entity that controls a given host or domain can provide useful information to a relying party, browsers display a “Green Bar” with the legal entity name visible within it for these certificates.

Let’s Encrypt issues only Domain Validated certificates. To put this in context, today around 5% of certificates on the Internet are EV. This is important to understand, in that the certificates issued by Let’s Encrypt are effectively no less “secure” or “validated” than the DV certificates issued by other Certificate Authorities. This is because all CAs are held to the same standards for each of the certificate types they issue.

Request validation

So what exactly is the process a Certificate Authority follows when validating a request for a Domain Validated certificate? The associated requirements document is nearly fifty pages long, so to save you muddling through it I will summarize the requirements. The CA must verify that:

  • the requestor controls the key being bound to the account or key being certified,
  • the requestor controls the host or domain being certified,
  • issuing the certificate would not give the key holder the ability to assert an identity other than what has been verified,
  • the hosts or domains being certified are not currently suspected of phishing or other fraudulent usages.

It also suggests (but does not require) that CAs implement Certification Authority Authorization (CAA) Resource Records. These are special DNS records that let a domain owner state which Certificate Authority it legitimately uses for certificates; a sketch of such a lookup follows.
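As a sketch, on a recent Node (whose dns module can resolve CAA records, v15+) checking a domain’s CAA policy looks roughly like this; the domain is just an example:

// caa-check.js - sketch of the kind of CAA lookup a CA performs before
// issuance. Requires a Node version whose dns module supports
// resolveCaa (v15+); the domain is just an example.
var dns = require("dns").promises;

dns.resolveCaa("google.com")
    .then(function (records) {
        // Each record looks like { critical: 0, issue: "pki.goog" }.
        records.forEach(function (r) {
            if (r.issue) {
                console.log("issuance allowed for CA:", r.issue);
            }
        });
    })
    .catch(function (e) {
        // ENODATA/ENOTFOUND means no CAA records; any CA may issue.
        console.log("no CAA records found:", e.code);
    });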

Let’s Encrypt

If you want to see for yourself what Let’s Encrypt does for each of these requirements you can review their code; that said, it can be summarized as follows:

Proof of possession: Ensure the requestor can sign a message with the key that will be associated with their account, and another with the key that will be associated with their certificate.

Domain control: They support several mechanisms but they all require someone in control of the domain or host to perform an action only a privileged user could.

They also check the Public Suffix List (PSL) on each request, using it to rate limit certificate issuance for subdomains not on the PSL.

They also do not issue wildcard certificates.

Domain intent: They implement strict support for CAA records; if such a record is found and Let’s Encrypt is not listed, the request will be denied.

Fraudulent usage: Every host or domain requested is checked against the Google Safe Browsing service.

This is, at a high level, the same thing every CA is supposed to do for every DV, OV or EV SSL certificate they issue. If they did not, they would not pass their WebTrust audit. The only places where there is any room for interpretation are:

  1. Which domain control approaches they support,
  2. How broadly in their orders they use the PSL,
  3. If they support CAA,
  4. Which data source they use for fraudulent use detection.

On fraudulent use detection I know of CAs that utilize various pre-issuance approaches. Some use private databases; others may use Google Safe Browsing, APWG, PhishTank or some combination of all of these. I should add that the Google Safe Browsing API that Let’s Encrypt uses has a reputation of being one of the best available sources for doing these kinds of checks. I know of only a few who do any post-issuance checking of these data sources.

But what about the infamous Let’s Encrypt certificate issuance automation? Contrary to popular belief Let’s Encrypt is not the first, or even the only CA that automates 100% of the issuance process. They are also not the only ones who offer client automation to request the certificate and configure SSL.

Just ask yourself: how do providers like CloudFlare or hosting providers get and configure SSL certificates in seconds if it is done manually?

If that is the case how is Let’s Encrypt different? I would say there are two very visible differences:

  1. They are 100% free with no limitations on who can get certificates,
  2. Their practices are the most transparent of the publicly trusted CAs.

The FUD

There is a lot of FUD in the TrendMicro post so please bear with me.

The first that jumps out is that the authors essentially say they saw this coming. It’s framed this way in an attempt to convince the reader this was only possible because of Let’s Encrypt.

Unfortunately, the potential for Let’s Encrypt being abused has always been present. Because of this, we have kept an eye out for malicious sites that would use a Let’s Encrypt certificate.

The problem with this argument is that there really is nothing special about what Let’s Encrypt is doing. In fact, Netcraft recently did a post titled Certificate authorities issue SSL certificates to fraudsters where they find numerous other well-respected CAs have issued certificates to people who later used them for bad deeds.

The issue here is that fraudsters understand CAs check orders for fraud pre-issuance to avoid posts just like the one from TrendMicro we are discussing; as a result, attackers wait until after a certificate is issued to perpetrate fraud with the associated hosts and domains.

Any technology that is meant for good can be abused by cybercriminals, and Let’s Encrypt is no exception. As a certificate authority ourselves we are aware of how the SSL system of trust can be abused.

This statement is supposed to suggest that if Let’s Encrypt had not issued the associated certificate, the attack in question would not have been possible. It again goes on to suggest that TrendMicro is somehow different. Recall, however, that all CAs are subject to the same requirements for verifying domain control; the core difference between Let’s Encrypt and TrendMicro as CAs is that TrendMicro only issues Organization Validated and Extended Validation certificates.

Maybe that is what they are driving at? There is a valid argument to be made that the increased cost and complexity of getting an OV certificate make it harder for an attacker to get an SSL certificate, because they must also prove organization details. With that said, as long as DV certificates are allowed, the user’s risk is the same.

It is worth noting, though, that it is quite possible to spoof the material necessary to get an OV certificate. In fact, there are numerous ways one can (very inexpensively) produce “false” but “verifiable” documentation that would meet the OV verification bar. But the argument they are making here is a red herring; I will explain why in a bit.

Maybe then the issue is that Let’s Encrypt is somehow at fault because they issue certificates via a fully automated mechanism?

A certificate authority that automatically issues certificates specific to these subdomains may inadvertently help cybercriminals, all with the domain owner being unaware of the problem and unable to prevent it.

If so, the core issue is one that touches TrendMicro also. This is because they also issue certificates “in minutes”, something only possible with automation. This too is a red herring though.

Maybe the issue is a result of Let’s Encrypt not meeting the same requirements as other CAs?

Let’s Encrypt only checks domains that it issues against the Google safe browsing API;

As I have called out above, that is simply not true, and all you need to do is check their public source repository to confirm it yourself.

So maybe the problem is Let’s Encrypt just is not playing an active role in keeping you safe on the internet.

Security on the infrastructure is only possible when all critical players – browsers, CAs, and anti-virus companies – play an active role in weeding out bad actors.

There is a word for this sort of argument: “specious”. The reality is that when you look at the backers of Let’s Encrypt (which, in full disclosure, includes my employer) you see the who’s who of privacy and security.

Well then if the issue is not Let’s Encrypt, maybe it is Domain Validated certificates? Their post does not directly make this claim but by suggesting TrendMicro would not have issued the attacker a certificate they may be trying to subtly suggest as much since they don’t issue DV certificates.

The problem is that even if DV did not exist, I am confident a motivated attacker could easily produce the necessary documentation to get an OV certificate. It is also worth noting that today 70% of all certificates in use only offer the user the assurance of a Domain Validated certificate.

OK, so maybe the core issue is that SSL was available to the attacker at all? The rationale being that if the site hosting the ad was served over SSL and the attacker did not serve their content over SSL, the users would have seen the mixed content indicator, which might have clued them in that something was afoot. The problem with this argument is that study after study has shown that the subtle changes in the lock icon are simply not noticeable by users, even those who have been taught to look for them.

So if the above points were not the issue, what was?

How was this attack carried out? The malvertisers used a technique called “domain shadowing”. Attackers who have gained the ability to create subdomains under a legitimate domain do so, but the created subdomain leads to a server under the control of the attackers. In this particular case, the attackers created ad.{legitimate domain}.com under the legitimate site. Note that we are disguising the name of this site until its webmasters are able to fix this problem appropriately.

There you go. The attacker was able to register a subdomain under a legitimate parent domain and as such was able to prove control of any hosts associated with it. It had nothing to do with SSL; the attacker had full control of a subdomain, and the attack would have still worked without SSL.

Motivations

With this post TrendMicro is obviously trying to answer a question their customers must be asking: why should I buy your product when I can get theirs for free?

DigiCert, GoDaddy, Namecheap and others have done their own variations of FUD pieces like this, trying to discourage the use of “free certificates” and discredit their issuers. They do this, I assume, in response to the fear that the availability of free certificates will somehow hurt their business. The reality is there are a lot of things Let’s Encrypt is not, and will probably never be.

Most of these things are the things commercial Certificate Authorities already do and do well. I could also go on for hours on different ways they could further improve their offerings and better differentiate their products to grow their businesses.

They should, instead of trying to sell fear and uncertainty, focus on how they can make their products better, stickier, and more valuable to their customers.

That said, so far it looks like they do not have much to worry about. Even though Let’s Encrypt has issued over 250,000 certificates in just a few months, almost 75% of the hosts associated with them did not have certificates before. The 25% who may have moved to Let’s Encrypt from another CA were very likely customers of other free certificate providers and resellers, where certificates are commonly sold for only a few dollars.

In short, it looks like Let’s Encrypt is principally serving the long tail of users that existing CAs have historically failed to engage.

Threat Model

When we look critically at what happened in the incident TrendMicro discusses, we can clearly identify two failures:

  1. The attacker was able to register a legitimate subdomain due to a misconfiguration or vulnerability,
  2. A certificate was issued for a domain and the domain owner was not aware of it.

The first problem would have potentially been prevented if a formal security program was in place and the system managing the domain registration process had been threat modeled and reviewed for vulnerabilities.

The second issue could also have been caught through the same process. I say “could” because not everyone is aware of Certificate Transparency (CT) and CAA, which are the tools one would use to mitigate this risk.

At this time CT is not mandated by WebTrust, but Chrome does require it for EV certificates as a condition of showing the “Green Bar”. That said, even though it is not a requirement, Let’s Encrypt chooses to publish all of the certificates they issue into the logs. In fact, they are the only Certificate Authority I am aware of who does this for DV certificates.

This enables site owners, like the one in question, to register with log monitoring services, such as the free and professional ones offered by DigiCert, and get notified when such a certificate is issued; a quick sketch of such a lookup follows.
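If you want to poke at the logs yourself, aggregators expose simple interfaces; for example, here is a quick sketch against crt.sh’s JSON output (assuming Node 18+ for the global fetch; the field names are crt.sh’s and may change):

// ct-lookup.js - sketch of checking a CT log aggregator (crt.sh) for
// certificates issued for a domain. Assumes Node 18+ (global fetch);
// the JSON output format is crt.sh's and may change, so treat the
// field names here as illustrative.
var domain = "example.com";

fetch("https://crt.sh/?q=" + encodeURIComponent(domain) + "&output=json")
    .then(function (res) { return res.json(); })
    .then(function (certs) {
        certs.slice(0, 5).forEach(function (c) {
            console.log(c.not_before, c.issuer_name, c.name_value);
        });
        console.log(certs.length + " certificates logged for " + domain);
    })
    .catch(function (e) { console.error("lookup failed:", e); });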

As mentioned earlier, CAA is a pre-issuance check designed for this specific case, namely: how do I tell a CA I do not want them issuing certificates for my domain?

Until all CAs are required to log all of the SSL certificates they issue into CT logs and are required to use CAA, these are not reliable tools to catch misissuances. That said, since Let’s Encrypt does both, they would have caught this particular case, and that is a step in the right direction.


[Edited January 9th 2016 to fix typos and make link to trend article point to the Internet Archive Wayback Machine]