Category Archives: Opensource

How is ACME different than XCEP/WSTEP anyway?

If you read my blog there is a reasonable chance that you are familiar with RFC 8555, the standard for Automatic Certificate Management Environment (ACME). Even though ACME is a relatively young protocol it is already used by the majority of websites on the internet for certificate lifecycle management.

While I won’t go into a lot of detail, for this post to make sense you have to understand a couple of things about the ACME protocol.

The first is that it works on the concept of dynamic “account” registration. By that I mean requestors can, in real time, request that an “account” be created for them. This account is represented by a public key pair that the ACME service will use to persist metadata about the requestor. The ACME service can deny this request for any reason it likes, but commonly in the Web PKI, as long as the request is well-formed, it is accepted.
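To make this concrete, the newAccount request defined in RFC 8555 is a JWS whose protected header carries the new account’s public key. Decoded for readability, and with placeholder URLs, nonce and contact address, such a request might look something like this:

{
  "protected": {
    "alg": "ES256",
    "jwk": { /* the account public key */ },
    "nonce": "6S8IqOGY7eL2lsGoTZYifg",
    "url": "https://acme.example.com/acme/new-account"
  },
  "payload": {
    "termsOfServiceAgreed": true,
    "contact": [ "mailto:admin@example.com" ]
  },
  "signature": "..."
}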

The next thing you need to understand is that it has the concept of “challenges” that can be used to communicate conditions that must be met before a certificate is issued. For example, an ACME service may request that the account holder demonstrate that they are authorized to get a certificate for a given domain name by placing a specific value in DNS at a well-known location. Since only a DNS administrator could perform that action the ACME service can have confidence the requestor controls the name it has requested a certificate for.
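For instance, with the dns-01 challenge defined in RFC 8555, the account holder is asked to publish a TXT record under the _acme-challenge label for the name being validated; the record value below is just a placeholder for the real challenge response:

_acme-challenge.example.com.  300  IN  TXT  "gfj9Xq...Rg85nM"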

And finally, there is nothing in the ACME protocol that limits its use to just web server certificates. It is a general framework that can be used to acquire and manage certificates of any type. For example, Cisco is using ACME in their WebEx offering to facilitate the acquisition of what are essentially email certificates via OIDC authentication as a way to authenticate chat members.

But there is another very popular protocol, well, a pair of protocols, that fewer people know about: XCEP and WSTEP. These protocols are used by Windows machines both to determine what kind of certificates a machine or user should enroll for and to enable the enrollment for those certificates.

Similar to ACME, these protocols also support, although in a more rigid way, the ability for the issuer to challenge the client for additional information necessary to get a certificate of a particular type. For example, you can configure a certificate type (known as a template) to require that the requestor provide a cryptographic attestation backed by a TPM to prove the machine belongs to the organization operating the certificate authority.

There are differences though. The first is the concept of a template, which enables XCEP/WSTEP to have one URL endpoint issue many types of certificates. This matters a great deal within an enterprise, where certificates are used for many different scenarios.

Another difference is that XCEP/WSTEP presume the authorization of the client happened out of band before the client requested the certificate. ACME’s dynamic approach to challenges allows it to tackle this problem in-band or to rely on out-of-band authorization. It supports the out-of-band case through External Account Binding, which allows the requestor to use an API key gathered out of band to prove, at account creation, that the account key is associated with some pre-enrolled user.

And finally, ACME has a clear model for extensibility built into it. What this means is that one can easily extend it with additional capabilities. The most fundamental part of this is the Directory resource which lists all of the APIs supported by this ACME instance. One could use this, for example, to add a “Templates” API that would allow an ACME client to request specific types of certificates from the ACME endpoint.
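As a sketch, the directory object served by a hypothetical ACME endpoint might look like the following; the “templates” entry is not defined by RFC 8555 and is shown only to illustrate the kind of extension described above (all URLs are placeholders):

{
  "newNonce": "https://acme.example.com/acme/new-nonce",
  "newAccount": "https://acme.example.com/acme/new-account",
  "newOrder": "https://acme.example.com/acme/new-order",
  "revokeCert": "https://acme.example.com/acme/revoke-cert",
  "keyChange": "https://acme.example.com/acme/key-change",
  "templates": "https://acme.example.com/acme/templates"
}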

Similarly, the concept of the challenge allows the server to demand the client do any number of things before the certificate is to be issued so the idea of adding a TPM challenge, for example, is trivial within this framework.

In short, ACME, contrary to popular belief, is not just a protocol for getting and managing website certificates; it is a framework for getting and managing any certificate. More importantly, it is extensible in such a way that with just a few minor additions it would be a proper superset of all the capabilities within the Windows enrollment protocol suite.

Why is this important? That’s easy! When I talk to anyone who is using certificates at any reasonable scale, their concerns almost instantly turn to the complexity of managing the lifecycle of those certificates across the various products and services that use them.

When we look at this complexity most of it arises from the use of a mish-mash of solutions for lifecycle management that when viewed in isolation seemed sufficient but when looked at holistically were actually woefully insufficient. 

If, as an industry, we move these legacy systems to a single protocol, so that certificates, whether for public or private PKI and whether they represent users, machines or workloads, are all managed the same way, we will have a reliable substrate that we can use to authenticate and authorize with agility.

XMLDSIG and XAdES in the browser and Node

It is great that we have a language like Javascript that we can use in both client and server applications. With that said one of the challenges both the browser and server environments have compared to say Java or .NET is a fairly weak set of class libraries to build upon.

This is especially true when it comes to security technologies; this gap is the foundation of most of our OSS work at Peculiar Ventures.

Our most recent project is an XMLDSIG library we call XAdESjs. It is based on WebCrypto and, as a result, with an appropriate polyfill it works on Node as well as it does in the browser.

With it and a few lines of code you can now sign and verify XMLDSIG and XAdES-BES signatures on these platforms. For example, a verify on Node might look like this:

var xadesjs = require("./xadesjs/built/xades.js");
var DOMParser = require("xmldom").DOMParser;
var WebCrypto = require("./node-webcrypto-ossl").default;

// Tell XAdESjs which WebCrypto implementation to use on Node
xadesjs.Application.setEngine("OpenSSL", new WebCrypto());

var fs = require("fs");
var xmlString = fs.readFileSync("./xadesjs/test/static/valid_signature.xml", "utf8");

// Parse the signed document and locate the XMLDSIG Signature element
var signedDocument = new DOMParser().parseFromString(xmlString, "application/xml");
var xmlSignature = signedDocument.getElementsByTagNameNS("http://www.w3.org/2000/09/xmldsig#", "Signature");

var signedXml = new xadesjs.SignedXml(signedDocument);
signedXml.LoadXml(xmlSignature[0]);
signedXml.CheckSignature()
.then(function (result) {
    console.log("Successfully Verified");
})
.catch(function (e) {
    console.error(e);
});

Though we only support the most basic forms of XAdES at this time it is enough to verify the EU Trust Service Provider List (EUTSL).

We intend to add more of XAdES to the library in the future, though we have no schedule for that at this time.

There is still lots to do so please consider contributing.

Ryan

The PKCS#12 standard needs another update

PKCS#12 is the de facto file format for moving private keys and certificates around. It was defined by RSA and Microsoft in the late 90s and is used by Windows extensively. It was also recently added to KMIP as a means to export key material.

As an older format, it was designed with support for algorithms like MD2, MD5, SHA1, RC2, RC4, DES and 3DES. It was recently standardized by the IETF as RFC 7292, and the IETF took this opportunity to add support for SHA2 but did not make an accommodation for any mode of AES.

Thankfully PKCS #12 is based on CMS which does support AES (see RFC 3565 and RFC 5959). In theory, even though RFC 7292 doesn’t specify a need to support AES,  there is enough information to use it in an interoperable way.

Another standard used by PKCS#12 is PKCS #5 (RFC 2898), which specifies the mechanics of password-based encryption. It provides two ways to do this, PBES1 and PBES2; more on this later.

Despite these complexities and constraints, we wanted to see if we could provide a secure and interoperable implementation of PKCS#12 in PKIjs since it is one of the most requested features. This post documents our findings.

Observations

Lots of unused options

PKCS#12 is the swiss army knife of certificate and key transport. It can carry keys, certificates, CRLs and other metadata. The specification offers both password and certificate-based schemes for privacy and integrity protection.

This is accomplished by having an outer “integrity envelope” that may contain many “privacy envelopes”.

When using certificates to protect the integrity of its contents, the specification uses the CMS SignedData message to represent this envelope.

When using passwords it uses an HMAC placed into its own data structure.

The use of an outer envelope containing many different inner envelopes enables the implementor to mix and match various types of protection approaches into a single file. It also enables implementers to use different secrets for integrity and privacy protection.

Despite this flexibility, most implementations only support the following combination:

  1. Optionally using the HMAC mechanism for integrity protection of certificates
  2. Using password-based privacy protection for keys
  3. Using the same password for both the integrity and privacy protection

NOTE: OpenSSL was the only implementation we found that supports the ability to use a different password for the “integrity envelope” and  “privacy envelope”. This is done using the “twopass” option of the pkcs12 command.
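For example, with a recent OpenSSL you could create such a file with something like the following; it will prompt separately for the integrity (MAC) password and the privacy (encryption) password:

openssl pkcs12 -export -twopass -inkey key.pem -in test.cer -out test.p12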

The format’s flexibility is great. We can envision a few different types of scenarios one might be able to create with it, but there appears to be no documented profile making it clear what is necessary to interoperate with other implementations.

It would be ideal if a future incarnation of the specification provided this information for implementers.

Consequences of legacy

As mentioned earlier, there are two approaches for password-based encryption defined in PKCS#5 (RFC 2898).

The first is called PBES1; according to the RFC:

PBES1 is recommended only for compatibility with existing
applications, since it supports only two underlying encryption
schemes, each of which has a key size (56 or 64 bits) that may not be
large enough for some applications.

The PKCS#12 RFC reinforces this by saying:

The procedures and algorithms
defined in PKCS #5 v2.1 should be used instead.
Specifically, PBES2 should be used as encryption scheme, with PBKDF2
as the key derivation function.

With that said, it seems there is no indication in the format of which scheme is used, which leaves an implementor forced to try both options to “just see which one works”.

Additionally it seems none of the implementations we have encountered followed this advice and only support the PBES1 approach.

Cryptographic strength

When choosing a cryptographic algorithm, one of the things you need to be mindful of is the minimum effective strength. There are differing opinions on what each algorithm’s minimum effective strength is at different key lengths, but normally the difference is not significant. These opinions also change as computing power increases along with our ability to break different algorithms.

When choosing which algorithms to use together you want to make sure all algorithms you use in the construction offer similar security properties. If you do not do this, the weakest algorithm becomes the weakest link in the construction. It seems there is no guidance in the standard on this topic; as a result we have found:

  1. Files that use weak algorithms for the protection of strong keys,
  2. Files that use a weaker algorithm for integrity than they do for privacy.

This first point is particularly important. The most recently available guidance for minimum effective key length comes from the German Federal Office for Information Security, the BSI. These guidelines, when interpreted in the context of PKCS #12, recommend that a 2048-bit RSA key be protected using SHA2-256 and a 128-bit symmetric key.

The strongest symmetric algorithm specified in the PKCS #12 standard is 3DES, which only offers an effective strength of 112 bits. This means, if you believe the BSI, it is not possible to create a standards-compliant PKCS#12 that offers the effective security necessary to protect a 2048-bit RSA key.

ANSI, on the other hand, currently recommends that a 100-bit symmetric key is an acceptable minimum strength to protect a 2048-bit RSA key (though this recommendation was made a year earlier than the BSI recommendation).

So if you believe ANSI, the strongest suite offered by RFC 7292 is strong enough to “adequately” protect such a key, just not a larger one.

The unfortunate situation we are left with is that it is not possible to create a “standards compliant” PKCS#12 that supports modern cryptographic recommendations.

Implementations

OpenSSL

A while ago OpenSSL was updated to support AES-CBC in PKCS#8 which is the format that PKCS#12 uses to represent keys.  In an ideal world, we would be using AES-GCM for our interoperability target but we will take what we can get.

To create such a file you would use a command similar to this:

rmh$ openssl genrsa 2048|openssl pkcs8 -topk8 -v2 aes-256-cbc -out key.pem

Generating RSA private key, 2048 bit long modulus
..................+++
..................+++
e is 65537 (0x10001)
Enter Encryption Password:
Verifying - Enter Encryption Password:

rmh$ cat key.pem

-----BEGIN ENCRYPTED PRIVATE KEY-----
-----END ENCRYPTED PRIVATE KEY-----

If you look at the resulting file with an ASN.1 parser you will see the file says the Key Encryption Key (KEK) is aes-256-cbc.
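For example, a command along these lines will dump the structure of the file and show the aes-256-cbc object identifier in the encryption algorithm (the file name matches the example above):

openssl asn1parse -in key.pem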

It seems the latest OpenSSL (1.0.2d) will even let us do this with a PKCS#12; those commands would look something like this:

openssl genrsa 2048|openssl pkcs8 -topk8 -v2 aes-256-cbc -out key.pem

openssl req -new -key key.pem -out test.csr

openssl x509 -req -days 365 -in test.csr -signkey key.pem -out test.cer

openssl pkcs12 -export -inkey key.pem -in test.cer -out test.p12 -certpbe AES-256-CBC -keypbe AES-256-CBC

NOTE: If you do not explicitly specify the certpbe and keypbe algorithms, this version defaults to using pbeWithSHAAnd40BitRC2-CBC to protect the certificate and pbeWithSHAAnd3-KeyTripleDES-CBC to protect the key.

RC2 was designed in 1987 and has been considered weak for a very long time. 3DES is still considered by many to offer 112-bits of security though in 2015 it is clearly not an algorithm that should still be in use.

Since it already supports it, OpenSSL should really be updated to use aes-256-cbc by default, and it would be nice if support for AES-GCM was also added.

NOTE: We also noticed that if you specify “-certpbe NONE and -keypbe NONE” (which we would not recommend), OpenSSL will create a PKCS#12 that uses password-based integrity protection and no privacy protection.

Another unfortunate realization is that OpenSSL uses an iteration count of 2048 when deriving a key from a password, which by today’s standards is far too small.

We also noticed the OpenSSL output of the pkcs12 command does not indicate what algorithms were used to protect the key or the certificate; this may be one reason why the defaults were never changed, as users simply did not notice:

rmh$ openssl pkcs12 -in windows10_test.pfx
Enter Import Password:
MAC verified OK
Bag Attributes
    localKeyID: 01 00 00 00
    friendlyName: {4BC68C1A-28E3-41DA-BFDF-07EB52C5D72E}
    Microsoft CSP Name: Microsoft Base Cryptographic Provider v1.0
Key Attributes
    X509v3 Key Usage: 10
Enter PEM pass phrase:
Bag Attributes
    localKeyID: 01 00 00 00
subject=/CN=Test/O=2/OU=1/[email protected]/C=<\xD0\xBE
issuer=/CN=Test/O=2/OU=1/[email protected]/C=<\xD0\xBE
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
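NOTE: If you do want to see which password-based encryption algorithms and iteration counts a given file uses, the -info flag will print them, for example:

openssl pkcs12 -info -in windows10_test.pfx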

Windows

Unfortunately, it seems that all versions of Windows (even Windows 10) still produce PKCS #12s that use pbeWithSHAAnd3-KeyTripleDES-CBC for the “privacy” of keys and pbeWithSHAAnd40BitRC2-CBC for the privacy of certificates. They then rely on the HMAC scheme for integrity.

Additionally it seems it only supports PBES1 and not PBES2.

Windows also uses an iteration count of 2048 when deriving keys from passwords which is far too small.

It also seems that, unlike OpenSSL, Windows is not able to work with files produced with more secure encryption schemes and ciphers.

PKIjs

PKIjs has two, arguably conflicting, goals. The first is to enable modern web applications to interoperate with traditional X.509-based applications. The second is to use modern and secure options when doing so.

WebCrypto has a similar set of guiding principles; this is why it does not support weaker algorithms like RC2, RC4 and 3DES.

Instead of bringing JavaScript-based implementations of these weak algorithms into PKIjs, we have decided to only support the algorithms supported by WebCrypto (aes-256-cbc and aes-256-gcm, with SHA1 or SHA2, using PBES2).

This represents a tradeoff. The keys and certificates exported by PKIjs will be protected with the most interoperable and secure pairing of algorithms available, but the resulting files will still not work in any version of Windows (even the latest Windows 10 1511 build).

The profile of PKCS#12 that PKIjs creates will work with OpenSSL only if you use the -nomacver option:

openssl pkcs12 -in pkijs_pkcs12.pfx -nomacver


This is because OpenSSL uses the older PBKDF1 for integrity protection while PKIjs uses the newer PBKDF2; as a result of this option, the integrity of the PKCS#12 will not be checked.

With that caveat, here is an example of how one would generate a PKCS#12 with PKIjs.

Summary

Despite its rocky start, PKCS#12 is still arguably one of the most important cryptographic message formats. An attempt has been made to modernize it somewhat, but it did not go far enough.

It also seems OpenSSL has made an attempt to work around this gap by adding support for AES-CBC to their implementation.

Windows, on the other hand, still appears to support only the older encryption constructs with the weaker ciphers.

That said, even when strong, modern cryptography is in use we must remember that any password-based encryption scheme will only be as secure as the password that is used with it. As a proof point, consider these tools for PKCS#12 and PKCS#8 that make password cracking trivial for these formats.

We believe the storage and transport of encrypted private keys is an important enough topic that it deserves a modern and secure standard. With that in mind, we recommend the following changes to RFC 7292 be made:

  • Deprecate the use of all weaker algorithms,
  • Make it clear both AES-CBC and AES-GCM should be supported,
  • Make it clear what the minimal profile of this fairly complex standard is,
  • Require the support of PBES2,
  • Be explicit and provide modern key stretching guidance for use with PBKDF2,
  • Clarify how one uses PBMAC1 for integrity protection,
  • Require that the certificates and the keys are both protected with the same or equally secure mechanisms.

As for users, there are a few things you can do to protect yourself from the associated issues discussed here, some of which include:

  • Do not use passwords to protect your private keys. Instead use generated symmetric keys or generated passwords of an appropriate length (e.g. "openssl rand -base64 32"),
  • When using OpenSSL always specify which algorithms are being used when creating your PKCS#12 files and double check those are actually the algorithms being used,
  • Ensure that the algorithms you are choosing to protect your keys offer a minimum effective key length equal to or greater than the keys you will protect,
  • Securely delete any intermediate copies of keys or inputs to the key generation or export process.

Ryan & Yury

 

PKIjs and trust lists

As you probably know Yury and I think Same Origin Certificates (or Browser Bound Certificates) are the way PKI enabled applications will be built in the future. This is why we have been working on PKIjs for so long.

One of the issues you have when building applications that use this concept is deciding what Certificate Authorities you should trust. The answer to that question is pretty nuanced but the short version is only as many as you absolutely need to.

There are four trust stores that I personally think are interesting when thinking about digital signatures: Mozilla’s, Microsoft’s, Adobe’s and the EUTL.

If you want to work with these lists you need to be able to parse them and get the CA certificates that are meaningful to you.

This is why we have created tl-create; it can (at the time of this post) parse the Mozilla list and the EUTL list.*

* At this time the EUTL trust list does no signature verification and should only be used for experimentation.

The output is either a PEM bag of certificates or a JavaScript array that you can import into your PKIjs-based applications as trust anchors.

Hopefully you will find this interesting and useful, pull requests are welcomed.

PKCS #11, Javascript and Nodejs

Javascript has become the most popular language on the Internet. Until now there has not been a way to directly use cryptographic devices that provide PKCS#11 interfaces natively within NodeJS based applications.

The best you could do was to use Node’s ability to use OpenSSL and OpenSSL’s ability to use the OpenSC PKCS#11 engine, which would then wrap the vendor-provided PKCS#11 library. That is clearly a convoluted mess.

We wanted to let Node developers use these devices directly. With that in mind we created Graphene which uses the node-ffi module to call into these libraries directly.

Our goal was to expose all of PKCS#11 while adopting the NodeJS “style” as appropriate. There is still work to do but we think it is now to the point where others may find value in it so we have made it public as of today.
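As a rough sketch of what using it looks like, the snippet below opens a session against a PKCS#11 module from Node; the library path, token name and PIN are placeholders, and the calls shown follow the published Graphene (graphene-pk11) examples, so the details may differ from the version you have:

var graphene = require("graphene-pk11");

// The library path and PIN below are placeholders for your own device
var mod = graphene.Module.load("/usr/local/lib/softhsm/libsofthsm2.so", "SoftHSM");
mod.initialize();

var slot = mod.getSlots(0);   // first slot
var session = slot.open();
session.login("12345");

// ... find keys, sign, generate key pairs, etc. ...

session.logout();
session.close();
mod.finalize();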

Ryan

My thoughts on Let’s Encrypt

Today about 80% of all SSL certificates in use on the Internet are what are commonly referred to as Domain Validated (DV) certificates. The name is a bit of a misnomer in that not all DV certificates authenticate control of a domain; in fact most actually authenticate control of a specific server in the domain.

The large majority of these certificates can be issued with little to no human interaction. In a typical manual enrollment a server administrator generates and submits a certificate request and in return is provided a random value that they are instructed to place into an HTML meta-tag in /index.html, which the CA will check periodically to see if the administrator was able to place it there. The idea is that being able to modify the meta-tag is sufficient to prove control over the website. Once the CA notices the administrator was able to complete this task it performs a handful of other checks and the certificate is issued.

Most certificates used for SSL end up coming from hosting providers, service providers and certificate resellers that sell these certificates for as little as a few dollars and in many cases they simply give them away for free.

These folks will also commonly automate the issuance, installation and maintenance of these certificates. Hosting providers typically do this using a plugin that comes from the issuing CA that hooks into their management console (WHM, etc) and the larger more advanced ones write their own based on the web services exposed by the certificate authorities.

So today, contrary to common perception, certificates are in fact cheap to free and in many cases fully automated. With that said, there are a number of pretty important cases where that automation is missing, such as cloud service providers (AWS, Azure, Google Cloud, Rackspace, etc.), corporate servers and Internet-connected devices.

At some point all of the cloud service providers will provide SSL for free; after all, Mozilla has recently stated that they are working to deprecate HTTP altogether and I am sure all other browsers will follow them when there is sufficient SSL ubiquity.

The Let’s Encrypt project aims to make this transition happen faster by being yet another place to get free certificates and making the acquisition of these certificates even easier by closely integrating the certificate lifecycle management into the most commonly used servers.

It is this last part that I think is the most important contribution that Let’s Encrypt will make to the Internet. There are a few reasons for this. For various reasons I could go on about for hours, each of the Certificate Authorities has gone and created its own protocol for certificate enrollment instead of working together to define a common one. These protocols (like their cousins from device and operating system vendors) are designed around their specific back-ends and are not generic enough to be used when they are not the entity behind them.

To address this the Let’s Encrypt people have proposed a new modern REST-based protocol that does not have this baggage. In fairness it also doesn’t solve all of the CAs’ needs either, but I can easily envision how one would extend it to do so (in fact it looks a lot like a protocol I designed for GlobalSign’s use).

The other problem not many actually understand is how many issues exist inside the various SSL implementations that prevent a third party from properly automating the lifecycle of a certificate without downtime. The simplest example being that for an external program to change certificates on a running web server, it often has to rely on HUPing the server to force it to pick up the new certificate.

Unfortunately Certificate Authorities are not exactly the most loved people on the Internet, and I know from my experience trying to get the maintainers of web servers and SSL stacks to support things like OCSP Stapling that the scale of changes necessary to make automated certificate lifecycle management totally seamless (and low risk) for everyone was unlikely to happen when driven by CAs.

NOTE: In my opinion a big reason for the resistance is that CAs have basically treated these projects as core infrastructure without supporting them financially or by hiring developers to contribute to them. That said this has been slowly changing and despite that the “love” still continues.

The Let’s Encrypt project is a project for developers by developers with the skill, credibility and motivation to fix these issues.

When they are successful (and I am confident they will be) those solutions that use the clients based on their code and protocol will rarely, if ever, experience an outage due to an expired certificate. Notice I didn’t say the clients that use Let’s Encrypt? That’s because what they are doing is solving the plumbing problem that CAs have failed to solve, and the CAs will be able to benefit from this work also.

It will also enable a class of products and services that would not otherwise have the technical experience, financial means or motivation to integrate SSL into their product.

Imagine your next refrigerator having a web portal, protected with SSL, that you could log into at https://myhome.refrigerators.com to check if you needed to bring home milk. These and other projects are unlikely to happen without something like Let’s Encrypt.

So when people tell me “Certificates are already practically free why do we need Let’s Encrypt?” I tell them they need to look at the long game.

The (soon to be) not-so Common Name

If you are reading this post you are probably familiar with the use of digital certificates in SSL, but even if you are, you may not be familiar with their history. Before we go there, though, we should start with what a digital certificate actually is at its core.

Fundamentally a digital certificate is a binding of entitlements and constraints to a key, in other words they say things like “The holder of the private key associated with this certificate can rightfully use the name Ryan Hurst when signing emails”.

When originally conceived they were to be used to help bind subjects (people and resources) to their representations in directories. This is why the Subject Name in a certificate is structured as a Distinguished Name (DN), as this is how a directory uniquely identifies a subject.

This makes sense when looking up an encryption key for a user in an enterprise directory, but it works less well on the Internet, where there is no global directory of users.

This brings us to SSL. It was introduced in the mid-1990s, and at the time nearly every large enterprise was already deploying directories and Certificate Authorities as part of their identity management frameworks. The technology of X.509 was tested, broadly accepted and fit the bill for the problem the designers of the protocol had, so they included it as is.

At the time there was only one way to represent the concept of a subject of a certificate, and that was the Common Name (CN), so they chose to put the DNS name of the SSL server there. This was technically acceptable but was a re-purposing of a field that was really intended for a user’s actual name.

After SSL was finalized, the IETF released their profile of X.509 for use on the Internet. This standard introduced the concept of a Subject Alternative Name (SAN), where you can put names that are not associated with a directory. The problem is that ship had sailed; by the late 90s, when this was standardized, everyone had already settled on using the Common Name.

This led us down a bad path. First of all, many servers (especially today) have multiple DNS names, and applications that supported only the Common Name field couldn’t work with a single certificate containing more than one DNS name. This was addressed in the short term by using a separate certificate for each DNS name, but this came at a high cost: we also needed a dedicated IP address for each domain name.

Another problem with this approach is that applications never really knew what to expect in the Common Name field. Is the value in that field a person’s name or is it a DNS name? This is a problem because oftentimes there are rules you need to use to validate a piece of data before using it, and this is especially true for DNS names.

For these reasons (and more) since at least 1999 (when RFC 2459 was standardized) we have been on a slow path to moving away from the use of Common Names for domain names to using Subject Alternative Names.

Fast forward to 2012, when some Stanford researchers published a paper titled “The most dangerous code in the world: validating SSL certificates in non-browser software”, which identified a bunch of applications that fail to do the most basic certificate validation tasks correctly and as a result are the source of a bunch of security vulnerabilities.

These applications gave their users a false sense of security not out of malice but as a result of a lack of understanding of the technology they used to deliver on that promise. A big part of that is the complexity 18 years of technological evolution carries with it.

To address this a number of things need to change, but one of the most immediate is that the definition of what constitutes a “valid” SSL certificate is changing to make the rule-set a little simpler for the application developer and to rule out options that are no longer considered good practice.

We see this happening in a few ways. First, the CA/Browser Forum has worked with browsers to define a set of Baseline Requirements that all certificates must meet; we are also seeing browsers doing sanity checks to ensure these practices are in fact followed.

These Baseline Requirements mandate that certificate authorities always include at least one Subject Alternative Name in the SSL certificates they issue. This means that today an application doesn’t need to look in both the Common Name and the Subject Alternative Name; it only needs to check the latter.
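As an illustration of what checking the right field can look like (certainly not the only way to do it), here is a small Node sketch using the built-in crypto.X509Certificate class available in newer Node releases; the file name is a placeholder:

var crypto = require("crypto");
var fs = require("fs");

var pem = fs.readFileSync("server.cer", "utf8");   // placeholder path
var cert = new crypto.X509Certificate(pem);

// Look at the Subject Alternative Name, not the Common Name
console.log("SAN:", cert.subjectAltName);   // e.g. "DNS:example.com, DNS:www.example.com"

// checkHost() returns the matching name, or undefined if there is no match
console.log("matches example.com:", cert.checkHost("example.com") !== undefined);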

Currently most Certificate Authorities will also include the first DNS name from the Subject Alternative Name in the Common Name field, but this is done primarily for legacy reasons and at some point in the not-so-distant future it will stop.

When that happens, certificates will be a little smaller and developers’ lives will be a little easier.

Ryan

Resources

· Baseline Requirements

· Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile

· Microsoft Security Advisory: Update for minimum certificate key length

Deploying forward secrecy on RedHat, Centos or Fedora based systems

If you want to deploy perfect forward secrecy and you use a RedHat, CentOS or Fedora based system, you likely won’t be able to do so without building your own OpenSSL. This is because, by default, the OpenSSL packages for these systems do not include ECC or ECDH, and when web servers like Apache and Nginx are built against libraries that do not support them they obviously omit support for the algorithms.

Hopefully these distributions and the community repositories that support them will start including support for these algorithms by default soon but in the meantime you can add support by building your own.

You can check if your OpenSSL supports these algorithms easily enough, just run this command:

> openssl ciphers

If you see ciphers like “ECDHE-RSA-AES256-GCM-SHA384” then you have a version of OpenSSL that was built with ECC and ECDHE support enabled which is required if you want forward secrecy today. If you don’t see such a suite don’t worry, just check out this blog post and it will step you through how to build a new OpenSSL.

Unfortunately building a software package on each Linux distribution is slightly different, I will share what worked for me on Centos 6.3.

1. We don’t want to break your current Nginx deployment, so let’s start by finding out what options your Nginx was built with; you do this with the following command:

> nginx -V

This will return something like this:

nginx version: nginx/1.5.1

built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC)

TLS SNI support enabled

configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic'

To ensure we don’t omit something you’re using, we will use the same options when we build our Nginx.

 

2. Next let’s make a backup of everything you have in your current Nginx installation just in case something doesn’t go right:

root> cp /usr/sbin/nginx /usr/sbin/nginx.orig

root> cp -r /etc/nginx /etc/nginx.orig

3. Now let’s make sure we get our dependencies installed, download our source and extract it so we can do our build.

root> cd /tmp

root> yum install pcre

root> yum install pcre-devel

root> yum install zlib

root> yum install zlib-devel

root> wget http://nginx.org/download/nginx-1.5.1.tar.gz

root> tar -xvzf nginx-1.5.1.tar.gz

root> cd nginx-1.5.1

4. Then we need to configure our build. Let’s start by taking the options you saw when you ran “nginx -V” and adding the following to the end of them:

--with-ld-opt='-L/usr/local/ssl/lib' --with-cc-opt='-I/usr/local/ssl/include'

This is where your OpenSSL libraries and headers should be. Just append the combined set of parameters to the “./configure” command; this will look like this:

root> ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-ld-opt='-L/usr/local/ssl/lib' --with-cc-opt='-I/usr/local/ssl/include'

5. Make and install your Nginx

root> make
root> make install

6. Restart Nginx

service nginx restart

Now, if you are using the SSL configuration I recommend here, you have already set your server to prefer the commonly supported forward-secret algorithms, and if you visit https://sslcheck.x509labs.com you will see these ECDHE-based suites at the top of the list of negotiated suites.
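For reference, here is a minimal sketch of the relevant Nginx directives; treat the cipher list as an illustrative starting point to adapt rather than a definitive recommendation:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:HIGH:!aNULL:!MD5:!RC4;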

 

For more information see:

1. How to botch TLS forward secrecy, AGL
2. Getting the Most Out of SSL Part 1: Choose the Right Certificate, CA Security
3. SSL Configuration Checker, X509 Labs
4. High Performance Browser Networking, Ilya Grigorik
5. Bulletproof SSL/TLS and PKI, Ivan Ristic

Setting HTTP headers with OpenSSL and the OCSP test client

So I have been doing some OCSP performance benchmarking over the last few weeks. One of the things we noticed while doing this is that OpenSSL does not set the HOST header when sending requests.

This is a problem when interacting with an OCSP responder that is on a shared host (a virtual host); without it the web server doesn’t know which instance to send the request to. You can work around this by using the “-header” switch, for example:

openssl ocsp -noverify -no_nonce -respout ocspglobalsignca.resp -reqout ocspglobalsignca.req -issuer globalsigng2.cer -cert globalsign.com.cer -url "http://ocsp2.globalsign.com/gsextendvalg2" -header "HOST" "ocsp2.globalsign.com"

You need to ensure the HOST header is set consistently with the host contained in the -url switch but it works.

Ryan

OCSP Stapling in Apache

Apache 2.3 and later support a feature called OCSP stapling. When enabled, a server pre-fetches the OCSP response for its own certificate and delivers it to the user’s browser during the TLS handshake. This approach offers a privacy advantage, but the main benefit is that the browser doesn’t have to make a separate connection to the CA’s revocation service before it can display the web page. This gives better performance and reliability.

For this pre-fetching to work the web server certificate needs to contain a pointer to the OCSP responder. This is a best practice and a recommendation of the CA/Browser Forum’s Baseline Requirements, so it’s almost certain your certificate has it.

Chances are you have a firewall between your webservers and the internet; it’s also likely that firewall disallows outbound connections from your servers unless explicitly allowed. So before you enable OCSP stapling you are going to need to allow your web servers to communicate with the OCSP responder.

To figure out what host and port you will need to open you will need to look at the certificates you use on your webserver; one way to do that is via OpenSSL, for example:

1. Get the certificate using s_client

openssl.exe s_client -connect www.globalsign.com:443


-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----

You need to copy the PEM header and footer (“-----BEGIN/END CERTIFICATE-----”) and the Base64 between them into a file.

2. Identify the OCSP responders within the server certificate

openssl.exe x509 -in globalsign.com.cer -text


X509v3 extensions:
..
Authority Information Access:
    CA Issuers - URI:http://secure.globalsign.com/cacert/gsextendvalg2.crt
    OCSP - URI:http://ocsp2.globalsign.com/gsextendvalg2

You need to find the “OCSP - URI” section; in the example certificate above the OCSP responder is http://ocsp2.globalsign.com/gsextendvalg2. There may be multiple responders specified; you should allow your servers to initiate outbound traffic to all of them.

Once your servers can request OCSP responses, enabling stapling is very straightforward; there are just two directives that need to be added, and these directives can be global or specific to one virtual host:

SSLUseStapling on
SSLStaplingCache "shmcb:logs/stapling_cache(128000)"

Their purpose is self-evident; SSLUseStapling turns the feature on while SSLStaplingCache specifies where to store the cache and how big it should be.

There are other directives you can also use, but you should not need to worry about them.

As long as you are running the most recent stable versions of Apache and OpenSSL enabling this feature is safe. It is only used when the client supports it so there won’t be compatibility issues and if the server for some reason fails to populate its cache with a valid OCSP response the client will typically fall back to doing a live OCSP request on its own.
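Once stapling is enabled, you can confirm that a stapled response is actually being returned by using OpenSSL’s s_client and looking for an “OCSP Response Status: successful” block in the output; the host name below is a placeholder:

openssl s_client -connect www.example.com:443 -status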

Ryan

 

Additional Resources

Overclocking Mod_SSL