Just got a logo for x509labs, what do you think?
So the other day I got a bee in my bonnet and decided I wanted a simple web service I could pass common X509 objects to and get a JSON representation of that same object back. We had recently done a project in Go at work and found it quick, robust and easy to build with; additionally its certificate support looked decent enough, so I thought it was the way to go.
In comes Freelancer, I threw my rough (and that’s kind) goals in a paragraph or two and a few days later I had a bid proposal from an engineer in Chicago — Eli Frey.
Based on a quick review of the Go documentation for cryptography it looked like this was going to be pretty straightforward, and for the most part it was. We did find a few cases that just were not possible without more work than we wanted to put in; I will summarize those a little later.
As things progressed we also decided to add the ability to get an X509 certificate from the interface. Normally one would do this by generating a PKCS #10 request (CSR) and sending it to a CA for processing. Unfortunately, parsing PKCS #10s was one of those cases that required more work than we wanted to put in, since Go does not yet support it. With that said, a CSR is structurally very close to a self-signed certificate, so we just did the same thing with a self-signed X509 certificate instead.
So how do these interfaces work? Here are a few examples of how you would call them:
Decode a PEM encoded X509 certificate:

curl -F "[email protected]" "api.x509labs.com/v1/x509/certificate?action=decode&inputEncoding=PEM"

Decode a DER encoded X509 certificate:

curl --fail -F "[email protected]" "api.x509labs.com/v1/x509/certificate?action=decode&inputEncoding=DER"

Request and issue an X509 certificate based on a DER encoded self-signed certificate with one hostname:

openssl genrsa -out request.key 2048
openssl req -config openssl.cfg -subj "/CN=www.example.com" -new -x509 -set_serial 01 -days 1 -key request.key -out request.cer
curl --fail -F "[email protected]" "api.x509labs.com/v1/x509/certificate?action=issue&hostnames=bob.com&inputEncoding=DER"

Request and issue an X509 certificate based on a PEM encoded self-signed certificate with one hostname:

openssl genrsa -out request.key 2048
openssl req -config openssl.cfg -subj "/CN=www.example.com" -new -x509 -set_serial 01 -days 1 -key request.key -out request.cer
curl --fail -F "[email protected]" "api.x509labs.com/v1/x509/certificate?action=issue&hostnames=bob.com&inputEncoding=PEM"

Request and issue an X509 certificate based on a PEM encoded self-signed certificate with several hostnames:

openssl genrsa -out request.key 2048
openssl req -config openssl.cfg -subj "/CN=www.example.com" -new -x509 -set_serial 01 -days 1 -key request.key -out request.cer
curl --fail -F "[email protected]" "api.x509labs.com/v1/x509/certificate?action=issue&hostnames=bob.com,fred.com&inputEncoding=PEM"

Decode a set of PEM encoded X509 certificates:

curl --fail -F "[email protected]" "api.x509labs.com/v1/x509/certificates?action=decode&inputEncoding=PEM"

Decode a PEM encoded X509 CRL:

curl --fail -F "[email protected]" "api.x509labs.com/v1/x509/crl?action=decode&inputEncoding=PEM"

Decode a DER encoded X509 CRL:

curl --fail -F "[email protected]" "api.x509labs.com/v1/x509/crl?action=decode&inputEncoding=DER"

Decode an OCSP response:

openssl ocsp -noverify -no_nonce -respout ocsp.resp -reqout ocsp.req -issuer ca.cer -cert www.cer -url "http://ocsp2.globalsign.com/gsextendvalg2" -header "HOST" "ocsp2.globalsign.com" -text
curl --fail -F "[email protected]" "api.x509labs.com/v1/x509/ocsp?action=decode&type=response"
So even though this started out as a pet project, I actually think these interfaces are pretty useful. The largest limitations of these interfaces are:
X509Certificate
X509OCSP
Here are some things you might want to know about these interfaces:
All in all I think this was a fun project and I really enjoyed working with Eli and Freelancer (though its mail client is awful and the site needs some UI work).
Ryan
So today I have done posts on the browser user experience for expired and untrusted certificates, but we would not have proper coverage of the topic of bad certificate user experience if we did not cover revoked certificates.
VeriSign is kind enough to host a test site that uses a revoked certificate (I know we do too, I just can't find it right now), so we will use that: https://test-sspev.verisign.com:2443/test-SSPEV-revoked-verisign.html
Again what we want to see here is:
In this case I think Internet Explorer and Chrome again do the best. The worst experience is in Opera, as it leads the user to believe there is a connectivity problem unless they expand the error message.
Today I did a blog post on how browsers show expired certificates. I figured I would take the opportunity to capture a few of the other failure cases for certificates.
The most severe example is that of an untrusted root certificate; for this scenario I figured the use of https://cacert.org was the most direct example.
There are a few cases where this error condition will come up. For example, if a server doesn't include all of the intermediate certificates, clients cannot determine which Certificate Authority issued the certificate.
According to the current SSL Pulse data about 7.4% of the servers in the Alexa top one million may fall into this case.
Today I was on a mail thread where the topic of how browsers handle expired certificates came up; this is particularly relevant for a few reasons.
The first is that there are a large number of sites operating with expired certificates on the Internet today; the other is that the adoption of short-lived certificates (which I am a fan of) is at least in part dependent on how browsers deal with certificates that are expired.
In any event, I was not sure how the most recent versions of browsers handle these cases, so I dug up an example site where an expired certificate was in use (https://www.appliancetherapy.com – it uses a certificate that expired a few weeks ago and has not yet been replaced).
So what did I want to find? In a perfect world I believe that the following should be true:
The good news is that for the most part browsers behaved fairly close to this; they all could improve their language, but I believe Internet Explorer's was the best.
The worst behaving client was Mozilla, as it doesn't report the certificate as expired but instead indicates that it tried to make an OCSP request and got a response it was not expecting. This has two problems, the first of which being it should not have made an OCSP request for the status of an expired certificate.
RFC 5280 Section 5 states that:
A complete CRL lists all unexpired certificates, within its scope,
that have been revoked for one of the revocation reasons covered by
the CRL scope. A full and complete CRL lists all unexpired
certificates issued by a CA that have been revoked for any reason.
And RFC 2560 is written largely on the assumption that OCSP responses are fed from CRLs. What this means is that it is not appropriate to ask for the revocation status of a certificate that is expired.
The next problem is that Mozilla also doesn’t handle the unauthorized response in a usable way. RFC 5019 Section 2.2.3 states:
The response “unauthorized” is returned in cases where the client
is not authorized to make this query to this server or the server
is not capable of responding authoritatively.
A user who receives this message would believe the issue is related to their permissions, but the true reason for the failure is really that the responder in question doesn't have the information that's needed.
This lack of information on the server is likely due to the fact that it isn't required to maintain information for expired certificates; the message Mozilla delivered should have been about the certificate being expired.
In any event the browsers behaved much better than I expected; IE and Chrome did the best (I really like Chrome's red strike-through over the "https" as a visual cue that there is a problem).
There are in essence three kinds of SSL certificates: Domain Validated, Organization Validated and Extended Validated. I am not going to write about the differences here, since there seem to be hundreds of articles on this topic on the Internet.
What I think has not been given sufficient coverage is how one is able to look at a certificate and determine what type it is.
One would think that this would be easy: in theory, if nothing was explicitly stated it would be a Domain Validated certificate (since that is the weakest validation); otherwise someone would put something in the certificate making it clear that it was either Organization Validated or Extended Validated.
Unfortunately it’s not this simple, the main issue being the historic lack of coordination within the CA industry.
Each Certificate Authority (CA) has its own unique practices for how it marks its certificates, so with the existing deployed certificates there is no single rule or approach that can be used to definitively know what level of validation was done for a given certificate.
Thankfully it looks like this problem is getting better thanks to the adoption of the Baseline Requirements, but in the meantime we have to make do with heuristics.
Today the only way to know with confidence that a certificate is of a specific type is to know the practices of each CA.
In X.509 the way an issuer is supposed to express something like this is via the Certificate Policies extension which is defined in RFC 5280.
This allows a CA to express a unique identifier (an OID) in its certificates that maps to a document describing its practices associated with that certificate. This identifier can be used programmatically to make trust decisions about a certificate, or to differentiate the user interface in an application based on what type of certificate is being used.
This is exactly how browsers today can tell if a certificate is an Extended Validation (EV) certificate. In essence they have some configuration that says “I trust GlobalSign to issue EV certificates, when a certificate is presented to me from them that has this policy OID show the EV user experience”.
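The configuration described above is essentially a lookup table keyed by trusted root. Here is a minimal sketch in Python of that check; the root fingerprint is a made-up placeholder and the EV OID table contents are illustrative, not a real browser's EV list:

```python
# Map of trusted root fingerprint -> the EV policy OID that root is
# trusted to assert. Both values below are illustrative placeholders.
EV_POLICY_OIDS = {
    "aa:bb:cc": "1.3.6.1.4.1.4146.1.1",  # hypothetical root / EV OID pair
}

def shows_ev_ui(root_fingerprint, cert_policy_oids):
    """Return True only if the chain's root is trusted for EV and the
    end-entity certificate asserts that root's EV policy OID."""
    ev_oid = EV_POLICY_OIDS.get(root_fingerprint)
    return ev_oid is not None and ev_oid in cert_policy_oids

print(shows_ev_ui("aa:bb:cc", ["1.3.6.1.4.1.4146.1.1"]))  # True
print(shows_ev_ui("aa:bb:cc", ["2.23.140.1.2.1"]))        # False
```

Note that both conditions must hold: a certificate asserting an EV OID chained to a root that is not on the EV list gets no EV treatment.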
The Baseline Requirements use the same approach defining identifiers for Domain Validated and Organization Validated certificates, these are:
Type                   | Policy Identifier
---------------------- | -----------------
Domain Validated       | 2.23.140.1.2.1
Organization Validated | 2.23.140.1.2.2
Having these identifiers takes us a long way towards our goal of deterministic evaluation of certificate issuance policy; that said, not all CAs have adopted them, which is technically alright since the Baseline Requirements do allow them to use their own Policy Identifiers.
Since the Baseline Requirements were only established this year, it will take some time for the existing install base of certificates to be re-issued to use the Policy Identifiers called out above. This doesn't mean you can't tell the certificates apart today; it is just quite a bit messier.
Here is some pseudo-code, provided to me by a friend, that they used in one of their projects:
type = null;
if (cert is self-signed) then
    type = SS;                        /* SS = Self-signed */
else if (cert was issued by a known "CA") then
    type = DV;                        /* DV = Domain Validation */
else if (cert contains a known EV Policy OID) then
    type = EV;                        /* EV = Extended Validation */
else if (cert "Subject O" and "Subject CN" are the same or
         "Subject OU" contains "Domain Control Validated") then {
    if (cert contains no Subject L, St or PostalCode) then
        type = DV;
}
else if (cert "Subject O" is "Persona Not Validated" and
         the cert's issuer was StartCom) then
    type = DV;
if (type is null) then
    type = OV;                        /* OV = Organization Validation */
This logic is not comprehensive but should work well enough for most uses.
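For those who prefer real code, here is a Python sketch of the same heuristic. It operates on pre-extracted certificate fields rather than a real X509 parser, and the contents of the two lookup sets are assumptions you would populate from your own trust store:

```python
KNOWN_DV_ONLY_CAS = {"ExampleDVOnlyCA"}   # hypothetical placeholder
KNOWN_EV_OIDS = {"1.3.6.1.4.1.4146.1.1"}  # illustrative, not complete

def classify(cert):
    """Classify a certificate as SS, DV, EV or OV per the heuristic above.
    cert: dict with keys self_signed (bool), issuer (str),
    policy_oids (set of str), subject (dict of RDN type -> value)."""
    s = cert["subject"]
    if cert["self_signed"]:
        return "SS"
    if cert["issuer"] in KNOWN_DV_ONLY_CAS:
        return "DV"
    if cert["policy_oids"] & KNOWN_EV_OIDS:
        return "EV"
    if (s.get("O") == s.get("CN")
            or "Domain Control Validated" in s.get("OU", "")):
        # A locality in the subject suggests organization vetting happened.
        if not any(k in s for k in ("L", "ST", "postalCode")):
            return "DV"
    elif (s.get("O") == "Persona Not Validated"
            and "StartCom" in cert["issuer"]):
        return "DV"
    return "OV"  # nothing marked it otherwise; assume Organization Validated
```

For example, a certificate whose Subject O equals its CN and that carries no locality fields classifies as DV, while one with a distinct O and an L falls through to OV.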
Unfortunately, today there is no deterministic way to tell if a certificate was Domain or Organization Validated; that said, things are changing, and within a few years hopefully it will be possible.
In the mean-time there are heuristics you can use that help tell these types of certificates apart.
Recently I blogged about how Windows XP processes Name Constraints a little differently than the RFC specifies; with the help of a friend I have a good set of examples of what would work and what would not, illustrating what it does.
Assuming our Subject was:
C = US; S = Washington; L = Kirkland; O = GlobalSign; CN = globalsign.com
And that our Constraint was:
Permitted
[1]Subtrees (0..Max):
DNS Name=globalsign.com
We would see different results when validating a certificate on XP than we would on a later version of Windows.
Notice we did not include any directoryName attributes? That is supposed to mean that there are no constraints on the directoryName. On Windows XP, however, if you include a directory name in the subject there MUST be at least one Directory Name attribute in the RDN to match against, otherwise it will not pass its Name Constraints check.
So if we instead made our constraint:
Permitted
[1]Subtrees (0..Max):
RFC822 Name=globalsign.com
[3]Subtrees (0..Max):
Directory Address:
C = US
S = Washington
L = Kirkland
O = Globalsign
Excluded=None
A certificate with the following subject would match:
When XP processes the RDNs it starts with the first and progresses from there. You can’t skip an RDN. If an RDN is present it must match the entire RDN value or be empty.
As such the following wouldn’t match in our example:
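A rough model of the matching behavior just described can be sketched in Python. This is my reading of the XP behavior, not real CryptoAPI code; actual processing has more nuance, and the RDNs here are just the example values from above:

```python
def xp_constraint_match(constraint, subject):
    """Model of the XP behavior described above: both arguments are
    ordered lists of (attribute, value) pairs. Matching walks the
    constraint from the first RDN; no RDN may be skipped, and each
    RDN must match the constraint's entire value (case-insensitively
    here, as an assumption)."""
    for i, (attr, val) in enumerate(constraint):
        if i >= len(subject):
            return False  # subject ran out before the constraint did
        s_attr, s_val = subject[i]
        if s_attr != attr or s_val.lower() != val.lower():
            return False  # skipped or partially-matched RDN fails
    return True

constraint = [("C", "US"), ("S", "Washington"),
              ("L", "Kirkland"), ("O", "GlobalSign")]
ok = [("C", "US"), ("S", "Washington"), ("L", "Kirkland"),
      ("O", "GlobalSign"), ("CN", "globalsign.com")]
skipped_s = [("C", "US"), ("L", "Kirkland"),
             ("O", "GlobalSign"), ("CN", "globalsign.com")]
print(xp_constraint_match(constraint, ok))         # True
print(xp_constraint_match(constraint, skipped_s))  # False: S was skipped
```

The second subject fails because it omits the S = Washington RDN, illustrating the "you can't skip an RDN" rule.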
The prior blog post on this topic describes how an enterprise can work around this behavior (by setting some registry keys), but a public certificate issuer can too, for example by inserting just one RDN value and ensuring the subordinate CA issues with that RDN value in its certificates.
This way a site can have the flexibility it wants to change its directory structure without re-issuing the certificate containing the Name Constraints.
Ryan
On the surface digital certificates are not complicated: a third party (a certificate authority) verifies some evidence and produces a piece of identification that can be presented at a later date to prove that the verification has taken place.
As is usually the case, when we look a little deeper things are not that simple. In this case we have to care about a few other things; for example, what are the qualifications of the third party, what are their practices, and what cryptographic algorithms did they use to produce the digital certificate?
As an administrator using digital certificates, as in the case of SSL, these things can also have an impact on your operational environment: by using a certificate from a certificate authority you take dependencies on their practices and operational environment.
This is especially true when it comes to decisions relating to what cryptographic algorithms and key lengths are accepted and used by that third-party.
Thankfully you do not need to be a cryptographer to make good decisions on this topic; we just need to start with an understanding of the history, then the future, and finally the considerations.
In recent history the industry has relied on two algorithms: an encryption algorithm called RSA and a hash algorithm called SHA-1. Both are now considered weaker due to advances in cryptanalysis.
RSA's strength and performance are based on the size of the key used with it: the larger the key, the stronger and slower it is.
These advances in cryptanalysis have driven the increase in key size used with this algorithm which in turn has increased the amount of computing power necessary to maintain the same effective strength.
The problem with this is that every time we double the size of an RSA key, the decryption operations with that key become 6-7 times slower.
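You can see this cost growth for yourself by timing modular exponentiation, the operation at the heart of RSA decryption. The sketch below uses Python's built-in pow with random full-size operands rather than a real RSA implementation, so it only approximates the relative cost:

```python
import secrets
import time

def time_modexp(bits, reps=10):
    """Average the time of modular exponentiations with a full-size
    exponent and modulus of the given bit length, as a stand-in for
    the RSA private-key operation."""
    n = secrets.randbits(bits) | (1 << (bits - 1)) | 1  # odd, full-size
    d = secrets.randbits(bits) | (1 << (bits - 1))      # full-size exponent
    m = secrets.randbits(bits - 1)                      # message < n
    start = time.perf_counter()
    for _ in range(reps):
        pow(m, d, n)
    return (time.perf_counter() - start) / reps

t2048 = time_modexp(2048)
t4096 = time_modexp(4096)
print(f"4096-bit modexp is ~{t4096 / t2048:.1f}x slower than 2048-bit")
```

The exact ratio varies by machine and big-integer implementation, but doubling the operand size reliably costs several times more per operation.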
As a result of all this, as of January 2011 trustworthy Certificate Authorities have aimed to comply with NIST (National Institute of Standards and Technology) recommendations by ensuring all new RSA certificates have keys 2048 bits in length or longer.
Unfortunately this ever-increasing key size game cannot continue forever, especially if we ever intend to see SSL make up the majority of traffic on the Internet; the computational costs are simply too great.
That takes us to SHA-1. Hash algorithms take a variable amount of input and reduce it to a typically shorter, fixed-length output, the goal being to provide a unique identifier for that input. The important thing to understand is that hash algorithms are always susceptible to collisions, and advances in cryptanalysis have made it more likely that such a collision can be made.
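The fixed-length property is easy to see with Python's standard hashlib: whatever the input size, SHA-1 always produces 20 bytes of output while SHA-256 (one of the SHA-2 family) produces 32:

```python
import hashlib

# Output length is fixed regardless of input size: the digest of a
# 5-byte input and of a 1 MB input are the same length.
for data in (b"short", b"x" * 1_000_000):
    print(len(hashlib.sha1(data).digest()),
          len(hashlib.sha256(data).digest()))  # prints "20 32" both times
```

Because an unbounded set of inputs maps onto a fixed set of outputs, collisions must exist for any hash algorithm; the question is only how hard they are to find.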
The problem here is that there is no parameter to tweak that makes this harder for an attacker; the only way to address the issue is to change to a stronger algorithm to produce the hash.
For the last decade or so there has been slow and steady movement towards using two new algorithms to address these advances — SHA-2 and ECC.
ECC has the potential for significant performance benefits over RSA without reducing security and SHA-2 has three versions each with progressively longer lengths which help it both address the current risks and give it some longevity.
Our goal in configuring SSL is enabling users to communicate with us securely; to accomplish this goal we need to be able to do this with the fewest hassles, lowest costs and comply with any associated standards.
Interoperability is the key that ensures the fewest hassles; if it were not for this we would simply switch to these new algorithms and be done with it. As is normally the case when it comes to security, this is where Windows XP rears its ugly head: SHA-2 support was added to XP in Service Pack 3 and ECC in Windows Vista.
These facts set the adoption clock for these new algorithms; if you care about XP (about 30% of the Internet today) you can’t adopt ECC and SHA-2 in full for about 5 years.
This leaves us with RSA 2048 and SHA-1 which thankfully is broadly considered sufficient for the next decade.
Performance is always a concern as well: a 2048-bit RSA certificate used in SSL will result in around a 10% CPU overhead; not huge, but something to keep in mind.
As mentioned previously we can’t forget compliance — whether it is the Payment Card Industry / Data Security Standards (PCI/DSS), Federal Information Processing Standards (FIPS) 140-2 or some other set of criteria you need to meet this always needs to be considered.
The decision of what algorithms and key lengths to use in your digital certificates depends on a number of factors including security, interoperability, performance and compliance. Each situation may require a different trade-off to be made; however, as a rule of thumb, if you stick with SHA-2 and RSA 2048-bit certificates and keys you should be fine for now.