Tag Archives: X509

The (soon to be) not-so Common Name

If you are reading this post you are probably familiar with the use of digital certificates in SSL, but even if you are, you may not be familiar with their history. Before we go there, though, we should start with what a digital certificate, at its core, actually is.

Fundamentally a digital certificate is a binding of entitlements and constraints to a key; in other words, it says things like “The holder of the private key associated with this certificate can rightfully use the name Ryan Hurst when signing emails”.

When originally conceived they were to be used to help bind subjects (people and resources) to their representations in directories. This is why the Subject Name in a certificate is structured as a Distinguished Name (DN), as this is how a directory uniquely identifies a subject.

This makes sense when looking up an encryption key for a user in an enterprise directory, but it does not work so well on the Internet where there is no global directory of users.

This brings us to SSL. It was introduced in the mid 1990s, and at the time nearly every large enterprise was already deploying directories and Certificate Authorities as part of their identity management frameworks. The technology of X.509 was tested, broadly accepted and fit the bill for the problem the designers of the protocol had, so they included it as is.

At the time there was only one way to represent the concept of a subject of a certificate and that was the Common Name (CN), so they chose to put the DNS name of the SSL server there. This was technically acceptable but was a re-purposing of a field that was really intended for a user’s actual name.

After SSL was finalized the IETF released their profile of X.509 for use on the Internet; this standard introduced the concept of a Subject Alternative Name (SAN) where you can put names that are not associated with a directory. The problem is that ship had sailed; by the late 90s, when this was standardized, everyone had already settled on using the Common Name.

This led us down a bad path. First of all, many servers (especially today) have multiple DNS names, and applications that supported only the Common Name field couldn’t work with a single certificate with more than one DNS name in it. This was addressed in the short term by using a separate certificate for each DNS name, but that came at a high cost: we also needed to use a dedicated IP address for each domain name.

Another problem with this approach is that applications never really knew what to expect in the Common Name field. Is the value in that field a person’s name or is it a DNS name? This matters because often there are rules you need to use to validate a piece of data before using it, and this is especially true for DNS names.

For these reasons (and more), since at least 1999 (when RFC 2459 was standardized) we have been on a slow path away from using the Common Name for domain names and towards using Subject Alternative Names.

Fast forward to 2012, when some Stanford researchers published a paper titled “The most dangerous code in the world: validating SSL certificates in non-browser software” which identified a number of applications that fail to do the most basic certificate validation tasks correctly and, as a result, are the source of a number of security vulnerabilities.

These applications gave their users a false sense of security not out of malice but as a result of a lack of understanding of the technology they used to deliver on that promise. A big part of that is the complexity 18 years of technological evolution carries with it.

To address this a number of things need to change, but one of the most immediate is that the definition of what constitutes a “valid” SSL certificate is changing to make the rule-set a little simpler for application developers and to rule out options that are no longer considered good practice.

We see this happening in a few ways. First, the CA/Browser Forum has worked with browsers to define a set of Baseline Requirements that all certificates must meet; we are also seeing browsers doing sanity checks to ensure these practices are in fact followed.

These baseline requirements mandate that certificate authorities always include at least one Subject Alternative Name in the SSL certificates they issue; this means that today an application doesn’t need to look in both the Common Name and the Subject Alternative Name, it only needs to check the latter.
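To make that concrete, here is a minimal sketch (in Go, my own illustration rather than any particular library’s logic) of the kind of name check an application can now perform, looking only at the Subject Alternative Name entries and never at the Common Name; the wildcard handling is deliberately simplified:

package certcheck

import (
	"crypto/x509"
	"strings"
)

// MatchesSAN reports whether host matches one of the certificate's
// Subject Alternative Name dNSName entries. The Common Name is
// deliberately never consulted.
func MatchesSAN(cert *x509.Certificate, host string) bool {
	host = strings.ToLower(host)
	for _, san := range cert.DNSNames {
		san = strings.ToLower(san)
		if san == host {
			return true // exact match
		}
		// Simplified wildcard handling: "*.example.com" matches exactly one label.
		if strings.HasPrefix(san, "*.") {
			if i := strings.Index(host, "."); i > 0 && host[i+1:] == san[2:] {
				return true
			}
		}
	}
	return false
}

In practice you would lean on your platform’s built-in matcher (for example Go’s (*x509.Certificate).VerifyHostname) rather than rolling your own, but the point stands: with the Baseline Requirements in effect, the SAN is the only place you need to look.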

Currently most Certificate Authorities will also include the first DNS name from the Subject Alternative Name in the Common Name field, but this is done primarily for legacy reasons and at some point in the not-so-distant future it will stop.

When it does, certificates will be a little smaller and developers’ lives will be a little easier.

Ryan

Resources

· Baseline Requirements

· Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile

· Microsoft Security Advisory: Update for minimum certificate key length

Average CRL size and download time

The other day I had a great conversation with Robert Duncan over at Netcraft; he showed me some reports they have made public about CRL and OCSP performance and uptime.

One thing that I have been meaning to do is look at average CRL size across the various CAs in a more formal way, but I just never got around to doing it; conveniently, one of the Netcraft reports included a column for CRL size. So while I was waiting for a meeting to start I decided to figure out what the average sizes were; I focused my efforts on the same CAs I include in the revocation report, and this is what I came up with:

 

CA    Average CRL Size (KB)    CRL Download Time @ 56k (s)
Entrust 512.33 74.95
Verisign 200.04 29.26
GoDaddy 173.79 25.42
Comodo 120.75 17.66
Cybertrust/Verizon 75.00 10.97
DigiCert 21.66 3.17
GlobalSign 21.25 3.11
Certum 20.00 2.93
StartSSL 9.40 1.38
TrendMicro 1.00 0.15
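
For reference, the download-time column is just arithmetic on the size column. Here is a quick sketch of that computation (assuming the sizes are kilobytes of 1,024 bytes and a raw 56,000 bit/s link with no protocol overhead; the CA names and sizes are taken from the table above):

package main

import "fmt"

// downloadSeconds estimates how long a CRL of the given size (in KB)
// takes to fetch over a 56 kbit/s link, ignoring protocol overhead.
func downloadSeconds(sizeKB float64) float64 {
	bits := sizeKB * 1024 * 8
	return bits / 56000
}

func main() {
	sizes := map[string]float64{
		"Entrust":  512.33,
		"Verisign": 200.04,
		"DigiCert": 21.66,
	}
	for ca, kb := range sizes {
		fmt.Printf("%-10s %7.2f KB -> %6.2f s\n", ca, kb, downloadSeconds(kb))
	}
}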

 

From this we can derive two charts, one for size and another for download time at 56k (about 6% of internet users as of 2010):

[Charts: average CRL size by CA, and CRL download time at 56k by CA]

 

I overlaid the red line at 10s because that is the timeout most clients use to decide when to give up trying to download; some clients will continue trying in the background so that the CRL is already cached for the next request.

This threshold is very generous; after all, what user is going to hang around for 10 seconds while a CRL is downloaded? It gets worse though: the average chain is more than three certificates long, two of which need to have their status checked :/

This is one of the reasons we have soft-fail revocation checking; until the Baseline Requirements were published, inclusion of OCSP references was not mandatory and not every CA was managing their CRLs to be downloadable within that 10 second threshold.

There are a few ways CAs can manage their CRL sizes; one of the most common is to simply roll new intermediate CAs when the CRL size gets unmanageable.

There is something you should understand about the data in the above charts: just because a CRL is published doesn’t mean it represents active certificates. This is one of the reasons I had put off doing this exercise; I wanted to exclude that case by cross-referencing the signing CA with crawler data to see if active certificates were associated with each CRL.

This would exclude the cases where a CA was taken out of operation and all of the associated certificates were revoked as a precautionary exercise – this can happen.

So why did I bother posting this then? It’s a nice illustration of why we cannot generally rely on CRLs as a form of revocation checking. In fact, this is very likely why some browsers do not bother trying to download CRLs.

All posts like this should end with a call to action (I need to do better about doing that); in this case I would say it is for CAs to review their revocation practices and how they make certificate status available, to ensure it’s available in a fast and reliable manner.

A look at revocation repository uptime

It is no secret that in the last two months GlobalSign was affected by outages relating to our use of CloudFlare. I won’t go into the specifics behind those outages because the CloudFlare team does a great job of documenting their outages as well as working to make sure the mistakes of the past do not reoccur. With that said, we have been working closely with CloudFlare to ensure that our services are better isolated from their other customers and to optimize their network for the traffic our services generate.

I should add that I have a ton of faith in the CloudFlare team, these guys are knowledgeable, incredibly hard working and very self critical — I consider them great partners.

When looking at these events it is important to look at them holistically; for example, one of the outages was a result of mitigating what has been called the largest publicly announced DDoS in the history of the Internet.

While no downtime is acceptable, and I am embarrassed we have had any, it’s also important to look at the positives that come from these events; for one, we have had an opportunity to test our mitigations for such events and improve them so that in the future we can withstand even larger attacks.

Additionally it’s useful to look at the actual uptime these services have had and, to give those numbers some context, to look at them next to one of our peers. Thankfully I have this data as a result of the revocation report, which tracks performance and uptime from 21 different network perspectives worldwide, every minute.

For 05/2012-12/2012 we see:

Service    Uptime (%)    Avg (ms)
GlobalSign/AlphaSSL OCSP 100.00 101.29
VeriSign/Symantec/Thawte/GeoTrust/Trustcenter OCSP 99.92 319.40
GlobalSign/AlphaSSL CRL 100 96.86
VeriSign/Symantec/Thawte/GeoTrust/Trustcenter CRL 99.97 311.42

 

For 01/2013 to 04/2013 we see:

Service    Uptime (%)    Avg (ms)
GlobalSign/AlphaSSL OCSP 99.98 76.44
VeriSign/Symantec/Thawte/GeoTrust/Trustcenter OCSP 99.85 302.88
GlobalSign/AlphaSSL CRL 99.98 76.44
VeriSign/Symantec/Thawte/GeoTrust/Trustcenter CRL 99.22 296.97

NOTE: Symantec operates several different infrastructures – which one you hit is dependent on which brand you buy from and in some cases which product you buy. We operate only two brands, which share the same infrastructure. I averaged the results for each of their brands together to create these two tables. If you want to see the independent numbers, see the Excel document linked to this post.

 

As you can see no one is perfect. I don’t share this to say our downtime is acceptable, because it is not; instead I want to make it clear that this is data we track and use to improve our services, and to show what the impact really was.

By the way if you want to see the data I used in the above computation you can download these spreadsheets.

Why we built the Revocation Report

For over a year I have been monitoring the industry’s largest OCSP and CRL repositories for performance and uptime. I started this project for a few reasons but to understand them I think it’s appropriate to start with why I joined GlobalSign.

If you’re reading this post you are likely aware of the last few years of attacks against public Certificate Authorities (CA). Though I am no stranger to this space, like you I was watching it all unfold from the outside as I was working at Microsoft in the Advertising division where I was responsible for Security Engineering for their platform.

I recall looking at the industry and feeling frustrated about how little had changed in the last decade, feeling like the Internet was evolving around the CA ecosystem – at least technologically. The focus seemed almost exclusively on policies, procedures and auditing, which are of course extremely important when you’re in this business, but by themselves they are not a solution.

When I looked at the CA ecosystem there were a few players who I thought understood this; the one I felt got it the most was GlobalSign. Instead of joining the race to the bottom they were developing solutions to help with key management, certificate lifecycle management, and publishing guides to help customers deploy certificates cost effectively.

As a result, when they approached me with the opportunity to join them as their CTO and set the technology direction for the company I was intrigued. Those of you who know me know I love data; I believe above all things that successful businesses (whether they recognize it or not) leverage the Define, Measure, Analyze, Improve and Control cycle to ensure they are solving the right problems and doing so effectively.

To that end, when I joined GlobalSign as their CTO I wanted market intelligence on what the status quo was for technology, operating practices and standards compliance, so that I could use it to adjust my own priorities as I planned where GlobalSign was going to focus its investments.

It was more than that though, as many of you know I am not new to PKI and especially not to revocation technologies having developed several products / features in this area as well as contributing to the associated standards over the years. I was always frustrated by many public certificate authorities’ inability or unwillingness to acknowledge the inadequacy of their revocation infrastructure and its contribution to slow TLS adoption and bad user agent behavior when it comes to revocation checking.

More directly the reliability and performance of major CA operational infrastructure was why browsers had to implement what is now called “soft-fail” revocation behaviors; the treating of failures to check the status of a certificate the same as a successful check. Yet it is these same people who point fingers at the browsers when the security implications of this behavior are discussed.

I wanted to see that change.

This is why from the very beginning of this project I shared all the data I had with other CAs, my hope was they would use it to improve their infrastructure but unfortunately short of one or two smaller players no one seemed concerned – I was shouting at the wind.

With the limited feedback I had received for the data I had been collecting I decided to put together what is now the revocation report. As part of this project I switched to a different monitoring provider (Monitis) because it gave me more control of what was being monitored and had a more complete API I could use to get at the data.

In parallel I began to work with CloudFlare to address what I felt was one barrier to optimally using a CDN to distribute OCSP responses (the inability to cache POSTs). The whole time I chronicled my experiences, thoughts and decisions on my blog so that others could learn from my experience and the industry as a whole could benefit.

When I set up the Monitis account I licensed the ability to monitor the top responders from 21 locations worldwide every minute. At first I just published the graphical reports that Monitis had but they had a few problems:

  1. They did not perform very well (at the time).
  2. It was not laid out in such a way you could see all the data at once (also now fixed).
  3. It did not exclude issues associated with their monitoring sensors.
  4. It gave no context to the data that was being presented.

This is what took me to working with Eli to build the revocation report we have today; the internet now has a public view into approximately eleven months (and growing) of performance data for revocation repositories. Eli and I are also working on mining and quantizing the data so we can do something similar for responder uptime, but this has taken longer than expected due to other priorities — we will finish it though.

So the question at this point is “was the effort worth it?” — I think so, both of us put a lot of time into this project but I believe it’s been a success for a few reasons:

  1. It allowed me to figure out how to improve our own revocation infrastructure; we now perform at about the same speed as gstatic.google.com for a similarly sized object which is what the bar should be.
  2. Both StartSSL and Entrust have now followed suit and made similar changes to their infrastructure improving their performance by about 3x (besting our performance by a few ms!).
  3. Symantec has improved their primary revocation repository performance by almost 40% and I understand more improvements are on the way.
  4. We are closer to having a data-based argument we can present to browsers about why they can and should re-enable hard-fail revocation checking by default.
  5. It gives customers visibility into the invisible performance hit associated with the decision of who you choose as your certificate provider.

What do you think? Do you find this valuable? Are there any other elements you think we should be tracking?

Certificate-based Mozilla Persona IdP

My name is David Margrave, I am a guest author on unmitigatedrisk.com.  I have worked in the security sphere for 20 years at various U.S. federal agencies, financial institutions, and retailers.  My interests include improving the state of client authentication on the Internet, which is an area that saw robust developments in the 1990s, then languished for a number of years as the Internet at large seemed content with reusable passwords and form-based authentication over SSL/TLS, but has received renewed scrutiny because of recent large scale data breaches and the veiled promise from the Federal government to ‘fix this mess or we will fix it for you’.

 

The Mozilla Persona project is a recent initiative to improve and standardize browser-based authentication.  For a long time (over 10 years) the most widely-used form of browser-based authentication has been based on HTML forms.  At its most basic level, a user will enter an identifier and reusable password into an HTML form, and submit the form in an HTTPS request to access a protected resource.  The server will receive these values, validate them, and typically return state information in an encrypted and encoded HTTP cookie.  Subsequent visits to the protected resource will send the cookie in the HTTP request, and the server will decrypt and validate the cookie before returning the protected content.   This entire exchange usually takes place over HTTPS, but in many instances the authentication cookie is used over an HTTP connection after initial authentication has completed successfully.  There are other forms of HTTP authentication and other previous attempts at standardization, but a quick survey of the largest retailers and financial institutions will show that HTML form-based authentication is still the most common by far.

 

Assuming that the implementers of these cookie schemes are competent amateur cryptographers and avoided the most glaring mistakes (see this paper by MIT researchers), all of these authentication schemes which rely on HTTP cookies suffer from the same critical flaw: an attacker who obtains the cookie value can impersonate the user.  The crucial problem is that HTML form-based authentication schemes have not been capable of managing cryptographic keying material on the client side.  More secure schemes such as Kerberos V5 use a ticket in conjunction with an accompanying session key, both of which are stored in a credentials cache.  In contrast to flawed cookie-based schemes, in the Kerberos V5 protocol a service ticket is useless to an adversary without the accompanying service ticket session key.  An authentication exchange in Kerberos V5 includes the service ticket, and a value encrypted with the service ticket session key, to prove possession.

There are some proprietary or enterprise-level solutions to this situation.  For example, Microsoft Internet Explorer and IIS have long had (for over 10 years) the capability to use HTTP Negotiate authentication and to use GSS-API with Kerberos V5 as the underlying mechanism.  The Apache web server has had the capability to accept HTTP Negotiate authentication for several years as well, but the adoption of these solutions on the Internet at large has not been widespread.  At a high level, the Mozilla Persona project improves this situation by bringing the credentials cache and cryptographic capabilities into the browser, and doing so in a standardized manner.  Although the underlying cryptographic algorithms may differ from the Kerberos V5 example, the importance of this project can’t be overstated.

 

Persona introduces the concept of the Identity Data Provider (IdP).  The basic idea is that a domain owner is responsible for vouching for the identity of email addresses in that domain.  This could involve whatever scheme the domain owner wishes to implement.  If a domain does not implement an IdP, the Persona system will use its own default IdP which uses the email verification scheme that all Internet users are familiar with:  you prove your ability to receive email at a particular address.  When signing-in to a website which uses Persona authentication, the user will be presented with a dialog window asking for the email address to use.

[Screenshot: the Persona sign-in dialog prompting for an email address]

Behind the scenes, the Persona system determines which IdP to use to verify the address.  A domain implementing an IdP must publish some metadata (the public key, and provisioning and verification URLs), in JSON format, at the URL https://domain/.well-known/browserid.  The server at the URL must have a certificate from a trusted certificate authority, and the returned value must be properly-formatted JSON with certain required metadata information (described here).
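As a rough illustration of what happens at this step, here is a small sketch (in Go, my own example rather than the Persona reference implementation) of retrieving and parsing a domain’s support document; the field names simply mirror the metadata shown in the example file below:

package browserid

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// SupportDocument models the metadata an IdP publishes at
// https://<domain>/.well-known/browserid.
type SupportDocument struct {
	PublicKey      map[string]string `json:"public-key"`
	Authentication string            `json:"authentication"`
	Provisioning   string            `json:"provisioning"`
}

// Fetch retrieves and parses a domain's support document. The TLS
// handshake performed by http.Get is what ties the document back to
// the existing CA trust infrastructure.
func Fetch(domain string) (*SupportDocument, error) {
	resp, err := http.Get("https://" + domain + "/.well-known/browserid")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status %s", resp.Status)
	}
	var doc SupportDocument
	if err := json.NewDecoder(resp.Body).Decode(&doc); err != nil {
		return nil, err
	}
	return &doc, nil
}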

 

The author implemented an IdP at the domain margrave.com as a research exercise, borrowing from the NodeJS browserid-certifier project.  This particular IdP was written to accept X.509 client certificates issued by a commercial certificate authority, to extract the email address from the X.509 certificate, and issue a persona certificate with that email address. The .well-known/browserid file for node.margrave.com is shown here:

{
    "public-key": {"algorithm":"DS",
        "y":"aab45377fa024964a6b3339d107b91887adf85b96649b5b447a7ac7390866c92d88ed101f6525e717c0d703d5fd8727e0d1d8adb60bb80c7123730616c197326f1eed326fdfc136d7594ffce39a05005a433add8d3344813ea89f6e426d8f5b0bc0d3fdb59c8ec7c19583ba7f14d3636713b84c1ebe62a6866e9c2091def5c25aba967670eabc4591ee3f536006ce5c550265d4b2264c5a989abf908763b41014f35eb2949a0b027a1a1054203a3e13eeb1f16ffb171d6942405546a8407c3fb7e73227e432d150834054edc379de8f8988a8e3b102b70fe5b1164a28a4a453310313e00de1aa177f5ac2b73ef31670e16914607ba4196c06e57f7e5209bc7e4",
        "p":"d6c4e5045697756c7a312d02c2289c25d40f9954261f7b5876214b6df109c738b76226b199bb7e33f8fc7ac1dcc316e1e7c78973951bfc6ff2e00cc987cd76fcfb0b8c0096b0b460fffac960ca4136c28f4bfb580de47cf7e7934c3985e3b3d943b77f06ef2af3ac3494fc3c6fc49810a63853862a02bb1c824a01b7fc688e4028527a58ad58c9d512922660db5d505bc263af293bc93bcd6d885a157579d7f52952236dd9d06a4fc3bc2247d21f1a70f5848eb0176513537c983f5a36737f01f82b44546e8e7f0fabc457e3de1d9c5dba96965b10a2a0580b0ad0f88179e10066107fb74314a07e6745863bc797b7002ebec0b000a98eb697414709ac17b401",
        "q":"b1e370f6472c8754ccd75e99666ec8ef1fd748b748bbbc08503d82ce8055ab3b",
        "g":"9a8269ab2e3b733a5242179d8f8ddb17ff93297d9eab00376db211a22b19c854dfa80166df2132cbc51fb224b0904abb22da2c7b7850f782124cb575b116f41ea7c4fc75b1d77525204cd7c23a15999004c23cdeb72359ee74e886a1dde7855ae05fe847447d0a68059002c3819a75dc7dcbb30e39efac36e07e2c404b7ca98b263b25fa314ba93c0625718bd489cea6d04ba4b0b7f156eeb4c56c44b50e4fb5bce9d7ae0d55b379225feb0214a04bed72f33e0664d290e7c840df3e2abb5e48189fa4e90646f1867db289c6560476799f7be8420a6dc01d078de437f280fff2d7ddf1248d56e1a54b933a41629d6c252983c58795105802d30d7bcd819cf6ef"
    },
    "authentication": "/persona/sign_in.html",
    "provisioning": "/persona/provision.html"
}

 

The public key from the browserid file is the public portion of the key pair used by the IdP to certify users in the domain.  The fact that it must be served over a URL protected with a certificate issued from a trusted CA is how the Persona system builds on the existing trust infrastructure of the Internet, instead of attempting to re-implement their own from scratch, or requiring operators of websites relying on Persona authentication to establish shared secrets out-of-band.  The authentication and provisioning URLs are how browsers interact with the IdP.

 

In the Certificate-based IdP implemented at margrave.com, the page located at /persona/provision.html includes some javascript which does the following things:  calls an AJAX API to get the email address from the certificate, receives the email address that the user entered in the Persona login dialog via a javascript callback, validates that they match, and calls another AJAX API to issue the certificate.  Note that the email address comparison performed in client-side javascript is purely for UI and troubleshooting purposes, the actual issuance of the Persona certificate uses the email address from the X.509 certificate (if the provisioning process progresses to that point), irrespective of what username was entered in the Persona login dialog.  The client-side validation of the email address is to provide the ability to troubleshoot scenarios where a user may choose the wrong certificate from the browser certificate dialog box, etc.  The client-side provisioning source code is shown below (ancillary AJAX error handling code is omitted).

 

function provision() {

  // Get the email from the cert by visiting a URL that requires client cert auth and returns our cert's email in a json response.
  // This is not strictly necessary, since the server will only issue persona certificates for the email address from the X.509 certificate,
  // but it is useful for troubleshooting, helping the user avoid choosing the wrong certificate from the browser dialog, etc.
  getEmailFromCert(function(emailFromCert) {
      if (emailFromCert) {
          navigator.id.beginProvisioning(function(emailFromPersona, certDuration) {
              if (emailFromPersona===emailFromCert) {
                  navigator.id.genKeyPair(function(publicKey) {
                      // generateServerSide makes an AJAX call to a URL that also requires client cert auth
                      generateServerSide(publicKey, certDuration, function (certificate) {
                          if (navigator.id && navigator.id.registerCertificate) {
                              //alert('registering certificate: ' + certificate);
                              navigator.id.registerCertificate(certificate);
                          }
                      });
                  });
              } else {
                  navigator.id.raiseProvisioningFailure('user is not authenticated as target user');
              }
          });
      } else {
          navigator.id.raiseProvisioningFailure('user does not have a valid X.509 certificate');
      }
  });
}

function generateServerSide(pubkey, duration, cb) {
    $.ajax({
        // Note that this URL requires SSL client certificate authentication,
        // and performs its own authorization on the email address from the certificate,
        // (for example, based on issuing CA or email address domain),
        // and so does not need the email address as an explicit input parameter
        url: "https://node.margrave.com/cert_key",
        type: "POST",
        global: false,
        data: {pubkey: pubkey,
               duration: duration},
        dataType: "json",
        success: function(response) {
                if (response.success && response.certificate) {
                    cb(response.certificate);
                }
            }
    });
    return false;
}

function getEmailFromCert(cb) {
    $.ajax({
        // Note that this URL requires SSL client certificate authentication,
        // and performs its own authorization on the email address from the certificate.
        url: "https://node.margrave.com/email",
        type: "POST",
        global: false,
        dataType: "json",
        success: function(response) {
            cb(response.email);
        }
    });
}

 

The other portion of a Persona IdP, the authentication URL, turned out not to be necessary in this case, because the authentication is implicit in the use of X.509 client certificate-authenticated AJAX calls.  Once the Persona certificate has been provisioned, the user is able to access the protected resource.  If things don’t work as expected, the error messages do not seem to bubble up to the UI dialog, and I had to resort to tracing XHR calls with Firebug to determine what went wrong.  In one case, it was a clock skew error that was corrected by installing ntpd on my IdP server.   In another case, one of my IdP AJAX calls may return an error but this error gets masked by a vague UI message.  It may be helpful to standardize HTTP return code and JSON field names to return descriptive error text to the Persona UI.

 


 

 

In its current form, this concept could be useful for enterprises, but not really for the Internet at large, since it requires a) that you have a client cert and b) that the IdP for your email domain is certificate-aware.  However, if the Persona default IdP were certificate-aware, or CAs were Persona-aware, then there are some interesting possibilities.

  1. The Persona default IdP could skip the verification email if a trusted X.509 client certificate is provided.  Possession of a certificate from a trusted CA implies the email address has already been verified, at a minimum.  The Persona system already accepts the CAs’ trust when retrieving .well-known/browserid; this idea extends CA trust to clients.
  2. Going the other direction: a CA could accept a Persona certificate from either a domain’s IdP or from the Persona default IdP and use it to issue X.509 client certificates, or as one part of the client certificate enrollment process (higher assurance certificates may verify more information than email, such as state-issued identification).  This idea seems promising because the email verification scheme is the wheel that everyone on the Internet has reimplemented, in many cases with security flaws.

 

Abstract: Using least-privilege design principles to improve trust in the online marketplace

Weekends, they are overrated 🙂

Tomorrow is the cut-off for submissions for the NIST “Workshop on Improving Trust in the Online Marketplace” to be held in April 2013, and I have spent part of the day thinking about what talks might be interesting. I have already submitted one on “Revocation reality and the path to becoming effective” but I also wrote this one up and I might submit it as well; I am posting it here so I can get a little feedback before the submission deadline.

In 2010 security researchers with the EFF collected all of the publicly visible SSL certificates on the IPv4 Internet and published their analysis and data-sets from their research. This work made it clear to the world how extensively PKI is used to facilitate commerce on the web, but it also raised the concern that there were as many as 650 organizations capable of issuing publicly trusted certificates on the internet.

While this conclusion is exaggerated, as many of those certificates and keys are in fact operated by the same organizations that their certificates are ultimately issued by, the conclusion that there has been an unneeded expansion of the number of keys that are technically trusted to issue certificates for SSL for the entire Internet is sound.

To address this problem one of the steps that is needed is the application of least-privilege principles to how one designs and manages publicly trusted keys and certificates. Thankfully in the late 90’s the foundation for addressing much of this problem was developed as a means to enable the Federated PKI in use by the U.S. Federal Government.

For the last year we have been working to broadly deploy X.509 Name Constraints along with other least-privilege design principles to our customers’ PKIs, both those managed internally by our own staff as well as those on premise. This talk will explore these concepts, the client support for them, the challenges we have experienced in their deployment, and identify the remaining issues that must be addressed to obtain the full benefits of this approach.

Using CAPICOM on Windows x64

So CAPICOM was one of the projects I was responsible for while at Microsoft. It’s been discontinued, but I always find it useful – it is kind of a Swiss Army knife for CryptoAPI certificate stores when paired with its VBS samples.

One of its problems is we never shipped an x64 version; you can do similar things with PowerShell and the .NET classes (this is why it was discontinued) but I still find this the quickest way to do stuff sometimes, so I keep it in my toolbelt.

Here is what you need to know to make it work:

  1. Windows can run 32-bit things in 64-bit environments.
  2. You cannot have a 64-bit thing call a 32-bit thing.
  3. Windows ships a 32-bit cmd prompt.
  4. Windows ships a 32-bit regsvr32.

To use CAPICOM you need to:

  1. Download CAPICOM – http://www.microsoft.com/en-us/download/details.aspx?id=25281
  2. Install CAPICOM
  3. Register CAPICOM
  • Open an administrative command prompt
  • cd to “C:\Program Files (x86)\Microsoft CAPICOM 2.1.0.2 SDK\Lib\X86”
  • copy CAPICOM.DLL %windir%\syswow64
  • %windir%\syswow64\regsvr32.exe %windir%\syswow64\capicom.dll
  • “exit” the command prompt

So what can you do? There are lots of things; tonight I used it to enumerate the extensions included in a PFX file. You can do this with OpenSSL too by looking at the ASN.1, but this way you get some of the Microsoft-specific stuff expanded out into human-readable things.

I should note that it’s old, it’s unsupported and it may have vulnerabilities in it — as such I unregister it when it’s not in use and I recommend you do the same.

Hope this helps someone,

Ryan

RESTful X509, CRL and OCSP to JSON web-service

So the other day I got a bee in my bonnet and decided I wanted a simple web service I could pass common day X509 objects to and get a JSON representation of that same object back. We had recently done a project in Go at work and we found it quick, robust and easy to build with; additionally its certificate support looked decent enough, so I thought it was the way to go.

In comes Freelancer. I threw my rough (and that’s being kind) goals into a paragraph or two, and a few days later I had a bid proposal from an engineer in Chicago — Eli Frey.

Based on a quick review of the Go documentation for cryptography it looked like this was going to be pretty straightforward, and for the most part it was – we did find that there were a few cases that just were not possible without more work than we wanted to put in; I will summarize those a little later.

As things progressed we also decided to add the ability to get an X509 certificate from the interface. Normally one would do this by generating a PKCS #10 request (CSR) and sending it to a CA for processing; unfortunately, one of those cases that required more work than we wanted to put in was parsing PKCS #10s, since Go does not as of yet support it. With that said, a CSR is really just a self-signed attestation of a key, so we just did the same thing with a self-signed X509 certificate as the request.
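For illustration, here is roughly what producing such a self-signed “request” looks like in Go (a sketch of my own, not the service’s actual code; it mirrors the openssl commands shown in the examples below):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Generate the key pair the requester will prove possession of.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// A short-lived self-signed certificate standing in for a PKCS #10 CSR.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "www.example.com"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}

	// Write it out PEM encoded, ready to POST to the issue action.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}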

So how do these interfaces work? Here are a few examples of how you would call them:

 

Decode a PEM encoded X509 certificate
curl -F "[email protected]" "api.x509labs.com/v1/x509/certificate?action=decode&inputEncoding=PEM"

Decode a DER encoded X509 certificate
curl --fail -F "[email protected]" "api.x509labs.com/v1/x509/certificate?action=decode&inputEncoding=DER"

Request and issue an X509 certificate based on a DER encoded self-signed certificate with one hostname
openssl genrsa -out request.key 2048
openssl req -config openssl.cfg -subj "/CN=www.example.com" -new -x509 -set_serial 01 -days 1 -key request.key -out request.cer
curl --fail -F "[email protected]" "api.x509labs.com/v1/x509/certificate?action=issue&hostnames=bob.com&inputEncoding=DER"

Request and issue an X509 certificate based on a PEM encoded self-signed certificate with one hostname
openssl genrsa -out request.key 2048
openssl req -config openssl.cfg -subj "/CN=www.example.com" -new -x509 -set_serial 01 -days 1 -key request.key -out request.cer
curl --fail -F "[email protected]" "api.x509labs.com/v1/x509/certificate?action=issue&hostnames=bob.com&inputEncoding=PEM"

Request and issue an X509 certificate based on a PEM encoded self-signed certificate with several hostnames
openssl genrsa -out request.key 2048
openssl req -config openssl.cfg -subj "/CN=www.example.com" -new -x509 -set_serial 01 -days 1 -key request.key -out request.cer
curl --fail -F "[email protected]" "api.x509labs.com/v1/x509/certificate?action=issue&hostnames=bob.com,fred.com&inputEncoding=PEM"

Decode a set of PEM encoded X509 certificates
curl --fail -F "[email protected]" "api.x509labs.com/v1/x509/certificates?action=decode&inputEncoding=PEM"

Decode a PEM encoded X509 CRL
curl --fail -F "[email protected]" "api.x509labs.com/v1/x509/crl?action=decode&inputEncoding=PEM"

Decode a DER encoded X509 CRL
curl --fail -F "[email protected]" "api.x509labs.com/v1/x509/crl?action=decode&inputEncoding=DER"

Decode an OCSP response
openssl ocsp -noverify -no_nonce -respout ocsp.resp -reqout ocsp.req -issuer ca.cer -cert www.cer -url "http://ocsp2.globalsign.com/gsextendvalg2" -header "HOST" "ocsp2.globalsign.com" -text
curl --fail -F "[email protected]" "api.x509labs.com/v1/x509/ocsp?action=decode&type=response"

 

 

So even though this started out as a pet project, I actually think these interfaces are pretty useful; the largest limitations of these interfaces are:

X509Certificate

  1. Not every element of the structures is included in the JSON serialization; for example AIA, CDP, Name Constraints and Certificate Policies are not present (most extensions, actually). This is because there is not a decoder for them in Go.
  2. ECC based certificates are not supported; this is because at this time the released version of Go doesn’t include support for them.
  3. Only issuing certificates based on self-signed X509 certificates is supported; this is, as I mentioned, a result of the lack of support for the PKCS #10 object in Go.
  4. No OID is specified for the signature algorithm; this is because it’s not exposed in Go.
  5. Only one certificate profile is supported when using the issue action; this is mostly due to limitations in Go (time was also a factor). For example, the lack of AIA and OCSP support means that, regardless of the CA key material, these certs are just good for playing around.
  6. No user supplied information is included in the generated certificate; this was really just a function of time and of building a proper workflow, which would not be valuable without addressing the other Go limitations.
  7. Requested certificates that contain RSA keys must have a length of at least 2048 bits; this is just a best practice.
  8. Requested certificates will only be issued if the submitted certificate contains a self-signed certificate with a valid signature; this is to ensure the requestor actually holds the private key.
  9. Not all SAN types are supported, only DNS names really; again a limitation of Go.
  10. Certificates with name constraints are not supported; again a limitation of Go.
  11. It is not possible to put EKUs in certificates; again a limitation of Go.

 

X509OCSP

  1. ResponderID is not specified; this is because it’s not exposed in Go.
  2. Only responses with a single embedded response are supported; this is because more than one response is not exposed in Go.
  3. No OCSP extensions are supported; this is because they are not exposed in Go.
  4. Only responses are supported; this is because the request is not supported in Go.

 

Here are some things you might want to know about these interfaces:

  1. Both X509crl and X509ocsp default to DER but you can specify PEM in the encode query string parameter.
  2. X509Certificate defaults to the PEM encoding but DER is supported via the encode query string parameter.
  3. X509Certificates defaults to PEM encoding but DER is not supported.
  4. X509Certificates takes the file you might use in Apache or Nginx to configure which certificates to send — a concatenation of PEM encoded certificates (see the sketch after this list).
  5. All interfaces use HTTP error codes to report issues.
  6. I can’t promise they will always be up and available, reliable, performant or accurate 🙂
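
Regarding item 4, the input is just a concatenation of PEM blocks; here is a small sketch of how such a bundle can be split and parsed (in Go, since that is what the service is written in; this is my own illustration, not the service’s code):

package pembundle

import (
	"crypto/x509"
	"encoding/pem"
)

// ParseBundle parses a concatenation of PEM encoded certificates, the same
// format Apache or Nginx chain files use, and returns the parsed certificates.
func ParseBundle(data []byte) ([]*x509.Certificate, error) {
	var certs []*x509.Certificate
	for len(data) > 0 {
		var block *pem.Block
		block, data = pem.Decode(data)
		if block == nil {
			break // no more PEM blocks
		}
		if block.Type != "CERTIFICATE" {
			continue // skip keys or other block types
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return nil, err
		}
		certs = append(certs, cert)
	}
	return certs, nil
}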

All in-all I think this was a fun project and I really enjoyed working with Eli and Freelancer (though its mail client is awful and the site needs some UI work).

Ryan

How to tell DV and OV certificates apart

Introduction

There are in essence three kinds of SSL certificates: Domain Validated, Organization Validated and Extended Validation. I am not going to write about the differences here; it seems that there are hundreds of articles on this topic on the Internet.

What I think has not been given sufficient coverage is how one is able to look at a certificate and determine what type it is.

One would think that this would be easy; in theory, if nothing was explicitly stated it would be a Domain Validated certificate (since it is the weakest validation), otherwise someone would put something in the certificate making it clear that the certificate was either Organization Validated or Extended Validation.

Unfortunately it’s not this simple, the main issue being the historic lack of coordination within the CA industry.

Each Certificate Authority (CA) has its own unique practices relating to how they mark their certificates, so with the existing deployed certificates there is no single rule or approach that can be used to definitively know what level of validation was done for a given certificate.

Thankfully it looks like this problem is getting better thanks to the adoption of the Baseline Requirements, but in the meantime we have to make do with heuristics.

Deterministic Approach

Today the only way to know with confidence that a certificate is of a specific type is to know the practices of each CA.

In X.509 the way an issuer is supposed to express something like this is via the Certificate Policies extension which is defined in RFC 5280.

This allows a CA to express a unique identifier (an OID) in their certificates that maps to a document that describes its practices associated with this certificate. This identifier can be used programmatically to make trust decisions about a certificate or to differentiate the user interface in an application based on what type of certificate is being used.

This is exactly how browsers today can tell if a certificate is an Extended Validation (EV) certificate. In essence they have some configuration that says “I trust GlobalSign to issue EV certificates, when a certificate is presented to me from them that has this policy OID show the EV user experience”.

The Baseline Requirements use the same approach, defining identifiers for Domain Validated and Organization Validated certificates; these are:

Type Policy Identifier
Domain Validated 2.23.140.1.2.1
Organization Validated 2.23.140.1.2.2

 

Having these identifiers takes us a long way towards our goal of deterministic evaluation of certificate issuance policy — that said, not all CAs have adopted them yet, which is technically alright since the Baseline Requirements do allow them to use their own Policy Identifiers.
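To show what checking for these identifiers looks like in practice, here is a small sketch in Go (my own illustration, not from any particular library):

package certpolicy

import (
	"crypto/x509"
	"encoding/asn1"
)

// CA/Browser Forum Baseline Requirements policy identifiers.
var (
	oidDomainValidated       = asn1.ObjectIdentifier{2, 23, 140, 1, 2, 1}
	oidOrganizationValidated = asn1.ObjectIdentifier{2, 23, 140, 1, 2, 2}
)

// ValidationType returns "DV", "OV" or "" depending on which (if any) of the
// Baseline Requirements policy identifiers the certificate asserts. An empty
// result does not mean the certificate is bad, only that the CA used its own
// policy OIDs instead, so this shortcut did not apply.
func ValidationType(cert *x509.Certificate) string {
	for _, oid := range cert.PolicyIdentifiers {
		switch {
		case oid.Equal(oidDomainValidated):
			return "DV"
		case oid.Equal(oidOrganizationValidated):
			return "OV"
		}
	}
	return ""
}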

Heuristic Approach

Since the Baseline Requirements were only established this year it will take some time for the existing install base of certificates to be re-issued to use the Policy Identifiers called out above. This doesn’t mean you can’t tell the certificates apart today; it does mean it is quite a bit messier though.

Here is some pseudo-code, provided to me as an example by a friend, that they used in one of their projects:

type = null;

if (cert is self-signed) then
    type = SS;   /* SS = Self-signed */
else if (cert was issued by a known “CA”) then
    type = DV;   /* DV = Domain Validation */
else if (cert contains a known EV Policy OID) then
    type = EV;   /* EV = Extended Validation */
else if (cert “Subject O” and “Subject CN” are the same or
         “Subject OU” contains “Domain Control Validated”) then {
    if (cert contains no Subject L, St or PostalCode) then
        type = DV;
}
else if (cert “Subject O” is “Persona Not Validated” and
         the cert’s issuer was StartCom) then
    type = DV;

if (type is null)
    type = OV;

This logic is not comprehensive but should work well enough for most uses.

Summary

Unfortunately, today there is not a deterministic way to tell if a certificate was Domain or Organization Validated; that said, things are changing and within a few years hopefully it will be possible.

In the meantime there are heuristics you can use that help tell these types of certificates apart.

Windows XP and Name Constraints

Recently I blogged about how Windows XP processes Name Constraints a little differently than the RFC specifies — with the help of a friend I have a good set of examples of what would work and what would not work that illustrate what it does.

Assuming our Subject was:

C = US;S = Washington;L = Kirkland;O = GlobalSign;CN = globalsign.com

 

And that our Constraint was:

Permitted

     [1]Subtrees (0..Max):

          DNS Name=globalsign.com

We would see different results when validating a certificate on XP than we would on a later version of Windows.

Notice we did not include any directoryName attributes? That is supposed to mean that there are no constraints on the directoryName. On Windows XP, however, if you include a directory name in the subject there MUST be at least one Directory Name entry in the constraint to match against, otherwise it will not pass its Name Constraints check.

So if we instead made our constraint:

Permitted

     [1]Subtrees (0..Max):

          RFC822 Name=globalsign.com

     [3]Subtrees (0..Max):

          Directory Address:

          C = US

          S = Washington

          L = Kirkland

          O = Globalsign

Excluded=None

 

A certificate with any of the following subjects would match:

  • An empty DN, no RDNs
  • C = US
  • C = US;S = Washington
  • C = US;S = Washington;L = Kirkland
  • C = US;S = Washington;L = Kirkland;O = Globalsign
  • C = US;S = Washington;L = Kirkland;O = Globalsign;CN = globalsign.com
  • C = US;S = "";L = Kirkland;O = "";CN = globalsign.com

When XP processes the RDNs it starts with the first and progresses from there. You can’t skip an RDN. If an RDN is present it must match the entire RDN value or be empty.

As such the following wouldn’t match in our example:

  • S = Washington (Skipped the first RDN)
  • C = US;L = Kirkland (Skipped the second RDN)
  •  C = US;S = Washington;L = Kirkland;O = Globalsign Development Center (partial “O” value).
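To make the rule above a little more concrete, here is a rough model of the matching behavior as I understand it (a sketch of the rule in Go, not Windows’ actual implementation; the RDN type and function names are my own):

package nameconstraints

// RDN is a single relative distinguished name component, e.g. {"C", "US"}.
type RDN struct {
	Attr, Value string
}

// MatchesXPConstraint models the Windows XP behavior described above:
// RDNs are compared in order, none may be skipped or reordered, and any
// value the subject does carry must equal the constraint value exactly
// or be empty. RDNs beyond the end of the constraint (e.g. CN) are ignored.
func MatchesXPConstraint(constraint, subject []RDN) bool {
	n := len(subject)
	if len(constraint) < n {
		n = len(constraint)
	}
	for i := 0; i < n; i++ {
		if subject[i].Attr != constraint[i].Attr {
			return false // skipped or reordered RDN
		}
		if subject[i].Value != "" && subject[i].Value != constraint[i].Value {
			return false // partial or different value
		}
	}
	return true // includes the empty-subject case
}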

The prior blog post I mentioned talks about how an enterprise can work around this behavior (by setting some registry keys), but a public certificate issuer can too, for example by inserting just one RDN value in the constraint and ensuring the subordinate CA issues certificates with that RDN value.

This way a site can have the flexibility it wants to change its directory structure without re-issuing the certificate containing the Name Constraints.

Ryan