Category Archives: Security

How to get the latest stable OpenSSL, Apache and Nginx

Unfortunately many distributions are slow to pick up the most recent releases of these core software packages. I see many arguments for why this is not a problem, the most common being that enterprise distributions backport the most important security fixes, so it's not necessary to get more recent versions.

The problem with this argument is that sometimes security fixes are not patches but are in fact new features. TLS 1.2 is a great example of this: it has numerous security improvements that don't exist in earlier incarnations of the protocol, and older versions of OpenSSL simply do not support it.

Another argument I hear is that "make install" is so easy it doesn't really matter that distributions do not carry the latest packages because you can just build it yourself. This argument has several issues, one of which is that production systems should be hardened, with only the minimal binaries on them, to be supportable; bringing a development environment onto such a system is about as far from that as you can get.

So what is a server administrator to do? Thankfully there are several additional repositories available for Enterprise Linux distributions that offer stable and recent builds of the most commonly used packages, two such repositories are:

  1. EPEL (Extra Packages for Enterprise Linux)
  2. The IUS Community Project (used in the example below)

Between these two (depending on your OS version) you can probably get the most recent OpenSSL, Nginx, and Apache distributions and all the goodness they carry.

In my case I use CentOS 6.3; if you do as well, you can follow the steps here to add these repositories to your own systems.

Once you have added the repositories you can simply use the IUS replace plug-in to replace your current distribution of OpenSSL with the latest, for example:

yum replace openssl --replace-with=openssl10 --enablerepo=ius-testing
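The replace functionality comes from a separate yum plugin; if it is not already installed, something along these lines should work first (yum-plugin-replace is my assumption of the package name the IUS repository uses, so verify it against the repository you added):

yum install yum-plugin-replace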

That's it; now you can enable TLS 1.2 and any other modern TLS features carried by this build of OpenSSL. It is worth noting that, at the time of writing, this build does not include support for ECC and ECDH, which are required for forward secrecy with modern browsers; to get a version that supports these algorithms you will have to build your own.
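To sanity check that the replacement took, you can ask the new binary for its version and have it list TLS 1.2 cipher suites; a quick sketch (the output will vary by build):

openssl version
openssl ciphers -v 'TLSv1.2' | head -5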

Example Nginx SSL / TLS configuration

Configuring your server for SSL can be a little overwhelming. To help with this I am writing three posts (one each for Nginx, Apache and IIS) with example configurations that, to the extent possible, result in the same configuration regardless of which server you are using.

Let's start with Nginx. For this site:

  1. We are running nginx 1.4.1 and OpenSSL 1.0.1e.
  2. All static content is handled by Nginx.
  3. All dynamic content is handled by Node.js and Express.
  4. We use the X-Frame-Options header to help protect against clickjacking.
  5. We use the X-Content-Security-Policy header to help protect against cross-site scripting.
  6. All requests for content received over http are redirected to https.
  7. Once the user visits the https version of the site the Strict-Transport-Security header instructs the browser to always start with the https site.
  8. We have chosen SSL cipher suites to offer a blend of performance and security.
  9. We have disabled SSL v2 and v3 and enabled all versions of TLS.
  10. We have enabled OCSP stapling.
  11. We have enabled SSL session caching.
  12. We have put all certificates and keys into their own folder (certs.d/).
  13. We have set the owner of the certs.d folder to the account the server process runs as.
  14. We have restricted the certs.d folder and key files so only the owner can read and write (chmod 600); see the sketch just after this list.
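Here is a minimal sketch of the ownership and permission changes described in items 13 and 14. The absolute path and the nginx account name are assumptions (the configuration below references certs.d/ relative to the nginx prefix, commonly /etc/nginx), so adjust them to your layout; note that a directory needs the execute bit (700 rather than 600) to remain traversable by its owner.

mkdir -p /etc/nginx/certs.d
chown -R nginx:nginx /etc/nginx/certs.d
chmod 700 /etc/nginx/certs.d
chmod 600 /etc/nginx/certs.d/*.key /etc/nginx/certs.d/*.cer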

Here is the configuration file:

server {
    listen       80;
    server_name  example.com;

    # tell users to go to the SSL version this time
    if ($ssl_protocol = "") {
        rewrite ^ https://$server_name$request_uri? permanent;
    }
}

server {
    listen       443 ssl;
    server_name  example.com;

    # tell users to go to the SSL version next time
    add_header Strict-Transport-Security "max-age=15768000; includeSubdomains;";

    # tell the browser not to allow hosting in a frame
    add_header X-Frame-Options DENY;

    # tell the browser we can only talk to self and Google Analytics
    add_header X-Content-Security-Policy "default-src 'self'; script-src 'self' https://ssl.google-analytics.com; img-src 'self' https://ssl.google-analytics.com";

    ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;

    # ciphers chosen and ordered for a mix of performance, interoperability and security
    #ssl_ciphers  AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:RC4:HIGH:!MD5:!aNULL:!EDH;

    # ciphers chosen for security (drop RC4:HIGH if you are not worried about BEAST)
    #ssl_ciphers  RC4:HIGH:!aNULL:!MD5;

    # ciphers chosen for FIPS compliance
    #ssl_ciphers  !aNULL:!eNULL:FIPS@STRENGTH;

    # ciphers chosen for forward secrecy and compatibility
    ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";

    ssl_prefer_server_ciphers  on;
    ssl_certificate_key        certs.d/example.key;
    ssl_certificate            certs.d/example.cer;

    ssl_session_cache    shared:SSL:10m;
    ssl_session_timeout  10m;

    # enable OCSP stapling
    resolver 8.8.8.8;
    ssl_stapling on;
    ssl_trusted_certificate certs.d/example.cer;

    # let nginx handle the static resources
    location ~ ^/(htm/|html/|images/|img/|javascript/|js/|css/|stylesheets/|flash/|media/|static/|robots.txt|humans.txt|favicon.ico) {
        root /usr/share/nginx/example/public;
        access_log off;
        expires @30m;
    }

    # redirect to node for the dynamic stuff
    location / {
        proxy_pass http://localhost:8003;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;

        proxy_hide_header X-Powered-By;

        #proxy_redirect off;
        #proxy_set_header X-Real-IP          $remote_addr;
        #proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
        #proxy_set_header X-Forwarded-Proto  $scheme;
        #proxy_set_header X-NginX-Proxy      true;
    }

    error_page  404  /404.html;

    # redirect server error pages to the static page /50x.html
    error_page  500 502 503 504  /50x.html;

    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
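Once a configuration like this is in place it is worth verifying that it behaves as intended. A few quick checks along these lines can help (example.com is a placeholder for your own host, and the openssl flags assume a client of roughly the same vintage as the one discussed above; newer builds may not ship SSLv3 support at all):

# check the configuration parses cleanly, then reload
nginx -t && nginx -s reload

# confirm the security headers are being emitted
curl -sI https://example.com/ | grep -iE 'strict-transport-security|x-frame-options'

# confirm SSLv3 is rejected and TLS 1.2 is accepted
openssl s_client -connect example.com:443 -ssl3 < /dev/null
openssl s_client -connect example.com:443 -tls1_2 < /dev/null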

Average CRL size and download time

The other day I had a great conversation with Robert Duncan over at Netcraft, he showed me some reports they have made public about CRL and OCSP performance and uptime.

One thing I have been meaning to do is look at average CRL size across the various CAs in a more formal way, but I just never got around to it; conveniently, one of the Netcraft reports includes a column for CRL size. So while I was waiting for a meeting to start I decided to figure out what the average sizes were. I focused my efforts on the same CAs I include in the revocation report, and this is what I came up with:

 

CA                    Average CRL Size (KB)   Download Time @ 56 kbit/s (s)
Entrust                      512.33                     74.95
Verisign                     200.04                     29.26
GoDaddy                      173.79                     25.42
Comodo                       120.75                     17.66
Cybertrust/Verizon            75.00                     10.97
DigiCert                      21.66                      3.17
GlobalSign                    21.25                      3.11
Certum                        20.00                      2.93
StartSSL                       9.40                      1.38
TrendMicro                     1.00                      0.15

 

From this we can derive two charts, one for size and another for download time at 56 kbit/s (a connection speed still used by about 6% of internet users as of 2010):

[Chart: average CRL size by CA]  [Chart: CRL download time at 56 kbit/s by CA]
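For reference, the download time column appears to be straightforward arithmetic over the size column: kilobytes converted to bits, divided by 56 kbit/s. A quick sketch using the Entrust row as an example:

awk 'BEGIN { printf "%.2f\n", 512.33 * 1024 * 8 / 56000 }'
# prints 74.95 (seconds); real downloads would be slower still once protocol overhead is added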

 

I overlaid the red line at 10 seconds because that is the timeout most clients use to decide when to give up trying to download; some clients will continue trying in the background so that the CRL is already cached for the next request.

This threshold is very generous; after all, what user is going to hang around for 10 seconds while a CRL is downloaded? It gets worse though: the average chain contains more than three certificates, two of which need to have their status checked :/

This is one of the reasons we have soft-fail revocation checking: until the Baseline Requirements were published, inclusion of OCSP references was not mandatory, and not every CA was managing their CRLs to be downloadable within that 10 second threshold.

There are a few ways CAs can manage their CRL sizes; one of the most common is simply to roll new intermediate CAs when the CRL size gets unmanageable.

There is something you should understand about the data in the above charts: just because a CRL is published doesn't mean it represents active certificates. This is one of the reasons I had put off doing this exercise; I wanted to exclude that case by cross-referencing the signing CA with crawler data to see if active certificates were associated with each CRL.

This would exclude the cases where a CA was taken out of operation and all of the associated certificates were revoked as a precautionary exercise – this can happen.

So why did I bother posting this then? It's just a nice illustration of why we cannot generally rely on CRLs as a form of revocation checking. In fact, this is very likely why some browsers do not bother trying to download CRLs.

All posts like this should end with a call to action (I need to do better about that); in this case it is for CAs to review their revocation practices and how they make certificate status available, to ensure it is available in a fast and reliable manner.

How not to collect sensitive information

So I was chatting with a friend today about the recent Register article about TeliaSonera's application to add a new root into the Mozilla root program.

I could not recall ever visiting a site that used a certificate from TeliaSonera, so I started looking for one in my web crawler data; a few moments later my friend pointed me at Telia's own home page, where he noticed they were collecting user ids and passwords in a CSS layer not served over SSL:

[Screenshot: Telia's login form, served from a page not protected by SSL]

More interestingly they also accept your Social Security Number as an alternative to your user id:

[Screenshot: the same form accepting a Social Security Number in place of a user id]

Now, it does submit these over SSL, but if you're reading this you know that, because the form itself was delivered over plain HTTP, the information is still susceptible to a man-in-the-middle attack.

A quick look at its SSL configuration also shows the server has a number of problems:

  1. It is susceptible to denial of service because it supports client-initiated renegotiation.
  2. It is vulnerable to MITM because it supports insecure renegotiation.
  3. It supports weak and extra-weak (export-strength) ciphers.

 

The certificate itself also doesn't include OCSP pointers (which are required now under the Baseline Requirements, though it was issued before they became mandatory). More importantly, it only includes an LDAP reference to a CRL, which most clients won't chase and, if they did, most firewalls won't allow out; in other words it can't be meaningfully revoked by anyone other than the browsers.

More concerning, it is issued directly off of a root certificate authority, which suggests the root is likely an online CA rather than something kept in an offline vault. That implies all sorts of badness when you consider that not all browser patches get adopted and the only way to revoke a certificate like this is to patch the browser.

A look at revocation repository uptime

It is no secret that in the last two months GlobalSign was affected by outages relating to our use of CloudFlare. I won't go into the specifics behind those outages because the CloudFlare team does a great job of documenting their outages as well as working to make sure the mistakes of the past do not reoccur. With that said, we have been working closely with CloudFlare to ensure that our services are better isolated from their other customers and to optimize their network for the traffic our services generate.

I should add that I have a ton of faith in the CloudFlare team, these guys are knowledgeable, incredibly hard working and very self critical — I consider them great partners.

When looking at these events it is important to look at them holistically; for example, one of the outages was a result of mitigating what has been called the largest publicly announced DDoS in the history of the Internet.

While no downtime is acceptable, and I am embarrassed we have had any, it's also important to look at the positives that come from these events; for one, we have had an opportunity to test our mitigations for such events and improve them so that in the future we can withstand even larger attacks.

Additionally it's useful to look at the actual uptime these services have had and, to give those numbers some context, to look at them next to one of our peers. Thankfully I have this data as a result of the revocation report, which tracks performance and uptime every minute from 21 different network perspectives worldwide.

For 05/2012-12/2012 we see:

Service                                              Uptime (%)   Avg (ms)
GlobalSign/AlphaSSL OCSP                               100.00      101.29
VeriSign/Symantec/Thawte/GeoTrust/Trustcenter OCSP      99.92      319.40
GlobalSign/AlphaSSL CRL                                100.00       96.86
VeriSign/Symantec/Thawte/GeoTrust/Trustcenter CRL       99.97      311.42

 

For 01/2013 to 04/2013 we see:

Service                                              Uptime (%)   Avg (ms)
GlobalSign/AlphaSSL OCSP                                99.98       76.44
VeriSign/Symantec/Thawte/GeoTrust/Trustcenter OCSP      99.85      302.88
GlobalSign/AlphaSSL CRL                                 99.98       76.44
VeriSign/Symantec/Thawte/GeoTrust/Trustcenter CRL       99.22      296.97

NOTE: Symantec operates several different infrastructures; which one you hit depends on which brand you buy from and, in some cases, which product you buy. We operate only two brands, which share the same infrastructure. I averaged the results for each of their brands together to create these two tables. If you want to see the independent numbers, see the Excel document linked to this post.
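To make the uptime percentages a little more tangible, here is a rough conversion into minutes of downtime for the first reporting window. This is only a back-of-the-envelope sketch; it assumes the 05/2012-12/2012 window is roughly 245 days and ignores how the outages were actually distributed:

awk 'BEGIN { printf "%.0f\n", (100 - 99.92) / 100 * 245 * 24 * 60 }'
# prints 282, i.e. roughly 4.7 hours of downtime over eight months at 99.92% uptime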

 

As you can see no one is perfect; I don't share this to say our downtime is acceptable, because it is not, but instead to make it clear this is data we track and use to improve our services, and to show what the impact really was.

By the way if you want to see the data I used in the above computation you can download these spreadsheets.

Why we built the Revocation Report

For over a year I have been monitoring the industry’s largest OCSP and CRL repositories for performance and uptime. I started this project for a few reasons but to understand them I think it’s appropriate to start with why I joined GlobalSign.

If you’re reading this post you are likely aware of the last few years of attacks against public Certificate Authorities (CA). Though I am no stranger to this space, like you I was watching it all unfold from the outside as I was working at Microsoft in the Advertising division where I was responsible for Security Engineering for their platform.

I recall looking at the industry and feeling frustrated about how little had changed in the last decade, feeling like the Internet was evolving around the CA ecosystem, at least technologically. The focus seemed almost exclusively on policies, procedures and auditing, which are of course extremely important when you're in this business, but by themselves they are not a solution.

When I looked at the CA ecosystem there were a few players who I thought understood this; the one I felt got it the most was GlobalSign. Instead of joining the race to the bottom they were developing solutions to help with key management, certificate lifecycle management, and publishing guides to help customers deploy certificates cost effectively.

As a result, when they approached me with the opportunity to join them as their CTO and set the technology direction for the company, I was intrigued. Those of you who know me know I love data; I believe above all things that successful businesses (whether they recognize it or not) leverage the Define, Measure, Analyze, Improve and Control cycle to ensure they are solving the right problems and doing so effectively.

To that end, when I joined GlobalSign as their CTO I wanted market intelligence on what the status quo was for technology, operating practices and standards compliance, so that I could use it to adjust my own priorities as I planned where GlobalSign was going to focus its investments.

It was more than that though, as many of you know I am not new to PKI and especially not to revocation technologies having developed several products / features in this area as well as contributing to the associated standards over the years. I was always frustrated by many public certificate authorities’ inability or unwillingness to acknowledge the inadequacy of their revocation infrastructure and its contribution to slow TLS adoption and bad user agent behavior when it comes to revocation checking.

More directly, the reliability and performance of major CA operational infrastructure was why browsers had to implement what is now called "soft-fail" revocation behavior: treating a failure to check the status of a certificate the same as a successful check. Yet it is these same people who point fingers at the browsers when the security implications of this behavior are discussed.

I wanted to see that change.

This is why from the very beginning of this project I shared all the data I had with other CAs; my hope was they would use it to improve their infrastructure, but unfortunately, short of one or two smaller players, no one seemed concerned. I was shouting at the wind.

With the limited feedback I had received for the data I had been collecting I decided to put together what is now the revocation report. As part of this project I switched to a different monitoring provider (Monitis) because it gave me more control of what was being monitored and had a more complete API I could use to get at the data.

In parallel I began to work with CloudFlare to address what I felt was one barrier to optimally using a CDN to distribute OCSP responses (the inability to cache POSTs), all the while chronicling my experiences, thoughts and decisions on my blog so that others could learn from them and the industry as a whole could benefit.

When I set up the Monitis account I licensed the ability to monitor the top responders from 21 locations worldwide every minute. At first I just published the graphical reports that Monitis had but they had a few problems:

  1. They did not perform very well (at the time).
  2. It was not laid out in such a way you could see all the data at once (also now fixed).
  3. It did not exclude issues associated with their monitoring sensors.
  4. It gave no context to the data that was being presented.

This is what led me to work with Eli to build the revocation report we have today; the Internet now has a public view into approximately eleven months (and growing) of performance data for revocation repositories. Eli and I are also working on mining and quantizing the data so we can do something similar for responder uptime, but this has taken longer than expected due to other priorities; we will finish it though.

So the question at this point is "was the effort worth it?" I think so; both of us put a lot of time into this project, but I believe it has been a success for a few reasons:

  1. It allowed me to figure out how to improve our own revocation infrastructure; we now perform at about the same speed as gstatic.google.com for a similarly sized object which is what the bar should be.
  2. Both StartSSL and Entrust have now followed suit and made similar changes to their infrastructure improving their performance by about 3x (besting our performance by a few ms!).
  3. Symantec has improved their primary revocation repository performance by almost 40% and I understand more improvements are on the way.
  4. We are closer to having a data-based argument we can present to browsers about why they can and should re-enable hard-fail revocation checking by default.
  5. It gives customers visibility into the invisible performance hit associated with the decision of who you choose as your certificate provider.

What do you think? Do you find this valuable? Are there any other elements you think we should be tracking?

Certificate-based Mozilla Persona IdP

My name is David Margrave, I am a guest author on unmitigatedrisk.com.  I have worked in the security sphere for 20 years at various U.S. federal agencies, financial institutions, and retailers.  My interests include improving the state of client authentication on the Internet, which is an area that saw robust developments in the 1990s, then languished for a number of years as the Internet at large seemed content with reusable passwords and form-based authentication over SSL/TLS, but has received renewed scrutiny because of recent large scale data breaches and the veiled promise from the Federal government to ‘fix this mess or we will fix it for you’.

 

The Mozilla Persona project is a recent initiative to improve and standardize browser-based authentication.  For a long time (over 10 years) the most widely-used form of browser-based authentication has been based on HTML forms.  At its most basic level, a user will enter an identifier and reusable password into an HTML form, and submit the form in an HTTPS request to access a protected resource.  The server will receive these values, validate them, and typically return state information in an encrypted and encoded HTTP cookie.  Subsequent visits to the protected resource will send the cookie in the HTTP request, and the server will decrypt and validate the cookie before returning the protected content.   This entire exchange usually takes place over HTTPS, but in many instances the authentication cookie is used over an HTTP connection after initial authentication has completed successfully.  There are other forms of HTTP authentication and other previous attempts at standardization, but a quick survey of the largest retailers and financial institutions will show that HTML form-based authentication is still the most common by far.

 

Assuming that the implementers of these cookie schemes are competent amateur cryptographers and avoided the most glaring mistakes (see this paper by MIT researchers), all of these authentication schemes which rely on HTTP cookies suffer from the same critical flaw: an attacker who obtains the cookie value can impersonate the user.  The crucial problem is that HTML form-based authentication schemes have not been capable of managing cryptographic keying material on the client side.  More secure schemes such as Kerberos V5 use a ticket in conjunction with an accompanying session key, both of which are stored in a credentials cache.  In contrast to flawed cookie-based schemes, in the Kerberos V5 protocol a service ticket is useless to an adversary without the accompanying service ticket session key.  An authentication exchange in Kerberos V5 includes the service ticket, and a value encrypted with the service ticket session key, to prove possession.

There are some proprietary or enterprise-level solutions to this situation.  For example, Microsoft Internet Explorer and IIS have long had (for over 10 years) the capability to use HTTP Negotiate authentication and to use GSS-API with Kerberos V5 as the underlying mechanism.  The Apache web server has had the capability to accept HTTP Negotiate authentication for several years as well, but the adoption of these solutions on the Internet at large has not been widespread.  At a high level, the Mozilla Persona project improves this situation by bringing the credentials cache and cryptographic capabilities into the browser, and doing so in a standardized manner.  Although the underlying cryptographic algorithms may differ from the Kerberos V5 example, the importance of this project can't be overstated.

 

Persona introduces the concept of the Identity Data Provider (IdP).  The basic idea is that a domain owner is responsible for vouching for the identity of email addresses in that domain.  This could involve whatever scheme the domain owner wishes to implement.  If a domain does not implement an IdP, the Persona system will use its own default IdP, which uses the email verification scheme that all Internet users are familiar with: you prove your ability to receive email at a particular address.  When signing in to a website which uses Persona authentication, the user will be presented with a dialog window asking for the email address to use.

[Screenshot: the Persona sign-in dialog asking for an email address]

Behind the scenes, the Persona system determines which IdP to use to verify the address.  A domain implementing an IdP must publish some metadata (the public key, and provisioning and verification URLs), in JSON format, at the URL https://domain/.well-known/browserid.  The server at the URL must have a certificate from a trusted certificate authority, and the returned value must be properly-formatted JSON with certain required metadata information (described here).
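If you want to see what a given domain actually publishes, fetching the document directly is an easy first check; a quick sketch (example.com is a placeholder, and python is used only to pretty-print the JSON):

curl -s https://example.com/.well-known/browserid | python -m json.tool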

 

The author implemented an IdP at the domain margrave.com as a research exercise, borrowing from the NodeJS browserid-certifier project.  This particular IdP was written to accept X.509 client certificates issued by a commercial certificate authority, to extract the email address from the X.509 certificate, and issue a persona certificate with that email address. The .well-known/browserid file for node.margrave.com is shown here:

{
    "public-key": {"algorithm":"DS",
        "y":"aab45377fa024964a6b3339d107b91887adf85b96649b5b447a7ac7390866c92d88ed101f6525e717c0d703d5fd8727e0d1d8adb60bb80c7123730616c197326f1eed326fdfc136d7594ffce39a05005a433add8d3344813ea89f6e426d8f5b0bc0d3fdb59c8ec7c19583ba7f14d3636713b84c1ebe62a6866e9c2091def5c25aba967670eabc4591ee3f536006ce5c550265d4b2264c5a989abf908763b41014f35eb2949a0b027a1a1054203a3e13eeb1f16ffb171d6942405546a8407c3fb7e73227e432d150834054edc379de8f8988a8e3b102b70fe5b1164a28a4a453310313e00de1aa177f5ac2b73ef31670e16914607ba4196c06e57f7e5209bc7e4",
        "p":"d6c4e5045697756c7a312d02c2289c25d40f9954261f7b5876214b6df109c738b76226b199bb7e33f8fc7ac1dcc316e1e7c78973951bfc6ff2e00cc987cd76fcfb0b8c0096b0b460fffac960ca4136c28f4bfb580de47cf7e7934c3985e3b3d943b77f06ef2af3ac3494fc3c6fc49810a63853862a02bb1c824a01b7fc688e4028527a58ad58c9d512922660db5d505bc263af293bc93bcd6d885a157579d7f52952236dd9d06a4fc3bc2247d21f1a70f5848eb0176513537c983f5a36737f01f82b44546e8e7f0fabc457e3de1d9c5dba96965b10a2a0580b0ad0f88179e10066107fb74314a07e6745863bc797b7002ebec0b000a98eb697414709ac17b401",
        "q":"b1e370f6472c8754ccd75e99666ec8ef1fd748b748bbbc08503d82ce8055ab3b",
        "g":"9a8269ab2e3b733a5242179d8f8ddb17ff93297d9eab00376db211a22b19c854dfa80166df2132cbc51fb224b0904abb22da2c7b7850f782124cb575b116f41ea7c4fc75b1d77525204cd7c23a15999004c23cdeb72359ee74e886a1dde7855ae05fe847447d0a68059002c3819a75dc7dcbb30e39efac36e07e2c404b7ca98b263b25fa314ba93c0625718bd489cea6d04ba4b0b7f156eeb4c56c44b50e4fb5bce9d7ae0d55b379225feb0214a04bed72f33e0664d290e7c840df3e2abb5e48189fa4e90646f1867db289c6560476799f7be8420a6dc01d078de437f280fff2d7ddf1248d56e1a54b933a41629d6c252983c58795105802d30d7bcd819cf6ef"
    },
    "authentication": "/persona/sign_in.html",
    "provisioning": "/persona/provision.html"
}

 

The public key from the browserid file is the public portion of the key pair used by the IdP to certify users in the domain.  The fact that it must be served over a URL protected with a certificate issued from a trusted CA is how the Persona system builds on the existing trust infrastructure of the Internet, instead of attempting to re-implement its own from scratch, or requiring operators of websites relying on Persona authentication to establish shared secrets out-of-band.  The authentication and provisioning URLs are how browsers interact with the IdP.

 

In the Certificate-based IdP implemented at margrave.com, the page located at /persona/provision.html includes some JavaScript which does the following things: calls an AJAX API to get the email address from the certificate, receives the email address that the user entered in the Persona login dialog via a JavaScript callback, validates that they match, and calls another AJAX API to issue the certificate.  Note that the email address comparison performed in client-side JavaScript is purely for UI and troubleshooting purposes; the actual issuance of the Persona certificate uses the email address from the X.509 certificate (if the provisioning process progresses to that point), irrespective of what username was entered in the Persona login dialog.  The client-side validation of the email address is to provide the ability to troubleshoot scenarios where a user may choose the wrong certificate from the browser certificate dialog box, etc.  The client-side provisioning source code is shown below (ancillary AJAX error handling code is omitted).

 

function provision() {

  // Get the email from the cert by visiting a URL that requires client cert auth and returns our cert's email in a json response.
  // This is not strictly necessary, since the server will only issue persona certificates for the email address from the X.509 certificate,
  // but it is useful for troubleshooting, helping the user avoid choosing the wrong certificate from the browser dialog, etc.
  getEmailFromCert(function(emailFromCert) {
      if (emailFromCert) {
          navigator.id.beginProvisioning(function(emailFromPersona, certDuration) {
              if (emailFromPersona===emailFromCert) {
                  navigator.id.genKeyPair(function(publicKey) {
                      // generateServerSide makes an AJAX call to a URL that also requires client cert auth
                      generateServerSide(publicKey, certDuration, function (certificate) {
                          if (navigator.id && navigator.id.registerCertificate) {
                              //alert('registering certificate: ' + certificate);
                              navigator.id.registerCertificate(certificate);
                          }
                      });
                  });
              } else {
                  navigator.id.raiseProvisioningFailure('user is not authenticated as target user');
              }
          });
      } else {
          navigator.id.raiseProvisioningFailure('user does not have a valid X.509 certificate');
      }
  });
}

function generateServerSide(pubkey, duration, cb) {
    $.ajax({
        // Note that this URL requires SSL client certificate authentication,
        // and performs its own authorization on the email address from the certificate,
        // (for example, based on issuing CA or email address domain),
        // and so does not need the email address as an explicit input parameter
        url: "https://node.margrave.com/cert_key",
        type: "POST",
        global: false,
        data: {pubkey: pubkey,
               duration: duration},
        dataType: "json",
        success: function(response) {
                if (response.success && response.certificate) {
                    cb(response.certificate);
                }
            }
    });
    return false;
}

function getEmailFromCert(cb) {
        $.ajax({
            // Note that this URL requires SSL client certificate authentication,
            // and performs its own authorization on the email address from the certificate.
            url: "https://node.margrave.com/email",
            type: "POST",
            global: false,
            dataType: "json",
            success: function(response) {
                cb(response.email);
            }
        });
}

 

The other portion of a Persona IdP, the authentication URL, turned out not to be necessary in this case, because the authentication is implicit in the use of X.509 client certificate-authenticated AJAX calls.  Once the Persona certificate has been provisioned, the user is able to access the protected resource.  If things don’t work as expected, the error messages do not seem to bubble up to the UI dialog, and I had to resort to tracing XHR calls with Firebug to determine what went wrong.  In one case, it was a clock skew error that was corrected by installing ntpd on my IdP server.   In another case, one of my IdP AJAX calls may return an error but this error gets masked by a vague UI message.  It may be helpful to standardize HTTP return code and JSON field names to return descriptive error text to the Persona UI.

 

[Screenshot from 2013-04-10 13:15:32]

 

 

In its current form, this concept could be useful for enterprises, but not really for the Internet at large, since it requires a) that you have a client cert and b) that the IdP for your email domain is certificate-aware.  However, if the Persona default IdP were certificate-aware, or CAs were Persona-aware, then there are some interesting possibilities.

  1. The Persona default IdP could skip the verification email if a trusted X.509 client certificate is provided.  Possession of a certificate from a trusted CA implies the email address has already been verified, at a minimum.  The Persona system already relies on CA trust when retrieving .well-known/browserid; this idea extends CA trust to clients.
  2. Going the other direction: a CA could accept a Persona certificate from either a domain's IdP or from the Persona default IdP and use it to issue X.509 client certificates, or treat it as one part of the client certificate enrollment process (higher assurance certificates may verify more information than email, such as state-issued identification).  This idea seems promising because the email verification scheme is the wheel that everyone on the Internet has reimplemented, in many cases with security flaws.

 

Microsoft Root Program and Excel

The other day I was studying up on my Excel so I could help someone with a project; today I met with a friend to discuss the composition of root programs.

Since when you're a plumber you fix everything with a wrench, out came Excel, and since sometimes I can't let a problem sit still, I spent far too much time slicing and dicing the Microsoft Root Program membership list.

There is a ton more that can be done, for example:

  1. Root CA adoption relative to UN membership.
  2. Root CA certificates based on validity dates.
  3. Comparing the Microsoft Root Program membership to the Mozilla Root Program membership.
  4. CAs per network (using AIA OCSP URLs as an indicator).
  5. CA adoption of CDNs for OCSP and CRLs.
  6. A look at how many operational facilities are associated with each organization (for example, Symantec has 66 root CAs for their various brands; are they all in the same facilities?).

 

Anyway, it was a fun exercise and I thought I would share the results with you; here is the XLSX.

For those who do not want to look at the XLSX, here are some statistics you may find interesting:

  1. 31.30% of the CAs were owned explicitly by governments.
  2. 64.35% of the CAs were owned by commercial entities in the business of being third-party trust providers.
  3. The 352 certificates are owned by 115 organizations: 36 government, 74 commercial and 5 enterprise.
  4. The USA has the most organizations that own CAs, coming in at 13.04%.
  5. Spain is next, with 12.17% of the organizations owning CAs based there.
  6. France is number three, with 5.22% of the organizations owning CAs based there.
  7. 17.33% of the roots use 1024-bit keys.
  8. 60.23% of the roots use 2048-bit keys.
  9. 18.47% of the roots use 4096-bit keys.
  10. 1.70% of the roots use ECDSA384 keys.
  11. 115 of the 206 sovereign nations have CAs within their borders.

Advanced Troubleshooting of Certificate Validation Related Problems on Windows Part 2

One of the most complicated things to troubleshoot in X.509 is failures related to Name Constraints handling; there are a few ways to approach this, but one of the easiest is to use the Extended Error Information in the certificate viewer.

Let's walk through an exercise so you can try this on your own. First download this script; if you extract its contents and run makepki.bat you get a multi-level PKI that looks something like this:

[Diagram: the multi-level test PKI created by the script]

Before we begin it's useful to understand that Name Constraints are applied at the issuing CA level. In this case the constraints say that the CA is authoritative for any DNS or RFC 822 name within the example.com domain, and it also allows a specific base distinguished name (DN).
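For reference, in OpenSSL configuration terms this kind of restriction is expressed with the nameConstraints extension. The fragment below is only an illustrative sketch; the section names and DN values are my own and not what the script actually uses:

[ partner_ca_ext ]
basicConstraints = critical, CA:true
nameConstraints  = critical, permitted;DNS:example.com, permitted;email:example.com, permitted;dirName:permitted_dn

[ permitted_dn ]
C = US
O = Example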

In our example these constraints are applied by the “Test Partner CA” on to the “Customer CA”; you can see the restrictions in its certificate:

[Screenshot: the Name Constraints extension in the Customer CA certificate]

This script creates two certificates, one for email (user.cer) and another for SSL (www.cer). Normally the script makes both of these certificates fall within the example.com domain space, but for the purpose of this post I have modified the openssl.cfg to put the email certificate in the acme.com domain. The idea here is that since this domain is not included in the constraints, an RFC 3280 compliant chain engine will reject it, and we have an error to diagnose.

Normally we don't already know what the failure associated with a certificate chain is; after all, that's why we are debugging it. The process we use here will help us figure out the answer, because applications are notorious for not sharing why a certificate chain was rejected.

Since we can't rely on the applications, one way to figure this out on our own is to look at the chain outside of the application, in the Windows certificate viewer, to see if we can tell what the issue is.
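If you prefer a command-line cross-check, certutil can also build and validate a chain and dump the trust status for each element; something like the following should surface the name constraint failure for the bad certificate (the file names are the ones produced by the script, and the exact wording of the output varies by Windows version):

certutil -verify www.cer
certutil -verify user.cer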

Let’s start by looking at the “good” certificate:

[Screenshot: the certificate viewer showing the good certificate with no errors]

What’s important here is that we see no errors, but how do we know that? Well let’s look at the bad certificate and see what the difference is:

[Screenshot: the certificate viewer showing an error on the bad certificate]

The certificate clearly has a problem, and Windows has done a decent job of telling us what it is: there is a name in this certificate that is inconsistent with the Name Constraints associated with this certificate chain. The question is: which name?

To answer that question we need to look at the Name Constraints; remember that these live in the CA certificates, and they can be in any or all of them, which means we first need to figure out where the constraints are.

To do that we start at the "Certification Path" tab, where you see something like this:

[Screenshot: the Certification Path tab with a warning flag on one of the CA certificates]

Notice the little yellow flag; that's the certificate viewer saying this is where the problem is. Let's take a look at that certificate:

[Screenshot: the flagged CA certificate, showing no problem of its own]

But wait! There isn't a problem with this one; that's because this certificate is actually good, and it's the certificate it issued that is the problem. Here is the non-obvious part: let's look at the Details tab:

[Screenshot: the Details tab showing the Extended Error Information]

Here we see Windows telling us exactly what the problem was with the certificate this CA issued: it is the email address and its inclusion of the acme.com domain.

Remember that it was restricted, via the Name Constraints extension, to the example.com domain when issuing email certificates. The chain engine would be non-compliant with the associated RFC if it trusted this certificate.

But why did we see this Extended Error Information in this certificate and not the leaf itself? This is actually by design and makes sense when you think about it. Name Constraints can exist in each issuing CA certificate in the chain and what is happening here is that CryptoAPI is telling us that the issuer certificate that was flagged is the offending actor.

This Extended Error Information is available in the certificate viewer in a number of cases (for example revocation), though most of the interesting cases require the application to launch the certificate viewer, since that way you get to view the certificate chain (and all of the associated state) that it was using when it encountered the problem.

Advanced Troubleshooting of Certificate Validation Related Problems on Windows Part 1

The Windows platform for validating X.509 certificates has a feature I don’t see many discuss — its robust logging subsystem.

This allows non-developers (and developers) to get insight into applications' interactions with CryptoAPI 2 and, to some degree, what is happening inside those APIs.

To use this feature you must first enable it; the easiest way to do that is via the Event Viewer management console (eventvwr.msc). Once in there you navigate to the CryptoAPI 2 (CAPI2) node of the viewer:

[Screenshots: navigating Event Viewer to the CryptoAPI 2 (CAPI2) operational log]

 

Once you get there you select Properties on the operational log, which will give you a dialog that looks something like this:

[Screenshot: the properties dialog for the CAPI2 operational log]

 

Once you “Enable Logging” and “Apply” the changes, immediately all calls to CryptoAPI will be logged. CryptoAPI is used all the time so by the time you close that dialog you will have some events you can look at:

[Screenshot: the CAPI2 operational log populated with events]

 

The high-level view of each event doesn't tell you much; for example, in the above picture we really only know that the action being performed was a "Build Chain". This particular event corresponds to a call to CertGetCertificateChain.

To really understand what is going on you need to look at the Details tab:

[Screenshot: the Details tab of a Build Chain event]

 

Here we can see what was passed into that API call: which certificates the calling application passed in and what settings it chose when making the call:

[Screenshot: the certificates and flags passed to CertGetCertificateChain]

 

Some other things you can see in this particular API call include what the disposition of the call was and of course what certificate chain was built.

Every major API in CryptoAPI has logging similar to this; you get to see what was passed in and what came out. Additionally, major "objects" are sometimes logged as well; for example, here is an event showing a certificate that CryptoAPI was operating against:

[Screenshot: a certificate object logged by CryptoAPI]

 

Another very useful API to be able to look at is CertVerifyRevocation, which is logged in this fashion:

[Screenshot: a CertVerifyRevocation event]

 

Some of these events include references to temporary files, for example in this case you see:

[Screenshot: an event referencing a temporary file]

This is a CRL for the "GlobalSign PersonalSign 1 CA - G2"; it's stored in the Time Valid Object Cache (TvoCache). You can look at this cache with the "certutil -urlcache" command.
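For example, to list the CRLs currently in that cache, or to clear the cache entirely while testing, something along these lines works from a command prompt (double check the option spelling against certutil -? on your system):

certutil -urlcache CRL
certutil -urlcache * delete

The first command lists the cached CRL URLs; the second deletes everything in the URL cache.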

 

The events will all tell you which application was the caller and if they are part of a sequence of tasks you get enough information to correlate them and put them in order.

[Screenshot: correlated events showing the calling application and the sequence of tasks]

 

I won’t go through all of the events but as you can see this is super valuable when trying to figure out – Why did that application do that?!

As you might imagine this logging can slow things down and produces a bunch of data so be sure to turn it off when you are done.
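If you would rather script the enable/disable step (for example, to capture just a single repro and then turn logging back off), wevtutil can manage the same log from an elevated command prompt; a quick sketch:

wevtutil sl Microsoft-Windows-CAPI2/Operational /e:true
wevtutil qe Microsoft-Windows-CAPI2/Operational /f:text /c:5 /rd:true
wevtutil sl Microsoft-Windows-CAPI2/Operational /e:false
wevtutil cl Microsoft-Windows-CAPI2/Operational

The first line enables the operational log, the second dumps the five most recent events as text, the third disables logging again, and the last clears out what was captured.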