How not to collect sensitive information

So I was chatting with a friend today about the recent Register article on TeliaSonera's application to add a new root to the Mozilla root program.

I could not recall ever visiting a site that used a certificate from TeliaSonera, so I started looking around for one in my web crawler data. A few moments later my friend pointed me at Telia's own home page, where he noticed they were collecting user IDs and passwords from a CSS layer not served over SSL:

image

More interestingly, they also accept your Social Security Number as an alternative to your user ID:

image

Now, it does submit these over SSL, but if you're reading this you know the information is still susceptible to a man-in-the-middle attack: since the login form itself is delivered over plain HTTP, an attacker who can tamper with that page can change where the credentials get submitted.

A quick look at its SSL configuration also shows the server has a number of problems:

1. Susceptible to denial of service because it supports client-initiated renegotiation

2. Vulnerable to MITM because it supports insecure renegotiation

3. Supports weak and extra weak (export strength) ciphers

 

The certificate itself also doesn't include OCSP pointers (which are required now under the Baseline Requirements, though it was issued before they became mandated). More importantly, it only includes an LDAP reference to a CRL, which most clients won't chase and, even if they did, most firewalls won't allow out. In other words, it can't be meaningfully revoked by anyone other than the browsers.

More concerning, it is issued directly off of a root certificate authority, which means the root is likely an online CA rather than something kept in an offline vault. That means all sorts of badness when you consider that not all browser patches get adopted and that the only way to revoke a certificate like this is to patch the browser.
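If you want to check a certificate's revocation pointers yourself, here is a minimal sketch of the kind of inspection described above. It is not the tooling I used for the crawler data; it assumes Python with the "cryptography" package installed, and the host name is a placeholder for the site you want to look at.

import ssl
from cryptography import x509
from cryptography.x509.oid import ExtensionOID

HOST = "www.example.com"  # placeholder, substitute the server you want to inspect

# Fetch the server's certificate and parse it
pem = ssl.get_server_certificate((HOST, 443))
cert = x509.load_pem_x509_certificate(pem.encode())

# Authority Information Access: this is where an OCSP pointer would live
try:
    aia = cert.extensions.get_extension_for_oid(
        ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
    for desc in aia:
        print("AIA:", desc.access_method, desc.access_location.value)
except x509.ExtensionNotFound:
    print("No AIA extension, so no OCSP responder pointer")

# CRL Distribution Points: an ldap:// URI here is what most clients won't chase
try:
    cdp = cert.extensions.get_extension_for_oid(
        ExtensionOID.CRL_DISTRIBUTION_POINTS).value
    for dp in cdp:
        for name in (dp.full_name or []):
            print("CRL:", name.value)
except x509.ExtensionNotFound:
    print("No CRL distribution points")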

A look at revocation repository uptime

It is no secret that in the last two months GlobalSign was affected by outages relating to our use of CloudFlare. I won't go into the specifics behind those outages because the CloudFlare team does a great job of documenting their outages as well as working to make sure the mistakes of the past do not reoccur. With that said, we have been working closely with CloudFlare to ensure that our services are better isolated from their other customers and to optimize their network for the traffic our services generate.

I should add that I have a ton of faith in the CloudFlare team; these guys are knowledgeable, incredibly hard-working and very self-critical. I consider them great partners.

When looking at these events it is important to look at them holistically; for example, one of the outages was a result of mitigating what has been called the largest publicly announced DDoS in the history of the Internet.

While no downtime is acceptable, and I am embarrassed we have had any, it's also important to look at the positives that come from these events. For one, we have had an opportunity to test our mitigations for such events and improve them so that in the future we can withstand even larger attacks.

Additionally, it's useful to look at the actual uptime these services have had and, to give those numbers some context, to look at them next to one of our peers. Thankfully I have this data as a result of the revocation report, which tracks performance and uptime from 21 different network perspectives worldwide, every minute.

For 05/2012-12/2012 we see:

Service                                              Uptime (%)   Avg (ms)
GlobalSign/AlphaSSL OCSP                             100.00       101.29
VeriSign/Symantec/Thawte/GeoTrust/Trustcenter OCSP    99.92       319.40
GlobalSign/AlphaSSL CRL                              100.00        96.86
VeriSign/Symantec/Thawte/GeoTrust/Trustcenter CRL     99.97       311.42

 

For 01/2013 to 04/2013 we see:

Service                                              Uptime (%)   Avg (ms)
GlobalSign/AlphaSSL OCSP                              99.98        76.44
VeriSign/Symantec/Thawte/GeoTrust/Trustcenter OCSP    99.85       302.88
GlobalSign/AlphaSSL CRL                               99.98        76.44
VeriSign/Symantec/Thawte/GeoTrust/Trustcenter CRL     99.22       296.97

NOTE: Symantec operates several different infrastructures; which one you hit depends on which brand you buy from and, in some cases, which product you buy. We operate only two brands, which share the same infrastructure. I averaged the results for each of their brands together to create these two tables. If you want to see the independent numbers, see the Excel document linked to this post.

 

As you can see, no one is perfect. I don't share this to say our downtime is acceptable, because it is not, but to make it clear that this is data we track and use to improve our services, and to show what the impact really was.

By the way, if you want to see the data I used in the above computation you can download these spreadsheets.
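For context, the arithmetic behind the tables above is straightforward. Here is a rough sketch of it in Python; it assumes a simplified CSV export with one row per one-minute probe and columns named "status" and "response_ms", which is not the actual Monitis export format, just an illustration of the computation.

import csv

def summarize(path):
    total = ok = 0
    latency_sum = 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            if row["status"] == "OK":
                ok += 1
                latency_sum += float(row["response_ms"])
    uptime_pct = 100.0 * ok / total
    avg_ms = latency_sum / ok if ok else float("nan")
    return round(uptime_pct, 2), round(avg_ms, 2)

# hypothetical file name for one responder's samples
print(summarize("globalsign_alphassl_ocsp.csv"))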

Why we built the Revocation Report

For over a year I have been monitoring the industry’s largest OCSP and CRL repositories for performance and uptime. I started this project for a few reasons but to understand them I think it’s appropriate to start with why I joined GlobalSign.

If you’re reading this post you are likely aware of the last few years of attacks against public Certificate Authorities (CA). Though I am no stranger to this space, like you I was watching it all unfold from the outside as I was working at Microsoft in the Advertising division where I was responsible for Security Engineering for their platform.

I recall looking at the industry and feeling frustrated about how little had changed in the last decade, feeling like the Internet was evolving around the CA ecosystem, at least technologically. The focus seemed almost exclusively on policies, procedures and auditing, which are of course extremely important when you're in this business, but by themselves they are not a solution.

When I looked at the CA ecosystem there were a few players who I thought understood this; the one I felt got it the most was GlobalSign. Instead of joining the race to the bottom they were developing solutions to help with key management, certificate lifecycle management, and publishing guides to help customers deploy certificates cost effectively.

As a result, when they approached me with the opportunity to join them as their CTO and set the technology direction for the company, I was intrigued. Those of you who know me know I love data; I believe above all things that successful businesses (whether they recognize it or not) leverage the Define, Measure, Analyze, Improve and Control cycle to ensure they are solving the right problems and doing so effectively.

To that end, when I joined GlobalSign as their CTO I wanted market intelligence on what the status quo was for technology, operating practices and standards compliance, so that I could use it to adjust my own priorities as I planned where GlobalSign was going to focus its investments.

It was more than that though, as many of you know I am not new to PKI and especially not to revocation technologies having developed several products / features in this area as well as contributing to the associated standards over the years. I was always frustrated by many public certificate authorities’ inability or unwillingness to acknowledge the inadequacy of their revocation infrastructure and its contribution to slow TLS adoption and bad user agent behavior when it comes to revocation checking.

More directly, the reliability and performance of major CA operational infrastructure was why browsers had to implement what is now called "soft-fail" revocation behavior: treating a failure to check the status of a certificate the same as a successful check. Yet it is these same people who point fingers at the browsers when the security implications of this behavior are discussed.

I wanted to see that change.

This is why from the very beginning of this project I shared all the data I had with other CAs. My hope was they would use it to improve their infrastructure, but unfortunately, aside from one or two smaller players, no one seemed concerned; I was shouting into the wind.

Given the limited feedback I had received on the data I had been collecting, I decided to put together what is now the revocation report. As part of this project I switched to a different monitoring provider (Monitis) because it gave me more control over what was being monitored and had a more complete API I could use to get at the data.

In parallel I began to work with CloudFlare to address what I felt was one barrier to optimally using a CDN to distribute OCSP responses: the inability to cache POSTs. The whole time I chronicled my experiences, thoughts and decisions on my blog so that others could learn from my experience and the industry as a whole could benefit.
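To make the caching point concrete: RFC 2560 (and the lightweight profile in RFC 5019) allow an OCSP request to be sent as an HTTP GET of the base64-encoded request appended to the responder URL, and unlike a POST that is something a CDN edge can cache. Here is a sketch of building such a URL; it assumes Python with the "cryptography" package, the file names are placeholders, and the responder URL would really come from the certificate's AIA extension.

import base64
import urllib.parse
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp

# placeholders: the certificate being checked and its issuing CA certificate
cert = x509.load_pem_x509_certificate(open("leaf.pem", "rb").read())
issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

builder = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1())
req_der = builder.build().public_bytes(serialization.Encoding.DER)

responder = "http://ocsp.example.com"  # placeholder: take this from the AIA extension
url = responder + "/" + urllib.parse.quote(base64.b64encode(req_der).decode())

# This URL can be fetched with a plain, cache-friendly HTTP GET
print(url)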

When I set up the Monitis account I licensed the ability to monitor the top responders from 21 locations worldwide every minute. At first I just published the graphical reports that Monitis had but they had a few problems:

  1. They did not perform very well (at the time).
  2. It was not laid out in such a way you could see all the data at once (also now fixed).
  3. It did not exclude issues associated with their monitoring sensors.
  4. It gave no context to the data that was being presented.

This is what led me to work with Eli to build the revocation report we have today; the Internet now has a public view into approximately eleven months (and growing) of performance data for revocation repositories. Eli and I are also working on mining and quantizing the data so we can do something similar for responder uptime, but this has taken longer than expected due to other priorities. We will finish it though.

So the question at this point is "was the effort worth it?" I think so; both of us put a lot of time into this project, but I believe it's been a success for a few reasons:

  1. It allowed me to figure out how to improve our own revocation infrastructure; we now perform at about the same speed as gstatic.google.com for a similarly sized object which is what the bar should be.
  2. Both StartSSL and Entrust have now followed suit and made similar changes to their infrastructure improving their performance by about 3x (besting our performance by a few ms!).
  3. Symantec has improved their primary revocation repository performance by almost 40% and I understand more improvements are on the way.
  4. We are closer to having a data-based argument we can present to browsers about why they can and should re-enable hard-fail revocation checking by default.
  5. It gives customers visibility into the otherwise invisible performance hit associated with their choice of certificate provider.

What do you think? Do you find this valuable? Are there any other elements you think we should be tracking?

Certificate-based Mozilla Persona IdP

My name is David Margrave, I am a guest author on unmitigatedrisk.com.  I have worked in the security sphere for 20 years at various U.S. federal agencies, financial institutions, and retailers.  My interests include improving the state of client authentication on the Internet, which is an area that saw robust developments in the 1990s, then languished for a number of years as the Internet at large seemed content with reusable passwords and form-based authentication over SSL/TLS, but has received renewed scrutiny because of recent large scale data breaches and the veiled promise from the Federal government to ‘fix this mess or we will fix it for you’.

 

The Mozilla Persona project is a recent initiative to improve and standardize browser-based authentication.  For a long time (over 10 years) the most widely-used form of browser-based authentication has been based on HTML forms.  At its most basic level, a user will enter an identifier and reusable password into an HTML form, and submit the form in an HTTPS request to access a protected resource.  The server will receive these values, validate them, and typically return state information in an encrypted and encoded HTTP cookie.  Subsequent visits to the protected resource will send the cookie in the HTTP request, and the server will decrypt and validate the cookie before returning the protected content.   This entire exchange usually takes place over HTTPS, but in many instances the authentication cookie is used over an HTTP connection after initial authentication has completed successfully.  There are other forms of HTTP authentication and other previous attempts at standardization, but a quick survey of the largest retailers and financial institutions will show that HTML form-based authentication is still the most common by far.
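To make the above concrete, here is a minimal sketch of the kind of cookie scheme being described; it is not any particular site's implementation, just an illustration in Python of a server issuing a MAC-protected token after a successful form login and validating it on later requests.

import hashlib
import hmac
import time

SERVER_KEY = b"a-secret-known-only-to-the-server"  # placeholder

def issue_cookie(user_id, ttl_seconds=3600):
    # called after the posted form credentials have been validated
    expires = str(int(time.time()) + ttl_seconds)
    payload = user_id + "|" + expires
    mac = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "|" + mac

def validate_cookie(cookie):
    # called on subsequent requests that present the cookie
    try:
        user_id, expires, mac = cookie.rsplit("|", 2)
    except ValueError:
        return None
    expected = hmac.new(SERVER_KEY, (user_id + "|" + expires).encode(),
                        hashlib.sha256).hexdigest()
    if hmac.compare_digest(mac, expected) and int(expires) > time.time():
        return user_id
    return None

# Anyone who captures this string can replay it until it expires,
# which is the weakness the next paragraph contrasts with Kerberos.
print(validate_cookie(issue_cookie("alice@example.com")))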

 

Assuming that the implementers of these cookie schemes are competent amateur cryptographers and avoided the most glaring mistakes (see this paper by MIT researchers), all of these authentication schemes which rely on HTTP cookies suffer from the same critical flaw:  An attacker who obtains the cookie value can impersonate the user.  The crucial problem is that HTML form-based authentication schemes have not been capable of managing cryptographic keying material on the client side.  More secure schemes such as Kerberos V5 use a ticket in conjunction with an accompanying session key, both of which are stored in a credentials cache.  In contrast to flawed cookie-based schemes, in the Kerberos V5 protocol a service ticket is useless to an adversary without the accompanying service ticket session key.  An authentication exchange in Kerberos V5 includes the service ticket, and a value encrypted with the service ticket session key, to prove possession.

There are some proprietary or enterprise-level solutions to this situation.  For example, Microsoft Internet Explorer and IIS have long had (for over 10 years) the capability to use HTTP Negotiate authentication and to use GSS-API with Kerberos V5 as the underlying mechanism.  The Apache web server has had the capability to accept HTTP Negotiate authentication for several years as well, but the adoption of these solutions on the Internet at large has not been widespread.  At a high level, the Mozilla Persona project improves this situation by bringing the credentials cache and cryptographic capabilities into the browser, and doing so in a standardized manner.  Although the underlying cryptographic algorithms may differ from the Kerberos V5 example, the importance of this project can't be overstated.

 

Persona introduces the concept of the Identity Provider (IdP).  The basic idea is that a domain owner is responsible for vouching for the identity of email addresses in that domain.  This could involve whatever scheme the domain owner wishes to implement.  If a domain does not implement an IdP, the Persona system will use its own default IdP, which uses the email verification scheme that all Internet users are familiar with: you prove your ability to receive email at a particular address.  When signing in to a website which uses Persona authentication, the user will be presented with a dialog window asking for the email address to use.

Screenshot from 2013-04-10 13:14:26

Behind the scenes, the Persona system determines which IdP to use to verify the address.  A domain implementing an IdP must publish some metadata (the public key, and provisioning and verification URLs), in JSON format, at the URL https://domain/.well-known/browserid.  The server at the URL must have a certificate from a trusted certificate authority, and the returned value must be properly-formatted JSON with certain required metadata information (described here).
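As a rough illustration of what that metadata check amounts to, the sketch below fetches a domain's support document and confirms the required fields are present. It uses only the Python standard library; the domain is a placeholder, and the HTTPS request urllib makes is what enforces the trusted-CA requirement mentioned above.

import json
import urllib.request

DOMAIN = "example.com"  # placeholder for the IdP's domain

with urllib.request.urlopen("https://" + DOMAIN + "/.well-known/browserid") as resp:
    doc = json.load(resp)

# the required metadata: the IdP public key plus the two URLs browsers use
for field in ("public-key", "authentication", "provisioning"):
    if field not in doc:
        raise ValueError("browserid document is missing required field: " + field)

print("IdP metadata for", DOMAIN, "looks well formed:", sorted(doc))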

 

The author implemented an IdP at the domain margrave.com as a research exercise, borrowing from the NodeJS browserid-certifier project.  This particular IdP was written to accept X.509 client certificates issued by a commercial certificate authority, to extract the email address from the X.509 certificate, and issue a persona certificate with that email address. The .well-known/browserid file for node.margrave.com is shown here:

{
    "public-key": {"algorithm":"DS",
        "y":"aab45377fa024964a6b3339d107b91887adf85b96649b5b447a7ac7390866c92d88ed101f6525e717c0d703d5fd8727e0d1d8adb60bb80c7123730616c197326f1eed326fdfc136d7594ffce39a05005a433add8d3344813ea89f6e426d8f5b0bc0d3fdb59c8ec7c19583ba7f14d3636713b84c1ebe62a6866e9c2091def5c25aba967670eabc4591ee3f536006ce5c550265d4b2264c5a989abf908763b41014f35eb2949a0b027a1a1054203a3e13eeb1f16ffb171d6942405546a8407c3fb7e73227e432d150834054edc379de8f8988a8e3b102b70fe5b1164a28a4a453310313e00de1aa177f5ac2b73ef31670e16914607ba4196c06e57f7e5209bc7e4",
        "p":"d6c4e5045697756c7a312d02c2289c25d40f9954261f7b5876214b6df109c738b76226b199bb7e33f8fc7ac1dcc316e1e7c78973951bfc6ff2e00cc987cd76fcfb0b8c0096b0b460fffac960ca4136c28f4bfb580de47cf7e7934c3985e3b3d943b77f06ef2af3ac3494fc3c6fc49810a63853862a02bb1c824a01b7fc688e4028527a58ad58c9d512922660db5d505bc263af293bc93bcd6d885a157579d7f52952236dd9d06a4fc3bc2247d21f1a70f5848eb0176513537c983f5a36737f01f82b44546e8e7f0fabc457e3de1d9c5dba96965b10a2a0580b0ad0f88179e10066107fb74314a07e6745863bc797b7002ebec0b000a98eb697414709ac17b401",
        "q":"b1e370f6472c8754ccd75e99666ec8ef1fd748b748bbbc08503d82ce8055ab3b",
        "g":"9a8269ab2e3b733a5242179d8f8ddb17ff93297d9eab00376db211a22b19c854dfa80166df2132cbc51fb224b0904abb22da2c7b7850f782124cb575b116f41ea7c4fc75b1d77525204cd7c23a15999004c23cdeb72359ee74e886a1dde7855ae05fe847447d0a68059002c3819a75dc7dcbb30e39efac36e07e2c404b7ca98b263b25fa314ba93c0625718bd489cea6d04ba4b0b7f156eeb4c56c44b50e4fb5bce9d7ae0d55b379225feb0214a04bed72f33e0664d290e7c840df3e2abb5e48189fa4e90646f1867db289c6560476799f7be8420a6dc01d078de437f280fff2d7ddf1248d56e1a54b933a41629d6c252983c58795105802d30d7bcd819cf6ef"
    },
    "authentication": "/persona/sign_in.html",
    "provisioning": "/persona/provision.html"
}

 

The public key from the browserid file is the public portion of the key pair used by the IdP to certify users in the domain.  The fact that it must be served over a URL protected with a certificate issued from a trusted CA is how the Persona system builds on the existing trust infrastructure of the Internet, instead of attempting to re-implement its own from scratch or requiring operators of websites relying on Persona authentication to establish shared secrets out-of-band.  The authentication and provisioning URLs are how browsers interact with the IdP.

 

In the Certificate-based IdP implemented at margrave.com, the page located at /persona/provision.html includes some javascript which does the following things:  calls an AJAX API to get the email address from the certificate, receives the email address that the user entered in the Persona login dialog via a javascript callback, validates that they match, and calls another AJAX API to issue the certificate.  Note that the email address comparison performed in client-side javascript is purely for UI and troubleshooting purposes; the actual issuance of the Persona certificate uses the email address from the X.509 certificate (if the provisioning process progresses to that point), irrespective of what username was entered in the Persona login dialog.  The client-side validation of the email address is to provide the ability to troubleshoot scenarios where a user may choose the wrong certificate from the browser certificate dialog box, etc.  The client-side provisioning source code is shown below (ancillary AJAX error handling code is omitted).

 

function provision() {

  // Get the email from the cert by visiting a URL that requires client cert auth and returns our cert's email in a json response.
  // This is not strictly necessary, since the server will only issue persona certificates for the email address from the X.509 certificate,
  // but it is useful for troubleshooting, helping the user avoid choosing the wrong certificate from the browser dialog, etc.
  getEmailFromCert(function(emailFromCert) {
      if (emailFromCert) {
          navigator.id.beginProvisioning(function(emailFromPersona, certDuration) {
              if (emailFromPersona===emailFromCert) {
                  navigator.id.genKeyPair(function(publicKey) {
                      // generateServerSide makes an AJAX call to a URL that also requires client cert auth
                      generateServerSide(publicKey, certDuration, function (certificate) {
                          if (navigator.id && navigator.id.registerCertificate) {
                              //alert('registering certificate: ' + certificate);
                              navigator.id.registerCertificate(certificate);
                          }
                      });
                  });
              } else {
                  navigator.id.raiseProvisioningFailure('user is not authenticated as target user');
              }
          });
      } else {
          navigator.id.raiseProvisioningFailure('user does not have a valid X.509 certificate');
      }
  });
}

function generateServerSide(pubkey, duration, cb) {
    $.ajax({
        // Note that this URL requires SSL client certificate authentication,
        // and performs its own authorization on the email address from the certificate,
        // (for example, based on issuing CA or email address domain),
        // and so does not need the email address as an explicit input parameter
        url: "https://node.margrave.com/cert_key",
        type: "POST",
        global: false,
        data: {pubkey: pubkey,
               duration: duration},
        dataType: "json",
        success: function(response) {
                if (response.success && response.certificate) {
                    cb(response.certificate);
                }
            }
    });
    return false;
}

function getEmailFromCert(cb) {
        $.ajax({
            // Note that this URL requires SSL client certificate authentication,
            // and performs its own authorization on the email address from the certificate.
            url: "https://node.margrave.com/email",
            type: "POST",
            global: false,
            dataType: "json",
            success: function(response) {
                cb(response.email);
            }
        });
}

 

The other portion of a Persona IdP, the authentication URL, turned out not to be necessary in this case, because the authentication is implicit in the use of X.509 client certificate-authenticated AJAX calls.  Once the Persona certificate has been provisioned, the user is able to access the protected resource.  If things don’t work as expected, the error messages do not seem to bubble up to the UI dialog, and I had to resort to tracing XHR calls with Firebug to determine what went wrong.  In one case, it was a clock skew error that was corrected by installing ntpd on my IdP server.   In another case, one of my IdP AJAX calls may return an error but this error gets masked by a vague UI message.  It may be helpful to standardize HTTP return code and JSON field names to return descriptive error text to the Persona UI.

 

Screenshot from 2013-04-10 13:15:32

 

 

In its current form, this concept could be useful for enterprises, but not really for the Internet at large, since it requires a) that you have a client cert and b) that the IdP for your email domain is certificate-aware.  However, if the Persona default IdP were certificate-aware, or CAs were Persona-aware, then there are some interesting possibilities.

  1. The Persona default IdP could skip the verification email if a trusted X.509 client certificate is provided.   Possession of a certificate from a trusted CA implies the email address has already been verified, at a minimum.  The Persona system already relies on CA trust when retrieving .well-known/browserid; this idea extends that trust to clients.
  2. Going the other direction: a CA could accept a Persona certificate from either a domain's IdP or from the Persona default IdP and use it to issue X.509 client certificates, or as one part of the client certificate enrollment process (higher assurance certificates may verify more information than email, such as state-issued identification).  This idea seems promising because the email verification scheme is the wheel that everyone on the Internet has reimplemented, in many cases with security flaws.

 

Microsoft Root Program and Excel

The other day I was studying up on my Excel so I could help someone with a project; today I met with a friend to discuss the composition of root programs.

Since when you're a plumber you fix everything with a wrench, out came Excel, and since sometimes I can't let a problem sit still, I spent far too much time slicing and dicing the Microsoft Root Program membership list.

There is a ton more that can be done, for example:

  1. Root CA adoption relative to UN membership.
  2. Root CA certificates based on validity dates.
  3. Comparing the Microsoft Root Program membership to the Mozilla Root Program membership.
  4. CAs per network (using AIA:OCSP urls as an indicator)
  5. CA adoption of CDNs for OCSP and CRLs.
  6. A look at how many operational facilities are associated with each organization (for example, Symantec has 66 root CAs for their various brands; are they all in the same facilities?).

 

Anyway, it was a fun exercise and I thought I would share the results with you; here is the XLSX.

For those who do not want to dig through the XLSX, here are some statistics you may find interesting (a short sketch of how figures like these can be computed from the spreadsheet follows the list):

  1. 31.30% of the CAs were owned explicitly by governments.
  2. 64.35% of the CAs were owned by commercial entities in the business of being third-party trust providers.
  3. The 352 certificates are owned by 115 organizations: 36 government, 74 commercial and 5 enterprise.
  4. The USA has the most organizations that own CAs, coming in at 13.04%.
  5. Spain is next, with 12.17% of the organizations owning CAs based there.
  6. France is number three, with 5.22% of the organizations owning CAs based there.
  7. 17.33% of the roots use 1024bit keys.
  8. 60.23% of the roots use 2048bit keys.
  9. 18.47% of the roots use 4096bit keys.
  10. 1.70% of the roots use ECDSA384 keys.
  11. 115 of the 206 sovereign nations have CAs within their borders.
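In case you want to slice the data differently, here is a sketch of how figures like these can be pulled out of the spreadsheet with Python and pandas. The file name and column names ("Organization", "Owner Type", "Key Type", "Country") are assumptions for illustration, not the actual headers in the workbook.

import pandas as pd

df = pd.read_excel("microsoft_root_program.xlsx")  # hypothetical file name

# share of root CAs by owner type (government / commercial / enterprise)
print(df["Owner Type"].value_counts(normalize=True).mul(100).round(2))

# distribution of key algorithms and sizes across the roots
print(df["Key Type"].value_counts(normalize=True).mul(100).round(2))

# which countries are home to the most organizations owning roots
orgs = df.drop_duplicates(subset="Organization")
print(orgs["Country"].value_counts(normalize=True).mul(100).round(2).head(5))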

Advanced Troubleshooting of Certificate Validation Related Problems on Windows Part 2

One of the most complicated things to troubleshoot in X.509 is failures related to Name Constraints handling. There are a few ways to approach this, but one of the easiest is to use the Extended Error Information in the Certificate Viewer.

Let's walk through an exercise so you can try using this on your own. First download this script; if you extract its contents and run makepki.bat you get a multi-level PKI that looks something like this:

clip_image002

Before we begin it's useful to understand that Name Constraints are applied at the issuing CA level. In this case the constraints say that the CA is authoritative for any DNS or RFC822 name within the example.com domain; they also allow a specific base distinguished name (DN).

In our example these constraints are applied by the “Test Partner CA” on to the “Customer CA”; you can see the restrictions in its certificate:

clip_image003
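If you would rather inspect the same restrictions programmatically than squint at the dialog, here is a sketch that prints the Name Constraints extension from the CA certificate. It assumes Python with the "cryptography" package and that you have exported the Customer CA certificate from the generated PKI as a DER-encoded customerca.cer; adjust the loader if yours is PEM.

from cryptography import x509
from cryptography.x509.oid import ExtensionOID

with open("customerca.cer", "rb") as f:
    ca = x509.load_der_x509_certificate(f.read())

nc = ca.extensions.get_extension_for_oid(ExtensionOID.NAME_CONSTRAINTS).value

print("Permitted subtrees:")
for name in (nc.permitted_subtrees or []):
    print("  ", type(name).__name__, name.value)

print("Excluded subtrees:")
for name in (nc.excluded_subtrees or []):
    print("  ", type(name).__name__, name.value)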

This script creates two certificates, one for email (user.cer) and another for SSL (www.cer). Normally the script makes both of these certificates fall within the example.com domain space, but for the purpose of this post I have modified the openssl.cfg to put the email certificate in the acme.com domain. The idea here is that since this domain is not included in the constraints, an RFC 3280 compliant chain engine will reject it and we have an error to diagnose.

Normally we don't already know what the failure associated with a certificate chain is; after all, that's why we are debugging it. The process we use here will help us figure out the answer, because applications are notorious for not sharing why a certificate chain was rejected.

Since we can't rely on the applications, one way to figure this out on our own is to look at the chain outside of the application, in the Windows Certificate Viewer, and see if we can tell what the issue is there.

Let’s start by looking at the “good” certificate:

clip_image005

What’s important here is that we see no errors, but how do we know that? Well let’s look at the bad certificate and see what the difference is:

clip_image007

The certificate clearly has a problem, and Windows has done a decent job of telling us what that problem is: there is a name in this certificate that is inconsistent with the Name Constraints associated with this certificate chain. The question is, which name?

To answer that question we need to look at the Name Constraints themselves. Remember that these live in the CA certificates; they can be in any or all of them, which means we first need to figure out where the constraints are.

To do that we start at the "Certification Path" tab, where you see something like this:

clip_image008

Notice the little yellow flag; that's the certificate viewer saying this is where the problem is. Let's take a look at that certificate:

clip_image009

But wait! There isn't a problem with this one. That's because this certificate is actually fine; it's the certificate it issued that is the problem. Here is the non-obvious part: let's look at the Details tab:

clip_image010

Here we see Windows telling us exactly what the problem is with the certificate this CA issued: it is the email address and its inclusion of the acme.com domain.

Remember that it was restricted, via the Name Constraints extension, to the example.com domain when issuing email certificates. The chain engine would be non-compliant with the associated RFC if it trusted this certificate.

But why did we see this Extended Error Information in this certificate and not the leaf itself? This is actually by design and makes sense when you think about it. Name Constraints can exist in each issuing CA certificate in the chain and what is happening here is that CryptoAPI is telling us that the issuer certificate that was flagged is the offending actor.

This Extended Error Information is available in the Certificate Viewer in a number of cases (for example revocation), though most of the interesting cases require the application to launch the certificate viewer so that you get to view the certificate chain (and all of the associated state) it was using when it encountered the problem.

Advanced Troubleshooting of Certificate Validation Related Problems on Windows Part 1

The Windows platform for validating X.509 certificates has a feature I don’t see many discuss — its robust logging subsystem.

This allows a non-developer (and developers too) to get insights into what is happening with an application's interactions with CryptoAPI 2 and, to some degree, what is happening inside those APIs.

To use this feature you must first enable it. The easiest way to do that is via the Event Viewer management console (eventvwr.msc); once in there you must navigate to the CryptoAPI 2 (CAPI2) node of the viewer:

clip_image001[6] clip_image002[6] clip_image003[6]

 

Once you get there you select Properties on the operational log, which will give you a dialog that looks something like this:

clip_image007

 

Once you “Enable Logging” and “Apply” the changes, immediately all calls to CryptoAPI will be logged. CryptoAPI is used all the time so by the time you close that dialog you will have some events you can look at:

clip_image008
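If you would rather script this than click through the Event Viewer, the same operational log can, to the best of my knowledge, be enabled and queried with wevtutil. The sketch below simply wraps those commands from Python; run it from an elevated prompt.

import subprocess

CHANNEL = "Microsoft-Windows-CAPI2/Operational"

# turn the operational log on (the equivalent of "Enable Logging" in the dialog)
subprocess.run(["wevtutil", "sl", CHANNEL, "/e:true"], check=True)

# pull the five most recent events as text so you can see the chain-building calls
result = subprocess.run(
    ["wevtutil", "qe", CHANNEL, "/c:5", "/rd:true", "/f:text"],
    capture_output=True, text=True, check=True)
print(result.stdout)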

 

The high-level view of each event doesn't tell you much; for example, in the above picture we really only know that the action being performed was a "Build Chain". This particular event corresponds to a call to CertGetCertificateChain.

To really understand what is going on you need to look at the Details tab:

clip_image009

 

Here we can see what was passed into that API call: which certificates were passed in by the calling application and what settings it chose when making the call:

clip_image011

 

Some other things you can see in this particular API call include what the disposition of the call was and of course what certificate chain was built.

Every major API in CryptoAPI has logging similar to this, you get to see what is passed in and what came out. Additionally major “objects” are sometimes logged as well, for example here is an event showing a certificate that CryptoAPI was operating against:

clip_image012

 

Another very useful API to be able to look at is CertVerifyRevocation, which will be logged in this fashion:

clip_image013

 

Some of these events include references to temporary files, for example in this case you see:

clip_image014

This is a CRL for the "GlobalSign PersonalSign 1 CA – G2"; it's stored in the Time Valid Object Cache (TvoCache), which you can look at with the "certutil -urlcache" command.

 

The events will all tell you which application was the caller and if they are part of a sequence of tasks you get enough information to correlate them and put them in order.

clip_image015

 

I won’t go through all of the events but as you can see this is super valuable when trying to figure out – Why did that application do that?!

As you might imagine this logging can slow things down and produces a bunch of data so be sure to turn it off when you are done.

Abstract: Using least privilege design principles to improve trust in the online marketplace

Weekends, they are overrated 🙂

Tomorrow is the cut-off for submissions for the NIST "Workshop on Improving Trust in the Online Marketplace" to be held in April 2013, and I have spent part of the day thinking about what talks might be interesting. I have already submitted one on "Revocation reality and the path to becoming effective", but I wrote this one up as well and might submit it too; I am posting it here so I can get a little feedback before the submission deadline.

In 2010 security researchers with the EFF collected all of the publicly visible SSL certificates on the IPv4 Internet and published their analysis and data sets from that research. This work made it clear to the world how extensively PKI is used to facilitate commerce on the web, but it also raised the concern that there were as many as 650 organizations capable of issuing publicly trusted certificates on the Internet.

While this conclusion is exaggerated, as many of those certificates and keys are in fact operated by the same organizations that their certificates are ultimately issued by, the conclusion that there has been an unneeded expansion of the number of keys technically trusted to issue certificates for SSL for the entire Internet is sound.

To address this problem, one of the steps that is needed is the application of least privilege principles to how one designs and manages publicly trusted keys and certificates. Thankfully, in the late '90s the foundation for addressing much of this problem was developed as a means to enable the Federated PKI in use by the U.S. Federal Government.

For the last year we have been working to broadly deploy X.509 Name Constraints along with other least privilege design principles to our customers' PKIs, both those managed internally by our own staff and those on premises. This talk will explore these concepts, the client support for them, and the challenges we have experienced in their deployment, and will identify the remaining issues that must be addressed to obtain the full benefits of this approach.

Abstract: Revocation reality and the path to becoming effective

I just submitted my first abstract for the NIST "Workshop on Improving Trust in the Online Marketplace" in April. The title of the talk is "Revocation reality and the path to becoming effective", the abstract of which is:

 

The concept of certificate revocation is core to the X.509 trust model; however, 18 years after its introduction, the reality is that as implemented and deployed it falls short of its promise to enable certificate issuers to protect relying parties from malicious actors and mis-issuance.

This talk will discuss the findings of a project (https://revocation-report.x509labs.com) in which I have observed the behavior, uptime and performance of revocation repositories for a number of commercial Certificate Authorities over a period of more than six months.

Additionally, I will review the revocation behavior of the most common browsers, identifying the gaps as they exist in those implementations.

And finally, I will provide a set of recommendations that I believe, if followed, can address the current gaps and move us to a world where revocation checking is an effective means of protecting relying parties from known bad actors and mis-issuance.

Is SSL Broken?

[This is a re-post of an article I wrote for the GlobalSign corporate blog; you can find it here.]

It seems every month a new flaw is identified in SSL, and while that's a slight exaggeration, after a while one starts to ask the question: is SSL broken? My answer to that question would be no, but the protocol is nearly twenty years old and, even though it now carries a new name (TLS), it also carries much of the baggage of the past in its design.

Despite this fact, my faith in TLS is stronger today than it ever was. My reasoning is simple – today we understand the strengths and weaknesses of this protocol better than we ever have. It is continuously reviewed by the world’s best engineers and cryptographers, trying to find the bad assumptions their predecessors made, strengthening it in response to identified weaknesses, and modernizing it to use the strongest forms of cryptography available.

This continuous investment in this foundational technology gives me faith.

Today another attack on TLS was made public.  “Lucky Thirteen” is a derivative of the work of French cryptographer Serge Vaudenay (Padding Oracles against CBC based ciphers – 2010), though unlike Vaudenay’s attack, Lucky Thirteen uses a known Timing Attack previously believed to be impractical. A successful application of this attack enables an attacker to decrypt your SSL communications.

Unlike other recent attacks, such as BEAST,  Lucky Thirteen requires a server-side fix. This means that complete and effective protection against this attack will require all webservers to be updated or patched.

That said, it is possible to mitigate the attack by removing CBC cipher suites, since the attack is against SSL/TLS’s use of CBC. But what to use in its place? The consensus of security researchers is to adopt suites based on AES-GCM, and while I agree, this has one problem – the large population of clients that do not yet support it.

This recommendation is complicated slightly by the BEAST attack from last year, the resolution of which required a client side fix which has, in all likelihood, not yet been deployed ubiquitously. As such, I still recommend prioritizing the older and less secure RC4 based suites above AES-GCM since it addresses both issues.

But should you be worried? It depends. If you are using TLS (and not its little brother DTLS) I would say your best bet is to walk calmly to the nearest exit, and use this as an excuse to ensure you are following industry Best Practices when deploying SSL – if  you’re not, this attack is the least of your worries.  Specifically I would recommend visiting the SSL Configuration Checker and make the critical (red) and important (yellow) configuration changes it suggests.
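If you want a quick, scripted sanity check to go along with the configuration checker, the sketch below reports which cipher suite a given client ends up negotiating with your server. It uses only the Python standard library; the host name is a placeholder, and of course it only tells you about the one client it is run from.

import socket
import ssl

HOST = "www.example.com"  # placeholder: your site

ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        name, protocol, bits = tls.cipher()
        print("negotiated", name, "(", bits, "bits ) over", protocol)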

I would also encourage you to deploy HTTP Strict Transport Security  on your site since the attack this mitigates (SSL stripping) is much easier for an attacker to execute.

The good news is that if you were already following the advice of the SSL Configuration Checker you were prioritizing RC4 over other ciphers and most sessions to your server were resistant to this attack. This doesn’t mean you should not be deploying the patch to this issue, you just don’t need to do so in a crazed rush.

So are there any lessons we can take away from this? Of course there are. As a server operator, I would say this finding underscores the importance of regularly reviewing your server configuration to ensure that it follows industry best practices, and that you are always operating the most recent and stable release of your web server.

If you want a more technical walkthrough of this attack, I highly recommend this post by Matthew Green on TLS timing oracles, or this one by Adam Langley.