Why we built the Revocation Report

For over a year I have been monitoring the industry’s largest OCSP and CRL repositories for performance and uptime. I started this project for a few reasons, but to understand them I think it’s appropriate to start with why I joined GlobalSign.

If you’re reading this post you are likely aware of the last few years of attacks against public Certificate Authorities (CAs). Though I am no stranger to this space, like you I was watching it all unfold from the outside, as I was working at Microsoft in the Advertising division, where I was responsible for Security Engineering for their platform.

I recall looking at the industry and feeling frustrated about how little had changed in the last decade, feeling like the Internet was evolving around the CA ecosystem, at least technologically. The focus seemed almost exclusively on policies, procedures, and auditing, which are of course extremely important when you’re in this business, but by themselves they are not a solution.

When I looked at the CA ecosystem there were a few players who I thought understood this; the one I felt got it the most was GlobalSign. Instead of joining the race to the bottom, they were developing solutions to help with key management and certificate lifecycle management, and publishing guides to help customers deploy certificates cost-effectively.

As a result, when they approached me with the opportunity to join them as their CTO and set the technology direction for the company, I was intrigued. Those of you who know me know I love data; I believe above all things that successful businesses (whether they recognize it or not) leverage the Define, Measure, Analyze, Improve, and Control (DMAIC) cycle to ensure they are solving the right problems and doing so effectively.

To that end, when I joined GlobalSign as their CTO, I wanted market intelligence on what the status quo was for technology, operating practices, and standards compliance, which I could use to adjust my own priorities as I planned where GlobalSign was going to focus its investments.

It was more than that though; as many of you know, I am not new to PKI, and especially not to revocation technologies, having developed several products and features in this area as well as contributing to the associated standards over the years. I was always frustrated by many public certificate authorities’ inability or unwillingness to acknowledge the inadequacy of their revocation infrastructure and its contribution to slow TLS adoption and bad user-agent behavior when it comes to revocation checking.

More directly, the poor reliability and performance of major CAs’ operational infrastructure is why browsers had to implement what is now called “soft-fail” revocation behavior: treating a failure to check the status of a certificate the same as a successful check. Yet it is these same people who point fingers at the browsers when the security implications of this behavior are discussed.
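To make the soft-fail versus hard-fail distinction concrete, here is a minimal sketch of the policy choice in Python. It is illustrative only, not any browser’s actual implementation, and the `parse_and_verify` helper is hypothetical:

```python
import urllib.error
import urllib.request


def parse_and_verify(ocsp_response_der: bytes) -> bool:
    """Hypothetical helper: a real client would parse the DER response,
    verify its signature, and check the certificate's revocation status."""
    raise NotImplementedError


def certificate_ok(ocsp_url: str, ocsp_request_der: bytes, hard_fail: bool = False) -> bool:
    """Return True if the certificate may be trusted under the given policy."""
    try:
        req = urllib.request.Request(
            ocsp_url,
            data=ocsp_request_der,
            headers={"Content-Type": "application/ocsp-request"},
        )
        with urllib.request.urlopen(req, timeout=3) as resp:
            return parse_and_verify(resp.read())
    except (urllib.error.URLError, TimeoutError):
        # Soft-fail: a network failure is treated like a successful check.
        # Hard-fail: the same failure blocks the connection.
        return not hard_fail
```

Under soft-fail, an attacker who can block the OCSP query gets the same result as a “good” response; under hard-fail, a slow or unreliable responder breaks the site, which is exactly why responder performance and uptime matter so much.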

I wanted to see that change.

This is why, from the very beginning of this project, I shared all the data I had with other CAs; my hope was they would use it to improve their infrastructure. Unfortunately, short of one or two smaller players, no one seemed concerned. I was shouting at the wind.

With the limited feedback I had received on the data I had been collecting, I decided to put together what is now the Revocation Report. As part of this project I switched to a different monitoring provider (Monitis) because it gave me more control over what was being monitored and had a more complete API I could use to get at the data.

In parallel, I began to work with CloudFlare to address what I felt was one barrier to optimally using a CDN to distribute OCSP responses: the inability to cache POSTs. The whole time I chronicled my experiences, thoughts, and decisions on my blog so that others could learn from my experience and the industry as a whole could benefit.
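For context, OCSP over HTTP defines both a POST binding and a GET binding (RFC 2560 Appendix A, profiled by RFC 5019); because the GET form encodes the request into the URL, a CDN can cache the response, while POST bodies generally cannot be cached at the edge. Here is a minimal sketch of building such a GET URL with the pyca/cryptography package; the function name and inputs are illustrative:

```python
import base64
import urllib.parse

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp


def ocsp_get_url(cert: x509.Certificate, issuer: x509.Certificate, responder_url: str) -> str:
    """Build the cacheable OCSP GET URL:
    GET {url}/{url-encoding of base64 encoding of DER encoding of OCSPRequest}."""
    builder = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1())
    request_der = builder.build().public_bytes(serialization.Encoding.DER)
    b64 = base64.b64encode(request_der).decode("ascii")
    return responder_url.rstrip("/") + "/" + urllib.parse.quote(b64, safe="")
```

Responses fetched this way can be served from the CDN’s cache until they expire, which is exactly the behavior that POST requests defeat.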

When I set up the Monitis account, I licensed the ability to monitor the top responders from 21 locations worldwide every minute. At first I just published the graphical reports that Monitis provided, but they had a few problems:

  1. The reports did not perform very well (at the time).
  2. They were not laid out in such a way that you could see all the data at once (also now fixed).
  3. They did not exclude issues associated with Monitis’ own monitoring sensors.
  4. They gave no context for the data being presented.

This is what led me to work with Eli to build the Revocation Report we have today; the Internet now has a public view into approximately eleven months (and growing) of performance data for revocation repositories. Eli and I are also working on mining and quantizing the data so we can do something similar for responder uptime, but this has taken longer than expected due to other priorities. We will finish it though.

So the question at this point is, “was the effort worth it?” I think so. Both of us put a lot of time into this project, but I believe it has been a success for a few reasons:

  1. It allowed me to figure out how to improve our own revocation infrastructure; we now perform at about the same speed as gstatic.google.com does for a similarly sized object, which is where the bar should be (a rough sketch of such a comparison follows this list).
  2. Both StartSSL and Entrust have now followed suit and made similar changes to their infrastructure, improving their performance by about 3x (and besting our performance by a few milliseconds!).
  3. Symantec has improved their primary revocation repository’s performance by almost 40%, and I understand more improvements are on the way.
  4. We are closer to having a data-based argument we can present to browsers about why they can and should re-enable hard-fail revocation checking by default.
  5. It gives customers visibility into the otherwise invisible performance hit associated with their choice of certificate provider.
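For what it’s worth, a rough version of the comparison in point 1 can be made with a few lines of Python. This simplified sketch measures from a single vantage point and the URLs are placeholders, whereas the report itself measures from many locations worldwide:

```python
import statistics
import time
import urllib.request


def median_fetch_ms(url: str, samples: int = 10) -> float:
    """Median wall-clock time to fetch an object, in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=5) as resp:
            resp.read()
        timings.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(timings)


# Placeholder URLs; compare an OCSP GET URL against a similarly sized static object.
# print(median_fetch_ms("http://ocsp.example.com/<base64-request>"))
# print(median_fetch_ms("https://www.gstatic.com/<similarly-sized-object>"))
```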

What do you think? Do you find this valuable? Are there any other elements you think we should be tracking?

2 thoughts on “Why we built the Revocation Report”

  1. Ryan Sleevi

    Ryan,

    One thing to remember is that the Monitis report reflects the *ideal* conditions. The reality, as I’m sure you realize, is that client software often operates under far less than ideal conditions. Whether it is operating on dial-up in a country with an under-developed communications infrastructure, on a mobile network that is vastly oversold, or on client machines riddled with malware, LSPs, sub-optimal home routers, and any number of other issues, it’s a truly unfortunate story.

    Yes, the argument exists that these are the same issues that the sites the user is trying to access must go through, but when we talk about revocation – especially in the multi-hosted, inter-domain world the Web is today – do we really want to see a 2X+ degradation for the mobile web or the developing world? Or should we be looking for solutions that work better in the real-world networks that users operate in?

  2. rmhrisk (Post author)

    Of course it doesn’t represent what browsers actually see; this is particularly problematic in the mobile world, where latency is so bad that, as a result, for the most part no checking is done at all. The problem is that only the browsers have this data, and they can’t or won’t share it.

    As for this representing the whole picture, I agree there are a number of other factors, such as average OCSP response size (around 1400 bytes, which is too high), average CRL size (which in many cases would qualify as ridiculously huge), up-time, chain complexity, and of course network complexity.

    I have data on most of this and intend to publish dashboards on each; they just take time to do. Next up will be uptime, after that probably revocation message sizes, and so on.

    Whether it will be enough is, in the end, for you and the other browsers to decide, but publishing data is the best thing I can do to get CAs to change their practices and to try to get the browsers back to secure defaults.

    In the end, though, Chrome really did the only thing they could with CRLSets given the operational practices of most CAs on this stuff. It just doesn’t cover everything and is Chrome-specific, so improvements are needed, and working with the CAs to operate scalable, performant infrastructures seems like a fine place to start.

