Category Archives: Thoughts

Strategies for Deploying 2FA

 

Two-Factor Authentication (2FA) lets applications and services tie two things (typically something you know and something you have) to a user account so that both factors must be proven before the service can be used. Normally the second factor is a device (either physical or virtual); a great example of this is how Google and Microsoft have added support for One-Time Passwords (OTP) in their online services.

With that said, there are many forms of two-factor authentication, and OTP is simply one of them. When we look at deploying 2FA, before choosing what kind of technology to adopt you first need to understand how you will use it and where. The most common answers to the "how" question are:

  • Optional usage.
  • Conditional usage.
  • Mandatory usage.
  • Account / password recovery.

You will notice I have crossed out the first option (optional usage); the reasoning behind this is that optional usage provides no real security benefit, because an attacker can simply choose to attack the weaker mechanism.

Next is conditional usage. An example of this model is how some solutions prompt you to authenticate with a stronger mechanism when performing a privileged operation, such as modifying another user's account.

Then there is mandatory usage; this is of course what we all envision when we think of using Two-Factor Authentication. Unfortunately it is seldom used because it has significant barriers to adoption. The three most common issues I see preventing this model from being used are:

  1. The usability of these solutions is normally considered too poor for the user community being served to be expected to use them all the time.
  2. Users access the same account from multiple devices and locations where the second factor is not available or not a viable option.
  3. Not all services can be enabled for multi-factor authentication. The canonical example is the POP/IMAP services offered by Google, which by default allow the user to log in with just a password.

This leaves many sites coming to the conclusion that leveraging 2FA for account recovery (a special case of conditional usage) is the most deployable solution for their user bases.

But what if you want to actually achieve a world where the usage of 2FA is mandatory? The answer is buried in risk assessment, planning and having a technology strategy that includes acceptable authentication technologies.

Organizations should pick the technologies they will use for authentication and incorporate them as requirements into their procurement and technology adoption processes. While it's natural (and even beneficial) to standardize on a single technology, business requirements and technological realities mean you will end up with a suite of standards you need to support.

Usually the process of establishing those standards begins with an inventory of what technologies you are already using. If you are an enterprise and running Windows that list will include Kerberos with passwords as well as NTLM.

Next, assess how extensively each of these protocols is used in your environment, which platforms and applications support them, how business critical or sensitive those systems are, and which constituencies of users rely on them; finally, build a solid understanding of the risks those protocols represent in your environment.

Armed with this information you can build a plan; most of the time the conclusions will be something along these lines:

  1. You are leveraging some legacy authentication mechanism (possibly NTLM) that you need to aggressively deprecate.
  2. There is a (relatively speaking) small set of users who perform business-critical or sensitive operations, and if those users stepped up to a stronger authentication mechanism the business would benefit.
  3. It will take numerous budget cycles to standardize the organization on the desired core authentication technologies.

With that understanding you are prepared to evaluate the various approaches to Two-Factor Authentication and build a practical plan for getting it ubiquitously deployed. That deployment will likely involve all of the approaches called out above:

  1. Optional usage – Pilot the chosen technologies with users across your targeted user segments, asking them to use the new authentication solution as their primary authentication technique. Also ask them to track their experiences and report their problems and recommendations to increase the chances of successful adoption. During this time it is very valuable to measure their actual usage of the technology via audit logs if available.
  2. Conditional usage – Usually it is not possible to jump directly to mandatory usage, so the most sensitive users and scenarios are tackled first; for the reasons called out above this also typically means starting by requiring the usage only where possible or appropriate. This allows you to quickly begin to realize the benefits of deploying the technology without negatively affecting productivity.
  3. Mandatory usage – Once you feel prepared to support it as the primary authentication approach for your identified applications and scenarios, you make the usage mandatory for those users and scenarios. This is typically done one user segment at a time; each business has different operational requirements, and as you deploy to each community of users you learn something new that can be used to ensure the project is a business success.

Hopefully you found this post useful; let me know if you have any questions or would like to discuss your particular situation and how one can approach eliminating, or at least reducing, your organization's dependency on passwords.

DCOM and your Firewall

DCOM (Distributed Component Object Model) is a framework used by Windows to allow COM components to work over the network.

Unlike traditional TCP/IP and UDP/IP services, where a single protocol has a fixed port, DCOM dynamically assigns ports to the COM objects it remotes.

Any client wishing to communicate with objects owned by a server first connects to a well-known port; it discovers the port associated with a particular object by connecting to and using the services provided by DCOM's Service Control Manager (SCM).

The SCM always operates at a fixed network port on every computer; this is port 135 for both TCP and UDP.

DCOM is a great technology for what it was designed for but unfortunately that design did not include the internet or firewalls.

If a firewall exists between the client and the DCOM endpoint you must open up a large range of ports that can then be used for virtually any traffic between those two endpoints. By default, that port range runs from 1024 to 5000.

The nature of this model means that one port is allocated per process, so the number of available ports equates to the number of simultaneous DCOM processes allowed through the firewall.
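If you want to see what dynamic range your machines are actually handing out (or to narrow it before you even think about firewall rules), a quick check from PowerShell looks something like the sketch below; note this manages the overall ephemeral port range RPC/DCOM allocations are drawn from, the defaults differ by Windows version, and the values shown are placeholders to adapt rather than a recommendation.

# Show the dynamic (ephemeral) port range that RPC/DCOM allocations are drawn from.
netsh int ipv4 show dynamicport tcp
netsh int ipv4 show dynamicport udp

# If you must pass DCOM through a firewall, the range can be narrowed (run elevated,
# and pick values appropriate for your environment before opening them on the firewall):
# netsh int ipv4 set dynamicport tcp start=50000 num=1000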

This design basically negates the value proposition of the firewall; as such, deploying services based on DCOM across firewall boundaries is not something that should ever be done.

Enterprise logon with challenge response (OTP) tokens

From time to time I am asked how to add support for a new authentication method to Windows logon; the answer to that question is buried in which authentication methods Windows natively supports.

For the purpose of this blog post let's scope that question to the Windows Kerberos implementation, as this is the "modern" authentication platform in Windows (Microsoft does not want you using NTLM any further) and it supports the most authentication methods.

So what are those methods? Essentially there are two: passwords and public/private keys. What about the Windows native implementations of biometrics and picture passwords? These, like nearly every other authentication modality in Windows, are simply layers on top of the native password support built into Windows.

I should note that I was the Lead Program Manager for the Windows Biometric Framework, and I do think it is a good solution for what it was designed for.

Let me explain, using biometrics as the example. In this solution there is a service that works with the biometric sensors to perform the match. When a match occurs, the service releases a clear-text copy of your password, which in turn is stuffed into the Windows logon path just as if the user had entered the password by hand.

This means that somewhere on the file system there is a copy of the password stored in a reversibly encrypted form. Specifically, the encryption in this case is performed using a Windows subsystem called the Data Protection API (DPAPI). DPAPI is used by applications when they need to keep a secret, well, secret. This is a Catch-22: unless a password (or key) is entered there is no secret to encrypt with, which means that ultimately some root secret is stored in the clear. If you can get at that root secret you can get at the clear-text values encrypted with DPAPI.
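To make this concrete, here is a minimal sketch of DPAPI from PowerShell using the .NET ProtectedData class; anything protected this way under CurrentUser scope can be recovered by any code running as that same user, which is exactly the property these password-stuffing solutions depend on (the string below is obviously just a placeholder).

Add-Type -AssemblyName System.Security

# Protect a secret with the current user's DPAPI master key.
$secret    = [Text.Encoding]::UTF8.GetBytes("placeholder-password")
$protected = [Security.Cryptography.ProtectedData]::Protect(
    $secret, $null, [Security.Cryptography.DataProtectionScope]::CurrentUser)

# Any process running as the same user can reverse it; no additional secret is required.
$recovered = [Security.Cryptography.ProtectedData]::Unprotect(
    $protected, $null, [Security.Cryptography.DataProtectionScope]::CurrentUser)
[Text.Encoding]::UTF8.GetString($recovered)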

There are ways to mitigate some of the associated risks, one example being the use of BitLocker with a TPM protector, but this is not broadly deployed and does not address the full risk profile; this is why Microsoft positions the biometric feature in Windows as a convenience. That is not to say it cannot be used in a secure way, or that it cannot be used in an enterprise, but unless thought is given to the threat model and the right mitigations are deployed it actually weakens your security.

So what about those enterprise OTP and challenge-response solutions that we keep hearing about? Surely they must be different, right? Unfortunately, no, they are not. In fact they are probably worse, because unlike the Windows biometric solution they create a central repository of all clear-text passwords.

You see, for them to work they introduce a client (once called a GINA, now a Credential Provider) that collects the challenge, plus a service component or SSPI provider that implements a protocol for talking to their product's service; that service validates the challenge and returns a clear-text password, which they then stuff into the logon path on the user's behalf.

No matter how good the design and implementation of such a system is, it is reliant on that singular repository of clear-text / reversibly encrypted passwords. Things get even worse when you look at how these systems have to deal with offline and local logons (you know, the scenario where a user takes their laptop on a plane and still needs to work). For that to work they need to store the password in the clear on the local machine, just like the biometrics solution.

What about the public/private key approach? There are essentially two variants of this in Windows: PKINIT and PKU2U. The most common example is the use of a smart card; this is what many Fortune 500s and governments use to secure their logons. It is also possible to add other modalities to Windows by emulating this approach with a software virtual smart card, but these fall prey to the same attack vector, namely: what key do you use to protect the private key? (The answer is DPAPI.)

So does this mean OTP is a bad solution? No; in fact it is much better than passwords when a system is designed to use it, and it makes a great additional factor. The thing is, Windows was not designed for it.

How to redirect POSTs based on their Content-Type in Nginx

Time gives everything context, especially in Public Key Infrastructure (PKI).

There are a few ways time comes into play in PKI. The most obvious is that, within a Certificate Authority's own infrastructure, highly accurate and secure time sources are used to ensure that any statements it makes about time are accurate and that all systems agree on the time.

Then there is the question of notarization: when a subscriber (a certificate holder) signs something, how does a relying party (the person looking at the signature) know whether it was signed just now or a year ago? That is where time stamping comes in.

These same concepts come into play in financial systems; remember the movie Entrapment? They stole a few seconds of time during a system update and netted millions. OK, it's just a movie and it surely sensationalizes the concept of time, but in reality these are real problems – you need trusted time.

Some of you have heard me discuss aspects of us building our new datacenter; one of the first services we are deploying to it is trusted time and time-stamping. The two most common protocols used for time stamping are Authenticode and RFC 3161. The protocols are similar in nature: in both, a client POSTs a binary blob to the server, which is then time stamped. Each protocol uses a different Content-Type; for Authenticode it is "application/octet-stream" and for RFC 3161 it is "application/timestamp-query".

If you look at most time stamping services (ours included today) they require you to use different URIs (for example /scripts/timestamp.dll for Authenticode and /tsa for RFC 3161), but this just makes things unnecessarily difficult for the user. To address this in our new service we will use the posted Content-Type to ensure the right back-end service gets the timestamp request (though our old URLs will continue to work as well).

We use Nginx to do this remapping as it is our edge proxy server in this environment. I thought it might be useful for others to see how one can implement this sort of remapping, as I did not see any great examples on the Internet; here is what this might look like if you have a similar problem.

 

server {
    listen       80;
    server_name  timestamp.example.com;

    location / {
        # Browsers hitting the endpoint with a GET get sent to the product page.
        if ($request_method = GET) {
            rewrite ^ http://www.example.com/timestamping;
        }

        # Route based on the Content-Type the timestamp client posted.
        if ($http_content_type = "application/octet-stream") {
            set $args "worker=authenticode";
        }

        if ($http_content_type = "application/timestamp-query") {
            set $args "worker=rfc3161";
        }

        proxy_set_header Host $http_host;
        proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
        proxy_connect_timeout 3s;
        proxy_read_timeout 3s;
        proxy_pass http://timestamp1/process?$args;
    }
}

With this configuration, no matter which URI the timestamp request is sent to, and whichever protocol it uses, it will get routed to the appropriate timestamp back-end.
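If you want to sanity-check routing like this, a quick smoke test from PowerShell might look like the following; the host name and file names are placeholders for your own endpoint, a pre-generated RFC 3161 request and an Authenticode blob.

# Hypothetical smoke test for the Content-Type based routing above (names are placeholders).

# An RFC 3161 request should land on the rfc3161 worker.
Invoke-WebRequest -Uri "http://timestamp.example.com/" -Method Post `
    -ContentType "application/timestamp-query" -InFile .\request.tsq -OutFile .\response.tsr

# An Authenticode request should land on the authenticode worker.
Invoke-WebRequest -Uri "http://timestamp.example.com/" -Method Post `
    -ContentType "application/octet-stream" -InFile .\request.bin -OutFile .\response.bin

# A plain GET should be redirected to the product page.
Invoke-WebRequest -Uri "http://timestamp.example.com/" -Method Get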

I should note that I excluded a number of items, like rate limiting and error handling, to keep the post simple; these are also important concepts to consider in such a deployment.

Good Luck!

How to do a quick and dirty benchmark on a smart card

So you have to decide which smart card or crypto token you are going to use on a given project. There are lots of things to consider, including price, platform support, certifications, build and software quality, the number of certificates and keys it can hold, and which algorithms and key sizes it implements, amongst numerous others.

But even with those things well understood, you need to understand the performance characteristics of the device; the two most common operations your users will likely perform in an authentication deployment are signing and certificate requests.

Thankfully in Windows it’s fairly easy for us to get an idea of how these operations will perform for users with just a little bit of PowerShell.

In both cases we will use the measure_runs script that can be found here.
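If you don't have that script handy, a minimal sketch of what such a helper could look like is below; the parameter names mirror the calls used later in this post, but this is an approximation rather than the actual script.

# measure_runs.ps1 (sketch): run a script block N times and report the average duration.
param(
    [int]$numberRuns = 10,
    [scriptblock]$run,
    [scriptblock]$before = {}
)

$times = for ($i = 1; $i -le $numberRuns; $i++) {
    Write-Host -NoNewline ("[{0,3} / {1,3}] (Preparing... ) " -f $i, $numberRuns)
    $null = . $before      # dot-source so anything it sets (e.g. $cert) is visible to -run

    Write-Host -NoNewline "Running... "
    $elapsed = (Measure-Command { & $run }).TotalSeconds
    Write-Host ("{0:N4} seconds" -f $elapsed)
    $elapsed
}

[pscustomobject]@{ Average = ($times | Measure-Object -Average).Average }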

Measuring Signing Performance

To do this we will use the support PowerShell has for code signing. First you need a code signing certificate whose key is on your smart card; I won't cover how to create one in this post, but if you need help just ping me on Twitter (@rmhrisk).

1. Make sure you have at least one code signing certificate by running this command:

gci cert:\CurrentUser\My -CodeSigningCert

 

Assuming you do, you should see something like this:

 

Thumbprint                                 Subject
----------                                 -------
EEB2729E922E72E9DCC03000129795939F194358   CN=PowerShell User

 

2. Make sure you can sign with that certificate:

$cert = @(gci cert:\currentuser\my -CodeSigningCert)[0]; Set-AuthenticodeSignature measure_runs.ps1 $cert

 

If things work you will see something like this:

SignerCertificate                          Status  Path
-----------------                          ------  ----
EEB2729E922E72E9DCC03000129795939F194358   Valid   measure_run..

 

3. Sign 100 times averaging the results:

.\measure_runs.ps1 -numberRuns 100 -run { Set-AuthenticodeSignature measure_runs.ps1 $cert} -before {$cert = @(gci cert:\currentuser\my -CodeSigningCert)[0]}

 

If things work you will see something like this:

[ 1 / 100] (Preparing… ) Running… 0.1874 seconds
[ 2 / 100] (Preparing… ) Running… 0.1700 seconds
[ 3 / 100] (Preparing… ) Running… 0.1316 seconds
[ 4 / 100] (Preparing… ) Running… 0.1426 seconds
[ 5 / 100] (Preparing… ) Running… 0.1468 seconds

Average
-------
0.262226129

 

A few things to keep in mind: you will have been prompted for a PIN only once in this exercise. Why once? Because Windows caches the handle to the card for the PowerShell process, which means anything running in the context of that PowerShell session will be able to sign with the key unless you pull the token.

Measuring Certificate Request Generation

To do this we will use a script that generates a self-signed certificate, which I stumbled on while searching the Internet; you can find it here (http://poshcode.org/1793). You will need to modify it to refer to the appropriate CSP/KSP for your smart card so that we are testing the right device and software.

While a self-signed certificate is not exactly a certificate request, creating either one generates a key pair and signs a message, so it is going to be representative of CSR generation time as well.
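The part of that script you need to touch is where the private key is created. Assuming it is built on the CertEnroll (X509Enrollment) COM objects that most of these self-signed certificate scripts use, the modification looks roughly like the sketch below; the provider name is a placeholder you would swap for your card's actual CSP or KSP.

# Sketch: point the key generation at your smart card's provider (name below is a placeholder).
$key = New-Object -ComObject X509Enrollment.CX509PrivateKey
$key.ProviderName   = "Microsoft Base Smart Card Crypto Provider"   # your card's CSP/KSP here
$key.KeySpec        = 1          # XCN_AT_KEYEXCHANGE
$key.Length         = 2048
$key.MachineContext = $false     # current user's store
$key.Create()                    # the on-card key pair is generated here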

To do this test run this command:

.\measure_runs.ps1 -numberRuns 10 -run { .\createselfsigned.ps1}

 

If things work you will see something like this:

[ 1 / 100] Running… 11.9023 seconds
[ 2 / 100] Running… 9.8994 seconds
[ 3 / 100] Running… 9.9996 seconds
[ 4 / 100] Running… 10.7412 seconds
[ 5 / 100] Running… 8.9172 seconds

Average
-------
9.0738614

 

Hope this helps you; if you have any questions don't hesitate to ask.

 

Ryan

Average CRL size and download time

The other day I had a great conversation with Robert Duncan over at Netcraft, he showed me some reports they have made public about CRL and OCSP performance and uptime.

One thing I have been meaning to do is look at average CRL size across the various CAs in a more formal way, but I never got around to it; conveniently, one of the Netcraft reports includes a column for CRL size. So while I was waiting for a meeting to start I decided to figure out what the average sizes were. I focused my efforts on the same CAs I include in the revocation report; this is what I came up with:

 

CA                    Average CRL Size (KB)   CRL Download Time @ 56k (s)
Entrust               512.33                  74.95
Verisign              200.04                  29.26
GoDaddy               173.79                  25.42
Comodo                120.75                  17.66
Cybertrust/Verizon     75.00                  10.97
DigiCert               21.66                   3.17
GlobalSign             21.25                   3.11
Certum                 20.00                   2.93
StartSSL                9.40                   1.38
TrendMicro              1.00                   0.15
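For reference, the download-time column is just the CRL size converted to bits divided by the line rate; a quick way to reproduce the arithmetic:

# Reproduce the download-time column: KB -> bits, divided by a 56 kbit/s line rate.
function Get-CrlDownloadSeconds {
    param([double]$SizeKB, [int]$LineRateBps = 56000)
    [math]::Round(($SizeKB * 1024 * 8) / $LineRateBps, 2)
}

Get-CrlDownloadSeconds -SizeKB 512.33   # Entrust: ~74.95 seconds
Get-CrlDownloadSeconds -SizeKB 21.66    # DigiCert: ~3.17 seconds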

 

From this we can derive two charts, one for size and another for download time at 56k (a link speed still used by about 6% of Internet users as of 2010):

[Charts: average CRL size by CA, and CRL download time at 56k by CA]

 

I overlaid the red line at 10 seconds because that is the timeout most clients use to decide when to give up trying to download; some clients will continue trying in the background so that the CRL is already cached for the next request.

This threshold is very generous; after all, what user is going to hang around for 10 seconds while a CRL is downloaded? It gets worse, though: the average chain is more than three certificates long, at least two of which need their status checked :/.

This is one of the reasons we have soft-fail revocation checking; until the Baseline Requirements were published, inclusion of OCSP references was not mandatory, and not every CA was managing its CRLs to be downloadable within that 10-second threshold.

There are a few ways CAs can manage their CRL sizes; one of the most common is simply to roll new intermediate CAs when the CRL size gets unmanageable.

There is something you should understand about the data in the above charts: just because a CRL is published does not mean it represents active certificates. This is one of the reasons I had put off doing this exercise; I wanted to exclude that case by cross-referencing the signing CA with crawler data to see whether active certificates were associated with each CRL.

This would exclude the cases where a CA was taken out of operation and all of the associated certificates were revoked as a precautionary exercise – this can happen.

So why did I bother posting this then? It is a nice illustration of why we cannot generally rely on CRLs as a form of revocation checking. In fact, this is very likely why some browsers do not bother trying to download CRLs.

All posts like this should end with a call to action (I need to do better about that); in this case it is for CAs to review their revocation practices and how they make certificate status available, to ensure it is offered in a fast and reliable manner.

How not to collect sensitive information

So I was chatting with a friend today about the recent Register article on TeliaSonera's application to add a new root into the Mozilla root program.

I could not recall ever visiting a site that used a certificate from TeliaSonera, so I started looking for one in my web crawler data; a few moments later my friend pointed me at Telia's own home page, where he noticed they were collecting user IDs and passwords in a CSS layer not served over SSL:

[Screenshot: Telia home page login form collecting user ID and password]

More interestingly, they also accept your Social Security Number as an alternative to your user ID:

[Screenshot: login form accepting a Social Security Number]

Now, it does submit these over SSL, but if you're reading this you know the information is still susceptible to a man-in-the-middle attack.

A quick look at its SSL configuration also shows the server has a number of problems:

1. Susceptible to DDOS because it supports client-initiated renegotiation

2. Vulnerable to MITM because it supports insecure renegotiation

3. Supports weak and extra-weak (export-strength) ciphers

 

The certificate itself also does not include OCSP pointers (which are now required under the Baseline Requirements, though it was issued before they became mandatory). More importantly, it only includes an LDAP reference to a CRL, which most clients will not chase and, if they did, most firewalls would not allow out; in other words, it cannot be meaningfully revoked by anyone other than the browsers.
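If you want to check a site's certificate for these pointers yourself, a rough way to do it from PowerShell is to pull the server certificate and dump its extensions (the Authority Information Access and CRL Distribution Points entries are the ones of interest); the host name below is just a placeholder.

# Rough check: grab a server's certificate and list its extensions (AIA / CRL Distribution Points).
$hostName = "www.example.com"   # substitute the site you want to inspect
$callback = [Net.Security.RemoteCertificateValidationCallback]{ $true }   # accept any cert; we only want to read it
$tcp = New-Object Net.Sockets.TcpClient($hostName, 443)
$ssl = New-Object Net.Security.SslStream($tcp.GetStream(), $false, $callback)
$ssl.AuthenticateAsClient($hostName)

$cert = New-Object Security.Cryptography.X509Certificates.X509Certificate2($ssl.RemoteCertificate)
$cert.Extensions | ForEach-Object { "{0}: {1}" -f $_.Oid.FriendlyName, $_.Format($true) }

$ssl.Dispose(); $tcp.Close()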

More concerning, it is issued directly off a root certificate authority, which means that root is likely an online CA rather than something kept in an offline vault. That implies all sorts of badness when you consider that not every browser patch gets adopted and the only way to revoke a certificate like this is to patch the browser.

A look at revocation repository uptime

It is no secret that in the last two months GlobalSign was affected by outages relating to our use of CloudFlare. I won't go into the specifics behind those outages because the CloudFlare team does a great job of documenting their outages as well as working to make sure the mistakes of the past do not reoccur. With that said, we have been working closely with CloudFlare to ensure our services are better isolated from their other customers and to optimize their network for the traffic our services generate.

I should add that I have a ton of faith in the CloudFlare team; these guys are knowledgeable, incredibly hard working and very self-critical. I consider them great partners.

When looking at these events it is important to look at them holistically; for example, one of the outages was a result of mitigating what has been called the largest publicly announced DDOS in the history of the Internet.

While no downtime is acceptable, and I am embarrassed we have had any, it is also important to look at the positives that come from these events; for one, we have had an opportunity to test our mitigations for such events and improve them so that in the future we can withstand even larger attacks.

Additionally, it is useful to look at the actual uptime these services have had and, to give those numbers some context, to look at them next to one of our peers. Thankfully I have this data as a result of the revocation report, which tracks performance and uptime from 21 different network perspectives worldwide every minute.

For 05/2012-12/2012 we see:

Service                                              Uptime (%)   Avg (ms)
GlobalSign/AlphaSSL OCSP                             100.00       101.29
VeriSign/Symantec/Thawte/GeoTrust/Trustcenter OCSP    99.92       319.40
GlobalSign/AlphaSSL CRL                              100.00        96.86
VeriSign/Symantec/Thawte/GeoTrust/Trustcenter CRL     99.97       311.42

 

For 01/2013 to 04/2013 we see:

Service                                              Uptime (%)   Avg (ms)
GlobalSign/AlphaSSL OCSP                              99.98        76.44
VeriSign/Symantec/Thawte/GeoTrust/Trustcenter OCSP    99.85       302.88
GlobalSign/AlphaSSL CRL                               99.98        76.44
VeriSign/Symantec/Thawte/GeoTrust/Trustcenter CRL     99.22       296.97

NOTE: Symantec operates several different infrastructures; which one you hit depends on which brand you buy from and, in some cases, which product you buy. We operate only two brands, which share the same infrastructure. I averaged the results for each of their brands together to create these two tables. If you want to see the independent numbers, see the Excel document linked to this post.

 

As you can see, no one is perfect. I don't share this to say our downtime is acceptable, because it is not, but to make clear that this is data we track and use to improve our services, and to show what the impact really was.

By the way if you want to see the data I used in the above computation you can download these spreadsheets.

Why we built the Revocation Report

For over a year I have been monitoring the industry’s largest OCSP and CRL repositories for performance and uptime. I started this project for a few reasons but to understand them I think it’s appropriate to start with why I joined GlobalSign.

If you’re reading this post you are likely aware of the last few years of attacks against public Certificate Authorities (CA). Though I am no stranger to this space, like you I was watching it all unfold from the outside as I was working at Microsoft in the Advertising division where I was responsible for Security Engineering for their platform.

I recall looking at the industry and feeling frustrated about how little had changed in the last decade, feeling like the Internet was evolving around the CA ecosystem, at least technologically. The focus seemed almost exclusively on policies, procedures and auditing, which are of course extremely important when you are in this business, but by themselves they are not a solution.

When I looked at the CA ecosystem there were a few players who I thought understood this; the one I felt got it the most was GlobalSign. Instead of joining the race to the bottom they were developing solutions to help with key management, certificate lifecycle management, and publishing guides to help customers deploy certificates cost effectively.

As a result, when they approached me with the opportunity to join them as their CTO and set the technology direction for the company, I was intrigued. Those of you who know me know I love data; I believe that above all things successful businesses (whether they recognize it or not) leverage the Define, Measure, Analyze, Improve and Control cycle to ensure they are solving the right problems and doing so effectively.

To that end, when I joined GlobalSign as their CTO I wanted market intelligence on the status quo for technology, operating practices and standards compliance, so I could use it to adjust my own priorities as I planned where GlobalSign would focus its investments.

It was more than that though; as many of you know I am not new to PKI, and especially not to revocation technologies, having developed several products and features in this area as well as contributing to the associated standards over the years. I was always frustrated by many public certificate authorities' inability or unwillingness to acknowledge the inadequacy of their revocation infrastructure and its contribution to slow TLS adoption and bad user agent behavior when it comes to revocation checking.

More directly, the reliability and performance of major CA operational infrastructure is why browsers had to implement what is now called "soft-fail" revocation behavior: treating a failure to check the status of a certificate the same as a successful check. Yet it is these same people who point fingers at the browsers when the security implications of this behavior are discussed.

I wanted to see that change.

This is why, from the very beginning of this project, I shared all the data I had with other CAs. My hope was they would use it to improve their infrastructure, but unfortunately, short of one or two smaller players, no one seemed concerned; I was shouting at the wind.

With the limited feedback I received on the data I had been collecting, I decided to put together what is now the revocation report. As part of this project I switched to a different monitoring provider (Monitis) because it gave me more control over what was being monitored and had a more complete API I could use to get at the data.

In parallel I began to work with CloudFlare to address what I felt was one barrier to optimally using a CDN to distribute OCSP responses (the inability to cache POSTs), the whole time chronicling my experiences, thoughts and decisions on my blog so that others could learn from my experience and the industry as a whole could benefit.

When I set up the Monitis account I licensed the ability to monitor the top responders from 21 locations worldwide every minute. At first I just published the graphical reports that Monitis had but they had a few problems:

  1. They did not perform very well (at the time).
  2. It was not laid out in such a way you could see all the data at once (also now fixed).
  3. It did not exclude issues associated with their monitoring sensors.
  4. It gave no context to the data that was being presented.

This is what led me to work with Eli to build the revocation report we have today; the Internet now has a public view into approximately eleven months (and growing) of performance data for revocation repositories. Eli and I are also working on mining and quantizing the data so we can do something similar for responder uptime, but this has taken longer than expected due to other priorities; we will finish it though.

So the question at this point is "was the effort worth it?" I think so; both of us put a lot of time into this project, but I believe it has been a success for a few reasons:

  1. It allowed me to figure out how to improve our own revocation infrastructure; we now perform at about the same speed as gstatic.google.com for a similarly sized object which is what the bar should be.
  2. Both StartSSL and Entrust have now followed suit and made similar changes to their infrastructure improving their performance by about 3x (besting our performance by a few ms!).
  3. Symantec has improved their primary revocation repository performance by almost 40% and I understand more improvements are on the way.
  4. We are closer to having a data-based argument we can present to browsers about why they can and should re-enable hard-fail revocation checking by default.
  5. It gives customers visibility into the invisible performance hit associated with the decision of who you choose as your certificate provider.

What do you think? Do you find this valuable? Are there any other elements you think we should be tracking?

Microsoft Root Program and Excel

The other day I was studying up on my Excel skills so I could help someone with a project; today I met with a friend to discuss the composition of root programs.

Since when you're a plumber you fix everything with a wrench, out came Excel; and since sometimes I can't let a problem sit still, I spent far too much time slicing and dicing the Microsoft Root Program membership list.

There is a ton more that can be done, for example:

  1. Root CA adoption relative to UN membership.
  2. Root CA certificates based on validity dates.
  3. Comparing the Microsoft Root Program membership to the Mozilla Root Program membership.
  4. CAs per network (using AIA:OCSP urls as an indicator)
  5. CA adoption of CDNs for OCSP and CRLs.
  6. A look at how many operational facilities are associated with each organization (for example, Symantec has 66 root CAs for their various brands; are they all in the same facilities?).

 

Anyway, it was a fun exercise and I thought I would share the results with you; here is the XLSX.

For those who do not want to look at the XLSX, here are some statistics you may find interesting:

  1. 31.30% of the CAs were owned explicitly by governments.
  2. 64.35% of the CAs were owned by commercial entities in the business of being third-party trust providers.
  3. The 352 certificates were owned by 115 organizations: 36 government, 74 commercial and 5 enterprise.
  4. The USA has the most organizations that own CAs, coming in at 13.04%.
  5. Spain is next, with 12.17% of the organizations owning CAs based there.
  6. France is number three, with 5.22% of the organizations owning CAs based there.
  7. 17.33% of the roots use 1024-bit keys.
  8. 60.23% of the roots use 2048-bit keys.
  9. 18.47% of the roots use 4096-bit keys.
  10. 1.70% of the roots use ECDSA384 keys.
  11. 115 of the 206 sovereign nations have CAs within their borders.