Why crawling is not an adequate measurement methodology for the WebPKI

The answer is simple: it gives an incomplete view of how the WebPKI is actually used.

There are a number of different methodologies a web crawler-based approach might take in measuring the size of the WebPKI. The most naive approach would be to simply scan all IPv4 address space and log all of the certificates you see during this scan.

The problem is that this only shows a small fraction of the certificates that are out there. When you connect to an IP address and the associated web server doesn’t know what host you are trying to connect to, it will return its “default” website and use the associated certificate.

That same IP address may literally be responsible for serving millions of sites, selected based on the hostname the client indicates via Server Name Indication (SNI). With this IP-based enumeration approach, at best you would get one certificate from that host; at worst you wouldn’t even get that, because some servers are not configured with a default site. And this is just one problem with the approach; there are many more.
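To make that concrete, here is a minimal sketch in Python of the difference an IP-only crawler would see: connecting to the same address with and without SNI can return different certificates. The IP address and hostname below are placeholders; you would substitute a real multi-tenant host.

```python
import socket
import ssl

def leaf_certificate(ip, server_hostname=None):
    """Return the DER-encoded leaf certificate the server presents."""
    ctx = ssl.create_default_context()
    # Verification is disabled because we only want to observe whichever
    # certificate the server chooses to present, trusted or not.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((ip, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=server_hostname) as tls:
            return tls.getpeercert(binary_form=True)

ip = "203.0.113.10"  # placeholder: a shared hosting or CDN IP address
default_cert = leaf_certificate(ip)                  # what an IP-only scan records
sni_cert = leaf_certificate(ip, "www.example.net")   # what a real visitor would get
print("same certificate?", default_cert == sni_cert)  # often False on shared hosts
```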

Though most WebPKI market share reports do not document their methodology, anecdotally it appears most rely on this crawler approach, and at least historically some have taken periodic data drops from CAs to make their view “more complete”.

Today, though, the only methodology that should be used to measure CA market share is counting the pre-certificates logged to Certificate Transparency logs.

How to measure the WebPKI ecosystem

The web depends on there being a robust, secure, and scalable set of CAs able to provide TLS certificates. It is unhealthy for there to be a single provider, because if for any reason they have an operational or security issue they could become unavailable, leaving the web in a world of hurt.

Beyond that, in the name of TLS reliability, certificate consumers should be relying on multiple CAs for their certificates. For example, to reduce exposure to outages, your certificate lifecycle management solution should support failover from one CA to the next.

Another reason to use multiple CAs is to help ensure relying party agility to changes in CAs. For example, if a CA changes which root key material they use, you may lose (or gain) device compatibility, or if an issuing CA changes and someone is pinning, you might break them. To ensure device compatibility long term, one should use multiple CAs so that the relying party ecosystem you support stays agile to these changes.

For this to work, though, you need an ecosystem of CAs you can use interchangeably. ACME (RFC 8555) helps here substantially because it provides a normalized way to interact with CAs to get these certificates. That is only helpful if there are multiple CAs that implement the protocol and if those CAs are able to scale to meet the needs of those who rely on them.
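As a small illustration of what that interchangeability buys you, here is a sketch of CA failover. The directory URLs are real public ACME endpoints; the `issue_certificate` callable is a hypothetical stand-in for whatever ACME client drives the account/order/challenge/finalize flow, and some of these CAs additionally require External Account Binding credentials.

```python
# Hypothetical failover wrapper: same CSR, same protocol, different CA.
ACME_DIRECTORIES = [
    "https://acme-v02.api.letsencrypt.org/directory",  # Let's Encrypt
    "https://dv.acme-v02.api.pki.goog/directory",      # Google Trust Services (needs EAB)
    "https://acme.zerossl.com/v2/DV90",                # ZeroSSL (needs EAB)
]

def issue_with_failover(csr_pem, issue_certificate, directories=ACME_DIRECTORIES):
    """Try each ACME CA in order until one successfully issues a certificate.

    `issue_certificate(directory_url, csr_pem)` is assumed to run a complete
    ACME issuance against the given directory and return the certificate chain.
    """
    last_error = None
    for directory in directories:
        try:
            return issue_certificate(directory, csr_pem)
        except Exception as err:  # any failure at one CA just moves us to the next
            last_error = err
    raise RuntimeError(f"all configured ACME CAs failed, last error: {last_error}")
```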

This is particularly true when you look at SaaS-like offerings; the larger ones will often demand millions of certificates that, in some cases, need to be able to be revoked and re-issued in less than 24 hours, so the scalability of the CA becomes especially important.

Assessing the scalability of a CA is hard but one of the closest proxies you have is their overall market share.

In the US, according to the Google Transparency Report, 97% of all web traffic is protected with TLS. To put that in context there were 366.8 million registered domain names as of 2022.

Certificates can represent more than one domain name, so depending on what you are measuring, certificate count may not be the best metric to assess CA market share. With that said, in the context of scalability, it’s probably a good metric.

What are some ways to evaluate a CA’s impact and market share?

  • How many certificates are issued by the CA and are unexpired.
  • How many domains are contained within the unexpired certificates issued by a CA.
  • What percentage of web traffic would be covered by the certificates issued by a CA.
  • What percentage of certificates issued by the CA are unexpired and actively in use.

Each of these answers a different question, and they get progressively harder to measure as you go down the list. The easiest by far is how many certificates are issued and still unexpired. This is because all CAs log what is called a pre-certificate to the Certificate Transparency ecosystem before issuance.

NOTE: Publication of a pre-certificate is not required by the rules of the ecosystem; however, not doing so would mean that users relying on that certificate would get an error.

While the existence of a pre-certificate doesn’t promise the certificate is in use it does signal that someone who controlled that domain wanted to use a certificate for that domain. They wouldn’t have bothered going to the trouble of doing that if there was not an intent to use the certificate in some way.
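If you want to see the distinction yourself, a pre-certificate is simply a certificate carrying the critical CT “poison” extension (OID 1.3.6.1.4.1.11129.2.4.3). Here is a rough sketch using the Python `cryptography` package; the file path is a placeholder for any DER-encoded entry you have downloaded, for example from a CT log or crt.sh.

```python
from cryptography import x509
from cryptography.x509.oid import ObjectIdentifier

# Pre-certificates are marked with the CT poison extension so they can never
# be mistaken for (or used as) the final certificate.
PRECERT_POISON_OID = ObjectIdentifier("1.3.6.1.4.1.11129.2.4.3")

def is_precertificate(der_bytes: bytes) -> bool:
    cert = x509.load_der_x509_certificate(der_bytes)
    try:
        cert.extensions.get_extension_for_oid(PRECERT_POISON_OID)
        return True
    except x509.ExtensionNotFound:
        return False

with open("ct-entry.der", "rb") as f:  # placeholder path
    print(is_precertificate(f.read()))
```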

The easiest way to look at this data is to use the excellent https://crt.sh/cert-populations report. While it does go down from time to time, it also provides very fresh views into the unexpired pre-certificate count.

NOTE: Since not all CAs publish what is referred to as the “final certificate” you can safely ignore the Certificate count data on this report.
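The report covers whole CA populations; if you just want to poke at the underlying data, crt.sh also exposes a JSON view of its search results. Below is a rough sketch that counts unexpired entries for a single domain. The query parameter and field names reflect crt.sh’s informal, undocumented JSON output and could change; “example.com” is a placeholder.

```python
import json
import urllib.parse
import urllib.request
from datetime import datetime

def unexpired_entries(domain):
    """Count crt.sh entries for a domain whose notAfter is still in the future."""
    url = "https://crt.sh/?q=" + urllib.parse.quote(domain) + "&output=json"
    with urllib.request.urlopen(url, timeout=60) as resp:
        entries = json.load(resp)
    now = datetime.utcnow()
    return sum(1 for e in entries if datetime.fromisoformat(e["not_after"]) > now)

print(unexpired_entries("example.com"))
```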

So what does this data look like (As of July 29th, 2022)?

| CA | Certificates (All) | Certificates (Unexpired) | Precertificates (All) | Precertificates (Unexpired) | % of Unexpired Population |
|---|---|---|---|---|---|
| Internet Security Research Group | 2,834,892,521 | 264,685,335 | 2,553,476,280 | 228,023,480 | 50.18% |
| Sectigo | 109,399,984 | 7,245,014 | 373,669,758 | 106,119,713 | 23.35% |
| DigiCert | 560,740,357 | 44,640,273 | 497,448,389 | 45,475,972 | 10.01% |
| GoDaddy | 6,371,960 | 1,874,812 | 52,669,260 | 31,293,304 | 6.89% |
| Google Trust Services LLC | 17,284 | 178 | 28,112,662 | 15,443,306 | 3.40% |
| Amazon | 13,540,556 | 98,980 | 104,887,859 | 14,757,650 | 3.25% |
| GlobalSign nv-sa | 16,729,663 | 937 | 23,636,778 | 6,893,728 | 1.52% |
| Actalis | 552 | 3 | 3,236,493 | 1,691,744 | 0.37% |
| Asseco Data Systems S.A. (previously Unizeto Certum) | 6,298,472 | 620 | 9,375,742 | 1,571,852 | 0.35% |
| Start Commercial (StartCom) Ltd. | 1,495,580 | 98 | 2,866,004 | 883,022 | 0.19% |
| ? | 1,241,463 | 214 | 3,924,285 | 567,480 | 0.12% |
| Entrust | 739,901 | 524 | 2,304,521 | 554,431 | 0.12% |
| SECOM Trust Systems CO., LTD. | 156,234 | -11 | 2,217,668 | 242,815 | 0.05% |
| WoSign CA Limited | 88,660 | 7 | 250,823 | 110,101 | 0.02% |
| Certainly LLC | 31,361 | 205 | 240,103 | 101,533 | 0.02% |
| Buypass | 186,200 | 270 | 2,127,228 | 98,835 | 0.02% |
| QuoVadis | 53,636 | 432 | 236,063 | 97,454 | 0.02% |
| SecureTrust | 311,226 | 227 | 301,197 | 78,489 | 0.02% |
| Microsoft Corporation Core Services Engineering & Operations (“Microsoft CSEO”) | 216,448 | 73,560 | 212,905 | 74,697 | 0.02% |
| Deutsche Telekom Security GmbH | 57,570 | 32 | 147,949 | 49,556 | 0.01% |
| JPRS | 15,738 | 34 | 82,642 | 36,512 | 0.01% |
| SwissSign AG | 237,886 | 68,272 | 83,504 | 26,975 | 0.01% |
| Government of Spain, Fábrica Nacional de Moneda y Timbre (FNMT) | 86,872 | 23,866 | 57,206 | 23,772 | 0.01% |

What you will see is that the top 7 CAs out of 233 issue 98.59% of all unexpired TLS certificates. While I would like to see this distribution be more normalized, to ensure that the ecosystem is not overly dependent on any one entity, as far as health goes it does show there are several large providers supporting the web who have demonstrated they can scale to meet large certificate consumption needs.

One thing you will notice in this data is that the gap between the pre-certificate “All” and “Unexpired” counts is quite large in some cases. This is because some CAs, like Let’s Encrypt and Google Trust Services, either predominantly or exclusively issue shorter-lived certificates. This results in the “All” count being much higher than the “Unexpired” count.

So what can we take away from this data? I think there are three key takeaways:

  1. Support of certificate issuance via ACME has made shorter-lived certificates viable and they now represent the large majority of certificates on the web.
  2. Support of ACME has helped grow the percentage of the web that is encrypted from about half of the web to nearly 100% of the web.
  3. About 3% of CAs issue 98.59% of all TLS certificates on the web.

How is ACME different than XCEP/WSTEP anyway?

If you read my blog there is a reasonable chance that you are familiar with RFC 8555, the standard for Automatic Certificate Management Environment (ACME). Even though ACME is a relatively young protocol it is already used by the majority of websites on the internet for certificate lifecycle management.

While I won’t go into a lot of detail, for this post to make sense you have to understand a couple of things about the ACME protocol.

The first is that it works on the concept of dynamic “account” registration. By that I mean requestors can, in real time, request that an “account” be created for them. This account is represented by a public key pair that the ACME service will use to persist metadata about the requestor. The ACME service can deny this request for any reason it likes, but commonly in the Web PKI, as long as the request is well-formed, it is accepted.

The next thing you need to understand is that it has the concept of “challenges” that can be used to communicate conditions that must be met before a certificate is issued. For example, an ACME service may request that the account holder demonstrate that they are authorized to get a certificate for a given domain name by placing a specific value in DNS at a well-known location. Since only a DNS administrator could perform that action, the ACME service can have confidence that the requestor controls the name it has requested a certificate for.
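To make the challenge mechanism concrete, here is a self-contained sketch of the dns-01 arithmetic from RFC 8555: the value placed in the `_acme-challenge` TXT record is the base64url-encoded SHA-256 of the key authorization, which is the challenge token joined to the account key’s JWK thumbprint. The token and JWK below are illustrative placeholders, not real values.

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Placeholder account public key (JWK) and challenge token.
account_jwk = {"e": "AQAB", "kty": "RSA", "n": "0vx7agoebGcQ...placeholder..."}
token = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"

# RFC 7638 JWK thumbprint: SHA-256 over the canonical JSON of the public key.
canonical_jwk = json.dumps(account_jwk, separators=(",", ":"), sort_keys=True)
thumbprint = b64url(hashlib.sha256(canonical_jwk.encode()).digest())

# RFC 8555 key authorization, then the dns-01 TXT record value derived from it.
key_authorization = f"{token}.{thumbprint}"
txt_value = b64url(hashlib.sha256(key_authorization.encode()).digest())

print(f"_acme-challenge TXT value: {txt_value}")
```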

And finally, there is nothing in the ACME protocol that limits its use to just web server certificates. It is a general framework that can be used to acquire and manage certificates of any type. For example, Cisco is using ACME in their Webex offering to facilitate the acquisition of what are essentially email certificates via OIDC authentication as a way to authenticate chat members.

But there is another very popular protocol, or rather set of protocols, that fewer people know about: XCEP and WSTEP. These protocols are used by Windows machines both to determine what kind of certificates a machine or user should enroll for and to enable the enrollment for those certificates.

Similar to ACME, these protocols also support, although in a more rigid way, the ability for the issuer to challenge the client for additional information necessary to get a certificate of a particular type. For example, you can configure a certificate type (known as a template) to require that the requestor provide a cryptographic attestation backed by a TPM to prove the machine belongs to the organization operating the certificate authority.

There are differences though. The first is the concept of a template, which enables XCEP/WSTEP to have one URL endpoint issue many types of certificates. This is very important within an enterprise, where certificates are used for many different scenarios.

Another difference is that XCEP/WSTEP presume the authorization of the client happened out of band before the client requested the certificate. The dynamic approach to challenges adopted by ACME lets it tackle this problem in-band or rely on out-of-band authorization. It supports the out-of-band case through External Account Binding, which allows the requestor to use an API key obtained out of band to prove, at account creation, that the account key is associated with some pre-enrolled user.
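For a sense of what External Account Binding looks like on the wire, here is a rough sketch of the JWS described in RFC 8555 §7.3.4: the payload is the new account’s public key, and it is MACed with a key the CA handed out through some out-of-band channel. The key ID, MAC key, account JWK, and URL are all made-up placeholders.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Credentials obtained out of band from the CA (placeholders).
eab_kid = "example-key-identifier"
eab_mac_key = b"example-mac-key-bytes"  # in practice a base64url-encoded secret
account_jwk = {"e": "AQAB", "kty": "RSA", "n": "0vx7agoebGcQ...placeholder..."}
new_account_url = "https://acme.example.com/acme/new-account"  # hypothetical endpoint

# The EAB object is itself a JWS: the header names the EAB key, the payload is
# the ACME account key, and the signature is an HMAC with the shared MAC key.
protected = b64url(json.dumps(
    {"alg": "HS256", "kid": eab_kid, "url": new_account_url},
    separators=(",", ":")).encode())
payload = b64url(json.dumps(account_jwk, separators=(",", ":")).encode())
signature = b64url(hmac.new(eab_mac_key, f"{protected}.{payload}".encode(),
                            hashlib.sha256).digest())

external_account_binding = {"protected": protected, "payload": payload,
                            "signature": signature}
print(json.dumps(external_account_binding, indent=2))
```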

And finally, ACME has a clear model for extensibility built into it. What this means is that one can easily extend it with additional capabilities. The most fundamental part of this is the Directory resource which lists all of the APIs supported by this ACME instance. One could use this, for example, to add a “Templates” API that would allow an ACME client to request specific types of certificates from the ACME endpoint.
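Here is a tiny sketch of that discoverability in action: fetch a directory and list the operations the server advertises. The Let’s Encrypt URL is a real production endpoint, but any RFC 8555 directory works, and a hypothetical “Templates” style extension would simply show up as another entry here.

```python
import json
import urllib.request

# Any ACME directory URL works; this one is Let's Encrypt's production endpoint.
directory_url = "https://acme-v02.api.letsencrypt.org/directory"

with urllib.request.urlopen(directory_url, timeout=30) as resp:
    directory = json.load(resp)

# Typical entries: newNonce, newAccount, newOrder, revokeCert, keyChange, meta.
for name, value in sorted(directory.items()):
    print(f"{name}: {value}")
```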

Similarly, the concept of the challenge allows the server to demand the client do any number of things before the certificate is issued, so the idea of adding a TPM challenge, for example, is trivial within this framework.

In short, ACME, contrary to popular belief, is not just a protocol for getting and managing website certificates; it is a framework for getting and managing any certificate. More importantly, it is extensible in such a way that with just a few minor additions it would be a proper superset of all the capabilities within the Windows enrollment protocol suite.

Why is this important? That’s easy! When I talk to anyone who is using certificates at any reasonable scale, their concerns almost instantly turn to the complexity of managing the lifecycle of those certificates across the various products and services that use them.

When we look at this complexity, most of it arises from the use of a mish-mash of solutions for lifecycle management that, when viewed in isolation, seemed sufficient but, when looked at holistically, were actually woefully insufficient.

If, as an industry, we move these legacy systems to a single protocol, so that certificates, whether for public or private PKI and whether they represent users, machines, or workloads, are all managed the same way, we will have a reliable substrate that we can use to authenticate and authorize with agility.

The next decade of Public Key Infrastructure…

Background

Before we talk about the future we need to make sure we have a decent understanding of the past. X.509-based Public Key Infrastructure was originally created in the late 80s with a focus on enterprise and government use cases. These use cases were largely for private systems; it was not until a decade later that this technology was applied to the internet at large.

Since the standards for enrollment and lifecycle management at the time were building blocks rather than solutions and were designed for government and enterprise use cases rather than the internet, the Web PKI, as it became known, relied largely on manual certificate lifecycle management and a mix of proprietary automation solutions.

While the use of PKI in the enterprise continued, primarily thanks to Microsoft AD/CS and its automatic certificate lifecycle management (I worked on this project), the Web PKI grew in a far more visible way. This was primarily a result of the fact that these certificates had to be acquired manually which led to the creation of an industry focused on sales and marketing of individual certificates.

The actors in this system had no incentive to push automation as it would accelerate the commoditization of their products. The reality was that these organizations had also lost much of their technical chops as they became sales and marketing organizations and could no longer deliver the technology needed to bring about this automation anyway.

This changed in 2016 when the Internet Security Research Group, an organization I am involved in, launched Let’s Encrypt. This was an organization of technologists looking to accelerate the adoption of TLS on the web, and as such it started with a focus on automation, as it was clear that without automation the growth of HTTPS adoption would continue to be anemic. What many don’t know is that when Let’s Encrypt launched, HTTPS adoption was at about 40%, and year-over-year growth was hovering around 2-3%, about the rate of growth of the internet, and it was not accelerating.

Beyond that, TLS-related outages were becoming more frequent in the press, even for large organizations. Post-mortems would continually identify the same root cause: a manual process did not get executed or was executed incorrectly.

The launch of Let’s Encrypt gave the internet the first CA with a standards-based certificate enrollment protocol (ACME). This, combined with the short-lived nature of the certificates they issued, meant those that adopted it would have to use automation for their services to reliably offer TLS. That enabled products to make TLS work reliably and by default; a great example of this is the Caddy web server. This quickly took the TLS adoption growth rate to around 10% year over year, and now we are hovering around 90%+ HTTPS on the internet.

While this was going on, the concept of microservices merged with containers, which led to container orchestration, which later adopted the concept of mesh networking. This mesh networking was often based on mutual TLS (mTLS), the most visible manifestation of that being SPIFFE, which is commonly used with Kubernetes.

At the same time, we saw networks becoming more composable, pushing authentication and authorization decisions out to the edge of the network. While this pattern has had several names over the years, we now call it Zero Trust, and a visible example of that today is BeyondCorp from Google. These solutions again are commonly implemented on top of mutual TLS (mTLS).

We now also see the concept of Secure Access Service Edge (SASE), or Zero-Trust Edge, gaining speed, which extends this same pattern to lower-level network definition. Again, this is commonly implemented on top of mTLS.

The reality is that the Web PKI CAs were so focused on sales and marketing that they missed almost all of these trends. You can see them now paying lip service to this by talking about DevOps in their sales and marketing, but the reality is that the solutions they offer in this area are both too late and too little. This is why cloud technology providers like HashiCorp and cloud providers like Amazon and Google (I am involved in this also) had to step in and provide their own offerings.

We now see that Web PKI CAs are starting to more seriously embrace automation for the public PKI use cases; for example, most of the major CAs now offer ACME support to some degree and have generally begun to invest more seriously in certificate lifecycle management for other use cases.

That being said many of these CAs are making the same mistakes they have made in the past. Instead of working together and ensuring standards and software exist to make lifecycle management work seamlessly across vendors, most are investing in proprietary solutions that only solve portions of the problems at hand.

What’s next?

The usage of certificates and TLS has expanded massively in the last decade and there is no clear alternative to replace its use, so I do not expect the adoption of TLS to wane anytime soon.

What I do think is going to happen is a unification of certificate lifecycle management for private PKI use cases and public PKI use cases. Mesh networking, Zero-Trust, and Zero-Trust Edge are going to drive this unification.

This will manifest in the use of ACME for these private PKI use cases; in fact, this has already started, just take a look at cert-manager and Smallstep Certificates as small examples of this trend.

This, combined with the ease of deploying and managing private CAs via the new generation of cloud CA offerings, will result in more private PKIs being deployed, and the availability problems caused by certificate expiration and scalability limits will no longer be an issue.

We will also see extensions to the ACME protocol that make it easier to leverage existing trust relationships which will simplify the issuance process for private use cases as well as ways to leverage hardware-backed device identity and key protection to make the use of these certificate-based credentials even more secure.

As is always the case the unification of common protocols will enable interoperability across solutions, improve reliability and as a result accelerate the adoption of these patterns across many products and problems.

It will also mean that over time the legacy certificate enrollment protocols such as SCEP, WSTEP/XCEP, CMC, EST, and others will become less common.

Once this transition happens, it will lead us to a world where we can apply policy based on subjects, resources, claims, and context across L3 to L7, which will transform the way we think about access control and security segmentation. It will give us both more control over, and more visibility into, who has access to what.

What does this mean for the Web PKI?

First I should say that the Web PKI is not going anywhere; with that said, it is evolving.

Beyond the increase in automation and shorter certificate validities, over the next decade we will see several changes. One of the more visible will be the move to using dedicated PKI hierarchies for different use cases. For example, we will ultimately see server authentication, client authentication, and document signing move to their own hierarchies. This move will better reflect the intent of the Web PKI and prevent these use cases from holding the Web PKI’s evolution back.

This change will also minimize the browsers’ influence on those other scenarios. It will do this at the expense of greater ecosystem complexity around root distribution, but the net positive will be felt regardless. I do think this shift will give the European CAs an advantage in that they can rely on the EUTL for distribution, and since many non-web user agents simply do not want to manage a root program of their own, the EUTL has the potential to be adopted more widely. I will add that it is my hope these user agents instead adopt solution-specific root programs rather than relying on a generic one not built for purpose.

The Web PKI CAs that have not re-built their engineering chops are going to fall further behind the innovation curve. Their shift from engineering companies to sales and marketing companies resulted in them missing the move to the cloud, and those companies that are going through digital transformation via the adoption of SaaS, PaaS, and modern cloud infrastructure are unlikely to start that journey by engaging with a traditional Web PKI CA.

To address this reality the Web PKI CAs will need to re-invent themselves into product companies focusing on solving business problems rather than selling certificates that can be used to solve business problems. This will mean, for example, directly offering identity verification services (not selling certificates that contain assertions of identity), providing complete solutions for document signing rather than certificates one can use to sign a document, or turnkey solutions for certificate and key lifecycle management for enterprise wireless and other related use cases.

This will all lead to workloads that were once on the Web PKI by happenstance being moved to dedicated workload/ecosystem-specific private PKIs. The upside of this is that the certificates used by these infrastructures will have the opportunity to aggressively profile X.509 vs being forced to carry the two decades of cruft surrounding it like they are today.

The Web PKI CAs will have an opportunity to outsource the root certificate and key management for these use cases and possibly subcontract out CA management for the issuing CAs but many of these “issuing CA” use cases are likely to go to the cloud providers since that is where the workloads will be anyway.

Due to the ongoing balkanization of the internet that is happening through increased regional regulation, we will see smaller CAs get acquired, mainly for their market presence to let the larger providers play more effectively in those markets.

At the same time, new PKI ecosystems like those used for STIR/SHAKEN and various PKIs to support IoT deployments will pop up and as the patterns used by them are found to be inexpensive, effective, and easily deployable they will become more common.

We will also see lifecycle management for both public and private PKI unify on top of the ACME enrollment protocol, and through this a new generation of device management platforms will be built around certificate-based device identity anchored in keys bound to hardware, where the corresponding certificates contain metadata about the device they are bound to.

This will lay the groundwork for improved network authentication within the enterprise using protocols like EAP-TTLS and EAP-TLS, and make Zero-Trust and Zero-Trust Edge architectures easier to deploy, which will, in turn, blur the lines further between what is on-premises and what is in the cloud.

This normalization of the device identity concepts we use across solutions and the use of common protocols for credential lifecycle will result in better key hygiene for all use cases, and simplify deployment for those use cases.