Introducing the WebPKI Observatory

For as long as I have been in this industry, the WebPKI compliance conversation has run on impressions. People with long memories and regular conference attendance have built up a picture of which CAs are well-run, which are struggling, and where the oversight gaps are. That picture has generally been accurate. It has also been almost entirely unmeasured.

The WebPKI Observatory at webpki.systematicreasoning.com, a project from Systematic Reasoning, is an attempt to change that. It’s a public dashboard covering 1,690 compliance incidents drawn from Mozilla Bugzilla between 2014 and 2025, cross-referenced with CCADB membership data, certificate issuance volumes from CT logs, root program trust store compositions, and the complete history of CA distrust events. The goal was simple: replace the shared intuition with actual data, and see what the data shows that intuition missed.

Some of it confirmed what most people in this space already suspected. Some of it was genuinely surprising.

The finding that reframes everything else is detection. When a compliance incident occurs, who finds it? Root programs find 52% of incidents. Automated external tools — CT log monitors, certificate linters, community scanning infrastructure — find 14%. CAs find their own problems in 9% of cases.

That number deserves more attention than it typically gets. One in eleven. CAs have full access to their own issuance systems, their own audits, their own CPSs, their own disclosure obligations, and yet they are the least effective detection mechanism in the ecosystem. External parties without any privileged access outperform internal CA monitoring by a factor of six or more. The compliance monitoring function has been effectively outsourced to external parties by default, mostly without anyone deciding that was the right architecture.
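
For concreteness, the arithmetic is easy to reproduce. Here is a minimal sketch against a hypothetical CSV export of the incident corpus; the file name, the detected_by column, and its labels are illustrative assumptions, not the Observatory's actual schema:

```python
# A sketch of the detection-share arithmetic, assuming a hypothetical CSV
# export with a `detected_by` column whose labels include "root-program",
# "external-tool", and "ca-self". Path, column, and labels are assumptions.
import pandas as pd

incidents = pd.read_csv("incidents.csv")  # hypothetical export

# Fraction of the corpus attributed to each detection source.
shares = incidents["detected_by"].value_counts(normalize=True)

self_rate = shares.get("ca-self", 0.0)
external_rate = shares.get("root-program", 0.0) + shares.get("external-tool", 0.0)

# With the figures above: 0.09 self-detection is roughly one incident in
# eleven, and (0.52 + 0.14) / 0.09 is a bit over a 7x external advantage.
print(f"CA self-detection: {self_rate:.0%} (about one in {1 / self_rate:.0f})")
print(f"External vs. internal detection: {external_rate / self_rate:.1f}x")
```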

Everything else in the data follows from that.

The shift in failure classes is instructive. Technical misissuance has declined as a share of incidents over the past decade; what has grown is the process layer. In 2019, governance failures represented 21% of all incidents. By 2025 that figure was 60%. Policy violations, CPS failures, missed disclosure deadlines: these are, by definition, the things internal compliance programs should be catching. The 260 incidents tagged policy-failure or disclosure-failure in the dataset are a direct indictment of internal compliance operations. A CA that violates its own documented policy is not being surprised by an external attacker.
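
The trend itself is a small computation once each incident carries a date and a failure class. A sketch, again against a hypothetical export; the assumed opened and failure_class columns, and the "governance" label, stand in for whatever class scheme the dataset actually uses:

```python
# A sketch of the trend behind the 21% -> 60% figure, assuming hypothetical
# `opened` (date) and `failure_class` columns with a "governance" label; the
# real tags (policy-failure, disclosure-failure, ...) would map onto it.
import pandas as pd

incidents = pd.read_csv("incidents.csv", parse_dates=["opened"])
incidents["year"] = incidents["opened"].dt.year

# Per-year share of incidents whose failure class is governance.
is_governance = incidents["failure_class"] == "governance"
governance_share = is_governance.groupby(incidents["year"]).mean()
print(governance_share.loc[2019:2025])
```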

The oversight picture is also worth examining. In 2017, Mozilla engaged with 79% of Bugzilla compliance bugs. Chrome had no formal root program yet and was near zero. By 2025 the picture had reversed and degraded simultaneously. Chrome now contributes the dominant share of oversight engagement but covers only 18% of incidents. Mozilla covers 8%. The total corpus has roughly doubled since 2017 while combined meaningful oversight coverage has fallen by two-thirds.

The Chrome Root Program launched in 2021, and its effect on the governance landscape is visible in the data — Chrome has made 239 substantive oversight comments in recent years versus Mozilla’s 158 over the same period. The center of gravity in CA compliance governance has shifted to the browser with 78% market share. That is structurally significant. Microsoft, which operates the largest trust store by root count at 346 trusted roots, has made zero recorded governance comments across all 1,690 incidents spanning 11 years.
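
Coverage here is a join rather than a simple count: an incident counts as covered in a given year if at least one substantive comment from a program landed on it. A sketch of that metric, assuming two hypothetical tables that share an incident_id; neither schema is the Observatory's actual one:

```python
# A sketch of the coverage metric, assuming one row per incident (with a
# `year`) and one row per substantive oversight comment (with a `program`
# such as "mozilla" or "chrome"). Both schemas are assumptions.
import pandas as pd

incidents = pd.read_csv("incidents.csv")          # incident_id, year, ...
comments = pd.read_csv("oversight_comments.csv")  # incident_id, program, ...

totals = incidents.groupby("year")["incident_id"].nunique()
engaged = (
    comments.merge(incidents[["incident_id", "year"]], on="incident_id")
    .groupby(["year", "program"])["incident_id"]
    .nunique()
    .unstack(fill_value=0)
)
coverage = engaged.div(totals, axis=0)
print(coverage)  # e.g. coverage.loc[2025, "chrome"] would be ~0.18
```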

The distrust history is also clarifying. The common mental model is that CAs get removed for catastrophic technical failures. The data does not support that model. Fourteen of the 16 distrust events involve compliance operations failures. The behavioral taxonomy matters: negligent noncompliance, willful circumvention, demonstrated incompetence, and argumentative noncompliance. In 10 of the 16 cases, the distrust event was preceded by a documented pattern of prior incidents. The median runway from first incident to distrust is 3.2 years. The failures were not hidden. They were in Bugzilla the whole time. The CA just was not resolving them systematically.
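
The runway figure is two joins on CA identity. A sketch, with hypothetical ca, opened, and distrusted columns standing in for however the dataset actually records these events:

```python
# A sketch of the runway calculation, assuming hypothetical tables: incidents
# with `ca` and `opened` dates, and distrust events with `ca` and `distrusted`
# dates. Neither schema is claimed to be the Observatory's actual one.
import pandas as pd

incidents = pd.read_csv("incidents.csv", parse_dates=["opened"])
distrusts = pd.read_csv("distrust_events.csv", parse_dates=["distrusted"])

# Earliest recorded incident per CA, aligned against its distrust date.
first_incident = incidents.groupby("ca")["opened"].min()
runway = distrusts.set_index("ca")["distrusted"] - first_incident

# Median years from first incident to distrust (~3.2 in the dataset).
print((runway.dt.days / 365.25).median())
```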

That means distrust is largely predictable given sufficient data. The indicators show up well before the outcome. That is a sobering observation about past oversight and a useful one for anyone thinking about what the compliance monitoring function should actually do.

The Observatory is a measurement tool, not a verdict. The dataset has limits — Bugzilla under-represents incidents that never reach public disclosure, CT-derived issuance volumes reflect only unexpired certificates at the time of measurement, and the behavioral taxonomy applied to distrust events involves judgment calls. But the patterns are robust enough to be useful.

For CA operators, the detection data alone should prompt hard questions about internal monitoring coverage. For root programs, the oversight gap data quantifies a scaling problem that is currently being absorbed by Chrome without anyone having explicitly decided that is the right architecture. For the policy community, the shift from technical to governance failures as the dominant incident class has direct implications for what audit frameworks should actually measure.

The dashboard is live at webpki.systematicreasoning.com, updated daily. The methodology is documented. Pull requests are welcome.
