Monthly Archives: March 2026

We Built It With Slide Rules. Then We Forgot How.

My father grew up on a subsistence farm, the kind that raised chickens and grew just enough to get by. Farmers were the original hackers. You couldn’t wait for the right tool or the right expert. You fixed what was broken with what you had, because the alternative was worse.

As a kid he taught himself rocket chemistry. Not from a kit. From whatever he could source locally. He was trying to make things burn hotter and fly farther, adjusting mixtures through trial and error long before he had words like specific impulse or oxidizer ratio for what he was doing.

The materials weren’t exotic. Potassium nitrate sold as stump remover. Sulfur and charcoal. Mix them correctly and you have black powder, the same oxidizer-fuel logic underlying every solid rocket motor ever built. More ambitious builders used potassium perchlorate from chemical suppliers, mixed with aluminum powder or sugar to control burn rate and energy density. All of it over the counter. All of it accessible to someone willing to read carefully and try things until they worked.

He wasn’t following a plan. He was just that kind of person.

Most people have forgotten that the Air Force had its own space program before NASA existed. NASA was carved out of NACA in 1958, but the Air Force had been running parallel efforts since the mid-1950s. That generation had grown up on science fiction and wanted to see it happen. When Sputnik launched in October 1957 the country went into a low-grade panic about whether it understood physics well enough to survive, and suddenly the kids who had been dreaming about space since they could read had somewhere to go with it. What followed was one of the rare moments in American history when technical aptitude was a genuine class elevator. The government needed people who understood this stuff badly enough to find them wherever they were.

He enlisted in his early twenties, aerospace degree in hand. The Air Force space program was what he was aiming at. He ended up working on attitude control thrusters for reconnaissance satellites, the kind that could resolve fine surface detail on Earth from hundreds of miles up. For that mission attitude control wasn’t a secondary problem. It was the central one. A camera that can’t hold still is useless. The thrusters are what made the intelligence possible. The underlying engineering was the same problem he had been teaching himself: oxidizer, fuel, combustion geometry, now controlled to tolerances that left no margin.

I remember him watching a satellite reenter on the cable news when I was young. I don’t know which one or exactly what year. What I remember is that he cried. He told me later there was a plate on that satellite with his name engraved on it. Work he had done, hardware he had touched, in orbit for years and now gone. Grief with no adequate audience, because the context was secret and the people who would have understood were scattered across programs that didn’t officially exist.

Years later my father was excited watching Iridium, Motorola’s commercial satellite constellation, begin launching in 1997. The same fundamental technology, now accessible to anyone with a phone. His generation had figured out how to do this, quietly, under classification, and here it finally was in the open. The knowledge had propagated. Just not through the channels that were supposed to carry it.

He kept a green chalkboard in the garage. He would pull out his slide rule and work through things with me. Orbital decay, thrust, specific impulse, delta-v, the rocket equation and why it makes everything harder than it looks. He had a worry he came back to often – society had forgotten how to go to the moon. The knowledge existed in aging engineers and partially classified documents and it was not being transmitted. The chalkboard was what he could do about that.
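To see why the rocket equation makes everything harder than it looks, you only need the exponential: the mass ratio grows exponentially with the delta-v you need, so the propellant fraction runs away fast. A minimal sketch, with illustrative numbers rather than anything from his programs:

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        // Tsiolkovsky rocket equation: delta-v = Isp * g0 * ln(m0/mf).
        // Solve for the propellant mass fraction a given delta-v demands.
        const g0 = 9.80665 // m/s^2
        isp := 450.0       // seconds, roughly a good hydrogen/oxygen upper stage
        // Illustrative delta-v budgets in m/s, not any particular mission.
        for _, dv := range []float64{3000, 9400, 12500} {
            massRatio := math.Exp(dv / (isp * g0))    // m0/mf
            propellantFraction := 1.0 - 1.0/massRatio // share of initial mass that must be propellant
            fmt.Printf("delta-v %6.0f m/s -> propellant fraction %.0f%%\n", dv, propellantFraction*100)
        }
    }

Roughly 49 percent propellant for 3,000 m/s, closer to 90 percent for the bigger budgets. That curve is the tyranny he was pointing at.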

Last year Destin Sandlin, an aerospace engineer who describes himself as a redneck from Alabama, walked into a room full of the most senior people in American space policy and did something worth an hour of your time to watch. He asked questions that people inside the institutional food chain had stopped asking. Starting with the most basic one: how many rockets does it take to fuel the Artemis lunar lander?

The room went quiet. Nervous laughter. Public estimates have varied depending on assumptions about boil-off and reuse, but they all point to a strikingly high number of launches and on-orbit refueling operations before a landing attempt, and nobody in the room had a confident answer.

These are not uninformed people. A core operational parameter of their own mission architecture was not common knowledge among the people running it.

Then Destin asked the room a simpler question.

“Is this the simplest solution?”

Silence.

Destin pointed them at NASA SP-287, a document the Apollo engineers wrote and left behind specifically so the next generation wouldn’t have to rediscover everything from scratch. The title is “What Made Apollo a Success.” It has been sitting there, public, for decades. Most of the people in that room had not read it.

The principle at the center of that document is blunt:

“Build it simple and then double up on as many components or systems so that if one fails, the other will take over.”

Simple first. Then redundant. Not complex and hoping.

Simple isn’t just aesthetic preference. Simple is how you keep the system inside your head. Simple is how you build procedures all the way down to bolt cutters and still know what comes next. When a system gets complex enough that a room full of its leaders can’t answer a basic operational question about it, it has exceeded the boundary of what they actually understand. They are renting the complexity along with the capability.

The Apollo engineers meant it literally. When designing the ascent stage separation, the mechanism that gets astronauts off the lunar surface, they didn’t stop at one solution or two. They built redundancy on top of redundancy. Flip the switch. If that fails, go outside and trip the manual release. If that fails, depressurize, suit up, go to the bottom of the spacecraft with bolt cutters, and cut the straps holding the stages together. Harrison Schmitt said there was one more procedure after the bolt cutters. Nobody would say what it was.

That’s not genius. That’s a chicken farmer’s epistemology applied to the hardest engineering problem humans had ever attempted. You don’t wait for perfect conditions or perfect knowledge. You start simple, you build every fallback you can think of, and then you think of one more.

Destin argues that Artemis didn’t follow that logic. The NRHO/Gateway architecture was publicly justified on communications, surface access, stability, and operational grounds, but his read, and he makes a detailed case for it, is that it reflects deeper constraints that accumulated into a more complex solution: an architectural constraint dressed up as a design choice, complexity that piled up because the real constraints couldn’t be named publicly. The result was a room full of program leaders who couldn’t tell you the basic parameters of the system they were running.

That’s what happens when you lose the thread.

Destin also interviewed an engineer who had worked on the lunar landing training vehicle, the machine that taught Apollo astronauts to land in one-sixth gravity by actually putting them in a vehicle where their life depended on getting it right. Destin asked whether the Apollo engineers were smarter than engineers today. The answer was no. What they had wasn’t superior intelligence. It was a bias toward doing, toward simplicity, toward keeping the system inside human heads rather than delegating it to complexity they couldn’t fully reason about.

NASA SP-287 exists because those engineers understood something important. Capability doesn’t survive on its own. Knowledge doesn’t transmit automatically. You have to codify it deliberately or it dies with the people who held it. It is ownership made explicit. Here is what we understood. Here is why it worked. Here is the playbook so the next generation doesn’t have to rediscover it at the cost of lives.

The space race created a machine for turning hands-on knowledge into national capability. It found people like my father wherever they were because it needed what they had already taught themselves. It was the on-ramp, the forcing function that pulled curiosity into programs that mattered and gave it somewhere to go. That same forcing function generated SP-287, the discipline to write it down, the institutional pressure to transmit it. When the race ended the machine stopped. The on-ramp closed. The knowledge didn’t vanish immediately. It aged out, program by program, engineer by engineer, panel by panel. What remained was credentials and institutional memory of having once known how, which is a different thing entirely from knowing how.

We took that gift and built a lunar return architecture that, at least in its public form, often looks more operationally intricate than the Apollo playbook would have preferred. More complex architecture. Estimates ranging from eight to fifteen or more rockets just to fuel the lander. A room full of its leaders who hadn’t read the playbook.

“Is this the simplest solution?”

Silence.

That’s not an aerospace problem. That’s the pattern. The knowledge transmission problem is older than aerospace. I’ve been writing about it in other contexts for a while, starting here.

My father spent my childhood pointing at this from a chalkboard in a garage. I didn’t become an astronaut. That was his hope, not my path. The chalkboard worked anyway. The knowledge moved. The Iridium launches proved it. The knowledge his generation developed under classification eventually became infrastructure anyone could hold in their pocket. You can’t fully control where it lands. You can only decide whether to try.

Now AI is doing to software what the end of the space race did to aerospace. It is consuming the early career tasks that used to serve as scaffolding for building judgment. The debugging, the boilerplate, the routine iteration that taught tradeoffs and edge cases before anyone trusted you with the hard problems. The visible work disappears first. The tacit knowledge becomes unreachable just as it becomes most important. The on-ramp closes. And at some point a room full of senior people goes quiet when someone asks a basic operational question, not because they’re uninformed, but because the complexity was delegated before the understanding had time to form.

That is the cautionary tale. Not that AI is bad. That capability outsourced before it is understood leaves you renting decisions you don’t control while keeping consequences you can’t transfer. The room goes quiet. And eventually nobody even thinks to ask whether this is the simplest solution.

My father saw it coming. That’s what the chalkboard was for.

The question isn’t whether you work in aerospace or software. It’s whether you’ve stopped asking basic questions about the system you’re running. Whether it has exceeded the boundary of what you actually understand. Whether you’re renting complexity along with capability and calling it progress.

You don’t wait for perfect knowledge. You read every playbook you can find. You build redundancy all the way down to bolt cutters. And then you think of one more thing.

The chemicals are still on the shelves. SP-287 is still public. The Destin talk is an hour of your time and worth every minute.

Read the playbook.

The WebPKI and Client Authentication Are at a Crossroads

The CA/Browser Forum is having its first serious conversation about whether publicly trusted client authentication certificates deserve their own Baseline Requirements. Nick France kicked off the discussion on the public list last week, asking for concrete use cases, and the responses so far have been a useful window into how the industry thinks about this problem. Or rather, how it doesn’t.

The timing isn’t accidental. Chrome Root Program Policy v1.6 is forcing a structural realignment of the WebPKI, and client authentication is caught in the middle. All PKI hierarchies in the Chrome Root Store must now be dedicated solely to TLS server authentication. Chrome stopped accepting new intermediate CA applications with mixed EKUs in June 2025, and by June 15, 2026, Chrome will distrust any newly issued leaf certificate containing clientAuth EKU from a Chrome Root Store hierarchy. Multi-purpose roots get phased out entirely. Mozilla, Apple, and Microsoft are all aligning with this direction. Every major public CA has published a sunset schedule. Sectigo stopped including clientAuth by default in September 2025, DigiCert followed in October, and Let’s Encrypt is phasing it out through ACME profiles. By mid-2026, you will not be able to get a publicly trusted TLS certificate that also works for client authentication.

This is the right call. The historical practice of stuffing both serverAuth and clientAuth into the same certificate, from the same hierarchy, created exactly the kind of entanglement that makes the WebPKI brittle. The SHA-1 migration is the canonical example. Payment terminals that relied on client auth from the same roots as server certs couldn’t upgrade, holding back the entire transition for years. Today, Cisco Expressway is the poster child for the same problem, using a single certificate for both server and client auth in SIP mTLS connections and scrambling to decouple them before the deadline. Dedicated hierarchies for dedicated purposes. It’s a principle the WebPKI should have enforced from the start.

What to do about it

What’s emerging is a clearer, more honest WebPKI, but one with a gap that nobody is cleanly addressing. If you’re currently relying on publicly trusted certificates for client authentication, the path forward depends on your use case.

If the client auth is internal to your organization (VPN access, Wi-Fi onboarding, device authentication, mTLS between your own services), you should be moving to private PKI. This was always the right answer for internal use cases, and modern private CA solutions have made it far more practical than it used to be. You get full control over certificate profiles, lifetimes, and revocation without being subject to external root program policy changes. The blast radius of a private CA is contained to your organization, which is exactly what you want for internal trust.

If the client auth is between your organization and a small number of known partners, like B2B API integrations or supply chain connections, private PKI still works well. You exchange trust anchors with your partners and configure your systems to trust their specific CA. This is how most of these integrations should have been built in the first place. The “convenience” of using publicly trusted certs for this was always a false economy, because you were accidentally opening your trust boundary to every entity that could buy a cert from the same CA.

But if the client auth needs to work across organizational boundaries at scale, meaning you can’t reasonably pre-configure trust anchors for every potential counterparty, this is where it gets interesting and where the current alternatives fall short. Private PKI doesn’t solve this. You need some form of shared trust anchor, which is what public PKI provides for server authentication today. The question is whether a similar model can work for client authentication with properly scoped identifiers and validation methods.

The human identity case is the relatively easy part

On the CA/B Forum list, Sebastian Nielsen argued that public CAs shouldn’t issue client auth certificates at all, pointing to the name collision problem. He makes a fair point, but the conclusion is too broad. I’m Ryan Hurst the security practitioner, and there’s also Ryan Hurst the actor (Remember the Titans, Sons of Anarchy). A public CA asserting “Ryan Hurst” in a DN doesn’t help a relying party figure out which one of us is authenticating. The DN is a vestige of the X.500 global directory that never materialized. There is no global directory. Even local directories that correspond to DN structures don’t exist in any meaningful density. Identity in the WebPKI belongs in the SAN, where we have identifiers that are both globally unique and reachable.

S/MIME already handles the human case correctly. The rfc822Name in the SAN is at least unique at the time of issuance. More importantly, it’s reachable. You can send a challenge to an email address and get a response. You can’t send a challenge to a social security number. You can’t send a challenge to “Ryan Hurst, US.” The broad intent of the WebPKI is to make things reachable in an authenticated way. DNS names and email addresses fit that model. DNs do not.

Even with email, there’s a temporal problem. Addresses get reassigned, domains lapse, providers recycle accounts, and throwaway addresses exist by design. CAs can’t monitor for reassignment, so these are inherently short-lived assertions. The certificate lifetime is the outer bound of your trust in that binding. Broader questions around PII and auditability are really about how Key Transparency can be bolted into the ecosystem. I wrote about that previously.

There is valuable work happening in this space. Ballot SMC015v2 enabling mDLs and EU digital identity wallets for S/MIME identity proofing shows this evolving in a meaningful direction. Client authentication and signed email under S/MIME belong together. Apple has argued that emailProtection EKU should mean mandatory S/MIME BR compliance, closing the loophole where CAs omit email addresses from emailProtection certificates to avoid the BRs. I think that’s the right direction. One nuance worth calling out though. S/MIME bundles signing, authentication, and encryption, and I think that’s right for the first two but not the third. Signing and authentication are real-time assertions that work well as short-lived credentials. Encryption is different. The key is bound to an identifier that may not be durable, and without frequent rotation you risk BygoneSSL-style attacks where a new holder of an email address could access messages intended for the previous one. The encryption case deserves its own careful treatment around key lifecycle and rotation.

Browsers are actively looking to remove client auth from TLS certificates, and I don’t disagree given how poorly specified and unconstrained it has been. That signals whatever comes next needs to be much more tightly defined. The human client auth case is covered by S/MIME, browser-based client auth is on its way out for good reason, and a new working group doesn’t need to revisit the human case.

The machine identity gap

Where it gets interesting is cross-organizational service-to-service authentication on the public internet. Today this is mostly handled with API keys, OAuth client credentials, or IP allowlisting, all with well-known limitations. mTLS with publicly trusted client certs could fill a real gap, but only if the identity model is built correctly.

Many current uses of mTLS with publicly trusted client certs are misplaced. Organizations are often assuming a level of assurance they don’t actually get when they accidentally cross security domains by relying on the public WebPKI for what is fundamentally a private trust relationship. A publicly trusted cert for payments.example.com tells you that the entity controlling that domain authenticated, nothing more. It does not mean they are your trusted partner, your approved vendor, or anyone you intended to grant access to. Public trust gives you authenticated identity, not authorization. Organizations that conflate the two will accidentally open up access based solely on someone having obtained a client cert. The examples collected on the list so far, Cisco Expressway and EPP, are mostly legacy compatibility problems being fixed. A working group built on those foundations would produce weak Baseline Requirements.
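To make that distinction concrete, here is a minimal Go sketch of the relying-party side: the TLS layer authenticates the client certificate by chaining it to trusted roots, and a separate, explicit allowlist makes the authorization decision. The hostname, allowlist, and file names below are hypothetical stand-ins, not a recommendation of any particular policy mechanism.

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "errors"
        "log"
        "net/http"
    )

    func main() {
        // Hypothetical allowlist: authentication proves control of a name;
        // authorization is a separate policy decision the relying party still owns.
        authorized := map[string]bool{
            "payments-api.example.com": true,
        }

        srv := &http.Server{
            Addr: ":8443",
            TLSConfig: &tls.Config{
                // Authentication: require a client certificate and verify its chain.
                // With ClientCAs left nil, Go falls back to the system root store,
                // i.e. the publicly trusted case discussed above.
                ClientAuth: tls.RequireAndVerifyClientCert,
                VerifyPeerCertificate: func(rawCerts [][]byte, chains [][]*x509.Certificate) error {
                    if len(chains) == 0 || len(chains[0]) == 0 {
                        return errors.New("no verified client chain")
                    }
                    leaf := chains[0][0]
                    // Authorization: the SAN tells us who connected; the allowlist
                    // decides whether that identity actually gets access.
                    for _, name := range leaf.DNSNames {
                        if authorized[name] {
                            return nil
                        }
                    }
                    return errors.New("authenticated client is not on the allowlist")
                },
            },
        }
        log.Fatal(srv.ListenAndServeTLS("server.pem", "server.key"))
    }

The point of the sketch is the separation: dropping either half, or collapsing them into one, is how the accidental trust-boundary openings happen.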

The better foundation is the emerging need for authenticated service-to-service communication across organizational boundaries. Consider SMTP. Mail servers already authenticate to each other over the public internet using TLS, and MTA-STS is pushing that toward authenticated connections. The logical next step is mutual authentication, where the receiving mail server can cryptographically verify the sending server’s identity, not just the other direction. SMTP and mTLS go together like peanut butter and jelly, but there’s no clean way to do it with publicly trusted client certs today. Or consider vendor supply chains. If a manufacturer’s procurement system needs to query a supplier’s inventory API, or a logistics provider needs to authenticate to a retailer’s fulfillment service, the options today are API keys, OAuth flows, or standing up an industry-specific trust framework just so machines can talk to each other. mTLS with publicly trusted client certs would let these systems authenticate directly, without building bespoke trust infrastructure for every partnership.

And this need is accelerating beyond any single industry. As AI agents increasingly act as user agents on the open internet, calling APIs, negotiating with services, and transacting across organizational boundaries on behalf of users, mutual authentication between machines that have no pre-established trust relationship is becoming a practical necessity, not a theoretical concern. You can’t pre-configure trust anchors for every service an agent might need to interact with any more than you can pre-configure them for every website a browser might visit. I wrote about this dynamic previously, and the trajectory is clear. The machine-to-machine authentication problem on the open internet is starting to look a lot like the server authentication problem that the WebPKI was built to solve, just in both directions.

For machines, the name collision problem largely disappears. DNS names are globally unique by design. A client cert with a dNSName SAN of payments-api.example.com or registry-client.registrar.example.net doesn’t have an ambiguity problem. The relying party knows exactly what organization controls that name. Nick’s original question on the list asked about what parts of the DN the relying party verifies. I’d argue that’s almost the wrong framing. There is no global X.500 directory. The question should be, what SAN types are needed, and what validation methods can we define for them?

For straightforward service identification, dNSName works today with no new validation methods needed.

  • payments-api.example.com
  • erp-connector.supplier.example.net
  • registry-client.registrar.example.com

For more expressive service identification, uniformResourceIdentifier SANs encode not just the organization but the specific service.

  • https://example.com/services/payments
  • urn:example:service:billing:v2

This URI-based approach isn’t speculative. SPIFFE already uses URI SANs (spiffe://cluster.local/ns/production/sa/checkout) to represent service identities in Kubernetes mTLS contexts. The pattern is proven and widely deployed within private PKI. Extending it to public trust for cross-organizational federation is a natural evolution of an approach the industry has already validated. URI SANs can be validated through .well-known challenge methods (like ACME HTTP-01 scoped to a URI path) and ALPN-based methods, extending battle-tested ACME-era infrastructure rather than building from X.500-era assumptions.
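As a small sketch of what the relying-party check looks like with identity in the SAN rather than the DN, here is Go code that parses a client certificate and matches its URI SANs against an expected service identifier. The spiffe://example.com identifier and the file name are hypothetical, not part of any proposal.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    // expectedService is a hypothetical URI SAN the relying party expects,
    // in the SPIFFE-like form discussed above.
    const expectedService = "spiffe://example.com/services/payments"

    func main() {
        pemBytes, err := os.ReadFile("client.pem")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // Identity lives in the SAN, not the DN: check URI SANs first, then dNSName.
        for _, uri := range cert.URIs {
            if uri.String() == expectedService {
                fmt.Println("matched URI SAN:", uri)
                return
            }
        }
        for _, name := range cert.DNSNames {
            fmt.Println("dNSName SAN present:", name)
        }
        fmt.Println("no matching service identity found")
    }

Nothing in that check ever looks at the subject DN, which is the framing shift the working group would need to start from.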

What the industry is doing instead

Almost all the CA and vendor messaging right now says “move to private PKI.” That’s the right answer for internal use cases, but it doesn’t address cross-organizational trust. The most interesting alternative emerging is the DigiCert X9 PKI, launched in partnership with ASC X9, the financial standards body. X9 PKI is a completely independent trust framework, governed by X9’s policy committee rather than the CA/Browser Forum or browser root programs. It supports both clientAuth and serverAuth EKUs, uses a common root of trust for cross-organizational interoperability, and is WebTrust audited. It’s specifically designed for the financial sector’s mTLS needs, though they’re expanding to other sectors.

X9 PKI is essentially a “public PKI that isn’t the WebPKI” for service-to-service auth. It validates the premise that there’s a real need for cross-organizational client authentication with a shared trust anchor. But it’s sector-specific and governed outside the CA/Browser Forum, which means it doesn’t solve the general case. The EU’s eIDAS QWAC framework is another sector-specific approach. These are workarounds for the absence of a general-purpose, properly scoped public client auth certificate type.

If this moves forward

I’m not advocating for or against a working group at the CA/Browser Forum. But if the Forum does decide to take this on, the scope needs to be narrow IMHO. Machine and service client auth only, with identity in the SAN using dNSName and uniformResourceIdentifier. DN fields should not be relied upon for authentication decisions. Validation methods should build on existing domain control mechanisms. Human client auth stays in S/MIME where it belongs. The BRs should address the authentication versus authorization distinction explicitly, so relying parties understand that a publicly trusted client cert tells them who is connecting, not whether that entity should be granted access. This is already how server certificates work, and client auth should follow the same model. And the issuing CAs need to be dedicated, separate from server auth hierarchies. The SHA-1 payment terminal debacle, the Cisco Expressway mess. Every time client and server auth are entangled in the same hierarchy, one use case holds back progress on the other. Don’t repeat that.

The bigger picture

What we’re watching is a structural realignment of the WebPKI’s purpose. The WebPKI is being narrowed to mean “TLS server authentication for web browsers,” full stop. Everything else, client auth, S/MIME, code signing, is being pushed to dedicated hierarchies, private PKI, or alternative trust frameworks. That’s mostly the right direction. But the service-to-service authentication gap is real, growing, and not well served by any of the current alternatives. Private PKI doesn’t solve cross-organizational trust. X9 PKI is sector-specific. The CA/Browser Forum has the institutional knowledge, the validation infrastructure, and the trust framework to define something that works here. Whether they choose to is another question.

The conversation is happening now on the public list. If you have concrete use cases for cross-organizational service authentication with publicly trusted client certificates, this is the time to share them. The shape of what comes next depends on whether the use cases justify the effort, and right now the list is thin.

Introducing the WebPKI Observatory

For as long as I have been in this industry, the WebPKI compliance conversation has run on impressions. People with long memories and regular conference attendance have built up a picture of which CAs are well-run, which are struggling, and where the oversight gaps are. That picture has generally been accurate. It has also been almost entirely unmeasured.

The WebPKI Observatory at webpki.systematicreasoning.com, a project from Systematic Reasoning, is an attempt to change that. It’s a public dashboard covering 1,690 compliance incidents drawn from Mozilla Bugzilla between 2014 and 2025, cross-referenced with CCADB membership data, certificate issuance volumes from CT logs, root program trust store compositions, and the complete history of CA distrust events. The goal was simple: replace the shared intuition with actual data, and see what the data shows that intuition missed.

Some of it confirmed what most people in this space already suspected. Some of it was genuinely surprising.

The finding that reframes everything else is detection. When a compliance incident occurs, who finds it? Root programs find 52% of incidents. Automated external tools — CT log monitors, certificate linters, community scanning infrastructure — find 14%. CAs find their own problems in 9% of cases.

That number deserves more attention than it typically gets. One in eleven. CAs have full access to their own issuance systems, their own audits, their own CPSs, their own disclosure obligations, and they are the least effective detection mechanism in the ecosystem. External parties without any privileged access outperform internal CA monitoring by a factor of six or more. The compliance monitoring function has been effectively outsourced to external parties by default, and mostly without anyone deciding that was the right architecture.

Everything else in the data follows from that.

The failure classes that have grown are instructive. Technical misissuance has declined as a share of incidents over the past decade. What has grown is the process layer. In 2019, governance failures represented 21% of all incidents. By 2025 that figure was 60%. Policy violations, CPS failures, disclosure deadline misses. These are by definition things internal compliance programs should be catching. The 260 incidents tagged policy-failure or disclosure-failure in the dataset are a direct indictment of internal compliance operations. A CA that violates its own documented policy is not being surprised by an external attacker.

The oversight picture is also worth examining. In 2017, Mozilla engaged with 79% of Bugzilla compliance bugs. Chrome had no formal root program yet and was near zero. By 2025 the picture had reversed and degraded simultaneously. Chrome now contributes the dominant share of oversight engagement but covers only 18% of incidents. Mozilla covers 8%. The total corpus has roughly doubled since 2017 while combined meaningful oversight coverage has fallen by two-thirds. The Chrome Root Program launched in 2021, and its effect on the governance landscape is visible in the data — Chrome has made 239 substantive oversight comments in recent years versus Mozilla’s 158 over the same period. The center of gravity in CA compliance governance has shifted to the browser with 78% market share. That is structurally significant. Microsoft, which operates the largest trust store by root count at 346 trusted roots, has made zero recorded governance comments across all 1,690 incidents spanning 11 years.

The distrust history is also clarifying. The common mental model is that CAs get removed for catastrophic technical failures. The data does not support that model. Fourteen of the 16 distrust events involve compliance operations failures. The behavioral taxonomy matters: negligent noncompliance, willful circumvention, demonstrated incompetence, and argumentative noncompliance. In 10 of the 16 cases, the distrust event was preceded by a documented pattern of prior incidents. The median runway from the first incident to distrust is 3.2 years. The failures were not hidden. They were in Bugzilla the whole time. The CA just was not resolving them systematically.

That means distrust is largely predictable given sufficient data. The indicators show up well before the outcome. That is a sobering observation about past oversight and a useful one for anyone thinking about what the compliance monitoring function should actually do.

The Observatory is a measurement tool, not a verdict. The dataset has limits — Bugzilla under-represents incidents that never reach public disclosure, CT-derived issuance volumes reflect only unexpired certificates at the time of measurement, and the behavioral taxonomy applied to distrust events involves judgment calls. But the patterns are robust enough to be useful.

For CA operators, the detection data alone should prompt hard questions about internal monitoring coverage. For root programs, the oversight gap data quantifies a scaling problem that is currently being absorbed by Chrome without anyone having explicitly decided that is the right architecture. For the policy community, the shift from technical to governance failures as the dominant incident class has direct implications for what audit frameworks should actually measure.

The dashboard is live at webpki.systematicreasoning.com, updated daily. The methodology is documented. Pull requests are welcome.

Signed, Auditable, Offline-Tolerant, PQ Secure QR Codes

A few months ago I wrote about what it would take to make a QR code verifiable in a post-quantum world. That post was mostly conceptual. A conversation with Bruno Couillard last week nudged me to write down what I had been carrying around since: what it would take to build one that is genuinely verifiable, not just signed, but auditable, offline-tolerant, and post-quantum ready.

The design draws heavily on the draft for Merkle Tree Certificates, which is working through the IETF right now. MTC is aimed at TLS, but the core insight is that you can replace per-certificate signatures with compact Merkle inclusion proofs against a periodically updated signed root, and that insight translates directly to QR codes once you think carefully about the offline constraint. If you haven’t read it, the draft is at datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs.

The result of applying that idea to the QR problem is MTA-QR, a working implementation of what I’ve been calling Merkle Tree Assertions for QR codes. The demo is live at mta-qr.peculiarventures.com, and the full source is at github.com/PeculiarVentures/mta-qr-demo. There are Go and TypeScript implementations, a browser-only demo that generates and verifies without any backend, and an interoperability test matrix that exercises all three signing algorithms against both runtimes in every combination.

To be clear, this isn’t a production-ready library, but building it helped me identify things I had missed while whiteboarding it in my head.

The size problem is real but solvable

The original post flagged signature size as the central constraint. An ML-DSA-44 signature is 2,420 bytes. A Version 40 QR code at high ECC holds about 1,273 usable bytes. Those two numbers don’t fit in the same sentence without a solution.

The solution is separating what goes in the QR from what you need to verify it. The QR carries the assertion content, a Merkle inclusion proof, and coordinates pointing to a signed checkpoint. The checkpoint itself contains the issuer signature, lives outside the QR, and gets cached on the verifier’s device, typically during a charge cycle before the device ever sees a QR code. Once cached, verification is fully offline.

The proof is the interesting part. A two-level tiled Merkle tree, with an inner batch tree and an outer parent tree, caps the total proof at eight hashes regardless of how large the log grows. Eight hashes is 256 bytes. That’s the ceiling, forever. The QR version stays fixed. The code never gets denser as the issuer accumulates millions of entries.
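For intuition, here is a minimal sketch of what checking an inclusion proof involves: hash the leaf, fold in each sibling hash along the path, and compare the result against the root the cached checkpoint commits to. The domain-separation prefixes and ordering convention are simplified placeholders, not the MTA-QR wire format.

    package main

    import (
        "bytes"
        "crypto/sha256"
        "fmt"
    )

    // verifyInclusion folds a leaf up through its sibling hashes and checks the
    // result against the expected root. isRight[i] says whether the i-th sibling
    // sits to the right of the running hash. Conventions are illustrative only.
    func verifyInclusion(leaf []byte, siblings [][32]byte, isRight []bool, root [32]byte) bool {
        h := sha256.Sum256(append([]byte{0x00}, leaf...)) // leaf hash, 0x00 domain prefix
        for i, sib := range siblings {
            var buf bytes.Buffer
            buf.WriteByte(0x01) // interior-node domain prefix
            if isRight[i] {
                buf.Write(h[:])
                buf.Write(sib[:])
            } else {
                buf.Write(sib[:])
                buf.Write(h[:])
            }
            h = sha256.Sum256(buf.Bytes())
        }
        return h == root
    }

    func main() {
        // Tiny two-leaf tree: root = H(0x01 || H(0x00||A) || H(0x00||B)).
        a, b := []byte("assertion A"), []byte("assertion B")
        ha := sha256.Sum256(append([]byte{0x00}, a...))
        hb := sha256.Sum256(append([]byte{0x00}, b...))
        root := sha256.Sum256(append(append([]byte{0x01}, ha[:]...), hb[:]...))

        fmt.Println("proof verifies:", verifyInclusion(a, [][32]byte{hb}, []bool{true}, root))
        fmt.Println("max proof size with the tiled layout:", 8*sha256.Size, "bytes")
    }

Eight siblings times 32 bytes is where the 256-byte ceiling comes from, no matter how many entries the log accumulates.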

In practice, a Mode 1 QR carrying bearer claims and a Merkle inclusion proof fits comfortably within a Version 10 to 15 code at medium ECC, well under 500 bytes total. ML-DSA-44 doesn’t appear in the QR at all. The issuer signature lives in the checkpoint that the verifier fetched during its last charge cycle.

ML-DSA-44 won’t fit in a single QR in Mode 0, the fully embedded mode where the signature is in the QR itself. Mode 0 is the bootstrap mode: it works on air-gapped verifiers, on paper QR codes printed before any checkpoint infrastructure exists, and for scenarios where prefetch is operationally impractical. It’s not a niche failure case; it’s the starting condition for any new deployment. Mode 0 with PQC will require waiting for NIST to finalize smaller-signature algorithms, or accepting larger QR codes. Mode 1 is the practical path to PQC today.

Offline tolerance is mostly a framing problem

There’s a habit of treating offline verification as binary, either the device has connectivity at scan time, or it doesn’t. That framing creates a false constraint.

Every verifier with a battery has a window where it is stationary, connected, and idle. That’s when it charges. Fetching a checkpoint during a charge cycle is trivially cheap compared to everything else happening during that window. The relevant question isn’t whether the device has connectivity at scan time. It’s whether the assertion being scanned was issued before the verifier’s last checkpoint fetch.

For the common case, the answer is yes. A concert ticket issued last week, a prescription filled this morning, a badge issued at enrollment, all of these predate the verifier’s cached checkpoint by hours or days. Verification is fully offline because the relevant checkpoint was already there.

The narrow failure case is an assertion issued and scanned within the same charge cycle, before any checkpoint fetch. That falls back to a single cache-miss network call, which then covers every subsequent scan of the same batch. One round trip, then fully offline for the rest of the operational period.
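A sketch of that decision flow, with hypothetical types rather than the real checkpoint and assertion formats from the repo: if the cached checkpoint already covers the assertion’s position in the log, verification stays fully offline; otherwise the verifier makes the single cache-miss fetch and continues.

    package main

    import "fmt"

    // Checkpoint is a simplified, hypothetical view of what the verifier caches
    // during a charge cycle: a signed tree head covering entries [0, Size).
    type Checkpoint struct {
        Size uint64 // number of log entries the checkpoint commits to
    }

    // Assertion is the part of the QR payload relevant to this decision.
    type Assertion struct {
        Index uint64 // position in the issuer's log
    }

    // verifyOfflineFirst: the common case is fully offline; the narrow miss
    // (issued after the last prefetch) costs one fetch that then covers every
    // later scan from the same batch.
    func verifyOfflineFirst(a Assertion, cached *Checkpoint, fetch func() (*Checkpoint, error)) error {
        if cached != nil && a.Index < cached.Size {
            return verifyAgainst(a, cached) // fully offline
        }
        fresh, err := fetch() // single cache-miss round trip
        if err != nil {
            return fmt.Errorf("assertion newer than cached checkpoint and fetch failed: %w", err)
        }
        return verifyAgainst(a, fresh)
    }

    func verifyAgainst(a Assertion, c *Checkpoint) error {
        // Placeholder for the real work: check the Merkle inclusion proof and the
        // issuer (and witness) signatures on the checkpoint.
        if a.Index >= c.Size {
            return fmt.Errorf("checkpoint does not cover entry %d", a.Index)
        }
        return nil
    }

    func main() {
        cached := &Checkpoint{Size: 1000}
        fetch := func() (*Checkpoint, error) { return &Checkpoint{Size: 2000}, nil }
        fmt.Println(verifyOfflineFirst(Assertion{Index: 42}, cached, fetch))   // offline path
        fmt.Println(verifyOfflineFirst(Assertion{Index: 1500}, cached, fetch)) // cache-miss path
    }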

Witnessing is where the transparency guarantee actually lives

The issuer’s signature proves the assertion came from a specific key. That’s useful, but it doesn’t prevent a compromised issuer from presenting different views of the log to different verifiers. Split-view attacks are subtle and hard to detect after the fact.

Witnesses solve this. A witness cosigns a checkpoint only after verifying it extends the previous one they saw, establishing a consistency guarantee across the full history of the log. Once multiple independent witnesses have cosigned a checkpoint, the issuer cannot retroactively rewrite or fork the log without those witnesses catching it.

The witness protocol comes from c2sp.org/tlog-cosignature, the same infrastructure underpinning the transparency.dev witness network. I worked on that witness network during my time at Google, so it was never far from my mind when designing this. Connecting MTA-QR to it means the issuance of every assertion can be monitored by parties with no relationship to the issuer. That’s the difference between a signed QR and an auditable one.

The implementation uses Ed25519 for witness cosignatures regardless of what algorithm the issuer uses for checkpoints. That’s not a design choice I made, it’s what the spec requires. It means an issuer can use ML-DSA-44 for the checkpoint signature while the witness infrastructure stays on stable, widely deployed Ed25519 keys. The two concerns are separated cleanly, and that separation matters. The quantum threat to the issuer signature and the operational threat to the witness network are different problems on different timelines.

What I had wrong in the original post

The earlier post mentioned UOV and SQISign as especially promising for QR codes because of their smaller signature sizes. That framing isn’t wrong exactly; smaller signatures do help with the size constraint, and both algorithms are genuinely interesting work. But the NIST competition covering them isn’t finished, which means neither is practical for anything you’d want to deploy or standardize against today. More importantly, once you separate the checkpoint from the payload, signature size matters only for the checkpoint, which isn’t size-constrained anyway. The Merkle structure removes the problem that UOV and SQISign were addressing. They may still have a role in Mode 0 once the standards are settled, but they’re not the lever that makes the design work.

What’s still missing

The spec has a revocation mechanism based on index ranges that a verifier checks at scan time, but the format for distributing and authenticating those revocation lists isn’t fully defined yet. This is the most operationally significant open item. An unsigned revocation list is vulnerable to a stale-list attack at the network layer. An adversary who can delay or suppress list delivery can extend the validity of a revoked assertion. The natural fix is issuer-signed lists using the same key that signs checkpoints, but that format isn’t written yet. Until it is, revocation is a weak link in any deployment that takes revocation seriously.
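The scan-time check itself is trivial, which is part of why the open item is authenticating and distributing the list rather than evaluating it. A sketch with hypothetical types:

    package main

    import "fmt"

    // RevokedRange marks log indices [Start, End] as revoked. The distribution
    // format and its signature are the open item discussed above; this only
    // shows the check a verifier runs against a list it already trusts.
    type RevokedRange struct {
        Start, End uint64
    }

    func isRevoked(index uint64, ranges []RevokedRange) bool {
        for _, r := range ranges {
            if index >= r.Start && index <= r.End {
                return true
            }
        }
        return false
    }

    func main() {
        revoked := []RevokedRange{{Start: 100, End: 150}, {Start: 9000, End: 9000}}
        fmt.Println(isRevoked(42, revoked))  // false
        fmt.Println(isRevoked(120, revoked)) // true
    }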

Type 0x02 key assertions, where the QR proves possession of a private key rather than just embedding bearer claims, are defined in the log entry format but the challenge-response protocol isn’t specified. Two implementations can’t interoperate on key assertions without it.

The C2SP tlog-checkpoint format needs registrations for ECDSA and ML-DSA before those algorithms can interoperate with standard tlog-checkpoint parsers. Ed25519 is fully specified today. ECDSA and ML-DSA work in the reference implementation but aren’t interoperable with external tooling yet. This is a practical blocker for adoption by anyone not using the reference implementation, and it’s the right next conversation to have with the C2SP and MTC communities.

Try it

The browser demo runs entirely in-page with no backend. It generates Ed25519 or ML-DSA-44 keys in your browser, issues assertions, builds the Merkle tree, produces QR codes, and runs the full 15-step verification trace. The tamper panel lets you flip proof bytes, corrupt the TBS, zero the proof, or truncate the payload, and watch exactly which verification step catches each failure. It’s a useful way to build intuition for what the protocol is actually checking and why each step is there.

The repo is at github.com/PeculiarVentures/mta-qr-demo. Pull requests welcome, especially on the open items.

When Compliance Records Become the Only Honest Signal

I’ve been spending a lot of time lately building Systematic Reasoning with my long-time friend Vishal. The core premise is straightforward. Organizations reveal their true operational character through how they design to prevent failure, how they plan to handle it when it happens, and how they actually do. That signal deserves to be tracked, structured, and acted on. We’re building an agentic compliance platform to do exactly that.

Systematic Reasoning won’t be limited to any single domain, but we decided to start with the Web PKI. The reasoning was simple. It’s high impact in a way that’s hard to overstate. Every internet user depends, whether they know it or not, on a relatively small number of Certificate Authorities getting things right. The margin for error is zero. If that trust layer breaks, it breaks for everyone.

DigiNotar is the canonical example. A small Dutch CA, compromised so thoroughly that attackers could impersonate any website on the web, and did. That capability was used to spy on Iranian dissidents, intercepting communications that people believed were private and secure. The trust infrastructure that was supposed to protect them was turned into a weapon against them. DigiNotar isn’t an edge case or a cautionary tale from a more naive era; it’s a demonstration of the actual ceiling of what can go wrong. And it isn’t the only one. State-affiliated certificate authorities have been caught performing man-in-the-middle attacks on their own citizens’ traffic, something the Baseline Requirements explicitly prohibit, but prohibition only matters if it’s enforced. The web’s trust model works right up until the moment someone decides it’s more useful as surveillance infrastructure.

At the core of Systematic Reasoning is a belief I’ve held for a while. Compliance can be a vital sign of organizational security, but only if it’s continuous. The reality today is that it isn’t. Code ships daily. Audits happen annually. The gap between those two rhythms is where things go quietly wrong.

I’ve written before about why I have limited faith in the current audit regime. Auditors are engaged by the organizations they assess. Their product is a clean seal; their incentive is to keep the client. They operate on point-in-time sampling with auditee-selected scope, and they’re often compliance professionals rather than engineers, which means they’re checking whether a policy exists more than whether the system actually behaves correctly. That’s if you’re lucky. Sometimes the audit is scoped against a version of the Baseline Requirements that was superseded over a year ago.

The same incentive shapes how certificate authorities write their governance documents. A CP/CPS that relies heavily on incorporation by reference, that omits specifics about what the organization actually does and what constraints it operates under, is easier to audit against than one that makes precise, testable commitments. Vagueness isn’t always carelessness. Sometimes it’s a design choice. The same thing happens in incident reports. A report that attributes a failure to “organic process evolution” or “human error” without describing the actual control gap is easier to close than one that names the broken system and commits to a specific fix. In both cases the document gets the box checked without creating accountability. References establish authority. Commitments establish accountability.

The audit gap isn’t compensated for by strong internal monitoring either. The majority of significant compliance failures are not caught internally. They are caught by external researchers, root program staff, or community tooling. A broken validation endpoint runs for five years and the organization finds out because someone posted a 404 error in a public issue tracker. A validation race condition exists undetected for seven and a half years not because it was well hidden but because nobody was looking. The absence of an internal alarm is not evidence that the system is healthy. It is often evidence that the monitoring itself is missing.

So public incident reports and governance documents become some of the most signal-rich material available. Policy documents tell you what an organization claims it will do. Incident reports tell you what happened when reality diverged from that claim. Together they create a longitudinal picture that neither document produces alone.

Building a system to reason over that data surfaced a problem I didn’t fully anticipate. When you’re working from the outside, with no access to internal systems and no way to verify what actually changed, the public record is almost all you have. The question isn’t whether to treat it with skepticism. It’s how much skepticism to build in by default.

The temptation is to give the benefit of the doubt. Organizations are required to describe the blast radius of an incident. Not every localized bug is a symptom of something systemic. But accepting minimizing language at face value is its own failure.

“Only” is doing a lot of work when the bug it’s describing went undetected for seven and a half years. “No compromise of end-entities” is doing a lot of work when what it really means is that nobody found the gap before you did. Framing survival as security isn’t reporting, it’s PR. And if an organization believes an incident is no big deal, you can predict with reasonable confidence that the root cause analysis will be shallow and the remediation will be a band-aid.

ForgeIQX, our first offering, tracks those signals longitudinally across both policy documents and incident reports. Not to prosecute organizations for their language choices, but to notice when a commitment made in a CP/CPS quietly disappears in the next version, or when a promised fix is nowhere to be found when the same failure mode surfaces years later. That’s commitment decay, the slow evaporation of a promise made under pressure, and it’s only visible if you’re tracking across multiple documents and incidents over time rather than treating each one in isolation.

The calibration problem is real and doesn’t have a clean answer. Get it wrong in one direction and you build a system that cries wolf. Get it wrong in the other and you build a system that launders PR-speak into clean signals, which is just automating the thing we already do too much of.

There’s a third failure mode that took me longer to see. A system like this can be gamed. Swap “we got lucky” for “our monitoring detected no active exploitation.” Replace “only thirty certificates” with a more clinical impact scoping statement that says the same thing in language that sounds like engineering rigor. The words change; the institutional posture doesn’t. A system that can be satisfied by better prose isn’t measuring operational maturity, it’s measuring communications sophistication.

That means the system has to be built with structural pessimism. Not cynicism for its own sake, but a deliberate prior that clean language is not the same as clean operations, and that the absence of red flags is not the same as the presence of green ones. We can’t verify that an organization fixed what it said it would fix. What we can do is watch whether the same failure mode surfaces again and whether the pattern of shallow root cause analyses continues or breaks. The historical record doesn’t tell us what’s true inside these organizations. It tells us what they were willing to say in public, under pressure, over time. Given the alternatives, that may be the most honest signal available.

A certificate authority with genuine operational maturity should want this kind of scrutiny applied to itself. Not because it will always produce a clean result, but because it surfaces the gaps before an external party does. ForgeIQX gives organizations a way to continuously monitor their own compliance posture, so their practices and code keep pace with their commitments. The same is true for auditors who want their findings to mean something beyond a checkbox. The problem with the current regime isn’t that the people in it are careless. It’s that the incentive structures don’t reward rigor, and the tooling to demonstrate it continuously doesn’t exist. That’s what we’re building.

The Web PKI is where we started because the stakes are concrete and the public record is unusually rich. But any regulated industry where compliance is measured annually, where governance documents are written to satisfy auditors rather than inform relying parties, and where incident reports are drafted with one eye on legal exposure, has the same gap between what the paper says and what the organization actually does. We started here. We don’t intend to stop here.