I’ve been involved in the Web PKI since the mid-‘90s, when SSL certificates carried five- or ten-year lifetimes—long-lasting credentials for an internet that was still a wild west. Issuance was manual, threats were sparse, and long validity fit that quieter era. Thirty years later, we’ve fought our way to a 398-day maximum lifetime—today’s standard as of 2025—thanks in part to Apple’s bold 2020 move to enforce 398-day certificates in Safari, dragging resistant CAs into a shared ballot after years of clinging to the status quo. Yet some certificate authorities, certificate consumers, and industry holdouts still resist shorter lifetimes and tighter data reuse policies, offloading breaches, increased risk, and eroded trust onto users, businesses, and the web’s backbone. This 15-year struggle got us to 398 days; now it’s time to push past it.
Core Argument
The journey to shorter lifetimes spans decades. The TLS Baseline Requirements set a 60-month cap in 2010, but by 2014, internal debates among browsers and CAs ignited over whether such spans were safe as threats ballooned. Progress stalled—pushback was fierce—until Apple threw a wrench in the works. Announced in early 2020 and effective that September, Apple declared that Safari would reject certificates issued after August 31, 2020, with lifetimes exceeding 398 days, blindsiding CAs who had dug in their heels. Only after that jolt did the CA/Browser Forum pass Ballot SC-42 in 2021, codifying 398 days as a shared requirement—proof that CAs wouldn’t budge without external force. Earlier, Ballot 185 in 2017 had proposed cutting lifetimes to 27 months, Ballot SC-22 in 2019 explored short-lived certificates, and Ballot SC-081 in 2025 is expected to reaffirm 398 days as the maximum, with a long-term target of 45–47 days by 2029 (SC-081v2). That’s 15 years of incremental progress, built on 30 years of evolution. Last time, Apple’s push broke CA inertia enough to land us at 398 days, and I’m confident that without that action we would not be where we are today. Yet risks like “Bygone SSL” linger: valid certificates staying with old domain owners after a sale, opening doors to impersonation or chaos.

Automation made this possible—and Apple’s 2020 edict accelerated it. Let’s Encrypt, announced in November 2014 and publicly available by the end of 2015, revolutionized issuance with free, automated certificates; the ACME protocol, drafted then and standardized as RFC 8555 in 2019, turned renewal into a background hum. Today, CAs split into camps: fully automated players like Let’s Encrypt, Google Trust Services, and Amazon, versus mixed providers like DigiCert, Sectigo, and GlobalSign, who blend proprietary and ACME-based automation with manual issuance for some customers. Data from crt.sh suggests over 90% of certificates now use automated protocols like ACME. Apple’s push forced CAs to adapt or lose relevance, yet many clung to old ways, agreeing to 398 only post-ballot. That lag—resisting automation and shorter spans—doesn’t just slow progress; it externalizes risk, burdening the WebPKI with overstretched certificates and outdated practices.
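To show how small that background hum has become, here is a minimal sketch of ACME automation in Go using the golang.org/x/crypto/acme/autocert package; the domain and cache path are placeholders, and a real deployment would add logging and a host policy matching its inventory:

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"golang.org/x/crypto/acme/autocert"
)

func main() {
	// The manager obtains and renews certificates automatically via ACME.
	m := &autocert.Manager{
		Prompt:     autocert.AcceptTOS,                    // agree to the CA's terms
		HostPolicy: autocert.HostWhitelist("example.com"), // placeholder domain
		Cache:      autocert.DirCache("/var/cache/certs"), // persists keys and certs
	}
	srv := &http.Server{
		Addr:      ":443",
		TLSConfig: m.TLSConfig(), // issues and renews in the background
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "hello, automated TLS")
		}),
	}
	// Empty cert and key paths: autocert supplies certificates on demand.
	log.Fatal(srv.ListenAndServeTLS("", ""))
}
```

The loop that fetches the first certificate is the same one that renews it as expiry approaches, which is why shrinking lifetimes from 398 days to 45 changes the cadence of that loop, not the operational work.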

What Problem Are We Solving Anyway?
Well, for one, certificates are snapshots of a domain’s status at issuance; that 13-month span lets changes—like ownership shifts or domain compromises—linger unreflected, while 45 days would keep them current, shrinking an attacker’s window from over a year to mere weeks. “Bygone SSL” proves the point: when domains change hands, old owners can hang onto valid certificates—sometimes for years—letting them spoof the new owner or, with multi-domain certs, trigger revocations that disrupt others. History teaches us that reusing stale validation data—sometimes months old—leads to misissuance, where certificates get issued on outdated or hijacked grounds. Tighter allowed reuse periods force regular revalidation, but when CAs or companies slack, the ecosystem bears the cost: spoofed domains impersonating legit sites, breaches exposing sensitive data, and a trust system strained by systemic hits.
Browsers show us the way—back in the ‘90s, updates came on floppy disks on magazine covers, a manual slog that left users exposed until the next trip to the store; today, automatic updates roll out silently, patching holes and keeping security tight without a fuss. Certificates should mirror that: automated renewal via ACME or proprietary tools manages 398 days now and could handle 45 effortlessly, shedding the old manual grind—an incremental evolution already underway. Yet some cling to slower cycles, offloading risk—leaving the WebPKI vulnerable to their refusal to fully embrace automation’s promise. The proof’s in the pudding—Kerberos rotates 10-hour tickets daily in enterprise networks without a hitch; ACME brings that scalability to the web. Legacy systems? Centralized solutions like reverse proxies, certificate management platforms, or off-device automation bridge the gap—technical excuses don’t hold.
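For legacy fleets that cannot run an ACME client themselves, the centralized pattern can start as an inventory sweep that flags certificates entering their renewal window; here is a hedged Go sketch where the hostnames and the renew-at-one-third-remaining threshold are illustrative assumptions, not a prescribed standard:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Hypothetical inventory a management platform would sweep.
	hosts := []string{"legacy-app.example.com:443"}
	for _, host := range hosts {
		conn, err := tls.Dial("tcp", host, &tls.Config{})
		if err != nil {
			fmt.Printf("%s: unreachable: %v\n", host, err)
			continue
		}
		leaf := conn.ConnectionState().PeerCertificates[0]
		conn.Close()

		lifetime := leaf.NotAfter.Sub(leaf.NotBefore)
		remaining := time.Until(leaf.NotAfter)
		// Flag for renewal once a third of the lifetime remains: with
		// 398-day certificates this fires around day 265; with 45-day
		// certificates, around day 30. Same sweep, shorter cadence.
		if remaining < lifetime/3 {
			fmt.Printf("%s: renew now (%.0f days left)\n", host, remaining.Hours()/24)
		} else {
			fmt.Printf("%s: ok (%.0f days left)\n", host, remaining.Hours()/24)
		}
	}
}
```

Off-device automation then completes the bridge: the platform renews through ACME or a CA API and pushes the new certificate to the appliance, so the legacy endpoint never has to speak ACME at all.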
We’ve hit 398 days, but Zeno’s Dichotomy still grips us: advocates push for shortening, hit “not ready,” and stall at the current max—halving the gap to robust security without ever closing it. Each delay lets inertia shift risk onto the system.

Critics’ Refrain
Critics cling to familiar objections. “Legacy systems can’t handle frequent renewals”? Centralized automation—proxies, management tools, off-device solutions—proves otherwise; their inertia spills risk onto the ecosystem. “Smaller players face a competitive burden,” implying the web should shoulder that risk? Shared tools and phased transitions even the odds, yet their lag, like SHA-1’s slow death, threatens everyone. “Why not focus on revocation, DNSSEC, or key management instead”? Revocation’s a pipe dream—three decades of flops, from CRLs to OCSP, show it crumbling at scale, with privacy holes, performance drags, and spotty enforcement, as DigiNotar’s failure left unpatched clients exposed. DNSSEC and key management complement, not replace—shorter lifetimes cut exposure fast, while those build out. “It’s too rapid”? Two decades of automation—from proprietary solutions to ACME—and 15 years of debate say no; 398 days took effort, and 45–47 is next. “We’re not ready”? That’s an impossible hurdle—security leaps like the SHA-1-to-SHA-2 migration and TLS 1.3 came by diving in, not waiting, just as parents figure out diapers post-birth. Stalling at 398 doesn’t shield risk—it dumps it on the rest.
Pushing Beyond 398 Delivers Concrete Gains When Inertia’s Beaten:
| Benefit | Description |
| --- | --- |
| Enhanced Trustworthiness | Frequent renewals keep data current, cutting misissuance—laggards can’t dump stale risks on the WebPKI. |
| Shorter Exploitation Window | 45 days caps attacks at weeks, not 398 days—orgs can’t offload longer threats. |
| Lower Misissuance Risk | Tight reuse forces fresh checks, slashing errors CAs push onto the system. |
| Rapid Policy Transition | Quick shifts to new standards dodge inertia’s drag, keeping the PKI sharp. |
| Stronger Baselines | 90%+ automated renewals set a secure norm—holdouts can’t undermine it. |
| Collective Accountability | Deadlines force modernization, ending the free pass where a few parties’ inaction puts everyone at risk. |
Conclusion
Shorter lifetimes and tighter reuse periods break the cycle: fresh data, capped risk, no more offloading. A phased, deadline-driven approach, like SC-081’s framework (targeting shorter spans by 2029 in SC-081v2), forces the industry to adapt, hones automation where needed, and drives security forward—waiting five more years just fattens the risks we’ve already outgrown.
How does inertia externalize risk in the WebPKI? When CAs lean on stale data, companies settle for 398 days, and stragglers resist progress, they turn trust into a punching bag—ripe for abuse. Thirty years in, with 398 days locked and over 90% automated, the tools sit ready—only will falters.
Zeno’s half-steps got us here, but “not ready” is a fantasy—no one masters security before the plunge, just as parents don’t ace diapers pre-birth; we’ve evolved through every shift this way. Browsers don’t wait for floppy disks anymore—certificates can’t linger on yesterday’s pace either. I’ve watched the WebPKI battle from the Wild West to now—let’s break inertia’s grip with deadlines that stick and lock in 45 days to forge a trust that outlasts the past’s failures.
Revocation has never really been worked on. With the first round of shorter lifetimes, Google wasn’t able to handle OCSP in its browser code at all, delegating the problem to the ecosystem (onto others’ shoulders, to use your phrase). The big players enforcing shorter lifetimes run or sponsor the automation solutions, as automation is their holy grail for everything, even though there are many indicators that automation raises problems of its own. As you mention, 45 days is next—and then what? Days? Hours? Minutes? Always spinning up workarounds instead of focusing on the direct problem itself. Revocation could be handled if anyone were willing to do so. Instead of privacy, you accept exploding CT logs and hackers being able to spy on crt.sh or similar tools to find attack surfaces. There are always issues, and it’s time to address them—time to address the root cause instead of the typical do-first, ask-later approach. There are reasons why companies don’t use automation. As you also stated before, we’re talking about 10% of the internet, and you might expect only some lower percentage of that to face problems—so a low percentage overall—and that urges you to reduce lifetimes? The past failures are that 3 years didn’t fit, 2 years didn’t fit, 1 year didn’t fit, 90 days for 90% didn’t fit—so you want to continue that way, even though you haven’t reached your goal yet, and call that outlasting past failures? Maybe that was the failure?!
I appreciate your perspective, but I must disagree with the notion that revocation was “never really worked on.” In fact, there have been over three decades of focused efforts addressing certificate revocation challenges. Here’s a concise, fact-based timeline that highlights a few notable examples—though it’s not an exhaustive account:
1990s: Early work involved defining OCSP and experimenting with partitioned CRLs to manage revocation across various environments. At this time, certificate revocation trees—exemplified by Kocher’s work—offered a bandwidth-efficient approach, though patent issues ultimately hindered their broader adoption.
Early 2000s: Lightweight OCSP (LW OCSP) emerged around the same time as DNS-based mechanisms were explored. Both aimed to refine the revocation process—LW OCSP by clarifying cacheable semantics and DNS-based approaches by improving caching and mitigating issues like captive portals.
2010s: These approaches were revisited and refined to address evolving challenges, incorporating lessons learned from earlier experiments.
Late 2010s: Browsers began adopting out-of-band (OOB) distribution methods—such as Mozilla’s CRLite—that pre-fetch and distribute revocation data via a trusted control plane, reflecting a pragmatic response to the limitations of earlier methods.
These examples are just a few highlights from a broad landscape of work in this area, and the timeline provided here is not exhaustive.
On Google’s Revocation Investments:
Google has made significant, multi-layered investments in revocation. Not only did Google’s browser code advance revocation support, but Google Trust Services also implemented a complex sharding scheme using individual CRLs—making them just slightly larger than OCSP responses. Chrome was among the first to adopt a delegated CRL mechanism, enabling fail-closed revocation for specific use cases—a design pattern later emulated by other browsers. Still, it’s worth noting that even Google’s own internal production ID system relies on fail-open revocation semantics due to the reliability challenges that out-of-band revocation checks can introduce. This isn’t a shortcoming unique to Google; rather, it highlights a fundamental challenge in our current revocation systems.
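To make the fail-open versus fail-closed distinction concrete, here is a minimal Go sketch of an out-of-band CRL check; the CRL URL and serial number are hypothetical, and issuer signature verification is omitted, so treat it as an illustration of where the policy decision lands rather than a complete implementation:

```go
package main

import (
	"crypto/x509"
	"fmt"
	"io"
	"math/big"
	"net/http"
)

// isRevoked reports whether serial appears on the CRL served at crlURL.
// The error return is the crux: a client that treats a failed fetch as
// "not revoked" is fail-open; one that aborts the connection is fail-closed.
func isRevoked(crlURL string, serial *big.Int) (bool, error) {
	resp, err := http.Get(crlURL) // this fetch is the reliability weak point
	if err != nil {
		return false, fmt.Errorf("CRL fetch failed: %w", err)
	}
	defer resp.Body.Close()

	der, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	crl, err := x509.ParseRevocationList(der) // assumes a DER-encoded CRL
	if err != nil {
		return false, err
	}
	for _, entry := range crl.RevokedCertificateEntries {
		if entry.SerialNumber.Cmp(serial) == 0 {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Hypothetical inputs for illustration only.
	revoked, err := isRevoked("http://crl.example.com/issuer.crl", big.NewInt(1234))
	if err != nil {
		fmt.Println("fetch failed; soft-fail or hard-fail is a policy choice:", err)
		return
	}
	fmt.Println("revoked:", revoked)
}
```

Pre-fetched designs such as CRLite exist precisely to move that unreliable fetch out of the connection path, which is the pragmatism described above.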
On CT Logs and Privacy Concerns:
Regarding CT logs, while at first glance they might appear to expose sensitive information, public PKI certificates are designed to be public. Approximately 97% of certificates contain minimal organizational details and no personal data unless intentionally embedded (for example, within a DNS name). Moreover, most of this DNS information is already accessible through automated DNS systems. CT logs are primarily focused on certificates used for public TLS, and participation in CT is optional—no CA is excluded for not publishing to CT logs. In practical terms, CT does not present a significant privacy risk compared to the inherent public nature of these certificates.
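As a concrete illustration of how public this data already is, here is a short Go sketch querying crt.sh’s JSON interface; the domain is a placeholder, and the field names reflect crt.sh’s output at the time of writing and may change:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// crtshEntry holds a few fields from crt.sh's JSON output
// (an assumption about its current schema, not a stable API).
type crtshEntry struct {
	CommonName string `json:"common_name"`
	NotBefore  string `json:"not_before"`
	NotAfter   string `json:"not_after"`
}

func main() {
	domain := "example.com" // placeholder query
	resp, err := http.Get("https://crt.sh/?output=json&q=" + url.QueryEscape(domain))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var entries []crtshEntry
	if err := json.NewDecoder(resp.Body).Decode(&entries); err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Printf("%s  valid %s to %s\n", e.CommonName, e.NotBefore, e.NotAfter)
	}
}
```

Everything it prints was already published by design when the certificate was logged; the same details are visible to any client completing a TLS handshake with the site.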
On the Adoption of Automation:
Automation isn’t an ad hoc workaround—it’s a mature, industry-wide approach. As highlighted in our blog post, over 90% of certificate issuance is now automated through a mix of ACME protocols, proprietary mechanisms, commercial certificate lifecycle management solutions, and open source tools. This high level of adoption demonstrates that the ecosystem has largely embraced automation, which plays a critical role in managing risk by facilitating frequent renewals and reducing exposure.
On the Success of Automation Efforts:
Furthermore, the claim that automation attempts have failed is difficult to reconcile with the data. We’ve seen a dramatic transformation: the web has grown from roughly 10% TLS adoption to nearly 100%, and nearly all of that deployment is automated. This remarkable progress underscores that automation has not only succeeded—it has fundamentally reshaped the landscape for the better.
An assumption running through this whole discussion is that the internet consists only of publicly accessible websites. But there are many other applications at the peer-to-peer level, not talked about here, that embedded systems rely on. Therefore, IMHO, it’s dangerous to simplify and focus only on public websites. TLS and certificates are independent of public websites and have a much broader use case than that. Nevertheless, that seems irrelevant to the tech giants: Google, Apple, Mozilla, etc. Do not narrow the internet to these protagonists; keep it open.
Expanding the WebPKI to support applications beyond websites—such as embedded systems, payment terminals, or peer-to-peer communications—can be beneficial if done deliberately and thoughtfully. The issue isn’t broadening its scope but rather doing so haphazardly or without proper consultation. When individuals or organizations extend or overload the WebPKI without involving those responsible for managing it—thus denying them an opportunity to consult, plan, and accommodate these new use cases—the WebPKI becomes fragile, slower, and vulnerable to unintended consequences.
Any expansion must therefore be purpose-driven, carefully designed, and executed with clear consideration of the additional load and the context in which it exists. Properly planned expansions enhance security, interoperability, and resilience; ad-hoc implementations introduce risk and externalize costs onto users and the broader ecosystem.