Monthly Archives: December 2024

Technology bias—we all have it—but it often gets in the way.

An old saying goes, “When you’re a plumber, you fix everything with a wrench.” It highlights a truth: we naturally gravitate toward the tools, people, and methods we know and trust most. This tendency stems from cognitive biases like anchoring—our reliance on initial information—and confirmation bias, which pushes us to favor ideas that align with our existing beliefs. While these biases help us make quick decisions, they can also blind us to better alternatives.

The maxim “know thyself” resonates here. Even with my deep experience in PKI, I consciously revisit first principles whenever I consider applying it to a new problem. Is this really the best solution? PKI, like many technologies, carries hidden baggage that isn’t always visible, and over-reliance on familiarity can obscure better approaches.

The danger of sticking with the familiar becomes evident in the adoption of Infrastructure as Code (IaC). When tools like Terraform and CloudFormation emerged, many teams resisted, clinging to manual infrastructure management because it felt familiar and made the new tooling seem unnecessary. Yet manual approaches introduced inconsistency, inefficiency, and even security risks. Teams that embraced IaC unlocked scalable, repeatable workflows that transformed operations. IaC not only streamlined processes but also embedded elements of compliance and best practices directly into code. What outdated practices might we be holding onto today that prevent us from unlocking similar benefits?
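
To make that last point concrete, here is a minimal sketch of a policy-as-code check, written in Python against the JSON plan that “terraform show -json” emits; the required tags and file names are illustrative assumptions, not a prescribed standard.

    # tag_policy.py - a minimal "compliance as code" sketch.
    # Assumes a plan exported with:
    #   terraform plan -out=plan.tfplan && terraform show -json plan.tfplan > plan.json
    import json
    import sys

    REQUIRED_TAGS = {"owner", "cost-center"}  # illustrative policy, not a standard

    def missing_tags(plan: dict) -> dict:
        """Map each planned resource to the required tags it lacks."""
        failures = {}
        for rc in plan.get("resource_changes", []):
            after = (rc.get("change") or {}).get("after") or {}
            if "tags" not in after:  # skip resources that don't support tags
                continue
            missing = REQUIRED_TAGS - set(after.get("tags") or {})
            if missing:
                failures[rc["address"]] = missing
        return failures

    if __name__ == "__main__":
        with open(sys.argv[1]) as f:
            failures = missing_tags(json.load(f))
        for address, missing in failures.items():
            print(f"POLICY: {address} is missing tags: {sorted(missing)}")
        sys.exit(1 if failures else 0)

Run in CI against every plan, a check like this turns a best practice from tribal knowledge into an enforced gate.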

I recently encountered a similar situation during a meeting with the leader of a large IT organization. They were eager to adopt a technology developed by someone they trusted personally. However, when I asked fundamental questions like, “How much time do you have to deliver this project?” and “What other systems need to interoperate for this to be considered a success?” it became clear that the technology wasn’t the right fit—at least not yet. By breaking the problem down to its fundamentals, we uncovered insights that their initial bias had obscured.

Practicing first-principles thinking can help sidestep these pitfalls. Start by identifying the core problem: what is the actual goal? What constraints are truly fixed, and which are merely assumptions? From there, challenge each assumption. Is there an alternative approach that better addresses the need? This process not only reduces the influence of bias but also fosters creativity and more effective solutions.

Biases aren’t inherently bad—they help us move quickly—but as the example of IaC demonstrates, unchecked bias can limit us. By anchoring decisions in first principles, we can do more than solve problems; we open the door to better solutions. Asking, “Is this truly the best approach?” ensures we don’t just repeat old patterns but discover new opportunities to improve and thrive.

Government CAs and the WebPKI: Trust is Often the Opposite of Security

Following my recent post about another CA failing the “Turing test” with a likely MITM certificate issuance, let’s examine a troubling pattern: the role of government-run and government-affiliated CAs in the WebPKI ecosystem. The incident has brought renewed attention to Microsoft’s root program, and it makes one thing clear: a fundamental contradiction persists. We’re trusting entities whose institutional incentives often directly conflict with the security goals of the WebPKI.

The Value Proposition

Let me be clear—CAs and root programs serve critical functions in the WebPKI. As I discussed in my article about Trust On First Use, attempting to build trust without them leads to even worse security outcomes. The issue isn’t whether we need CAs—we absolutely do. The question is whether our current trust model, which treats all CAs as equally trustworthy regardless of their incentives and constraints, actually serves our security goals.

The Core Contradiction

Any publicly trusted CA can issue a certificate for any site on the web, and a CA operating under government influence can be pressured to use that power for interception or surveillance. History has repeatedly shown that the temptation to abuse these capabilities is simply too great. Whether it’s decision-makers acting in their perceived national interest or CAs that fail to understand—or choose to ignore—the consequences of their actions, we keep seeing the same patterns play out.

Consider that a CA under government oversight faces fundamentally different pressures than one operating purely as a business. While both might fail, the failure modes and their implications for users differ dramatically. Yet our root programs largely pretend these differences don’t exist.

The DarkMatter Paradox

The removal of DarkMatter as a CA because of its affiliation with the UAE government, despite its clean issuance record, contrasts starkly with the continued trust granted to other government-affiliated CAs with documented failures. This inconsistency highlights a deeper flaw in root programs: rules are often applied reactively, after incidents occur, rather than through proactive, continuous, and consistent enforcement.

A History of Predictable Failures

If you read yesterday’s post, you may recall my 2011 post on the number of government-run or affiliated CAs. The intervening years have given us a clear pattern of failures. Whether through compromise, willful action, or “accidents” (take that as you will), here are just the incidents I can recall off the top of my head—I’m sure there are more:

  • DigiNotar (2011): the Dutch CA serving the government’s PKIoverheid program was compromised and used to issue rogue certificates, including for google.com, which were used to intercept Iranian users’ traffic.
  • TURKTRUST (2013): mis-issued intermediate CA certificates, one of which was later used to issue an unauthorized *.google.com certificate.
  • ANSSI (2013): the French government CA issued an intermediate that was used to intercept Google traffic on a private network.
  • India CCA/NIC (2014): the National Informatics Centre issued rogue certificates for Google and Yahoo domains.
  • CNNIC (2015): delegated issuance to MCS Holdings, which issued unauthorized certificates for Google domains, leading to CNNIC’s removal from major trust stores.
  • ICP-Brasil (2024): issued an unauthorized certificate for google.com after stating it would no longer issue publicly trusted TLS certificates.

The Economics of (In)Security

The fundamental problem isn’t just technical—it’s economic. While some root programs genuinely prioritize security, inconsistencies across the ecosystem remain a critical challenge. The broader issue is not simply convenience but conflicting incentives: compatibility, regulatory pressure, and market demands are balanced against one another, often at the expense of what is best for end users.

CAs face strong incentives to maintain their trusted status but relatively weak incentives to uphold the rigorous security practices users expect. The cost of a security failure is largely borne by users, while the benefits of looser practices accrue directly to the CA. Audits, much like those in financial scandals such as Wirecard or Enron, often serve as window dressing: with CAs selecting and paying their own auditors, incentives rarely align with rigorous enforcement.

The long tail of rarely discussed CAs is particularly concerning. Many root certificates in browser trust stores belong to CAs that issue only dozens to hundreds of certificates annually, not the thousands or millions that major CAs produce. Some haven’t issued a certificate in ages but retain the capability to do so, and with it the ability to compromise security for months or longer. These low-volume CAs arguably pose risks that far outweigh their utility.

Certificate Transparency: Necessary but Not Sufficient

While Certificate Transparency has been invaluable in helping identify incidents (including the latest ICP-Brasil case), it’s not a complete solution. Its limitations include:

  • Reactive nature: Violations are identified only after they occur.
  • Monitoring challenges: Effective oversight is resource-intensive and depends on a small community of volunteers (a minimal monitoring sketch follows this list).
  • Incomplete coverage: Not all certificates are logged, leaving gaps in visibility.
  • Poor funding: We have too few logs and monitors to have confidence in the long-term survivability of the ecosystem.
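
To make the monitoring point concrete, here is a minimal sketch that pulls CT-logged certificates for a domain from the public crt.sh service; the JSON field names (issuer_name, name_value) reflect crt.sh’s current output and should be verified before any production use.

    # ct_watch.py - minimal CT monitoring sketch built on the public crt.sh service.
    import requests

    def certs_for_domain(domain: str) -> list:
        """Fetch CT-logged certificates for a domain and its subdomains."""
        resp = requests.get(
            "https://crt.sh/",
            params={"q": f"%.{domain}", "output": "json"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        for entry in certs_for_domain("example.com"):
            # An issuer you do not recognize here is exactly the pattern to chase down.
            print(entry["not_before"], entry["issuer_name"], entry["name_value"].split("\n")[0])

A real deployment would poll on a schedule, deduplicate by certificate ID, and alert rather than print—which is precisely the resource-intensive work that today falls to a handful of volunteers.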

The Limits of Technical Controls

Some browsers have implemented technical guardrails in their certificate validation logic, such as basic linting rules that reject certificates failing well-defined checks, but nothing more granular. There have been discussions about imposing additional restrictions on CAs based on their relationship to government oversight or regulatory jurisdictions. However, these proposals face significant pushback, partly due to the political consequences for browser vendors and partly due to concerns about basing trust decisions on “future crime” scenarios. As a result, the WebPKI remains stuck with a one-size-fits-all approach to CA trust.
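
For a sense of what “basic linting” involves, here is a sketch of two checks, validity-period length and subjectAltName presence, using the Python cryptography library (version 42 or later for the *_utc properties); production linters such as zlint implement hundreds of such rules.

    # cert_lint.py - two illustrative lint checks of the kind browsers and
    # CT monitors apply; real linters (e.g., zlint) cover far more ground.
    import sys
    from datetime import timedelta
    from cryptography import x509

    MAX_VALIDITY = timedelta(days=398)  # current CA/Browser Forum cap for TLS certs

    def lint(pem_bytes: bytes) -> list:
        cert = x509.load_pem_x509_certificate(pem_bytes)
        problems = []
        # not_valid_*_utc requires cryptography >= 42
        lifetime = cert.not_valid_after_utc - cert.not_valid_before_utc
        if lifetime > MAX_VALIDITY:
            problems.append(f"validity period of {lifetime.days} days exceeds 398")
        try:
            cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
        except x509.ExtensionNotFound:
            problems.append("no subjectAltName extension")
        return problems

    if __name__ == "__main__":
        with open(sys.argv[1], "rb") as f:
            for problem in lint(f.read()):
                print("LINT:", problem)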

The Monitoring Gap

The challenges extend beyond malicious behavior to include operational oversight. For instance, in August 2024, ICP-Brasil formally announced it would cease issuing publicly trusted SSL/TLS certificates. Yet by November, it had issued a rogue certificate for google.com. This outcome was predictable: public CT logs in 2020 revealed a consistent inability to handle basic operational and issuance requirements, including issuing certificates with invalid DNS names and malformed URLs. Despite these red flags, the CA remained trusted.

How many other CAs operate outside their stated parameters without detection? Patterns of technical incompetence frequently precede security incidents, but the warnings are often ignored.
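
The ICP-Brasil case is precisely the pattern an issuer allow-list catches. Here is a minimal sketch that consumes crt.sh-style records like those fetched earlier; the allow-list entry is a hypothetical placeholder you would replace with the issuers you actually contract with.

    # issuer_alert.py - flag CT-logged certificates from CAs you never authorized.
    # EXPECTED_ISSUERS is a hypothetical placeholder; populate it from your own
    # procurement records and CAA policy.
    EXPECTED_ISSUERS = {"C=US, O=Let's Encrypt, CN=R11"}

    def unexpected_issuers(entries: list) -> set:
        """entries: crt.sh-style records carrying an 'issuer_name' field."""
        return {e["issuer_name"] for e in entries} - EXPECTED_ISSUERS

Paired with the fetch sketched above and a scheduler, this turns CT from a passive record into an active alarm.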

Required Reforms

To address these systemic issues, root programs must adopt the following measures:

  1. Consistent Standards: Apply appropriate scrutiny to CAs based on their operational and institutional context.
  2. Swift Response Times: Minimize delays between discovery and action.
  3. Proactive Enforcement: Treat red flags as early warnings, not just post-incident justifications.
  4. Technical Controls: Implement meaningful restrictions to limit the scope of certificate issuance.
  5. Automated Compliance: Require CAs to report security incidents and operational status, and continuously monitor their ongoing compliance via automated checks (see the sketch after this list).
  6. Value Assessment: Regularly evaluate whether each CA’s utility justifies its risks and remove those that do not.
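
As one illustration of what the automated checks in item 5 could look like (a sketch under assumed names, not a root-program requirement), consider probing whether a CA’s published CRL is still fresh; a stale nextUpdate means relying parties may be acting on outdated revocation data, and a script can catch that long before an auditor does. The CRL URL below is a hypothetical placeholder.

    # crl_freshness.py - automated compliance probe: is this CA's CRL current?
    # CRL_URL is a hypothetical placeholder; a root program would track each
    # CA's real distribution points. Requires cryptography >= 42.
    from datetime import datetime, timezone
    import requests
    from cryptography import x509

    CRL_URL = "http://crl.example-ca.test/latest.crl"

    def crl_is_fresh(url: str) -> bool:
        crl = x509.load_der_x509_crl(requests.get(url, timeout=30).content)
        # A nextUpdate in the past means clients may trust revoked certificates.
        return crl.next_update_utc is not None and crl.next_update_utc > datetime.now(timezone.utc)

    if __name__ == "__main__":
        print("fresh" if crl_is_fresh(CRL_URL) else "STALE: flag for review")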

Protecting Yourself

Until the ecosystem adopts consistent and enforceable security measures:

  • Windows users should monitor Microsoft’s root program decisions.
  • Enterprises should use the Microsoft distrust store and group policies (see the example after this list).
  • Everyone should stay informed about CA incidents and their handling.
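
For the enterprise bullet above, the underlying mechanism is adding the offending root to Windows’ Disallowed (Untrusted Certificates) store, locally with certutil or fleet-wide via group policy; the file name here is a placeholder for the root you want to distrust.

    :: Distrust a root on the local machine
    certutil -addstore Disallowed untrusted-root.cer
    :: Or publish the distrust enterprise-wide via Active Directory
    certutil -enterprise -addstore Disallowed untrusted-root.cer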

When Will We Learn?

The “Turing test” reference in my previous post was somewhat tongue-in-cheek, but it points to serious questions: How many more failures will it take before we fundamentally reform the WebPKI? Even if we know what’s needed, can we realistically create a system that treats government-affiliated CAs differently, or even reliably identify such affiliations, given the complex web of international relations, corporate structures, and potential diplomatic fallout?

With regulatory frameworks like eIDAS 2.0 potentially constraining the security measures browsers can take, scrutiny from the security community is more critical than ever. Stay vigilant, and keep watching those CT logs. Someone has to.