Exploring Requirements and Timelines for Certificate Problem Reports and Revocations

Today, DigiCert’s mass revocation is in the news, so I thought it would be worthwhile to explore the rules for a CA when it comes to revoking a subscriber certificate and the choices and constraints a CA faces in circumstances like this.

First, let’s examine the types of issues that might lead a CA to revoke a certificate before its natural expiration. Broadly speaking, there are two categories of these issues: 

Externally identified issues: these start with what is called a Certificate Problem Report (CPR) and may originate from subscribers, relying parties, application software suppliers, or other third parties.

Internally identified issues: these may be surfaced by pre-issuance checks, post-issuance checks, or internal periodic reviews.

Problem reports from external entities almost always require some minimal level of investigation. An exception might be cases where the reporter has access to the private key and requests that the certificate be revoked; such a request could come from the subscriber or from someone who has gained access to the key. Even in these cases, you often need multiple communication round trips because requests frequently do not contain enough information to authenticate them from the outset.

There are other cases too; for example, sometimes a problem report claims unauthorized issuance of a certificate, but it turns out the reporter had authorized it, perhaps by signing up for a CDN or SaaS offering that uses that CA. In these cases you might need to contact the person who requested the certificate or the service provider for more information, or ask the reporter to prove control of the domain to verify their authorization to request the revocation, again introducing a few round trips.

However, most problem reports are spam or requests to take down a website. In the latter case, if a CA has a policy about revoking certificates for domains that serve such content, then that too requires some level of investigation. On that topic, check out this post on the CA’s role in phishing mitigation, which explains why CAs should stay out of this class of issue altogether.

These are just a few examples of the certificate problem reports a CA might see, but there are more. One of the most actionable is a claim that there was a violation of the requirements a CA must meet, such as including incorrect information in a certificate, using a certificate profile with a wrong extension, or using a domain control verification process that does not meet requirements.

The workflow for these requests is straightforward, but it still requires some investigation. The questions that need to be answered before taking action include things like: When was the certificate issued? What rules were in force at that time? What data was used when validating the entitlement of the requester? What version of the software implemented the validation at the time of issuance? Is the reporter’s interpretation of the requirement technically correct?

This is why the Baseline Requirements (BRs) allow CAs some time for an investigation:

The CA SHOULD revoke a certificate within 24 hours and MUST revoke a Certificate within 5 days

NOTE: Here is a conversation on the last changes to this text; searching the forums is always a good way to validate what the intent was at the time of authorship.
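To make the two clocks concrete, here is a trivial sketch, in Go, of how a CA’s case-tracking tooling might compute both deadlines from the moment a report lands. This is purely illustrative; the BRs define the obligation, not any particular implementation, and which clock applies depends on the revocation reason.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Hypothetical receipt time for a certificate problem report.
	received := time.Date(2024, time.December, 31, 23, 15, 0, 0, time.UTC)

	// Per the quoted BR language: revocation SHOULD happen within 24 hours
	// and MUST happen within 5 days for the applicable reasons.
	shouldRevokeBy := received.Add(24 * time.Hour)
	mustRevokeBy := received.Add(5 * 24 * time.Hour)

	fmt.Println("report received: ", received)
	fmt.Println("SHOULD revoke by:", shouldRevokeBy)
	fmt.Println("MUST revoke by:  ", mustRevokeBy)
}
```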

That’s not a lot of time, especially if such a report comes in on New Year’s Eve. CAs maintain on-call rotations to investigate issues, but it’s easy to imagine that, from time to time, an investigation takes longer than 24 hours to complete. Despite this, CAs shouldn’t have an unlimited amount of time for such investigations, which is why the text caps the time to revocation at a maximum of 5 days.

Let’s explore what this looks like in a real incident. In 2020, Let’s Encrypt had a misissuance due to a bug in their CAA handling. The issue was found internally through what we would call mitigating controls, a process that exists to double-check you are doing the right thing. On February 29, 2020, an internal review surfaced the issue; they kicked off an investigation and, that same day, announced their plan to revoke the more than 3 million affected certificates. Revocations began on March 4, 2020, and by the March 5, 2020 deadline they had revoked 1,706,505 certificates. Network scans revealed that a large portion of the remaining 1,003,596 certificates were still in active use.

Due to concerns over the potential impact revocation would have on these active sites, they filed an incident for delayed revocation just before the deadline. These remaining certificates either expired or were revoked by May 29, 2020, after which the ticket remained open for community discussion and further analysis.

Let’s Encrypt did many things right here, including implementing compensating controls to detect issues internally, preparing to analyze the scope of impact immediately, and revoking the certificates they could without causing disruption as soon as possible. When it became clear that they would not meet the deadline requirements, they filed a bug to document their non-compliance with the obligations before the deadline lapsed. Moreover, they communicated clearly and regularly throughout the entire process until the issue was resolved.

But what about the missed revocation deadline!?! The reality is that delayed revocations happen. It’s not ideal, but they do happen. What is important is how a CA deals with them, and how credibly. Let’s Encrypt executed like a well-oiled machine here; no one complained. (They did all of this without getting distrusted; for more on how CAs usually get distrusted, see.)

There are other cases where revocations were delayed as well. For example, in 2019 Apple, Google, and GoDaddy were all hit by a bug in EJBCA, and each had some delayed revocations. While there may have been some challenges in each of those cases, what you could tell is that, like Let’s Encrypt, they were being proactive and transparent and were working to address the issue responsibly, and when you looked at their broader history there was no sign of systemic failures.

OK, so if you are in the CA industry right now, you should be saying, “But Ryan, what about the Certificate Problem Report response requirements?! This DigiCert issue was reported externally, so aren’t those requirements also relevant?” If so, you are right. Those requirements say:

The CA SHALL maintain a continuous 24×7 ability to accept and respond to revocation requests and Certificate Problem Reports.

And …

The period from receipt of the Certificate Problem Report or revocation-related notice to published revocation MUST NOT exceed the time frame set forth in Section 4.9.1.1. The date selected by the CA SHOULD consider the following criteria:

– The nature of the alleged problem (scope, context, severity, magnitude, risk of harm);

– The consequences of revocation (direct and collateral impacts to Subscribers and Relying Parties);

– The number of Certificate Problem Reports received about a particular Certificate or Subscriber;

– The entity making the complaint (for example, a complaint from a law enforcement official that a Web site is engaged in illegal activities should carry more weight than a complaint from a consumer alleging that they didn’t receive the goods they ordered); and

– Relevant legislation.

While Section 4.9.1.1 says:

With the exception of Short-lived Subscriber Certificates, the CA SHALL revoke a Certificate within 24 hours and use the corresponding CRLReason (see Section 7.2.2) if one or more of the following occurs:

If you’re paying attention, you might be thinking to yourself, “Wait a minute, doesn’t that mean, since the request came in from an external reporter, that the certificate should be revoked within 24 hours?”

Well, yes, it does! This totally makes sense in some cases, for example, if a notice comes in of a key compromise with all the detail required to confirm it. But as we discussed above, requests received externally often require some level of verification, which raises the question: is 24 hours enough time to handle that or not? One thing is for sure: arguing that the rules are wrong, or that they should be changed, in the middle of an incident response is a recipe for disaster.

So what would I have done if I were DigiCert? I would have stopped issuance as soon as the issue was identified. I would have then set engineers to work on resolving the issue in the right way. In parallel, I would have prepared a preliminary incident report, noting that I had stopped issuance, and I would have called out in that report any confusion I had about the language in the requirements. I would have then committed to revoking in line with the requirements, and before the 24-hour clock expired, I would have started replacing and revoking the certificates that could be revoked safely. If it became clear that, despite active, good-faith, best efforts, the revocation was not going to be completed in time, I would have filed a delayed revocation incident before that clock expired, essentially following the same playbook Let’s Encrypt used back in 2020.

After successfully navigating the incident, I would have worked within the CA/Browser Forum to discuss this language and either make it clearer why the difference exists or change it to align with the other text.

We have not really tackled the topic of ‘Internally Identified Issues’ yet. I mentioned them above but didn’t explain them in detail, so let’s see if I can capture them without rambling too much.

As I mentioned earlier, there are several classes. In the Let’s Encrypt case, they had some compensating controls around design reviews that identified an issue. When it comes to cases like this, it is pretty straightforward; by the time you have completed the design review, you are sure there is an issue at play, and the clock starts ticking.

Regarding pre-certificate linting: if a certificate gets issued with an issue that your linting could or should have caught, that is a failure of your linting management. This is more akin to a design defect than a CPR-like incident, and as such it would certainly trigger the 24-hour clock.

Then there is post-issuance linting, which is used, for example, as a spot check during data gathering for an audit, as a compensating control, or when you are testing new rules for the linter or refining your linter policy configuration. Findings from these checks are by their very nature a lot more like a CPR: because you don’t yet have a fine-tuned policy for the linter, they too need some level of investigation.
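As a rough illustration of what such a spot check might look like, here is a sketch using the open-source zlint library’s Go API (based on my reading of zlint v3; treat the exact names as approximate and check the project documentation before relying on them). The same lint corpus is what CAs typically run pre-issuance against the to-be-signed certificate.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/zmap/zcrypto/x509"
	"github.com/zmap/zlint/v3"
	"github.com/zmap/zlint/v3/lint"
)

func main() {
	// Load an already-issued certificate (DER-encoded) and run the full lint
	// corpus against it -- the post-issuance spot check described above.
	der, err := os.ReadFile("cert.der")
	if err != nil {
		log.Fatal(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		log.Fatal(err)
	}

	results := zlint.LintCertificate(cert)
	for name, r := range results.Results {
		// Anything at warn level or above deserves investigation; error or
		// fatal findings are the ones that start the revocation clock.
		if r.Status == lint.Warn || r.Status == lint.Error || r.Status == lint.Fatal {
			fmt.Printf("%s: %v\n", name, r.Status)
		}
	}
}
```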

To be clear, none of this appears related to the DigiCert incident, but they have not yet published a detailed incident report, which will include an end-to-end timeline and may change some of this analysis. I wanted to try to cover the topic more holistically in this post before declaring victory tonight.

Regardless of the above, the key takeaway is that if you are a CA or a customer of a CA, you need to be prepared to respond to an incident. One of the most useful actions you can take to achieve this is to adopt ACME for certificate lifecycle management. In particular, look for ACME implementations and CAs that support ARI, an extension of ACME. This extension enables a CA to signal to its enrollment clients that their certificate needs to be replaced sooner than expected, allowing them to roll certificates without requiring on-call incident response or causing downtime.
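To show roughly what that looks like in practice, here is a sketch of a client polling a CA’s ARI (ACME Renewal Information) endpoint, following the JSON shape described in the ARI draft. The endpoint URL and certificate identifier below are placeholders; in a real client the base URL comes from the ACME directory’s renewalInfo entry and the identifier is derived from the certificate itself.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"
)

// renewalInfo mirrors the response shape described in the ACME Renewal
// Information (ARI) draft: the CA suggests a window in which the client
// should replace the certificate.
type renewalInfo struct {
	SuggestedWindow struct {
		Start time.Time `json:"start"`
		End   time.Time `json:"end"`
	} `json:"suggestedWindow"`
	ExplanationURL string `json:"explanationURL"`
}

func main() {
	// Placeholders: the real base URL is discovered from the ACME directory's
	// "renewalInfo" entry, and the certificate identifier is derived from the
	// certificate's AKI key identifier and serial number per the draft.
	const ariBase = "https://acme.example.com/renewalInfo"
	const certID = "example-cert-id"

	resp, err := http.Get(ariBase + "/" + certID)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var ri renewalInfo
	if err := json.NewDecoder(resp.Body).Decode(&ri); err != nil {
		log.Fatal(err)
	}

	// If we are inside (or past) the suggested window, replace the certificate
	// now instead of waiting for its natural expiry.
	if time.Now().After(ri.SuggestedWindow.Start) {
		fmt.Println("CA suggests replacing this certificate; renew before", ri.SuggestedWindow.End)
	} else {
		fmt.Println("no early replacement needed until", ri.SuggestedWindow.Start)
	}
}
```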

If you are interested in this topic, I recommend checking out the Sectigo “Root Causes” podcast which has a great conversation on Chrome’s recent presentation at the CA/Browser Forum on Incident Reporting. 

This presentation can be summarized as:

– Incident reporting is an opportunity to demonstrate continuous improvement and to make the ecosystem more resilient and secure.

– We rely on the public incident reporting process to demonstrate that continued trust is justified.

– We think incident reporting should be boring (i.e. routine).

I hope this was helpful. Let me know if you think I missed something or if my analysis is wrong in some way. When parsing prose as if it were code, there are almost always slightly different takes, and I wouldn’t be surprised to find out others feel differently than I do on this issue.

Thanks to all of you who read this and provided feedback — it takes a village.

HSMs Largely Protect Keys from Theft Rather Than Abuse

HSMs were designed to protect keys from theft and to move those keys into a different security domain than the code that uses them. The workloads using these HSMs authenticate to them with credentials or, worse, shared secrets that are often pushed down to the machines via CI pipelines or at imaging time. These API keys and credentials are often stored in key vaults like HashiCorp Vault, making them no more secure than the key vault itself. Unfortunately, they are also seldom rotated. If an attacker gains code execution on the box, or gains access to the API keys or credentials in some other way, they can sign or encrypt with any keys on the HSM that the associated credentials have access to. The attacker doesn’t even need to stay resident on the box, because the keys and credentials are usually just stored in environment variables and files, allowing them to be taken and used later from any network vantage point with line of sight to the HSM.
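To make that failure mode concrete, here is a minimal sketch of how a workload typically authenticates to an HSM via PKCS#11 (using the commonly used miekg/pkcs11 Go bindings; the module path and environment variable names are placeholders). Anything that can read the PIN can use the keys, from anywhere that can reach the HSM.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/miekg/pkcs11"
)

func main() {
	// The vendor's PKCS#11 module and the slot PIN, read from the environment;
	// exactly the kind of static material that ends up on the box via CI
	// pipelines or at imaging time.
	module := os.Getenv("HSM_MODULE") // e.g. a vendor .so path (placeholder)
	pin := os.Getenv("HSM_PIN")

	p := pkcs11.New(module)
	if p == nil {
		log.Fatal("could not load PKCS#11 module")
	}
	if err := p.Initialize(); err != nil {
		log.Fatal(err)
	}
	defer p.Destroy()
	defer p.Finalize()

	slots, err := p.GetSlotList(true)
	if err != nil || len(slots) == 0 {
		log.Fatal("no slots available")
	}
	session, err := p.OpenSession(slots[0], pkcs11.CKF_SERIAL_SESSION)
	if err != nil {
		log.Fatal(err)
	}
	defer p.CloseSession(session)

	// Anyone holding HSM_PIN can log in and use every key this user can see;
	// the HSM protects the key bits from theft, not this use of them.
	if err := p.Login(session, pkcs11.CKU_USER, pin); err != nil {
		log.Fatal(err)
	}
	defer p.Logout(session)

	fmt.Println("logged in; keys in this slot are now usable for signing")
}
```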

In short, beyond a simplistic access control model, HSMs usually do not protect keys in use; they protect them from theft. To make things worse, they have no concept of the workload, so the auditing mechanisms they do have lack the detail needed to usefully monitor the use of keys.

Challenges Using HSMs

By design, the administrative model of HSMs is quite different from what we’re typically used to. The goal of the HSM design was to prevent regular IT staff, third parties with access to the facilities containing the HSMs, or those with code execution on the boxes connected to them from being able to abscond with the keys. These use cases were almost always low-volume systems that were used infrequently relative to other workloads, so their performance often becomes a bottleneck. It’s possible to design deployments that can keep up with and meet the availability requirements of large-scale systems, but this often requires deploying clusters of HSMs in every region and every cluster where your workload exists and having your data center staff manage these devices, which were designed around largely manual, physical administration.

To make things worse, the only compartmentalization concept these HSMs usually have is the “slot,” which you can think of as a virtual HSM within the physical HSM. Each slot you use increases your operational overhead, so customers often either share one slot across many workloads or simply don’t use the HSMs at all. This often makes the capital and non-capital costs of HSMs impractical for at-scale systems, as well as for less critical use cases. In cases where these challenges make HSMs impractical, there are often other approaches that can still help mitigate key compromise risks, so it’s not a question of all or nothing.

When HSMs Make Sense

To be clear, HSMs are useful for securing cryptographic keys from theft and are essential in several high-stakes scenarios. For example, when keys need to live for long periods and be managed independently of individuals in an organization who may come and go, and when physical theft of keys is a concern, HSMs are crucial for a sustainable solution. A great example of this is the key material associated with a root Certificate Authority or a cryptocurrency wallet. These keys seldom change, live for many years, and must survive risks that most other use cases do not face.

Beyond key storage, in some cases HSMs can be used as part of a larger security system where the consumption of key material is a small part of the security operation. In these cases, code that will be executed within the HSM is written to make it part of how the overall system delivers abuse protection. For example, Apple has developed code that runs on HSMs to help iPhone users recover their accounts with reduced exposure to attacks from Apple staff. Some cryptocurrency companies implement similar measures to protect their wallets. In these use cases, the HSM is used as a trusted execution environment, a stronger confidential-computing-like capability, for the TCB of a larger software system. This is achieved by running code on the HSM that exposes a higher-level transactional interface with constraints such as quorums, time-of-day restrictions, rate limiting, or custom workload policies. These solutions often generate the message to be signed or encrypted inside the HSM and then use a key protected within the HSM to sign or encrypt that artifact.

HSMs are also still mandated in some environments, largely for historical reasons, but they’re required nonetheless. The impracticality of this requirement has led to modifications of the security model offered by cloud HSMs relative to their traditional designs; these modifications weaken the original security guarantees HSMs were expected to deliver in order to let modern systems continue using them. For example, HSMs originally required operators to bring them back up after a power failure using physically inserted tokens or smart cards and PINs, but now they can be configured to unlock automatically. Additionally, the use of HSMs in the cloud is now often gated by simple API keys rather than smart cards or other asymmetric credentials bound to the subject using the key. With all this said, requirements are requirements, and many industries like finance, healthcare, and government have requirements such as FIPS 140-2 Level 2+ and Common Criteria protection of keys, which lead to mandated use of HSMs even when they may not be the most appropriate answer to how to protect keys.

The Answer: Last Mile Key and Credential Management

While HSMs provide essential protection for cryptographic keys from theft, for many use cases they fall short in preventing the misuse of keys and credentials. To address this gap, organizations also need robust last-mile key and credential management to complement HSMs, ensuring the entire lifecycle of a key is secured. Video game companies do it, media companies do it, and so should the software and services we rely on to keep our information safe.

  1. Key Isolation and Protection: Protect keys from the workloads that use them by using cryptographic access controls and leveraging the security capabilities provided by the operating system.
  2. Dynamic Credential Management: Implement systems that automatically rotate credentials and API keys, limiting the value of exfiltrated credentials and keys to an attacker (see the sketch after this list).
  3. Granular Access Controls: Implement strong, attested authentication of the workloads using the keys, enabling access controls that ensure only authorized entities can access the cryptographic keys.
  4. Enhanced Visibility and Auditing: Integrate solutions that provide detailed visibility into how and where keys and credentials are used, enabling detection of usage anomalies and quick impact assessments during security incidents.
  5. Automated Lifecycle Management: Utilize automated tools to manage the entire lifecycle of keys and credentials, from creation and distribution to rotation, increasing confidence in your ability to roll keys when needed.
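As a small illustration of point 2, here is a sketch of a workload-side rotator that keeps a short-lived credential fresh; the issue function stands in for whatever secrets manager or workload identity service you actually use, and the lifetimes are arbitrary.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"sync"
	"time"
)

// Credential is a short-lived secret plus its expiry.
type Credential struct {
	Secret    string
	ExpiresAt time.Time
}

// issue stands in for a call to your secrets manager or workload identity
// service; here it just mints a random value that is good for ten minutes.
func issue() Credential {
	buf := make([]byte, 16)
	if _, err := rand.Read(buf); err != nil {
		panic(err)
	}
	return Credential{
		Secret:    hex.EncodeToString(buf),
		ExpiresAt: time.Now().Add(10 * time.Minute),
	}
}

// rotator keeps the current credential fresh, replacing it well before expiry
// so that any exfiltrated copy loses its value quickly.
type rotator struct {
	mu      sync.RWMutex
	current Credential
}

func (r *rotator) Current() Credential {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return r.current
}

func (r *rotator) run() {
	for {
		cred := issue()
		r.mu.Lock()
		r.current = cred
		r.mu.Unlock()
		// Renew at half the remaining lifetime rather than waiting for expiry.
		time.Sleep(time.Until(cred.ExpiresAt) / 2)
	}
}

func main() {
	r := &rotator{current: issue()}
	go r.run()
	fmt.Println("current credential:", r.Current().Secret)
}
```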

This combination of approaches not only protects keys and credentials from theft and reduces their value to attackers but also ensures their proper and secure use, which turns key management into more of a risk management function. A good litmus test for effective key management is whether, in the event of a security incident, you could rotate keys and credentials in a timely manner without causing downtime, or assess with confidence that the keys and credentials were sufficiently protected throughout their lifecycle so that a compromise of an environment that uses cryptography does not translate to a compromised key.

Thinking more holistically about the true key lifecycle and its threat model can help ensure you pass these basic tests.

Turning Catastrophe into Opportunity: Improving Processes and Vendor Accountability

We often hear about customers not wanting to deploy agents, usually citing the rationale that, while each one may only consume a bit of memory and CPU, the sum of them slowly but surely grinds systems to a halt. The real impact, though, is management overhead, operational risk, and attack surface. Attack surface is also something that requires nuance to understand. For example, software that runs in a privileged context, such as in the kernel, parses data, communicates on a network, or is loaded at boot represents a lot more risk than software that runs with least privilege in its own user session or in the runtime context of the user.

As they say, you can’t have your cake and eat it too. There are always trade-offs, and when it comes to software, those trade-offs usually involve choosing between performance, security, and delivering on a value proposition.

The full impact of the CrowdStrike outage won’t be understood for some time, but one thing is for sure: organizations that sell solutions reliant on agents will need to provide much more justification and explanation about how their software works and how they manage updates.

It also means that organizations, at least the mature ones, will be re-evaluating what they have on these endpoints, the value they provide, and the risks they represent from both security and operational perspectives. These organizations will also be revisiting the controls they use to manage the ingestion of the software they rely on, and how they manage that risk over time since software is a living entity and not something static.

In short, like most catastrophes, there is a silver lining. This will be a great opportunity to improve existing processes to help prevent entire classes of failures like this. Hopefully, it will include a more robust investment in holding vendors accountable and thoroughly checking their work.

Reading the Tea Leaves: What Led to the Largest IT Outage in History

Last night, on July 18, 2024, a significant IT outage disrupted businesses worldwide, affecting airlines, financial services, TV broadcasters, and more. Some have described this outage as potentially the “largest IT outage in history.” It was linked to a recent update from cybersecurity giant CrowdStrike. According to CrowdStrike CEO George Kurtz, the disruption was due to a defect in a content update for Windows hosts (source). This incident underscores a critical lesson in software deployment: the importance of thorough testing and staged rollouts, often referred to as canarying.

What is Canarying?

Canarying a release involves deploying updates to a small subset of users or systems before a full-scale rollout. Named after the canaries once used in coal mines to detect toxic gases, canarying in software development serves as an early warning system. By releasing software to a limited audience first, potential issues can be identified in a controlled environment, allowing for fixes before broader deployment.
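As an illustration of the mechanics (not how CrowdStrike or any particular vendor actually gates updates), here is a sketch of deterministic canary bucketing: each host hashes into a stable ring, and rings are enabled progressively as telemetry stays healthy.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// rolloutBucket deterministically maps a host identifier into 0..99 so the
// fleet can be carved into stable canary rings.
func rolloutBucket(hostID string) uint32 {
	sum := sha256.Sum256([]byte(hostID))
	return binary.BigEndian.Uint32(sum[:4]) % 100
}

// shouldReceiveUpdate reports whether this host is inside the current rollout
// percentage, e.g. 1% first, then 10%, 50%, and finally 100%.
func shouldReceiveUpdate(hostID string, rolloutPercent uint32) bool {
	return rolloutBucket(hostID) < rolloutPercent
}

func main() {
	hosts := []string{"edge-001", "pos-terminal-17", "checkin-kiosk-9"}
	for _, h := range hosts {
		// Stage 1: ship the update to ~1% of the fleet, watch crash and boot
		// telemetry, then widen the percentage only if it stays healthy.
		fmt.Printf("%s gets the canary build: %v\n", h, shouldReceiveUpdate(h, 1))
	}
}
```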

Did Canarying Happen?

The recent outage suggests that these canarying stages were either skipped or inadequately performed. In complex IT environments, the surface area for potential configurations of general-purpose computing endpoints is vast, making it impossible to test all combinations of usage patterns, hardware, software, and configurations. This is where canarying proves invaluable. Issues that are difficult to detect in a general release can be caught by exposing the update to a limited audience first. The failure to do so can lead to significant disruptions, as seen in this case. This isn’t solely CrowdStrike’s problem; it affects customers too. Enterprises often use tools like Microsoft’s Windows Server Update Services (WSUS) to control what and when updates are deployed to their devices, mitigating risks associated with issues just like this.

Looking at the Scale

The broader implications of this incident echo concerns from the 2000s about the risks of a Windows monoculture (source). Back then, critics argued that Microsoft’s dominance created a single point of failure. If a vulnerability or defect affected Windows, it could potentially impact millions of systems worldwide. This monoculture made it easier for malware to propagate, as seen with worms like Code Red (source) and Nimda (source), which exploited Windows systems en masse.

Historical incidents like the WannaCry ransomware attack (source) and the recent CDK Global outage impacting the automotive industry illustrate the widespread impact software vulnerabilities can have. The CDK outage, caused by two cyberattacks, forced the company to shut down its systems for approximately two weeks, resulting in an estimated $1.02 billion loss for dealerships (source). During that time, the impacted dealerships had the option to revert to pen-and-paper and third-party software workarounds. That will likely not be possible in the CrowdStrike outage; as a result, it will likely have a much larger financial impact, especially given the broader scope and scale of the affected industries. Moreover, this incident echoes the monoculture concerns the CSRB highlighted in the STORM-0558 incident, where Google’s response to the CSRB report on Microsoft emphasized the risks of relying on a single vendor (source).

Conclusion

This outage reminds us of a key lesson: always thoroughly test software before full deployment. But there’s a bigger lesson here, one that echoes Ken Thompson’s famous words: “You can’t trust code that you did not totally create yourself.” This outage highlights the risks of relying too heavily on third-party software without proper mitigating controls.

To build more reliable systems, we need to be vigilant. This means not just blindly trusting vendors but verifying their work. It’s also crucial for basic business continuity, not just for avoiding security defects (source) and regressions like we recently saw with regreSSHion (source). And let’s not forget the risks associated with software monocultures; we should diversify our software, to the extent we can, to reduce the risk of widespread failures, and where we can’t, implement appropriate mitigating controls.

In essence, while we may not write every line of code ourselves, we must manage the risks represented by each piece of software we rely on, or we are doomed to repeat outages like today’s.

Content is King in Phishing and the Role of Publicly Trusted CAs

Phishing attacks often begin with a seemingly simple email. These emails appear to be from trusted sources and include links to fake websites or messages that create a false sense of urgency, prompting users to act without considering the consequences. Once on the phishing site, everything looks familiar. Attackers meticulously copy the layout, design, and even the content from real websites. The only noticeable difference in a well-executed phishing site is where the data is submitted.

Although studies show that users typically do not scrutinize the address bar or understand URLs, some more technical users may check the domain name hosting the content as a last resort to verify its legitimacy. This approach, however, is problematic for several reasons:

  1. Domain Confusion: Not everyone knows which domain is appropriate for which content. For example, TD Bank, N.A. issues co-branded credit cards for various companies. When paying your credit card bill online, you might see a domain name owned by TD Bank rather than the brand on your credit card.
  2. Brand Ambiguity: The global uniqueness of business names or brands is not something you can rely on. For instance, when I hear “Stripe,” I think of the San Francisco-based payments company, but there could be other companies with the same name, such as one in the pavement striping industry. To put that into context, it costs about $100 and 24 hours to register a company with virtually whatever name you want.
  3. Unrelated Legal and Domain Names: Company brands and their legal names often differ, as do their domain names. This discrepancy makes domain names more useful to attackers than to users for detection.

Furthermore, compromised websites from small businesses, churches, and other less technical organizations are frequently used to host phishing content. The key takeaway here is that content, not the domain name, is what makes a phishing campaign successful. The familiar look and feel of the content on phishing sites play a crucial role in deceiving users, making them less likely to question the site’s authenticity.

The Role of Certificate Authorities

Certificate authorities (CAs) in the web ecosystem exist to solve the trust-on-first-use problem introduced by our desire to protect the content we submit over the web from prying eyes (for more information, see TOFU and the Web). At its core, this problem arises because if you perform an anonymous key exchange or simply download the public key of a website from an unauthenticated source like DNS (while DNSSEC does technically exist, it is poorly adopted outside of TLDs), an attacker could replace the published key with one they control. This allows them to perform a man-in-the-middle (MITM) attack.

Certificate authorities are the answer to this problem. Browsers maintain a set of requirements that CAs must meet if they are to be distributed as “trusted” within their browsers (for more information, see Why We Trust WebPKI Root Certificate Authorities). In this ecosystem, browsers delegate solving the trust-on-first-use problem to the CAs. Essentially, they require CAs to prove that the requester of a certificate has administrative control over the domain and issue a certificate that attests to this. This enables browsers, as agents of the user, to assure the user that no one else should be able to intercept the information sent to the website (for some value of “website” — the modern website is made up of many third-party components that can potentially see your content also, but that’s a topic for another post).
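For a sense of what “proving administrative control” looks like mechanically, here is a simplified sketch in the spirit of ACME’s HTTP-01 challenge (the real method, defined in RFC 8555, binds the token to the account key thumbprint and is performed far more carefully; the domain and token values here are placeholders).

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// checkHTTPChallenge is a simplified HTTP-01-style check: the applicant is
// told to publish a specific value at a well-known path, and the CA fetches
// it over the network to confirm administrative control of the domain.
func checkHTTPChallenge(domain, token, expected string) (bool, error) {
	client := &http.Client{Timeout: 10 * time.Second}
	url := fmt.Sprintf("http://%s/.well-known/acme-challenge/%s", domain, token)

	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(io.LimitReader(resp.Body, 4096))
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(body)) == expected, nil
}

func main() {
	// Placeholder values; a real CA validates from multiple network vantage
	// points and checks CAA records before issuing.
	ok, err := checkHTTPChallenge("example.com", "token123", "token123.accountKeyThumbprint")
	fmt.Println("domain control demonstrated:", ok, err)
}
```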

Where things start to get tricky is how standards are defined. Anyone who works in a standards body knows that getting to consensus is hard and often leads to less-than-ideal explanations of things. This is made worse by the fact that not everyone who participates understands the whole problem domain. Unfortunately, this is as true in the CA/Browser Forum as it is in other standards bodies. In the context of phishing, this comes into play in the “Baseline Requirements for the Issuance and Management of Publicly-Trusted TLS Server Certificates”, which has the concept of a “High-Risk Certificate Request” that states:

High-Risk Certificate Request: A Request that the CA flags for additional scrutiny by reference to internal criteria and databases maintained by the CA, which may include names at higher risk for phishing or other fraudulent usage, names contained in previously rejected certificate requests or revoked Certificates, names listed on the Miller Smiles phishing list or the Google Safe Browsing list, or names that the CA identifies using its own risk-mitigation criteria.

(It is worth noting that Google Safe Browsing isn’t a list, it is a web service… sigh.)


And

The CA SHALL provide all personnel performing information verification duties with skills-training that covers basic Public Key Infrastructure knowledge, authentication and vetting policies and procedures (including the CA’s Certificate Policy and/or Certification Practice Statement), common threats to the information verification process (including phishing and other social engineering tactics), and these Requirements.

On its surface, this seems harmless enough. It doesn’t mandate that the CA do anything about phishing beyond training its validation staff on the topic, but it does allow them to act on it if they can. After all, maybe they see phishing content and can stop issuance altogether before a certificate is issued, preventing the use of SSL on the phishing site entirely. The problem is that all of this does more harm than good.

Why CAs Don’t Make Good Content Police

Determining whether content is phishing or legitimate can be highly subjective and context-dependent. What might appear as a phishing attempt to one person could be a legitimate operation to another.

As of 2024, there are approximately 1.13 billion websites on the internet, of which around 201.9 million are actively maintained and regularly updated. Each of these websites often consists of many different subdomains, all of which receive certificates, the large majority via automation with no opportunity for manual content inspection.

Simply put, this is a scale of a problem that does not lend itself to manual review or subjective assessments of phishing. But it gets worse.

There are around 86 WebPKI trusted certificate authorities on the web today, many of which operate with just a handful of people. Beyond that, though they exist to serve the whole web, they are in many different legal jurisdictions around the world, each with its own cultural norms and expectations. However, the web is an international asset, and if we were to rely on CAs to be the content police on the web, we would have hugely inconsistent results, especially given the current rules.

It is also worth noting that if decision-making power regarding content trustworthiness is applied at domain control verification time, users are disempowered, and it begins to resemble censorship rather than protection. Users should have the ability to choose what kind of subjectivity they want when it comes to protecting themselves from malicious content.

So why don’t we just make the rules clearer on what constitutes appropriate content? Most would agree this is a hugely difficult problem, but even if we put that aside, the reality is that CAs visit websites only at issuance time, often before there is any content published, since SSL is a requirement for a website to even launch. Beyond that, websites often serve different content to different users. In my last role at Microsoft, I was responsible for security engineering for the advertising business. Malicious advertisers would often serve Microsoft IP addresses content that met our policies but serve other users content that violated our policies. So even if CAs did check for phishing, all the phishers would need to do is serve clean content to the CA or change the content after the fact.

Beyond that, there is the question of how a CA would respond to finding a website hosting phishing content. They could revoke the certificate, but as I mentioned earlier, the website hosting the content has often been compromised in some way, and revoking the certificate would impact the legitimate site as well. It’s also not uncommon to see phishing content served through shared services like drive.google.com or S3 bucket URLs; revoking those certificates would impact all of those services’ users. And that is all assuming revocation was actually effective, which it is not.

Revocation services like OCSP and CRL repositories are often so slow or unreliable that browsers were never able to deploy what we call hard fail revocation. This means that even when a certificate is revoked, the revocation message might never reach the browser for various reasons, so the CA may feel better that they have revoked the certificate, but in practice, it would at best deliver inconsistent results, making the web less reliable without actually addressing the real problems that enable phishing.

For more insights on the challenges and limitations of CAs in fighting phishing and malware, check out Let’s Encrypt’s post on “The CA’s Role in Fighting Phishing and Malware”.

So What Can We Do to Help with the Problem of Phishing?

To effectively combat phishing, we need a robust reputation system that continuously monitors content over time from many different network perspectives. This includes residential IPs, commercial ones, and more. Here are just a few examples of things we could look at with such a service:

  1. Analyze Domain Registration: Look at the age and subject of the domain registration and the registrar of the domain since some are used more commonly than others.
  2. Examine Hosting Providers: Identify patterns in hosting providers serving the content, as certain providers may be more frequently associated with phishing activities.
  3. Inspect Website Scripts: Evaluate the JavaScript on the website to understand its functionality and detect potentially malicious behavior.
  4. Assess Content Similarity: Compare the content served by the website to other known websites to identify similarities that might indicate phishing.
  5. Utilize Machine Learning Models: Feed all of this data into well-trained machine learning models to generate probability scores indicating the likelihood of phishing (see the toy scoring sketch after this list).
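As a toy example only, here is what combining such signals into a score might look like; the features, weights, and threshold are entirely made up, and a production system would learn them from labeled data and recalibrate continuously.

```go
package main

import (
	"fmt"
	"math"
)

// siteSignals is a toy feature vector; real systems use far richer signals
// gathered continuously from many network perspectives.
type siteSignals struct {
	DomainAgeDays      float64
	RegistrarRiskScore float64 // 0..1, prior abuse rate seen at the registrar
	HosterRiskScore    float64 // 0..1, prior abuse rate seen at the hosting provider
	ScriptRiskScore    float64 // 0..1, from analysis of the served JavaScript
	ContentSimilarity  float64 // 0..1, similarity to known brand pages
}

// phishingScore squashes a weighted sum of the signals through a logistic
// function to produce a probability-like score between 0 and 1.
func phishingScore(s siteSignals) float64 {
	z := -2.0 +
		-0.01*s.DomainAgeDays +
		1.5*s.RegistrarRiskScore +
		1.2*s.HosterRiskScore +
		2.0*s.ScriptRiskScore +
		3.0*s.ContentSimilarity
	return 1.0 / (1.0 + math.Exp(-z))
}

func main() {
	suspect := siteSignals{
		DomainAgeDays:      2,
		RegistrarRiskScore: 0.7,
		HosterRiskScore:    0.6,
		ScriptRiskScore:    0.8,
		ContentSimilarity:  0.95,
	}
	fmt.Printf("phishing probability estimate: %.2f\n", phishingScore(suspect))
}
```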

This system should be integrated into user agents, such as browsers, so phishing checks can occur as part of the rendering process, providing real-time protection for users. Technologies like Google Safe Browsing and Microsoft SmartScreen already do some of this, and similar projects are often staffed with hundreds of engineers and other professionals. They deal with recourse for mislabeling, monitor false positives and false negatives, and tweak algorithms to improve solutions, all while designing privacy-preserving methods.

Are these services perfect? Absolutely not! Can they improve? The answer is unquestionably yes, but the key thing is that ad hoc censorship at issuance time by CAs is simply not the answer. It makes the web less reliable and at best offers a false sense of value in exchange for giving CAs a market message that they are fighting phishing when in reality they deliver no material value to the problem.

Global Consistency for AKD: Using Armored Witness to Prevent Split Views

By Al Cutter and Ryan Hurst

When it comes to end-to-end encryption, securing the directory of public keys for each recipient is crucial so relying parties can have confidence they are talking to the right party. An attacker who can manipulate these directories can impersonate any participant in the communication without detection.

Over the years, various efforts have worked towards addressing this challenge. These efforts aimed to solve the problem of undetectable modifications to these central key servers. For example:

  • CONIKS was the first academic attempt to create a key transparency framework that enabled users to verify the authenticity of keys in a privacy-preserving way without relying solely on the key server’s integrity.
  • Google Key Transparency, based on CONIKS, was Google’s effort to prove it was possible to deploy these patterns at an internet scale.
  • Apple Contact Discovery was the first deployment of these patterns for a large-scale messaging system.

Arguably the most successful and well-understood implementation of these patterns though is Facebook’s Auditable Key Directory (AKD), which is intended to be used across its messaging offerings but is already deployed with WhatsApp. Like the other solutions mentioned above, it is reliant on constant monitoring and verification to ensure global consistency, mitigating “Split View” attacks against its relying parties and ensuring these end-to-end encrypted messaging solutions maintain the promised security properties.

A research paper by Sarah Meiklejohn et al., titled “Think Global, Act Local: Gossip and Client Audits in Verifiable Data Structures” (https://arxiv.org/abs/2011.04551), set out to define a verifiable scheme to perform this verification in a cost-efficient and provable way. The essential idea in the paper is that the problem can be solved by gossiping the state of the underlying verifiable data structure across a network of perspectives. The goal is to ensure that relying parties verify they are seeing the same version of the directory as everyone else who is participating.

Introducing Google’s Armored Witness

Google’s Armored Witness project aims to operationalize this concept by providing a geographically distributed network of secure hardware-based “notaries.” Its design is based on custom hardware, the USB Armory, designed with security, verifiability, and maximal openness given commercially available components and associated constraints. It modifies the USB Armory by removing unused components and adding Power over Ethernet (PoE).

The USB Armory platform uses a Go unikernel design, which has many benefits. For one, Go naturally supports “reproducible builds,” meaning that any attempt to rebuild from an application’s source will yield the same binary. This is important because it enables the hardware to report what firmware it is running and map that back to the source, allowing you to understand what is happening inside. 

The hardware itself also contains a trusted execution environment and a secure element that enables sensitive code to be isolated and the keys to be secured from many classes of logical and physical attacks. These keys are then used to do the “notarization” of their observation of these ledgers.

Check out this presentation from Andrea Barisani at CanSecWest 2024 to learn more about the hardware.

Hardware alone isn’t enough, though. We need a global network based on these devices so that relying parties can get assurance that an adequate number of network perspectives are seeing the same log contents.

If you want to learn more about the broader problem being solved, check out this video of Filippo Valsorda from Real World Crypto 2024, and for more on the Witness network idea, Filippo also talked about the work here at the CATS conference.

How Might Armored Witness Apply to AKD?

Here’s how an integration of Google’s Armored Witness into Facebook’s AKD framework could work:

  1. Log Integration and Epoch Consistency: By integrating Armored Witness into the AKD framework, each Armored Witness device can act as a witness to a log of AKD epoch roots. Specifically:
    • Log Integration: Each AKD epoch root is added to the log, such that the Nth epoch root is present at the Nth log index.
    • Witness Network: The witness network, comprising Armored Witness devices, counter-signs checkpoints from these logs, providing an additional layer of verification.
  2. Client-Side Verification: As the AKD is updated, AKD clients are provided with epoch update bundles, which include:
    • The new directory epoch root commitment.
    • Inclusion proof hashes for the epoch root in the log.
    • The countersigned log checkpoint.
  3. Clients verify these bundles (see the sketch after this list) by:
    • Checking the signature on the directory epoch root.
    • Verifying the countersignatures on the log checkpoint.
    • Using the inclusion proof to confirm the correct log position of the epoch root, ensuring that the Nth root is in the Nth log position.
  4. Split View Protection: The distributed nature of Armored Witness devices ensures that:
    • Split views of the log are prevented due to the geographically distributed countersigning by the witness network.
    • The log sequence number matches the directory epoch number, maintaining a strict one-to-one correspondence.
  5. Heavy Lift Verifiers: Heavy lift verifiers, which perform more in-depth consistency checks, can source epoch roots from the log for pairwise append-only verification. This process ensures the correct operation of the directory over time, further strengthening the security framework.
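To give a feel for the inclusion check in step 3, here is a minimal sketch of RFC 6962-style Merkle inclusion-proof verification. This is generic transparency-log verification, not Facebook’s AKD client or the Armored Witness codebase; checking the directory signature and the witness countersignatures on the checkpoint is elided.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// hashLeaf and hashChildren follow the RFC 6962 conventions used by most
// transparency logs: a 0x00 prefix for leaves, 0x01 for interior nodes.
func hashLeaf(data []byte) []byte {
	h := sha256.New()
	h.Write([]byte{0x00})
	h.Write(data)
	return h.Sum(nil)
}

func hashChildren(left, right []byte) []byte {
	h := sha256.New()
	h.Write([]byte{0x01})
	h.Write(left)
	h.Write(right)
	return h.Sum(nil)
}

// verifyInclusion recomputes the root from a leaf hash at position index in a
// tree of size treeSize using the audit path, and compares it against the
// root carried in the (countersigned) checkpoint.
func verifyInclusion(leafHash []byte, index, treeSize uint64, proof [][]byte, root []byte) bool {
	if index >= treeSize {
		return false
	}
	fn, sn := index, treeSize-1
	r := leafHash
	for _, p := range proof {
		if sn == 0 {
			return false
		}
		if fn%2 == 1 || fn == sn {
			r = hashChildren(p, r)
			for fn%2 == 0 && fn != 0 {
				fn >>= 1
				sn >>= 1
			}
		} else {
			r = hashChildren(r, p)
		}
		fn >>= 1
		sn >>= 1
	}
	return sn == 0 && bytes.Equal(r, root)
}

func main() {
	// Toy two-leaf log holding two epoch roots.
	l0 := hashLeaf([]byte("epoch-root-0"))
	l1 := hashLeaf([]byte("epoch-root-1"))
	root := hashChildren(l0, l1)

	// Prove that epoch root 0 sits at index 0 in a tree of size 2.
	ok := verifyInclusion(l0, 0, 2, [][]byte{l1}, root)
	fmt.Println("epoch root included at the expected position:", ok)
}
```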

Conclusion

Integrating Google’s Armored Witness with Facebook’s AKD framework offers a simple solution to a very hard problem. More importantly, adopting this approach not only addresses the split-view problem for Facebook but also provides a substrate that can protect all verifiable data structures.

By leveraging a generic network of geographically distributed Armored Witness devices, solutions dependent on verifiable data structures to prove integrity, such as AKD deployments, can achieve robust split-view protection, ensuring integrity and authenticity across different regions.

Timeless Farm Wisdom

My father grew up on a small farm in eastern Washington. They say you can take the boy out of the farm, but you can’t take the farm out of the boy. As a kid, I was always hearing farm life sayings from my grandfather and father. To this day, I think they are some of the best pieces of advice I have ever received. Here are a few:

  • Don’t wrestle with pigs. You both get filthy and the pig likes it: Avoid pointless arguments with people who thrive on conflict. It only wastes your time and energy.
  • Don’t count your chickens before they hatch: Don’t assume a deal is closed until the contract is signed.
  • Make hay while the sun shines: Capitalize on opportunities while they are available.
  • Don’t put all your eggs in one basket: Diversify your investments and strategies to mitigate risk.
  • The early bird catches the worm: Being proactive and early can lead to the best opportunities.
  • Every cloud has a silver lining: Look for opportunities in every challenge or setback.
  • Don’t cry over spilled milk: Learn from mistakes, but don’t dwell on them; move forward.
  • When the rooster crows, it’s time to get up: Start your day early to maximize productivity.
  • You can’t make an omelet without breaking a few eggs: Achieving success often requires taking some risks and making sacrifices.
  • You reap what you sow: Your results are a reflection of the effort and quality you put in.
  • Don’t look a gift horse in the mouth: Be grateful for opportunities and advantages you receive.
  • When in doubt, let your horse do the thinking: Trust the expertise of your team when you’re unsure.
  • Never approach a bull from the front, a horse from the rear, or a fool from any direction: Avoid unnecessary risks and confrontations in business.
  • Don’t close the barn door after the horse has bolted: Take preventive measures rather than waiting until you need to react.
  • A stitch in time saves nine: Address small issues before they become large problems.
  • The squeaky wheel gets the grease: The most noticeable or pressing issues are the ones that receive the most attention and resources first.

Understanding Patterns in WebPKI CA Issues

There’s a saying, “where there’s smoke, there’s fire.” This adage holds especially true in the context of WebPKI Certificate Authorities (CAs). Patterns of issues are one of the key tools that root programs use to understand what’s happening inside organizations. While audits are essential, they are often insufficient. Historical cases like Wirecard and Enron illustrate how audits can provide a partial and sometimes incorrect picture. Just as in most interactions in life, understanding who you are dealing with is crucial for successful navigation, especially when a power differential exists.

The Limitations of Audits

Currently, there are 86 organizations in the Microsoft root program. Most root programs have at most two people involved in monitoring and policing these 86 CAs. Technologies like Certificate Transparency make this possible, and open-source tools like Zlint and others use this data to find technically observable issues. However, these tools, combined with audits, only provide a small slice of the picture. Audits are backward-looking, not forward-looking. To understand where an organization is going, you need to understand how they operate and how focused they are on meeting their obligations.

This is where the nuanced side of root program management, the standards, and norms of the ecosystem, come into play. If we look at signals in isolation, they often appear trivial. However, when we examine them over a long enough period in the context of their neighboring signals, a more complete picture becomes apparent.

For example, consider a CA with minor compliance issues that seem trivial in isolation. A single misissued certificate might not seem alarming. But when you see a pattern of such incidents over time, combined with other issues like poor incident response or associations with controversial entities, the picture becomes clearer. These patterns reveal deeper issues within the organization, indicating potential systemic problems.

Root Program Challenges

Root programs face significant challenges in managing and monitoring numerous CAs. With limited personnel and resources, they rely heavily on technology and community vigilance. Certificate Transparency logs and tools like Zlint help identify and flag issues, but they are only part of the solution. Understanding the intentions and operational integrity of CAs requires a deeper dive into their practices and behaviors.

In the WebPKI ecosystem, context is everything. Root programs must consider the broader picture, evaluating CAs not just on isolated incidents but on their overall track record. This involves looking at how CAs handle their responsibilities, their commitment to security standards, and their transparency with the community. A CA that consistently falls short in these areas, even in seemingly minor ways, can pose a significant risk to the ecosystem.

Conclusion

Understanding the nuances of CA operations and focusing on their adherence to obligations is critical. By examining patterns over time and considering the broader context, root programs can better identify and address potential risks. The combination of audits, technological tools, and a keen understanding of organizational behavior forms a more comprehensive approach to maintaining trust in the WebPKI system.

It’s important for CAs to keep this in mind. After all, it’s not just what you do, but what you are seen to do. Having your house in order is essential. By learning from past mistakes and focusing on continuous improvement, organizations can navigate public reporting obligations more effectively, ensuring they emerge stronger and more resilient.

Exploring Browser Distrust

Browser distrust events involving WebPKI Certificate Authorities occur, on average, approximately every 1.23 years. These events highlight the critical role the WebPKI plays in maintaining secure communications on the internet and how failures within this system can have far-reaching implications. By examining these incidents, we can identify common patterns and underlying causes that lead to distrust so that, as implementers and operators, we don’t end up repeating the same mistakes.

Identifying Common Patterns

As they say, those who don’t know history are destined to repeat it, so it is worthwhile to take a look at the history of CA distrust events to understand what common patterns, if any, exist:

  • Security Breaches: Involves unauthorized access to the CA’s infrastructure, leading to potential misuse of certificate issuance capabilities.
  • Compromise of CA Infrastructure: Refers to breaches where the core infrastructure of the CA is compromised, resulting in unauthorized certificate issuance.
  • Fraudulent Certificates: Occurs when certificates are issued without proper authorization, often leading to the impersonation of legitimate websites.
  • Poor Incident Response Handling: Indicates that the CA failed to adequately respond to security incidents, exacerbating the impact of the initial problem.
  • Misissuance of Certificates: Happens when CAs issue certificates incorrectly, either to the wrong entities or without proper validation, undermining trust.
  • Facilitating Man-In-The-Middle Attacks: Refers to situations where misissued or improperly handled certificates enable attackers to intercept and alter communications.
  • Improper Practices: Includes actions by CAs that deviate from accepted standards and best practices, leading to security risks.
  • Deceptive Actions: Involves deliberate misleading actions by CAs, such as backdating certificates or other forms of dishonesty.
  • Insecure Practices: Encompasses practices by CAs that fail to maintain adequate security controls, leading to vulnerabilities.
  • Non-Compliance with Industry Standards: Indicates that the CA has repeatedly failed to adhere to industry standards and guidelines, leading to a loss of trust.
  • Ties to Controversial Entities: Involves associations with entities that raise ethical or security concerns, leading to distrust.
  • Limited Value to Ecosystem: Indicates that the CA does not provide significant value to the security ecosystem, often due to questionable practices or minimal compliance.
  • Operational Vulnerabilities: Refers to weaknesses in the CA’s operational security, such as using default passwords or having exposed administrative tools, making them susceptible to attacks.

Browser Distrust Events

  1. DigiNotar (2011):
    • Event: DigiNotar was hacked, leading to the issuance of fraudulent certificates. This prompted Mozilla, Google, and Microsoft to revoke trust in DigiNotar certificates.
    • Labels: Security Breaches, Compromise of CA Infrastructure, Fraudulent Certificates, Poor Incident Response Handling
    • Details: Mozilla Security Blog, Threatpost
  2. TurkTrust (2013):
    • Event: It was discovered that TurkTrust mistakenly issued two intermediate CA certificates, one of which was used to issue a fraudulent certificate for *.google.com. This led to the distrust of the TurkTrust CA by major browsers.
    • Labels: Misissuance of Certificates, Facilitating Man-In-The-Middle Attacks
    • Details: Krebs on Security
  3. ANSSI (French CA) (2013):
    • Event: It was discovered that ANSSI had issued a certificate to a network appliance company, which used it to inspect encrypted traffic. This led Google to revoke trust in the intermediate certificate.
    • Labels: Misissuance of Certificates, Facilitating Man-In-The-Middle Attacks
    • Details: Google Security Blog, Mozilla Blog
  4. CNNIC (China Internet Network Information Center) (2015):
    • Event: CNNIC was distrusted after it issued an intermediate certificate to MCS Holdings, which misused it to issue unauthorized certificates.
    • Labels: Misissuance of Certificates, Facilitating Man-In-The-Middle Attacks, Improper Practices
    • Details: Tom’s Hardware, Mozilla Security Blog, Mozilla Security Blog
  5. WoSign and StartCom (2016):
    • Event: WoSign (and StartCom) were distrusted after discovering multiple security issues, including backdating certificates, lying, and improper issuance.
    • Labels: Misissuance of Certificates, Deceptive Actions, Insecure Practices
    • Details: Tom’s Hardware, Google Security Blog
  6. Symantec (2017):
    • Event: Google announced a gradual distrust of Symantec certificates due to numerous instances of certificate misissuance, impacting millions of websites.
    • Labels: Misissuance of Certificates, Non-Compliance with Industry Standards, Poor Incident Response Handling
    • Details: Bleeping Computer, Google Security Blog
  7. Certinomis (2019):
    • Event: Mozilla distrusted Certinomis due to numerous incidents of misissuance and poor handling of security concerns.
    • Labels: Misissuance of Certificates, Facilitating Man-In-The-Middle Attacks
    • Details: Venafi Blog
  8. PROCERT (2020):
    • Event: Mozilla distrusted Procert due to numerous incidents of misissuance and poor handling of security concerns.
    • Labels: Non-Compliance with Industry Standards, Poor Incident Response Handling
    • Details: Venafi Blog
  9. Camerfirma (2021):
    • Event: Mozilla and Google removed trust in Camerfirma due to a series of compliance issues and failure to maintain industry standards.
    • Labels: Non-Compliance with Industry Standards, Poor Incident Response Handling
    • Details: The Register
  10. TrustCor (2022):
    • Event: TrustCor was distrusted due to concerns about its ties to companies linked to the US intelligence community and its failure to provide satisfactory responses to these concerns.
    • Labels: Ties to Controversial Entities, Limited Value to Ecosystem
    • Details: gHacks, SSLs.com, SSL Shopper
  11. Visa (2022):
    • Event: Issues included incomplete security audits that are required by root program policies and the CA/Browser Forum Baseline Requirements.
    • Labels: Non-Compliance with Industry Standards, Poor Incident Response Handling
    • Details: Feisty Duck
  12. e-Tugra (2023):
    • Event: e-Tugra was distrusted due to security concerns. A researcher found numerous vulnerabilities in e-Tugra’s systems, including default passwords and accessible administrative tools, leading to a loss of trust.
    • Labels: Operational Vulnerabilities, Insecure Practices, Poor Incident Response Handling
    • Details: Sectigo, GitHub, Ian Carroll’s Blog
  13. Ecommerce CA (EU) (2024):
    • Event: The Ecommerce CA in the EU faced browser distrust due to various security and compliance issues, leading to its removal from trusted lists.
    • Labels: Non-Compliance with Industry Standards, Operational Vulnerabilities, Poor Incident Response Handling
    • Details: EuroCommerce, Ecommerce Europe
  14. Entrust (2024):
    • Event: The Chrome Security Team announced the distrust of several Entrust roots due to a pattern of compliance failures and unmet improvement commitments.
    • Labels: Non-Compliance with Industry Standards, Poor Incident Response Handling
    • Details: Google Security Blog

Conclusion

The frequency and patterns of browser distrust events underscore the critical importance of preventive measures, transparency, and effective incident response.

Implementing robust security practices, conducting regular audits, and maintaining compliance with industry standards can significantly reduce the risk of such incidents. Transparency in operations and public disclosure of security issues foster an environment of accountability and trust.

An ounce of prevention is indeed worth more than a pound of cure. By focusing on proactive measures and cultivating a culture of continuous improvement, Certificate Authorities can better navigate the complexities of WebPKI. Effective crisis communication and incident response plans are essential for managing the fallout from security breaches and maintaining the trust of users and the broader web community.

By learning from past incidents and addressing their root causes, we can work towards a more secure and resilient internet, where trust in the WebPKI system is consistently upheld. The collective effort of CAs, browser vendors, and security researchers will play a pivotal role in achieving this goal, ensuring the integrity and reliability of our online ecosystem.

Navigating Public Reporting Obligations in WebPKI and Beyond

Incident response is notoriously challenging, and with the rise in public reporting obligations, the stakes have never been higher. In the WebPKI world, mishandling an incident can severely damage a company’s reputation and revenue, and can sometimes even end the business. The Cyber Incident Reporting for Critical Infrastructure Act of 2022 has intensified this pressure, requiring some companies to report significant breaches to CISA within 72 hours. This isn’t just about meeting deadlines. Look at the recent actions of the Cyber Safety Review Board (CSRB), which investigates major cyber incidents much as plane crashes are scrutinized. The recent cascade of Entrust incidents in the WebPKI ecosystem, and the scrutiny Entrust has come under as a result, shows how critical it is to respond professionally, humbly, swiftly, and transparently. The takeaway? If you don’t respond adequately to an incident, someone else may do it for you, and even if they don’t, mishandling can send things spiraling out of control.

The Complexity of Public Reporting

Public reports attract attention from all sides—customers, investors, regulators, the media, and more. This means your incident response team must be thorough and meticulous, leaving no stone unturned. Balancing transparency with protecting your organization’s image is critical. A well-managed incident can build trust, while a poorly handled one can cause long-term damage.

Public disclosures can also carry legal ramifications. Everything must be vetted to ensure compliance and mitigate potential liabilities. With tight timelines like CISA’s 72-hour reporting requirement, there’s little room for error. Gathering and verifying information quickly is challenging, especially while the situation is still unfolding. Moreover, public reporting requires seamless coordination between IT, legal, PR, and executive teams; miscommunication can lead to inconsistencies and errors in the public narrative.

The Role of Blameless Post-Mortems

Blameless post-mortems are invaluable. When there’s no fear of blame, team members are more likely to share all relevant details, leading to a clearer understanding of the incident. These post-mortems focus on systemic issues rather than pointing fingers, which helps prevent similar problems in the future. By fostering a learning culture, teams can improve continuously without worrying about punitive actions.

It’s essential to identify the root causes of incidents and ensure they are fixed durably across the entire system. When the same issues happen repeatedly, it indicates that the true root causes were not addressed. Implementing automation and tooling wherever possible is crucial so that you always have the information needed to respond quickly. Incidents that close quickly have minimal impact, whereas those that linger can severely damage a business.
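
To make that concrete, here is a minimal sketch of the kind of automation that keeps this information at hand; it is my own illustration rather than tooling described in this post. It tracks how long incidents stay open and flags root causes that keep recurring, which is the signal that a durable fix never landed. The Incident type, its field names, and the thresholds are all hypothetical.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional


@dataclass
class Incident:
    incident_id: str
    root_cause: str                    # hypothetical label, e.g. "profile change missed by linter"
    opened: datetime
    closed: Optional[datetime] = None  # None while the incident is still open

    def time_open(self, now: datetime) -> timedelta:
        return (self.closed or now) - self.opened


def lingering_incidents(incidents: List[Incident], now: datetime,
                        threshold: timedelta = timedelta(days=14)) -> List[Incident]:
    """Incidents that stayed open (or are still open) longer than the chosen threshold."""
    return [i for i in incidents if i.time_open(now) > threshold]


def recurring_root_causes(incidents: List[Incident], min_count: int = 2) -> dict:
    """Root causes seen more than once -- a sign the durable, systemic fix never landed."""
    counts = Counter(i.root_cause for i in incidents)
    return {cause: n for cause, n in counts.items() if n >= min_count}
```

Even something this small, fed from a ticketing system, gives you time-to-close and repeat-offender data before a root program or regulator asks for it.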

Knowing they won’t be blamed, team members can contribute more calmly and effectively, improving the quality of the response. This approach also encourages thorough documentation, creating valuable resources for future incidents.

Evolving Public Reporting Obligations

New regulations demand greater transparency and accountability, pushing organizations to improve their security practices. With detailed and timely information, organizations can better assess and manage their risks. The added legal and regulatory pressure leads to faster and more comprehensive responses, reducing the time vulnerabilities are left unaddressed. However, these strict timelines and detailed disclosures increase stress on incident response teams, necessitating better support and processes. Additionally, when there are systemic failures in an organization, one incident can lead to others, overwhelming stakeholders and making it challenging to prioritize critical issues.

Importance of a Strong Communication Strategy

Maintaining trust and credibility through transparent and timely communication is essential. Clear messaging prevents misinformation and reduces panic, ensuring stakeholders understand the situation and response efforts. Effective communication can mitigate negative perceptions and protect your brand, even in the face of serious incidents. Proper communication also helps ensure compliance with legal and regulatory requirements, avoiding fines and legal issues. Keeping stakeholders informed supports overall recovery efforts by maintaining engagement and trust.

Implementing Effective Communication Strategies

Preparation is key. Develop a crisis communication plan that outlines roles, responsibilities, and procedures. Scenario planning helps anticipate and prepare for different types of incidents. Speed and accuracy are critical. Provide regular updates as the situation evolves to keep stakeholders informed.

Consistency in messaging is vital. Ensure all communications are aligned across all channels and avoid jargon. Transparency and honesty are crucial—acknowledge the incident and its impact, and explain the steps being taken to address it. Showing empathy for those affected and offering support and resources demonstrates that your organization cares. Keep employees informed about the incident and the organization’s response through regular internal briefings to ensure all teams are aligned and prepared to handle inquiries.

Handling Open Public Dialogues

Involving skilled communicators who understand both the technical and broader implications of incidents is crucial. Coordination between legal and PR teams ensures that messaging is clear and accurate. Implement robust systems to track all public obligations, deadlines, and commitments, with regular audits to ensure compliance and documentation. Prepare for potential delays or issues with contingency plans and pre-drafted communications, and proactively communicate if commitments cannot be met on time.

  • Communication with Major Customers: It often becomes necessary to keep major customers in the loop, providing them with timely updates and reassurances about the steps being taken. Build plans for how to proactively do this successfully.
  • Clear Objectives and Measurable Criteria: Define clear and measurable criteria for what good public responses look like and manage to this. This helps ensure that all communications are effective and meet the required standards.
  • External Expert Review: Retain external experts to review your incidents with a critical eye whenever possible. This helps catch misframing and gaps before you step into a tar pit.
  • Clarity for External Parties: Remember that external parties won’t understand your organizational structure and team dynamics. It’s your responsibility to provide them with the information needed to interpret the report the way you intended.
  • Sign-Off Process: Have a sign-off process for stakeholders, including technical, business, and legal teams, to ensure the report provides the right level of information needed by its readers.
  • Track Commitments and Public Obligations: Track every commitment and public obligation and respond by each committed date. If you can’t meet a deadline, let the public know ahead of time (a minimal tracking sketch follows this list).
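
As a concrete illustration of such a register, here is a minimal sketch; it assumes nothing about your actual tooling, and the Commitment type, the CISA-style 72-hour example, and every timestamp in it are made up for this post. It records each public promise with an owner and a due date and answers the question "what is coming due soon?" so you can communicate proactively before a deadline slips.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import List


@dataclass
class Commitment:
    description: str
    owner: str
    due: datetime
    done: bool = False


def due_soon(commitments: List[Commitment], now: datetime,
             window: timedelta = timedelta(hours=24)) -> List[Commitment]:
    """Open commitments due within the window -- candidates for proactive updates."""
    return sorted(
        (c for c in commitments if not c.done and c.due <= now + window),
        key=lambda c: c.due,
    )


# Example: a 72-hour reporting window measured from the moment the incident was
# determined to be reportable (the timestamp is illustrative).
determined_at = datetime(2024, 7, 1, 9, 0, tzinfo=timezone.utc)
register = [
    Commitment("File report with CISA", owner="legal", due=determined_at + timedelta(hours=72)),
    Commitment("Post public incident update", owner="comms", due=determined_at + timedelta(days=7)),
]
for c in due_soon(register, now=determined_at + timedelta(hours=60)):
    print(f"DUE {c.due:%Y-%m-%d %H:%M} UTC: {c.description} (owner: {c.owner})")
```

The point is not the code but the discipline: every public promise gets an owner, a date, and a reminder that fires well before it is due.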

In the end, humility, transparency, and accountability are what make a successful public report.

Case Study: WoSign’s Non-Recoverable Loss of Trust

Incident: WoSign was caught misrepresenting several aspects of its certificate issuance practices, including backdating certificates and concealing its acquisition of StartCom.

Outcome: Major browsers removed WoSign and StartCom from their trusted root stores, and the loss of trust proved non-recoverable.

Impact: This example underscores the importance of transparency and honesty in public reporting, as once trust is lost, it may never be regained.

Case Study: Symantec and the Erosion of Trust

Incident: Symantec, one of the largest Certificate Authorities (CAs), improperly issued numerous certificates, including test certificates for domains it did not own and certificates for Google domains without proper authorization. Its non-transparent, combative behavior and unwillingness to publicly identify the true root cause led to its ultimate distrust.

Outcome: This resulted in a significant loss of trust in Symantec’s CA operations. Both Google Chrome and Mozilla Firefox announced plans to distrust Symantec certificates, forcing the company to transition its CA business to DigiCert.

Impact: The incident severely damaged Symantec’s reputation in the WebPKI community and resulted in operational and financial setbacks, leading to the sale of their CA business.

Conclusion

Navigating public reporting obligations in WebPKI and other sectors is undeniably complex and challenging. However, by prioritizing clear, honest communication and involving the right professionals, organizations can effectively manage these complexities. Rigorous tracking of obligations, proactive and transparent communication, and a robust incident response plan are critical. Case studies like those of WoSign and Symantec underscore the importance of transparency and honesty—once trust is lost, it may never be regained.

To maintain trust and protect your brand, develop a crisis communication plan that prioritizes speed, accuracy, and empathy. Consistent, transparent messaging across all channels is vital, and preparing for potential incidents with scenario planning can make all the difference. Remember, how you handle an incident can build or break trust. By learning from past mistakes and focusing on continuous improvement, organizations can navigate public reporting obligations more effectively, ensuring they emerge stronger and more resilient.