Gov ID: If at First You Don’t Succeed, Try, Try Again

In the eIDAS 2.0 framework, the identity wallet is central to its expanded scope, mirroring early European government efforts to deploy smart card-based national identity cards as well as subsequent identity wallet attempts. These efforts saw limited adoption, except for a few cases such as the Estonian national identity card, the Swedish e-identification, and the Dutch eID schemes. It seems that this part of eIDAS 2.0 is an effort to blend the best aspects of these projects with elements of Web3 in an attempt to achieve a uniform solution.

A significant shift from these past identity wallet efforts is the government’s role in identity verification, reminiscent of the earlier smart card national ID initiatives. This approach diverges from the prior identity wallet models, where external entities such as banks, telecoms, and commercial identity verification companies were responsible for verification. This combination potentially helps pave the way for holistic public sector adoption similar to what was seen with the success of Estonia’s national ID project, just on a much larger scale.

With that said, it is important to remember that the majority of past efforts have struggled to achieve broad adoption. For example, the GOV.UK Verify platform encountered substantial usability issues, leading to resistance and eventually discontinued use by organizations that were mandated to use it. While the software-based nature of identity wallets may reduce deployment costs relative to smart cards, and government mandates could kick-start some level of adoption, the challenge of achieving widespread acceptance does not go away.

As it stands, it does seem that European CAs are betting on this to bootstrap a larger market for themselves. However, in a system as described above, this raises questions about the broader value and future role of third-party trust providers, especially in a world where HTTPS on the web is protected with domain-validated certificates that these CAs have largely ignored or resisted.

This brings us to the contentious issue of the eIDAS 2.0 framework’s push for Qualified Web Authentication Certificates (QWACs) and the enforced support by browsers. While it is tempting to look at these two parts of the effort in isolation, it is important to remember that regulations like these are the product of horse trading, so it is not surprising to see how clumsily this has all progressed.

As an aside, if you have not seen it, there was an interesting talk at the Chaos Computer Club last month about how badly these identity schemes have been executed that is worth watching. Only time will tell how effectively eIDAS 2.0 navigates these challenges and whether it can achieve the broad adoption that has eluded past initiatives.

Rethinking How We Assess Risk in the Software We Rely On

Despite today’s widespread use of open-source software, most software is still delivered in binary form. This includes everything from the foundational firmware of our computers to the applications we use for work, extending all the way to the containers running our server software in the cloud.

A significant challenge arises because, even if the source code of the software is available, reproducing the exact binary from it is often impossible. Consequently, companies and users are essentially operating on blind faith regarding any qualitative or quantitative assurances received from software suppliers. This stark reality played a critical role in the rapid and broad spread of the SolarWinds incident across the industry.

The SolarWinds Wake-Up Call

The SolarWinds attack underscored the risks inherent in placing our trust in software systems. In this incident, attackers infiltrated build systems, embedding malware into the legitimate SolarWinds software. Customers updating to the latest software version unwittingly became victims in this attack chain. It’s crucial to acknowledge that targeting a software supply chain for widespread distribution is not a new tactic. Ken Thompson, in his 1984 Turing Award Lecture, famously stated, “No amount of source-level verification or scrutiny will protect you from using untrusted code.” Regrettably, our approaches to this challenge haven’t significantly evolved since then.

Progress in the domain of supply chain security was initially slow. In 1996, Microsoft began promoting the concept of code signing with its Authenticode support, allowing customers to verify that their software hadn’t been altered post-distribution. Subsequently, the open-source movement gained traction, particularly following the release of Netscape Navigator’s source code. Over the next two decades, the adoption of open source, and to a lesser extent, code signing increased. The use of interpreted languages aided in understanding software operations, but as software grew in size and complexity, the demand for software engineers began to outstrip the supply. The adage “Given enough eyeballs, all bugs are shallow” suggests that greater openness can enhance security, yet the industry has struggled to develop a talent pool and incentive models robust enough to leverage source code availability effectively.

Before the SolarWinds incident, the industry, apart from some security engineers advocating for practices like reproducible builds, memory-safe languages, and interpreted languages, largely overlooked the topic of supply chain security. Notable initiatives like Google’s work on Binary Transparency, which predates SolarWinds, began to create an environment for broader adoption of code signing-like technologies with efforts like Go SumDB, SigStore, and Android’s Binary Transparency (each of which I had the opportunity to contribute to). However, even these solutions don’t fully address the challenge of understanding the issues within a binary, a problem that remains at the forefront of security.

The industry’s response to SolarWinds also included embracing the concept of Software Bill of Materials (SBOM). These artifacts, envisioned to be produced by the build system, document the, often third-party, components used in software. However, this approach faces challenges, such as the possibility of attackers manipulating SBOMs if they compromise the build system.

The complexity of compiled software adds another layer of difficulty. Each compiled dependency has its own dependencies, not all of which are publicly declared, as is the case with static dependencies. When software is compiled, only portions of the dependencies that are used get included, potentially incorporating multiple versions of a single dependency into the final binary. This complexity makes simple statements about software components, like “I use OpenSSL 1.0,” inaccurate for even moderately complex code. Moreover, the information derived from SBOMs is often not actionable. Without access to all sources or the ability to build binaries independently, users are left with CVE lists that provide more noise than actionable insight.

To make things worse, compilers, through build optimizations, can even remove security fixes that developers carefully put in to mitigate known issues, for example, code that scrubs memory to keep cryptographic keys and passwords from lingering where they could later be exposed or paged to disk.

The Critical Role of Binary Analysis

If all we have is a binary, the only way to understand the risks it represents is to analyze it in the same way an attacker would. However, doing this at scale and making the analysis actionable is challenging. Recent advancements in machine learning and language development are key to addressing this challenge.

Currently, tools that operate on binaries alone fall into two categories. The first are solutions akin to 1990s antivirus programs – matching binaries to known issues. The second category helps skilled professionals reverse engineer the binary’s contents more quickly.

Both categories have struggled to keep pace with the rapid changes in software over the past few decades. A new category of tools is emerging, led by companies like Binarly, which I advise. Binarly’s approach to automated binary analysis began with key goals such as achieving processor architecture independence and language independence. This enables the analysis of binaries across different architectures without duplicating threat intelligence, and it enables identifying insecure patterns stemming from ported code or common insecure Stack Overflow examples. Identifying static dependencies and which parts of them are used in a binary is both challenging and crucial for understanding the security issues that lie beneath the surface.

Their approach is remarkable in its ability to detect “known unknowns,” enabling the identification of classes of security vulnerabilities within a binary alone. Furthermore, through symbolic execution, they can perform reachability analysis, ensuring that flagged issues are not just theoretical but can potentially be exploited by attackers.

Though their approach is not firmware-specific, firmware is a great example of the problems that come from binary-only distributions and customers’ reliance on blind faith that their vendors are making the right security investments. It is their unique approach to binary analysis that has enabled them to file and report more CVEs in the last two years than had ever been reported before.

Binary analysis of this kind is crucial as it scrutinizes software in its final, executable form—the form in which attackers interact with it.

Conclusion

The lesson from the SolarWinds attack is clear: no build system-based approach to articulate dependencies is entirely secure. Ken Thompson’s 1984 assertion about the limitations of trusting any code you didn’t produce yourself remains relevant. In a world where software vulnerabilities have extensive and far-reaching impacts, binary analysis is indispensable. Binarly’s approach represents a paradigm shift in how we secure software, offering a more robust and comprehensive solution in our increasingly connected world.

Effortless Certificate Lifecycle Management for S/MIME

In September 2023, the S/MIME Baseline Requirements (BRs) officially became a requirement for Certificate Authorities (CAs) issuing S/MIME certificates (for more details, visit CA/Browser Forum S/MIME BRs).

The definition of these BRs served two main purposes. Firstly, they established a standard profile for CAs to follow when issuing S/MIME certificates. Secondly, they detailed the necessary steps for validating each certificate, ensuring a consistent level of validation was performed by each CA.

One of the new validation methods introduced permits mail server operators to verify a user’s control over their mailbox. Considering that these services have ownership and control over the email addresses, it seems only logical for them to be able to do this verification on behalf of their users, since they could bypass any individual mailbox control challenge anyway. This approach resembles the HTTP-01 validation used in ACME (RFC 8555), where the server effectively ‘stands in’ for the user, just as a website does for its domain.
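For reference, HTTP-01 proves control of a domain by having the web server answer a challenge at a well-known path under that domain, along these lines (the token is a placeholder):

http://example.com/.well-known/acme-challenge/<TOKEN>

Whoever controls the server answering for the domain can complete the challenge, which is the same stand-in relationship a mailbox provider has with the addresses it hosts.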

Another new validation method involves delegating the verification of email addresses through domain control, using any of the approved TLS domain control methods. Though all of the domain control methods supported for TLS certificates are allowed, it is easiest to think of the DNS-01 method in ACME here. Again, the idea is straightforward: if someone can modify a domain’s TXT record, they can also change MX records or other DNS settings. So, giving them this authority suggests they should reasonably be able to handle certificate issuance.
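To make the DNS-01 comparison concrete, that method proves control by publishing a TXT record under the domain being validated, something like the following (the value shown is a placeholder for the base64url-encoded key authorization digest):

_acme-challenge.example.com. 300 IN TXT "<base64url-encoded key authorization digest>"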

Note: If you have concerns about these two realities, it’s a good idea to take a few steps. First, ensure that you trust everyone who administers your DNS and make sure it is securely locked down. Second, consider using CAA records, discussed next, to restrict which CAs can issue for your domains.

To control the issuance of S/MIME certificates and prevent unauthorized issuance, the Certification Authority Authorization (CAA) record can be used. Originally developed for TLS, it has recently been enhanced to include S/MIME (Read more about CAA and S/MIME).

Here’s how you can create a CAA record for S/MIME: Suppose an organization, let’s call it ‘ExampleCo’, decides to permit only a specific CA, ‘ExampleCA’, to issue S/MIME certificates for its domain ‘example.com’. The CAA record in their DNS would look like this:

example.com. IN CAA 0 issuemail "ExampleCA.com"

This configuration ensures that only ‘ExampleCA.com’ can issue S/MIME certificates for ‘example.com’, significantly bolstering the organization’s digital security.

If you wanted to stop any CA from issuing an S/MIME certificate, you would create a record that looks like this:

example.com. IN CAA 0 issuemail ";"

Another addition in this round of changes is the concept of an account identifier in the latest CAA specification. This feature allows a CA to link the authorization to issue certificates to a specific account within their system. For instance:

example.com. IN CAA 0 issue "ca-example.com; account=12345"

This means that ‘ca-example.com’ can issue certificates for ‘example.com’, but only under the account number 12345.

This opens up interesting possibilities, such as delegating certificate management for S/MIME or CDNs to third parties. Imagine a scenario where a browser plugin is produced and managed by a SaaS on behalf of the organization deploying S/MIME. This plug-in takes care of the initial enrollment, certificate lifecycle management, and S/MIME implementation, acting as a sort of S/MIME CDN.

This new capability, merging third-party delegation with specific account control, was not feasible until now. It represents a new way for organizations to outsource the acquisition and management of S/MIME certificates, simplifying processes for both end-users and the organizations themselves.

To the best of my knowledge, no one is using this approach yet, and although there is no requirement yet to enforce CAA for S/MIME, it is in the works. Regardless, the RFC has been standardized for a few months now, but despite that, I bet that CAs that were issuing S/MIME certificates before this new CAA RFC was released are not yet respecting the CAA record, even though they should be. If you are a security researcher and have spare time, that’s probably a worthwhile area to poke around 😉

The Rise of Key Transparency and Its Potential Future in Email Security

Key Transparency has slowly become a crucial part of providing truly secure end-to-end encrypted messaging. Don’t believe me? The two largest providers of messaging services, Apple and Facebook (along with their WhatsApp service), have openly adopted it, and I am hopeful that Google, one of its early advocates, will follow suit.

At the same time, we are on the precipice of interoperable group messaging, as Messaging Layer Security (MLS) was recently standardized. Its contributors included employees of the services mentioned above and more, which suggests they may adopt it eventually. What does this have to do with Key Transparency? MLS acknowledges the need for secure, privacy-preserving key discovery through its inclusion of Key Transparency in its architecture.

It’s also noteworthy to see that Apple has agreed to support RCS, the messaging protocol used by Android. While there is no public hint of this yet, it’s possible that, since they have positioned themselves as privacy champions over the last decade, frequently touting their end-to-end encryption, we may see them push for MLS to be adopted within RCS. That could net the world its first interoperable cross-network messaging with end-to-end encryption, and it would need a key discovery mechanism.

In that spirit, recently the Internet Engineering Task Force (IETF) has established a Working Group on Key Transparency, and based on the participation in that group, it seems likely we will see some standardization around how to do Key Transparency in the future.

What’s Next for Key Transparency Adoption Then?

I suspect the focus now shifts to S/MIME, a standard for public key encryption and signing of emails. Why? Well, over the last several years, the CA/Browser Forum adopted Baseline Requirements (BRs) for S/MIME to help facilitate uniform and interoperable S/MIME, and those became effective on September 1, 2023 – this means CAs that issue these certificates will need to conform to those new standards.

At the same time, both Google and Microsoft have made strides in their implementations of S/MIME for their webmail offerings.

In short, despite my reservations about S/MIME due to its inability to address certain security challenges (such as metadata confidentiality), it looks like it’s witnessing a resurgence, particularly fueled by government contracts. But does it deliver on the promise today? In some narrow use cases like mail signing or closed ecosystem deployments of encrypted mail where all participants are part of the same deployment, it is probably fair to say yes.

With that said, mail is largely about interoperable communications, and for that to work with encrypted S/MIME, we will need to establish a standard way for organizations and end-users to discover the right keys to use with a recipient outside of their organization. This is where Key Transparency would fit in.

Key Transparency and S/MIME

Today, it is technically possible for two users to exchange certificates via S/MIME, enabling them to communicate through encrypted emails. However, the process is quite awkward and non-intuitive. How does it work? You either provide the certificate out of band to those in the mail exchange, and they add it to their contacts, or some user agents automatically use the keys associated with S/MIME signatures from your contacts to populate the recipient’s keys.

This approach is not ideal for several reasons beyond usability. For instance, I regularly read emails across three devices, and the private keys used for signing may not be the same on each device. Since consistent signing across devices isn’t required, if I send you an email from my phone and then you send me an encrypted message that I try to open on my desktop, it won’t open.

Similarly, if I roll over my key to a new one because it was compromised or lost, we would need to go through this certificate distribution workflow again. While Key Transparency doesn’t solve all the S/MIME-related problems, it does provide a way to discover keys without the cumbersome process, and at the same time, it allows recipients to know all of my active and published certificates, not just the last one they saw.

One of the common naive solutions to this problem is to have a public directory of keys like what was used for PGP. However, such an approach often becomes a directory for spammers. Beyond that, you have the problem of discovering which directory to use with which certificate. The above Key Transparency implementations are all inspired by the CONIKS work, which has an answer to this through the use of a Verifiable Random Function (VRF). The use of the VRF in CONIKS keeps users’ email addresses (or other identifiers) private. When a user queries the directory for a key, the VRF is used to generate a unique, deterministic output for each input (i.e., the user’s email). This output is known only to the directory and the user, preserving privacy.
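To make the lookup flow a little more concrete, here is a rough Python sketch of the idea, with HMAC standing in for a real VRF and a plain dictionary standing in for the directory’s Merkle tree; a real CONIKS-style deployment would also return a VRF proof and a Merkle inclusion proof that the client verifies:

import hashlib
import hmac

DIRECTORY_SECRET = b"illustrative-only-vrf-key"
tree = {}  # stand-in for the directory's Merkle tree of (index -> certificate)

def private_index(email):
    # Stand-in for VRF(sk, email): deterministic, but unguessable without the
    # directory's key, so monitors never see raw email addresses.
    return hmac.new(DIRECTORY_SECRET, email.encode(), hashlib.sha256).digest()

def publish(email, certificate):
    tree[private_index(email)] = certificate

def lookup(email):
    # A real response would also carry the VRF output and an inclusion proof.
    return tree.get(private_index(email))

publish("alice@example.com", b"-----BEGIN CERTIFICATE-----...")
print(lookup("alice@example.com") is not None)  # True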

The generic identifier-based approach in Key Transparency means it can neatly address the issue of S/MIME certificate discovery. The question then becomes, how does the sender discover the Key Transparency server?

Key Transparency Service Discovery

The answer to that question probably involves DNS resource records (RRs). We use DNS every day to connect domain names with IP addresses. It also helps us find services linked to a domain. For instance, this is how your email server is located. DNS has a general tool, known as an SRV record, which is designed to find other services. This tool would work well for discovering the services we’re discussing.

_sm._keytransparency._https.example.com. 3600 IN SRV 10 5 443 sm-kt.example.com.

In this example, the _sm identifier is placed before _keytransparency, and _https shows that this SRV record is specifically for a Key Transparency service for Secure Messaging. This allows someone to ask DNS for an S/MIME-specific Key Transparency service. It also means we can add other types of identifiers later, allowing for the discovery of various KT services for the same domain.

Conclusion

While S/MIME faces many challenges, such as key roaming, message re-encrypting on key rollover, and cipher suite discoverability, before it becomes easy to use and widely adopted—not to mention whether major mail services will invest enough in this technology to make it work—there’s potential for a directory based on Key Transparency if they do.

Hopefully, the adoption of Key Transparency will happen if this investment in S/MIME continues, as it’s the only large-scale discovery service for user keys we’ve seen in practice. Unlike other alternatives, it’s both privacy-respecting and transparently verifiable, which are important qualities in today’s world. Only time will tell, though.

Raising the Bar: The Urgent Need for Enhanced Firmware Security and Transparency

Firmware forms the foundation of all our security investments. Unfortunately, firmware source code is rarely available to the public and, as a result, it is one of the least understood (and least secure) classes of software we depend on today.

Despite this, the hardware industry is known for its lack of transparency and inadequate planning for the entire lifecycle of its products. This lack of planning and transparency makes it hard to defend against and respond to both known and unknown vulnerabilities, especially when the industry often knows about issues for ages before customers do.

In today’s world, where automation allows builders, defenders, and attackers to identify zero-day vulnerabilities from just a binary, it has become increasingly important that embargo times for vulnerabilities be as short as possible, allowing for quick defense and, when possible, remediation.

Despite this, organizations like the UEFI Forum are proposing extended disclosure periods, suggesting a 300-day wait from initial reporting to the vendor before notifying customers. During this nearly year-long waiting period, customers are exposed to risks without defense options. The longer the period, the more likely it is that automation enables attackers to identify the issue in parallel, giving them a safe window to exploit the zero-day without detection.

Simply put, this duration seems far too long, considering the ease of proactively catching issues now, especially given the industry’s overall underinvestment in product security. It would be a different case if these organizations had a history of handling issues effectively, but the reality is far from this. Their apparent neglect, demonstrated by unreliable update mechanisms, continuing to ship models with the same issues that have been resolved in other models, and the frequency of industry-wide issues, highlights this reality. More often than in any other industry, we see hardware manufacturers reintroducing previously resolved security issues due to poor security practices and poor management of their complex supply chains. This makes their position highly irresponsible. We must do better. Concealing vulnerabilities like this is no longer viable, if it ever was.

It is possible we will see changes as a result of shifts in software liability and regulatory changes, like those in White House Executive Order 14028. This order demands that organizations responsible for “critical software” comply with long-standing best practices. Although “critical software” lacks a clear definition, firmware’s role in underpinning all security investments suggests it likely falls into this category. The executive order starts with basics like publishing known dependencies, which is useful but insufficient, especially in this segment, given the prevalence of shared reference code and static dependencies that are not expressed as library dependencies. Its language also includes the adoption of formal vulnerability management practices, bug bounties, and more. This and the EU Cyber Resilience Act are both efforts to get these and other vendors to align with long-standing security best practices, like those captured in NIST’s vulnerability management recommendations.

This landscape will likely shift once we see enforcement cases emerge, but customers must insist on higher standards from hardware manufacturers and their suppliers, or nothing will change in the near term.

Words matter in cryptography, or at least they used to

I was listening to Security Cryptography Whatever today, and they were discussing a topic that has been bothering me for a while.

A common theme in post-quantum cryptography is its pairing with classical cryptography. This “belts and suspenders” approach seems sensible as we transition to relatively new ways to authenticate and protect data. We have already seen some of these new post-quantum methods fail, which underscores the importance of agility in these systems.

However, merging two approaches like this introduces complexity, which is important since, as a general rule, complexity is the root of all security issues. Another concern is the labeling of various strategies for doing this as “Hybrid.” This wording makes it challenging to understand what the different approaches are doing and why.

With this background in mind, let’s explore three different “Hybrid” approaches to PQC and classical cryptography. By giving each a unique name and using simple examples, we can show how they differ: Nested Hybrid Signatures, Side-by-Side Hybrid Protocols, and the proposed Merged Hybrid Signatures.

Nested Hybrid Signatures: A box within a box

In this approach, imagine verifying the authenticity of a letter. The nested hybrid signature method is like putting this letter within a secure box, protected by a classical signature scheme like ECDSA. But we don’t stop there. This box is then placed within another, even stronger box, strengthened with a post-quantum signature scheme like Dilithium. This nested structure creates a situation where even if one layer is broken, the inner core remains trustable.
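To make the nesting concrete, here is a rough Python sketch, using ECDSA from the pyca/cryptography library for the inner box and a placeholder function standing in for a Dilithium/ML-DSA implementation (a real deployment would use a vetted post-quantum library):

import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def dilithium_sign(private_key, data):
    # Placeholder for a real post-quantum signature; returns a dummy value so
    # the sketch runs without a PQC dependency.
    return hashlib.sha256(private_key + data).digest()

message = b"the letter"
ecdsa_key = ec.generate_private_key(ec.SECP256R1())

# Inner box: classical signature over the message.
inner_sig = ecdsa_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Outer box: post-quantum signature over the message plus the inner signature,
# so breaking ECDSA alone is not enough to forge the bundle.
outer_sig = dilithium_sign(b"pq-private-key", message + inner_sig)

bundle = (message, inner_sig, outer_sig)  # a verifier checks the outer, then the inner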

Side-by-Side Hybrid Protocols: Simultaneous and Independent

In this method, imagine two separate safes, each protecting a part of your secret message. One safe has a classical lock, while the other has a modern, quantum-resistant lock. To see the entire message, one must unlock both safes, as the full message remains trustable unless both safes are broken into. 
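One common instance of this pattern is hybrid key agreement, sketched below in Python with X25519 from pyca/cryptography for the classical half and a placeholder standing in for the Kyber/ML-KEM shared secret; both halves feed a KDF, so the session key stays safe unless both schemes are broken:

import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical half: X25519 key agreement (both parties shown locally for brevity).
ours = X25519PrivateKey.generate()
theirs = X25519PrivateKey.generate()
classical_secret = ours.exchange(theirs.public_key())

# Post-quantum half: placeholder for a Kyber/ML-KEM shared secret from a vetted library.
pq_secret = hashlib.sha256(b"placeholder-ml-kem-shared-secret").digest()

# Combine both "safes": the derived key is only recoverable by breaking both schemes.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"illustrative hybrid key agreement").derive(classical_secret + pq_secret)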

Merged Hybrid Signatures: Holding onto the past

This method tries to mix the elements of classical and post-quantum signature schemes into a single, unified signature format. The goal of this approach is to enable minimal changes to existing systems by maintaining a single field that combines a classical signature with a post-quantum signature. This has several issues and seems misguided to me. Firstly, this mixing of PQC and classical cryptography is a temporary problem; eventually, we should have enough confidence that post-quantum cryptography alone is enough, at which point this complexity wouldn’t be needed. It also messes with the current assumptions associated with existing signatures, and while it’s not clear what the issues may be, keeping each of the signatures isolated seems less risky. To stick with the lock analogy, it’s somewhat like designing a safe with two different locks on the same door, which must be unlocked at the same time with the same key.

Conclusion

While it’s tough to find the right words to describe new developments as they happen, we can do better to avoid using the same terms for different approaches. This will make it easier for everyone to understand what’s being discussed without having to study each protocol in detail.

Document Authenticity in the Age of Generative AI

In our rapidly evolving digital lives, the credibility of documents, images, and videos online has emerged as a concern. The pandemic and recent elections have helped highlight this issue. In the case of elections, one area that stands out to me is the concern over voter roll integrity, a pillar of our democratic process in the US.

As we grapple with these issues, it is important to explore what a solution might look like that balances the associated privacy concerns. Is it possible to provide assurance of integrity and transparency while also providing accommodations for privacy and accountability?

Misinformation in the Digital Age

Despite its challenges, the pandemic did have a silver lining — it brought attention to the internet’s role as a breeding ground for misinformation campaigns. These campaigns featured manipulated images and documents, creating confusion and distrust globally. They also underscored a glaring gap in our current systems — no broad deployment of reliable mechanisms to verify the authenticity and origin of digital content.

The recent advancements in generative AI over the last two years have further complicated this issue. Now, with a few words, anyone on the web can create images that at first blush look real. This technology will only continue to get better, which means we will need to begin looking more formally at how we build solutions to tackle this new reality.

Existing Solutions and Their Shortcomings

Several technologies have recently been discussed as the way forward to address at least portions of these problems. One such example is the Content Authenticity Initiative, which proposes that devices like cameras cryptographically sign all pictures and videos with a device credential, a feature aimed at enabling the detection of any alterations made post-capture.

This method raises significant privacy concerns. Essentially, it could create an infrastructure where each piece of content could be unexpectedly traced back to an individual or a group of devices, effectively becoming a surveillance tool.

Google DeepMind also recently brought forth the idea of opt-in watermarking for images created through AI technologies. While this initiative seems promising at a glance, it fails to address the nuances of the threat model. For instance, a nation-state with intentions to manipulate an election using generative AI assets wouldn’t likely volunteer to watermark these materials as AI-generated. This significant loophole sets a precarious stage where misinformation can still flourish.

These approaches, though developed with noble intentions, reveal critical gaps in addressing the complex landscape of content authenticity. They either infringe upon individual privacy rights or are vulnerable to exploitation when faced with a real threat model. 

Middle Ground: Publisher Signatures and Blinding as a Potential Solution

A more nuanced approach could utilize optional cryptographic signatures linked to a publisher rather than a device. When signing, the publisher, not their devices, opts into staking their reputation on the authenticity of the artifact. Coupled with a feature to enable cryptographically blinding the publisher’s identity, this strategy could offer a safe avenue for them to reveal their identity at a later time, if necessary. Such a situation might arise in cases of whistleblower claims, where shielding the publisher’s identity becomes crucial for their safety. This blinding could strike a balance, granting publishers temporary anonymity while preserving the potential for them to later opt in to publicly stand behind the artifact or to take accountability for any misinformation.

Contrast this with devices subtly leaking metadata, which puts subjects in the position of having to prove a negative, for example, needing to explain that a picture does not tell the whole story, or, even worse, needing to prove that the device that captured it was compromised. This is similar to what happens today with red-light cameras and automated radar guns, where poorly calibrated devices result in innocent people being charged.

The proposed model shifts the identification to publishers in the hope of fostering a responsible publishing culture where publishers have the discretion to unveil their identity, promoting accountability without completely compromising privacy.

It is also worth noting that a transition from ink signatures to cryptographic signatures for documents appears more pertinent than ever. Generative AI and graphic design technologies have enhanced the ability to replicate handwriting styles, making traditional signatures highly susceptible to forgery. The use of cryptographic techniques offers a more secure alternative, integrating seamlessly into modern workflows and standing resilient against unauthorized alterations.

Publisher Signatures Are Not Enough

In information security, it’s now accepted that insider threats are a significant risk. This realization has steered us away from merely verifying the identity of a publisher, especially in cryptographic signing systems such as code signing. 

There are a few reasons, but one of the largest is that key management is challenging, often because publishers frequently represent groups rather than individuals, leading to key management practices that are more permissive than ideal from a security standpoint.

Additionally, if a solution is to incorporate the possibility of anonymity through cryptographic blinding, we cannot simply bet on the presence of, and blind trust in, that identity.

This is the same reason that led modern code-signing solutions to adopt ledgers that record an artifact’s provenance and history. For instance, in a Binary Transparency system, a ledger might house not only a list of software packages and their contents but also offer qualitative attestations about the software, for example indicating whether it has been screened for malware or verified to be reproducible from its source. This mechanism allows devices to understand not just the origin of the code but also to grasp the intended release of the software and potentially qualitative aspects to consider before reliance on it.

Applying this pattern to our document provenance and integrity problem, this system can offer value even when the identity remains undisclosed. It can provide a basic idea of the time of existence and allow third parties to vouch for the authenticity, possibly linking to other corroborative artifacts.

Building a continuously verifiable record coupled with supportive evidence for artifacts like documents seems to be a step in the right direction. This approach has demonstrated its value in other problem spaces.

With that said, it’s essential to acknowledge that, as with any opt-in system, documents, images, and videos will not all contain this additional provenance, and, as with all technology, this too would not be perfect. Rather than being outright dismissed, all content will need to be evaluated on its merits and the evidence collected about it. At a minimum, we must recognize that it can take years for any new system to gain substantial traction.

Learning from the rest of the world

This issue is not confined to the US, so we should not restrict ourselves to looking at approaches used by US Big Tech. For instance, the strategies suggested here significantly draw upon the principles of electronic signatures, particularly e-Seals, which are prevalent in the EU and other regions. This European model offers vital insights and presents a robust strategy for resolving disputes, albeit reliant on specific technologies.

Interestingly, US online notarization rules have also borrowed elements from the EU, mandating the use of cryptographic signatures, akin to the EU’s emphasis on Advanced Signatures.

By combining this approach with the lessons learned from Certificate and Binary Transparency, where Merkle trees of published materials, continuous monitoring, and third-party evaluation help ensure a more complete picture — we might find a path forward. 

The addition of blinding the publisher’s identity in a way where they could selectively disclose their identity in the future also seems to provide a plausible way to balance the privacy concerns that could enable this path to become the default in the future.

Motivating Participation through Innovation, Policy and Leadership

Adoption of this approach cannot solely rely on goodwill or regulation. It would require a combination of standardization, regulatory changes, creating incentives for publishers, engagement with civil society and other stakeholders, and some tangible leadership by example by a large player or two. Some ideas in this direction include:

  • Initiating revisions to the existing digital signature legislation seems to be a sensible first step. Given our substantial background with the current laws, it’s clear that without regulatory changes, technological investments are unlikely to shift.
  • The government can lead by example by enhancing initiatives like the current Digital Autopen project. This project allows groups of individuals to access shared signing credentials without tying them to individual users, addressing a notable challenge prevalent in code signing and other organizational signing efforts.
  • I also believe that investing in a singular, immensely impactful use case as a case study could vividly illustrate how these approaches can make a significant difference. The voter registration rolls mentioned earlier seem like an excellent use case for this.
  • Further research and standardization could explore integrating cryptographic blinding of signer identities within the current document signing infrastructure, allowing for future disclosure. Investigating the potential structure and security of ledgers, and considering the extension of signing protocols to other artifact formats, seems to be a natural progression in supporting a system of this kind.
  • Simultaneously, collaboration with civil society, tech companies, and other stakeholders, including publishers, appears vital. This will guarantee that their concerns are integrated into the developed solutions and that appropriate policies are instituted to effectively incorporate this metadata into their operations.

While these efforts would not necessarily lead to adoption, it does seem that adoption would, at a minimum, be predicated on efforts like these.

A Pathway to Trust and Accountability

Balancing privacy and accountability in the digital age is a nuanced but achievable goal, especially if we build on top of existing successes. By adopting a well-rounded approach that integrates cryptographic signatures with mechanisms for temporary anonymity, we can carve a pathway toward a society where digital content maintains its integrity and trustworthiness. 

Moreover, by fostering an environment where content, even without clear provenance, is evaluated critically rather than dismissed, we encourage a richer discourse and a healthier digital ecosystem.

Through the union of technology and policy, we can create a more secure, transparent, and accountable future for content authenticity.

The Scale of Consequence: Storm-0558 vs DigiNotar

When we look at the Storm-0558 and DigiNotar incidents side by side, we find striking similarities in their repercussions and severity. Both cases involve significant breaches orchestrated by nation-states – China and Iran respectively, targeting critical digital infrastructure and security protocols that are designed to safeguard user data and communications.

In the case of Storm-0558, the skilled compromise of Microsoft’s authentication infrastructure not only undermined the integrity of Exchange inboxes but potentially rendered confidential information accessible to unauthorized entities.

Similarly, the DigiNotar breach constituted a severe undermining of internet security, as the attackers were able to issue trusted certificates that facilitated man-in-the-middle attacks. This compromised user interactions with sensitive services, including email communications.

Given their similar impact on user privacy and internet security, it begs the question: are we treating both incidents with equal gravitas and severity?

If not, we must ask why, and what the consequences of that reality are.

To answer these questions, it might be useful to think about a different kind of breach of trust that happened in the early 2010s, when a fake vaccination campaign was used as a cover to collect DNA samples in the hunt for Osama bin Laden. That move ended up causing a lot of people in the area to give a side-eye to vaccination drives, fearing there was more than meets the eye.

It almost feels like sometimes, big tech in the US gets to bend the rules a little, while smaller players or those from other parts of the world have to toe the line. It’s this uneven ground that can breed mistrust and skepticism, making folks doubt the systems meant to protect them.

In short, these decisions to compromise core infrastructure come with long-term consequences that are surely not being fully considered.

The Evolution and Limitations of Hardware Security Modules

Summary

Hardware Security Modules (HSMs) have not substantially evolved in the last two decades. The advent of enclave technology, such as Intel SGX, TDX, and AMD SEV, despite their weaknesses [1,2,3,4,5,6,7,8,9], has enabled some to build mildly re-envisioned versions of these devices [10]. However, the solutions available on the market today are far from meeting the market’s needs. In this paper, we will explore some of the shortcomings of these devices and discuss how the market for them has evolved.

A Brief History of HSMs

The first Hardware Security Module was introduced in the late 1970s by IBM. It was designed to be attached to a specific host with the goal of never exposing PINs to that host. By the early 90s, there were a handful of more capable solutions in this market; they were primarily sold to governments, often for military use cases where the cost of key compromise warranted the increased operational burden and associated costs of this particular approach.

In the late 90s, we started to see practical innovation in this space. For example, existing HSMs moved into hosts that enabled them to be accessed over the network. We also began to recognize the need for code containing business logic that gated the use of specific keys. nCipher was one of the first to market with a generic offering in this space, which was ahead of its time.

During this time period, the US government had established itself in the area of security and cryptographic standards. It was in this decade that we saw the concept of “Military Grade” security being extensively used in marketing. At this time, the idea of a commercial security industry was in its nascent stage, and the traction that was achieved was usually with government agencies and those who provided services to governments.

As these companies expanded their sales to the rest of corporate America, they mostly pushed for adopting the same patterns that had been found to be successful in the government. In the context of key management, these physical HSMs were considered the gold standard. As cryptography was seldom used in commercial software in any extensive way, the adoption of these devices was usually limited to very high-security systems that were often largely offline and had low-volume usage, at least by today’s standards.

During the 2000s, enterprise software began to move to third-party data centers and later to the cloud. This trend continued into the 2010s and gave us SEV/SGX-based appliances offering HSM-like capabilities, as well as the first HSMs designed for some level of multi-tenancy. However, from a product standpoint, these devices were designed much like their predecessors, so they inherited many of their shortcomings while also introducing their own issues along the way.

Evolution of Development in the Era of HSMs

In the 1970s, security was not really considered a profession but rather an academic exercise at best. In the following decades, software and hardware were shipped on sweeping assumptions, such as assuming that anything on the network that could talk to you was trusted because it shared the physical network. Networks were largely closed and not interconnected at the time. It was not until the 1980s that TCP/IP was standardized, and the interconnected ecosystem of networks and devices we all know today became a reality.

During this period of technological evolution, software was often written in Assembler and C. In the late 1990s and 2000s, the concept of memory-safe languages became increasingly important. Originally, these languages required large tradeoffs in performance, but as they evolved and Moore’s Law caught up, those tradeoffs became inconsequential for most applications.

It was during this same period that security, as a whole, was evolving into both a profession and an academic specialty. One of the largest milestones in this transition was the Microsoft Trustworthy Computing memo [11], which stressed that security had to become core to the way Microsoft built software. It took the next two decades, but from this point the industry evolved quickly, and the approaches to producing secure-by-default software, as well as the technology that made it easier to create secure-by-default software, became more common.

During this time, the US Government stopped being seen as a leader in security, and its role in developing cryptographic standards began to wane due to several suspected examples of the government pushing insecurities and vulnerabilities into both standards and commercial systems.

As a result, we saw a Cambrian explosion of new cryptographic approaches and algorithms coming out of academia. The government’s role in standardizing cryptography was still strong, and the commercial and regulatory power of the government meant that products using cryptography were effectively limited to using only the algorithms approved by the government.

Around the same time, Bitcoin, cryptocurrencies, and blockchain gained momentum. As these communities had no interest in governments, this created a once-in-a-lifetime opportunity for the world to explore new design patterns and approaches to solving problems related to cryptography, and it lit a fire under researchers to further long-standing ideas like Homomorphic Encryption (HE), Multi-Party Computation (MPC), Threshold Signatures, new ECC algorithms, and more.

At the same time, we saw quantum computers become far more real, which introduced the question of how the cryptography we rely on will need to change to survive this impending shift. Despite the reputational damage the US government experienced from its shepherding of cryptographic standards in preceding years, it still had a role to play in establishing which cryptographic algorithms it would rely upon in a post-quantum world. In 2016, the government started a standardization process for the selection of post-quantum cryptographic algorithms. In 2022, it announced the first four approved PQ algorithms based on that process [12].

During this same time, we have seen a substantial increase in the security of the applications and operating systems we use, as a result of improvements in the processes and techniques used to build software. Despite this, there is still a long way to go. In particular, although this is changing, a lot of development still happens in non-memory-safe languages, and legacy applications that make broad assumptions about the software and operating system dependencies they rely on still see broad deployment.

Cloud adoption has played a significant role in driving this change, with as much as half of all enterprise computing now happening on cloud service providers. With the move to cloud, the physical computing environment is no longer the primary risk when it comes to protecting keys. The focus has shifted to online access, and as recent cryptocurrency compromises have shown, it is key management, not physical key protection, that is the weak link in modern cryptographic systems.

One notable exception here is Signal. It has been designed to minimize assumptions on underlying dependencies and is seen as the most secure option for messaging. So much so that most large technology companies have adopted their design, and the IETF has standardized its own derivatives of their protocols. This, combined with the earlier trend, signals how we are not only changing the processes, languages, and tooling we use to build everyday software but also the actual designs in order to mitigate the most common vulnerabilities.

This journey is not done; the White House has recently signaled [13,14,15] that it will be using its regulatory power to accelerate the move to more secure services and will be looking to “rebalance the responsibility to defend cyberspace” by shifting liability for incidents to vendors who fail to meet basic security best practices.

What does this mean for HSMs?

The evolving landscape of cybersecurity and key management has significant implications for Hardware Security Modules (HSMs). With businesses moving their computing to cloud service providers, the primary risk to keys has shifted from the physical computing environment to online access, making key management the weak link in modern cryptographic systems.

To stay relevant and effective, HSMs need to adapt and innovate. They will need to become computing platforms for smart contract-like controls that gate access to keys, rather than cryptographic implementations whose main value is physical key isolation.

This means that the developer experience for these devices must evolve to match customers’ needs for development, deployment, and operations, including languages, toolchains, management tools, and consideration of the entire lifecycle of services that manage sensitive key material.
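As a purely illustrative sketch of what “smart contract-like controls” could look like, here is a small Python policy check gating a signing operation; the policy shape and the hsm_sign helper are assumptions for illustration, not any vendor’s API:

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SigningRequest:
    key_id: str
    requester: str
    payload: bytes
    approvals: set = field(default_factory=set)

def hsm_sign(key_id, payload):
    # Stand-in for the device's internal signing operation.
    return b"signature-bytes"

def policy_allows(req):
    # Illustrative policy: the release key is only usable during UTC business
    # hours and only with two approvers other than the requester.
    now = datetime.now(timezone.utc)
    business_hours = 9 <= now.hour < 17
    two_person_rule = len(req.approvals - {req.requester}) >= 2
    return req.key_id == "release-signing-key" and business_hours and two_person_rule

def gated_sign(req):
    if not policy_allows(req):
        raise PermissionError("policy denied access to key")
    return hsm_sign(req.key_id, req.payload)

In a real device, a policy like this would be evaluated inside the trusted boundary before the key is ever touched.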

Additionally, the HSM industry must address the issue of true multi-tenancy, securing keys for multiple users while maintaining strong isolation and protection against potential breaches.

Moreover, the HSM industry must address the growing demand for quantum-safe cryptography as current cryptographic algorithms may become obsolete with the development of quantum computers. HSMs must support new algorithms and provide a smooth transition to the post-quantum world.

In conclusion, the HSM industry needs to continue evolving to meet the changing needs of the cybersecurity landscape. Failure to do so may lead to new entrants into the market that can better provide solutions that meet the needs of today’s complex computing environments.

What are the opportunities for HSMs?

The changing landscape of cybersecurity and key management presents both challenges and opportunities for Hardware Security Modules (HSMs). One opportunity is the increasing need for secure key management as more and more businesses move their computing to cloud service providers. This presents an opportunity for HSMs to provide secure, cloud-based key management solutions that are adaptable to the evolving needs of modern cryptography.

Furthermore, the shift towards smart contract-like controls gating access to keys presents an opportunity for HSMs to become computing platforms that can be customized to meet specific business needs. This could lead to the development of new HSM-based services and applications that can provide added value to customers.

Another opportunity for HSMs is the growing demand for quantum-safe cryptography. With the development of quantum computers, the cryptographic algorithms used today may become obsolete, requiring the adoption of new quantum-safe cryptographic algorithms. HSMs can play a critical role in providing the necessary support for these new algorithms and ensuring a smooth transition to the post-quantum world.

In addition, the increasing adoption of blockchain and cryptocurrencies presents a significant opportunity for HSMs. These technologies rely heavily on cryptographic keys for security, and HSMs can provide a secure and scalable key management solution for these applications.

Overall, the changing landscape of cybersecurity and key management presents several great opportunities for HSMs to provide innovative solutions that can meet the evolving needs of businesses and the broader cryptographic community.

Towards Greater Accountability: A Proposal for CA Issuance Decision Logs

It took us a long time, but objectively, Certificate Transparency is a success. We had to make numerous technology tradeoffs to make it something that the CAs would adopt, some of which introduced problems that took even longer to tackle. However, we managed to create an ecosystem that makes the issuance practices of the web transparent and verifiable.

We need to go further, though. One of the next bits of transparency I would like to see is CAs producing logs of what went into their issuance decisions and making this information public. This would be useful in several scenarios. For example, imagine a domain owner who discovers a certificate was issued for their domain that they didn’t directly request. They could look at this log and see an audit history of all the inputs that went into the decision to issue, such as:

  • When was the request for the certificate received?
  • What is the hash of the associated ACME key that made the request?
  • When CAA was checked, what did it say? Was it checked via multiple perspectives? Did all perspectives agree with the contents?
  • When Domain Control was checked, did it pass? Which methods were used? Were multiple perspectives used? Did all perspectives agree with the contents?
  • What time was the pre-certificate published? What CT logs was it published to?
  • What time was the certificate issued?
  • What time was the certificate picked up?

This is just an example list, but hopefully, it is enough to give the idea enough shape for the purposes of this post. The idea here is that the CA could publish this information into cheap block storage files, possibly with a directory structure something like: /<CA CERTHASH>/<SUBJECT CERTHASH>/log

The log itself could be a merkle tree of these values, and at the root of the directory structure, there could be a merkle tree of all the associated logs. Though the verifiability would not necessarily be relied upon initially, doing the logging in this fashion would make it possible for these logs to be tamper-evident over time with the addition of monitors.
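As a rough Python sketch of how cheap such a log could be to produce, the following hashes a batch of hypothetical issuance-decision records into a Merkle root; the record fields are illustrative, not a proposed format:

import hashlib
import json

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Hash the leaves, then reduce pairwise until a single root remains.
    nodes = [h(leaf) for leaf in leaves] or [h(b"")]
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])  # duplicate the last node on odd levels
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

# Hypothetical decision records for a single certificate.
records = [
    {"event": "request-received", "time": "2024-01-02T10:15:00Z"},
    {"event": "caa-checked", "result": "permit", "perspectives": 3},
    {"event": "domain-control", "method": "dns-01", "result": "pass"},
    {"event": "precertificate-logged", "logs": ["log-a", "log-b"]},
    {"event": "certificate-issued", "time": "2024-01-02T10:16:30Z"},
]
leaves = [json.dumps(r, sort_keys=True).encode() for r in records]
print(merkle_root(leaves).hex())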

The idea is that these logs could be issued asynchronously, signed with software-backed keys, and produced in batches, which would make them very inexpensive to produce. Not only would these logs help the domain owner, but they would also help researchers who try to understand the WebPKI, and ultimately, it could help the root programs better manage the CA ecosystem.

This would go a long way toward improving transparency into CA operations, and I hope we see this pattern, or something similar to it, adopted sooner rather than later.