
Words matter in cryptography, or at least they used to

I was listening to Security Cryptography Whatever today, and they were discussing a topic that has been bothering me for a while.

A common theme in post-quantum cryptography is its pairing with classical cryptography. This “belts and suspenders” approach seems sensible as we transition to relatively new ways to authenticate and protect data. We have already seen some of these new post-quantum methods fail, which underscores the importance of agility in these systems.

However, merging two approaches like this introduces complexity, which matters because, as a general rule, complexity is the root of most security issues. Another concern is that the various strategies for doing this are all labeled “Hybrid.” This wording makes it challenging to understand what the different approaches are actually doing and why.

With this background in mind, let’s explore three different “Hybrid” approaches to combining PQC and classical cryptography. By giving each a unique name and using simple examples, we can see how they differ: Nested Hybrid Signatures, Side-by-Side Hybrid Protocols, and the proposed Merged Hybrid Signatures.

Nested Hybrid Signatures: A box within a box

In this approach, imagine verifying the authenticity of a letter. The nested hybrid signature method is like putting this letter within a secure box, protected by a classical signature scheme like ECDSA. But we don’t stop there. This box is then placed within another, even stronger box, strengthened with a post-quantum signature scheme like Dilithium. This nested structure creates a situation where even if one layer is broken, the inner core remains trustworthy.
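
To make the layering concrete, here is a minimal sketch in Python. It uses the cryptography package for the classical ECDSA layer, while dilithium_sign and dilithium_verify are hypothetical stand-ins for whatever post-quantum signature library is available (for example, a liboqs binding). The nesting itself is just signing the message together with the inner signature.

```python
# Minimal sketch of a nested hybrid signature: the classical signature is
# wrapped inside (covered by) the post-quantum signature.
# Assumes the "cryptography" package for ECDSA; dilithium_sign/dilithium_verify
# are hypothetical stand-ins for a post-quantum signature library.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def nested_hybrid_sign(message: bytes, ec_priv, pq_priv):
    # Inner layer: classical ECDSA signature over the message.
    inner_sig = ec_priv.sign(message, ec.ECDSA(hashes.SHA256()))
    # Outer layer: post-quantum signature over the message plus the inner
    # signature, so breaking ECDSA alone does not let an attacker forge the whole structure.
    outer_sig = dilithium_sign(pq_priv, message + inner_sig)  # hypothetical helper
    return inner_sig, outer_sig

def nested_hybrid_verify(message: bytes, inner_sig: bytes, outer_sig: bytes,
                         ec_pub, pq_pub) -> bool:
    # Check the outer (post-quantum) layer first, then the inner (classical) layer.
    if not dilithium_verify(pq_pub, message + inner_sig, outer_sig):  # hypothetical helper
        return False
    try:
        ec_pub.verify(inner_sig, message, ec.ECDSA(hashes.SHA256()))
        return True
    except Exception:
        return False
```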

Side-by-Side Hybrid Protocols: Simultaneous and Independent

In this method, imagine two separate safes, each protecting a part of your secret message. One safe has a classical lock, while the other has a modern, quantum-resistant lock. To see the entire message, one must unlock both safes; the message remains protected unless both safes are broken into.
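
To sketch how this plays out in a key-exchange protocol: the classical and post-quantum exchanges each produce their own shared secret, and the session key is derived from both, so an attacker must break both to recover it. The Python below is a simplified illustration assuming the two shared secrets have already been negotiated; it is not any particular protocol’s exact construction.

```python
# Minimal sketch of the side-by-side idea for key exchange: two independent
# shared secrets (one classical, one post-quantum) are combined so that the
# session key stays safe unless BOTH exchanges are broken.
# This is an illustration, not any particular protocol's exact construction.
import hashlib
import hmac

def combine_shared_secrets(classical_secret: bytes, pq_secret: bytes,
                           context: bytes = b"hybrid-kdf-example") -> bytes:
    # Derive a single session key from the concatenation of both secrets.
    # HMAC-SHA-256 serves here as a simple extract step (HKDF-extract style).
    return hmac.new(context, classical_secret + pq_secret, hashlib.sha256).digest()

# Example usage with placeholder secrets (in practice these would come from,
# e.g., an X25519 exchange and a post-quantum KEM such as Kyber/ML-KEM):
session_key = combine_shared_secrets(b"\x01" * 32, b"\x02" * 32)
```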

Merged Hybrid Signatures: Holding onto the past

This method tries to mix the elements of classical and post-quantum signature schemes into a single, unified signature format. The goal is to minimize changes to existing systems by keeping a single field that combines a classical signature with a post-quantum signature. This has several issues and seems misguided to me. Firstly, the mixing of PQC and classical cryptography is a temporary problem; eventually, we should have enough confidence that post-quantum cryptography alone is sufficient, at which point this complexity would no longer be needed. It also changes the assumptions associated with existing signatures, and while it’s not clear exactly what issues that may cause, keeping the two signatures isolated seems less risky. To stick with the lock analogy, it’s somewhat like designing a safe with two different locks on the same door that must be unlocked at the same time with the same key.
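
For contrast, here is a rough sketch of what the merged idea looks like in practice: both signatures are packed into the single signature field that existing formats already have. The length-prefixed encoding below is invented purely for illustration; real composite-signature proposals define their own encodings.

```python
# Rough sketch of the "merged" idea: both signatures are packed into one blob
# so existing formats keep a single signature field. The length-prefixed
# encoding here is invented purely for illustration; real composite-signature
# drafts define their own encodings.
import struct

def merge_signatures(classical_sig: bytes, pq_sig: bytes) -> bytes:
    # 4-byte big-endian length prefix for the classical part, then both blobs.
    return struct.pack(">I", len(classical_sig)) + classical_sig + pq_sig

def split_signatures(merged: bytes) -> tuple[bytes, bytes]:
    (classical_len,) = struct.unpack(">I", merged[:4])
    return merged[4:4 + classical_len], merged[4 + classical_len:]
```

A verifier then has to unpack and check both pieces behind what looks like one signature, which is precisely the change to existing assumptions described above.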

Conclusion

While it’s tough to find the right words to describe new developments as they happen, we can do better than using the same term for different approaches. This will make it easier for everyone to understand what’s being discussed without having to study each protocol in detail.

Document Authenticity in the Age of Generative AI

In our rapidly evolving digital lives, the credibility of documents, images, and videos online has emerged as a pressing concern. The pandemic and recent elections have helped highlight this issue. In the case of elections, one area that stands out to me is concerns over voter roll integrity, a pillar of our democratic process in the US.

As we grapple with these issues, it is important to explore what a solution might look like that balances the associated privacy concerns. Is it possible to provide assurance of integrity and transparency while also providing accommodations for privacy and accountability?

Misinformation in the Digital Age

Despite its challenges, the pandemic did have a silver lining — it brought attention to the internet’s role as a breeding ground for misinformation campaigns. These campaigns featured manipulated images and documents, creating confusion and distrust globally. They also underscored a glaring gap in our current systems — no broad deployment of reliable mechanisms to verify the authenticity and origin of digital content.

The recent advancements in generative AI over the last two years have further complicated this issue. With a few words, anyone on the web can now create images that at first blush look real. This technology will only continue to improve, which means we need to look more formally at how we build solutions to address this new reality.

Existing Solutions and Their Shortcomings

Several technologies have recently been discussed as the way forward to address at least portions of these problems. One such example is the Content Authenticity Initiative, which proposes that devices like cameras cryptographically sign all pictures and videos with a device credential, a feature aimed at enabling the detection of any alterations made post-capture.

This method raises significant privacy concerns. Essentially, it could create an infrastructure in which each piece of content can be unexpectedly traced back to an individual or a group of devices, effectively becoming a surveillance tool.

Google DeepMind also recently brought forth the idea of opt-in watermarking for images created through AI technologies. While this initiative seems promising at a glance, it fails to address the nuances of the threat model. For instance, a nation-state with intentions to manipulate an election using generative AI assets wouldn’t likely volunteer to watermark these materials as AI-generated. This significant loophole sets a precarious stage where misinformation can still flourish.

These approaches, though developed with noble intentions, reveal critical gaps in addressing the complex landscape of content authenticity. They either infringe upon individual privacy rights or are vulnerable to exploitation when faced with a real threat model. 

Middle Ground: Publisher Signatures and Blinding as a Potential Solution

A more nuanced approach could use optional cryptographic signatures linked to a publisher rather than to a device. When an artifact is signed, the publisher, not their devices, opts into staking their reputation on its authenticity. Coupled with a feature for cryptographically blinding the publisher’s identity, this strategy could offer a safe avenue for them to reveal their identity at a later time, if necessary. Such a situation might arise in cases of whistleblower claims, where shielding the publisher’s identity becomes crucial for their safety. This blinding could strike a balance, granting publishers temporary anonymity while preserving the ability to later opt in to publicly stand behind the artifact or to take accountability for any misinformation.
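
One simple way to get this kind of deferred disclosure is a cryptographic commitment: the signed artifact carries a commitment to the publisher’s identity rather than the identity itself, and the publisher can later reveal the identity and the randomness used in order to prove the link. The hash-commitment sketch below is a minimal illustration of the idea, not a full blinding or anonymous-credential scheme.

```python
# Minimal sketch of deferring identity disclosure with a hash commitment:
# the artifact carries commit(identity) instead of the identity, and the
# publisher can later open the commitment to prove the link. Real systems
# would more likely use blind signatures or anonymous credentials.
import hashlib
import hmac
import os

def commit_identity(identity: bytes) -> tuple[bytes, bytes]:
    nonce = os.urandom(32)                      # keeps the commitment unguessable
    commitment = hashlib.sha256(nonce + identity).digest()
    return commitment, nonce                    # publish commitment, keep nonce private

def open_commitment(commitment: bytes, identity: bytes, nonce: bytes) -> bool:
    # Later, the publisher reveals (identity, nonce) to stand behind the artifact.
    return hmac.compare_digest(commitment, hashlib.sha256(nonce + identity).digest())
```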

Contrast this with devices subtly leaking metadata, which puts subjects in the position of having to prove a negative, for example, needing to explain that a picture does not tell the whole story, or, even worse, having to prove that the device that captured it was compromised. This is similar to what happens today with red-light cameras and automated radar guns, where poorly calibrated devices result in innocent people being charged.

The proposed model shifts the identification to publishers in the hope of fostering a responsible publishing culture where publishers have the discretion to unveil their identity, promoting accountability without completely compromising privacy.

It is also worth noting that a transition from ink signatures to cryptographic signatures for documents appears more pertinent than ever. Generative AI and graphic design technologies have enhanced the ability to replicate handwriting styles, making traditional signatures highly susceptible to forgery. The use of cryptographic techniques offers a more secure alternative, integrating seamlessly into modern workflows and standing resilient against unauthorized alterations.

Publisher Signatures Are Not Enough

In information security, it’s now accepted that insider threats are a significant risk. This realization has steered us away from merely verifying the identity of a publisher, especially in cryptographic signing systems such as code signing. 

There are a few reasons, but one of the largest is that key management is hard: publishers frequently represent groups rather than individuals, which leads to key management practices that are more permissive than is ideal from a security standpoint.

Additionally, if a solution is to incorporate the possibility of anonymity through cryptographic blinding, we cannot simply rely on the presence of, and blind trust in, that identity.

This is the same reason that led modern code-signing solutions to adopt ledgers that record an artifact’s provenance and history. For instance, in a Binary Transparency system, a ledger might house not only a list of software packages and their contents but also offer qualitative attestations about the software, for example indicating whether it has been screened for malware or verified to be reproducible from its source. This mechanism allows devices to understand not just the origin of the code but also to grasp the intended release of the software and potentially qualitative aspects to consider before reliance on it.
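
As a rough illustration, a ledger entry for a document rather than a software package might record something like the following; the field names and attestation labels here are invented for the example.

```python
# Rough sketch of what a transparency-log entry for a document might record.
# Field names and attestation labels are invented for illustration.
import hashlib
import json
import time
from typing import List, Optional

def make_log_entry(document: bytes, publisher_commitment: Optional[bytes],
                   attestations: List[str]) -> dict:
    return {
        "artifact_sha256": hashlib.sha256(document).hexdigest(),
        "logged_at": int(time.time()),               # rough proof of time of existence
        "publisher_commitment": publisher_commitment.hex() if publisher_commitment else None,
        "attestations": attestations,                # e.g. third parties vouching for it
    }

entry = make_log_entry(b"example document bytes", None,
                       ["screened-for-manipulation", "matches-notarized-copy"])
print(json.dumps(entry, indent=2))
```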

Applying this pattern to our document provenance and integrity problem, such a system can offer value even when the identity remains undisclosed. It can establish a basic record of when an artifact existed and allow third parties to vouch for its authenticity, possibly linking to other corroborative artifacts.

Building a continuously verifiable record coupled with supportive evidence for artifacts like documents seems to be a step in the right direction. This approach has demonstrated its value in other problem spaces.

With that said, it’s essential to acknowledge that, as with any opt-in system, not all documents, images, and videos will carry this additional provenance, and, like all technology, it would not be perfect. As a result, rather than being dismissed outright, content will need to be evaluated on its merits and on the evidence collected about it. At a minimum, we must recognize that it can take years for any new system to gain substantial traction.

Learning from the rest of the world

This issue is not confined to the US, so we should not restrict ourselves to looking at approaches used by US Big Tech. For instance, the strategies suggested here significantly draw upon the principles of electronic signatures, particularly e-Seals, which are prevalent in the EU and other regions. This European model offers vital insights and presents a robust strategy for resolving disputes, albeit reliant on specific technologies.

Interestingly, US online notarization rules have also borrowed elements from the EU, mandating the use of cryptographic signatures, akin to the EU’s emphasis on Advanced Signatures.

By combining this approach with the lessons learned from Certificate and Binary Transparency, where Merkle trees of published materials, continuous monitoring, and third-party evaluation help ensure a more complete picture, we might find a path forward.
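
A minimal sketch of the Merkle-tree piece of that lesson: the log periodically publishes a single root hash over everything it has recorded, and independent monitors can recompute it to detect tampering or omissions. This is a simplified illustration; Certificate Transparency’s actual tree (RFC 6962) hashes leaves and interior nodes with distinct prefixes and supports inclusion proofs.

```python
# Minimal sketch of computing a Merkle root over log entries so monitors can
# check that the published record hasn't been quietly altered. Simplified:
# RFC 6962-style logs use distinct leaf/node prefixes and inclusion proofs.
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    if not level:
        return hashlib.sha256(b"").digest()
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])              # duplicate the last node on odd levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"entry-1", b"entry-2", b"entry-3"])
```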

The addition of blinding the publisher’s identity in a way that allows them to selectively disclose it in the future also seems to offer a plausible way to balance the privacy concerns, which could enable this path to become the default over time.

Motivating Participation through Innovation, Policy and Leadership

Adoption of this approach cannot solely rely on goodwill or regulation. It would require a combination of standardization, regulatory changes, creating incentives for publishers, engagement with civil society and other stakeholders, and some tangible leadership by example by a large player or two. Some ideas in this direction include:

  • Initiating revisions to the existing digital signature legislation seems to be a sensible first step. Given our substantial background with the current laws, it’s clear that without regulatory changes, technological investments are unlikely to shift.
  • The government can lead by example by enhancing initiatives like the current Digital Autopen project. This project allows groups of individuals to access shared signing credentials without tying them to individual users, addressing a notable challenge prevalent in code signing and other organizational signing efforts.
  • I also believe that investing in a singular, immensely impactful use case as a case study could vividly illustrate how these approaches can make a significant difference. The voter registration rolls mentioned earlier seem like an excellent use case for this.
  • Further research and standardization could explore integrating cryptographic blinding of signer identities within the current document signing infrastructure, allowing for future disclosure. Investigating the potential structure and security of ledgers, and considering the extension of signing protocols to other artifact formats, seems to be a natural progression in supporting a system of this kind.
  • Simultaneously, collaboration with civil society, tech companies, and other stakeholders, including publishers, appears vital. This will guarantee that their concerns are integrated into the developed solutions and that appropriate policies are instituted to effectively incorporate this metadata into their operations.

While these efforts would not necessarily lead to adoption, it does seem that adoption would, at a minimum, be predicated on efforts like these.

A Pathway to Trust and Accountability

Balancing privacy and accountability in the digital age is a nuanced but achievable goal, especially if we build on top of existing successes. By adopting a well-rounded approach that integrates cryptographic signatures with mechanisms for temporary anonymity, we can carve a pathway toward a society where digital content maintains its integrity and trustworthiness. 

Moreover, by fostering an environment where content, even without clear provenance, is evaluated critically rather than dismissed, we encourage a richer discourse and a healthier digital ecosystem.

Through the union of technology and policy, we can create a more secure, transparent, and accountable future for content authenticity.

The Scale of Consequence: Storm-0558 vs DigiNotar

When we look at the Storm-0558 and DigiNotar incidents side by side, we find striking similarities in their repercussions and severity. Both cases involve significant breaches orchestrated by nation-states (China and Iran, respectively) targeting critical digital infrastructure and security protocols designed to safeguard user data and communications.

In the case of Storm-0558, the skillful compromise of Microsoft’s authentication infrastructure not only undermined the integrity of Exchange inboxes but also potentially rendered confidential information accessible to unauthorized entities.

Similarly, the DigiNotar breach constituted a severe undermining of internet security, as the attackers were able to issue trusted certificates that facilitated man-in-the-middle attacks. This compromised user interactions with sensitive services, including email communications.

Given their similar impact on user privacy and internet security, it raises the question: are we treating both incidents with equal gravitas and severity?

If not, we must ask why, and what the consequences of that reality are.

To answer these questions, it might be useful to think about a different kind of breach of trust that happened in the early 2010s, when a fake vaccination campaign was used as cover to collect DNA samples in the hunt for Osama bin Laden. That move ended up causing a lot of people in the area to view vaccination drives with suspicion, fearing there was more to them than meets the eye.

It almost feels like sometimes, big tech in the US gets to bend the rules a little, while smaller players or those from other parts of the world have to toe the line. It’s this uneven ground that can breed mistrust and skepticism, making folks doubt the systems meant to protect them.

In short, these decisions to compromise core infrastructure come with long-term consequences that are surely not being fully considered.