When Words Mislead: Cybersecurity’s Terminology Problem

At Black Hat this year, I did my usual walk around the vendor floor and talked to lots of companies about their products. One thing that stood out to me is that vendors, either by accident or on purpose, are redefining terms in ways that do harm. One vendor in particular was calling "bearer tokens" "attestations" in both their marketing and product documentation. Let's use this as an example and break down why this matters.

What’s an attestation?

An attestation is when someone says something is true and puts their reputation behind that statement. It’s like when your friend vouches for you at a new job. A good technology example is a TPM attestation. The TPM in your computer can prove certain things about how your machine started up. When we trust the chip’s design and the company that made it, we can believe what it tells us.

What’s a claim?

A claim is just something someone says. It might be true, but there’s no proof. If I tell you my name is Ryan Hurst, that’s a claim. I haven’t shown you my ID or anything. Claims can also be about other people or things. If I say Cloudflare is safe to use, that’s just my opinion unless I back it up with something.

What’s a bearer token?

A bearer token is like a temporary password. It’s a secret that proves who you are to a service. Anyone who has the token can pretend to be you. We use them because they’re necessary, but we try to limit their use in modern systems.

You usually get a bearer token by trading in something more permanent, like an API key, which is essentially a long-lived password. It’s like swapping a house key for a hotel room key. The hotel key only works for a short time, but anyone who finds it can get into your room.
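
To make that trade concrete, here is a minimal sketch of the pattern; the endpoint, parameter names, and response shape are hypothetical placeholders rather than any particular vendor's API:

import requests

API_KEY = "example-long-lived-api-key"               # the "house key": a long-lived secret
TOKEN_URL = "https://api.example.com/oauth/token"    # hypothetical token endpoint

# Trade the long-lived API key for a short-lived bearer token (the "hotel key").
resp = requests.post(TOKEN_URL, data={"api_key": API_KEY})
resp.raise_for_status()
token = resp.json()["access_token"]

# Anyone who holds this token can act as us until it expires, so treat it as a secret.
resp = requests.get(
    "https://api.example.com/v1/widgets",
    headers={"Authorization": f"Bearer {token}"},
)
print(resp.status_code)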

Why does any of this matter?

When companies use the wrong terms to explain what they do, it can lead people to make bad security choices. For example, if you hear a vendor say their system relies on bearer tokens and then you search that term, you'll find experts talking about their risks and how to manage them. But if you search for attestations, you'll find very different information, about how they help prove things are reliable, trustworthy, or factual.

If a company selling security products tells you it does one thing but it does another, it's a bad sign. They either have technical debt buried in the design that may have a negative impact, don't know what they're talking about, or are trying to confuse you. In any case, you might want to look at other options.

So, when you’re buying security products for your company, pay attention to how vendors use words. If they’re changing the meaning of important terms, be careful. It could mean you’re not getting what you think you are.

Navigating Content Authentication In the Age of Generative AI

In 1995, SSL was introduced, and it took 21 years for 40% of web traffic to become encrypted. This rate changed dramatically in 2016 with Let’s Encrypt and the adoption of ACME, leading to an exponential increase in TLS usage. In the next 8 years, adoption nearly reached 100% of web traffic. Two main factors contributed to this shift: first, a heightened awareness of security risks due to high-profile data breaches and government surveillance, creating a demand for better security. Second, ACME made obtaining and maintaining TLS certificates much easier.

Similarly, around 2020, the SolarWinds incident highlighted the issue of software supply chain security. This, among other factors, led to an increase in the adoption of code signing technologies, an approach that has been in use at least since 1995, when Microsoft used it to help address the problem of authenticity as we shifted from CDs and floppy disks to network-based distribution of software. However, the complexity and cost of code signing severely limited its widespread use, and where it was used, poor tooling meant key compromises often kept deployments from achieving the promised security properties. Decades later, projects like Binary Transparency started popping up, and in the wake of the SolarWinds incident, efforts that grew out of that work, such as Go ChecksumDB, SigStore, and SigSum, led to more usage of code signing.

Though the EU's digital signature laws in 1999 specified a strong preference for cryptographic document signing technologies, their adoption was very limited, in part due to the difficulty of using the associated solutions. In the US, the lack of a mandate for cryptographic signatures resulted in even more limited adoption of this more secure approach to signing documents; instead, the market relied on font-based signatures. However, during the COVID-19 pandemic, things started changing; in particular, most states adopted remote online notary laws mandating the use of cryptographic signatures, which quickly accelerated the adoption of this capability.

The next shift in this story started around 2022, when generative AI began to take off like no other technology in my lifetime. This resulted in a rush to create tools to detect generated content but, as I mentioned in previous posts [1,2], this is at best an arms race and, more practically, intractable over a moderate to long-term timeline.

So, where does this take us? If we take a step back, what we see is that societally we are now seeing an increased awareness of the need to authenticate digital artifacts’ integrity and origin, just like we saw with the need for encryption a decade ago. In part, this is why we already see content authentication initiatives and discussions, geared for different artifact types like documents, pictures, videos, code, web applications, and others. What is not talked about much is that each of these use cases often involves solving the same core problems, such as:

  • Verifying entitlement to acquire the keys and credentials to be used to prove integrity and origin.
  • Managing the logical and physical security of the keys and associated credentials.
  • Managing the lifecycle of the keys and credentials.
  • Enabling the sharing of credentials and keys across the teams that are responsible for the objects in question.
  • Making these keys and credentials usable by machines and integrating them naturally into existing workflows.

This problem domain is particularly timely in that the rapid growth of generative AI has raised a question for the everyday technology user: how can I tell if this is real or not? The answer, unfortunately, will not lie in detecting the fakes, given generative AI's ability to create content that is indistinguishable from human-generated work. Rather, organizations will need to adopt practices, across all modalities of content, to not only sign these objects but also make verifying them easy, so these questions can be answered by everyday users.
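
To make the signing and verification half of this concrete, here is a minimal sketch using Python's cryptography package and an Ed25519 key; the key choice and flow are mine for illustration, not something any particular content authentication initiative mandates:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher signs the artifact; managing this key is where the hard problems above live.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

artifact = b"example content: a document, an image, or a release binary"
signature = private_key.sign(artifact)

# Anyone who trusts the publisher's public key can check integrity and origin.
try:
    public_key.verify(signature, artifact)
    print("valid: artifact is unmodified and came from the key holder")
except InvalidSignature:
    print("invalid: artifact was modified or signed by someone else")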

This is likely to be accelerated once the ongoing shifts in software and service liability for meeting security basics take hold. All of this seems to suggest we will see broader adoption of these content authentication techniques over the next decade, provided the right tools and services are developed to make adoption, usage, and management easy.

While no crystal ball can tell us for sure what the progression will look like, in an increasingly digital world where the lines between real and synthetic content continue to blur, this outcome seems not only plausible but necessary.


Tenement Farming and Cloud HSMs

While it's fair to say that using a Cloud HSM means your keys are protected by a device meeting FIPS 140-3 standards, assuming the HSM in use has this certification, it's important to realize this doesn't guarantee the security you might expect. The security model of HSMs was built for the threats of the 1980s. These devices were not network-connected and were single-tenant; if they were "online" at all, it was usually as an HSM attached to a physical computer running an application on a private network, not exposed via a globally reachable endpoint.

At their core, these devices were designed to protect keys from physical theft, or more precisely, to slow down and increase the cost of theft, much like safe ratings (UL TL-15, TL-30, TL-30x6) indicate how effective the associated safes are. For example, early in my career, I worked on a project where we built attacks to extract non-exportable keys from a specific HSM and then imported them into another vendor's HSM because the prior vendor went out of business. There have also been a number of key exfiltration bugs in these devices over the years.

We didn’t see network-connected HSMs until around 1999, but even then, these devices were single-tenant, essentially just a network-connected Linux or BSD box containing fundamentally the same hardware as years earlier. While this change did allow a single company to share an HSM across different application workloads, the assumption was still that this HSM was managed by the company in charge of all of these applications.

Why is this important today? Most computing is now done in shared cloud infrastructure, administered by someone else, with your competitor or an attacker on the same hardware as you. This presents a very different set of security considerations and design constraints than the ones these devices were originally built for. You are now exposed to the risks of the physical and logical administrators of these Cloud HSMs, the services they depend on, as well as the other tenants of the Cloud HSM.

Consider that the compute operator can usually access the handle the application uses to talk to the HSM, and likely the secret used to authenticate to that HSM as well, meaning they, or an attacker, could potentially use that handle or secret to sign or decrypt data as they wish. You might find that an acceptable risk, but did you know some HSMs allow the administrator to blindly add users as operators to the "virtual HSMs" within them? Yup, they do.

What about when keys are stored in a KMS and the key policy dictates the key be managed by an HSM? If the HSM hardware attests that the key is stored in the HSM, and this attestation is verified, it's nearly the same threat profile we just discussed. In some cases, it could be argued it is better: access to the HSM can have traditional user and service RBAC controls and rate limiting, and keys can be replicated to many other HSMs without any administrative burden for you. That keeps you safe from a common disaster recovery scenario while normalizing the management of these devices so it fits into your normal operational practices, which hopefully are well managed and monitored.

Regardless of the approach, the bigger question is whether your provider’s operational and security practices are up to your specific threat model. Imagine a Bitcoin wallet worth 100 million dollars. Has your cloud provider proportionally invested enough into controls and tests around their system to prevent a motivated attacker from using your key to sign a transaction that moves all that to another wallet? Probably not.

The fundamental issue is that today's HSMs were largely designed for a different era, with different security concerns than we typically have today, mainly protecting against physical theft of keys in environments where data centers were effectively closets in dedicated office space. That doesn't reflect the scale of today's cloud computing.

It is worth noting that there are a few HSM solutions on the market that are making efforts to tackle some of these issues, but they still fall short; that is a topic for another post.

In essence, Cloud HSMs are to HSMs what Tenement Farming is to Farming.

That’s not to say there’s no value in these offerings, but as built today, they often fail to deliver the value they are assumed to deliver. And if regulations mandated their use before, say, 2010, chances are they’re not delivering the intended value that those regulations had in mind.

So, how should we be protecting keys now?

To be clear, this is not a case against Cloud HSMs; it is an argument to think about the threat model and the use case you are solving for. For example, look at Storm-0558, where Microsoft appears to have been loading the private key material into the process of their IDP: the attacker was able to trigger a memory dump, retrieve that dump via another attack vector, and with it the private key. We can take away at least one solid lesson: do not load keys into the process of the applications that rely on them. In this case, the least costly way to have prevented this key theft would have been simply moving the key into another process running in another user context, behind a very simple API that is easy to defend. That would at least limit the attacker to a handle, versus what happened here, where the attacker was able to use the key with impunity for years. This approach is the rough equivalent of a workload- or node-specific software HSM, similar in spirit to the original HSMs.
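
As a rough sketch of that isolation, the fragment below runs a tiny signing service under its own user account and exposes nothing but "give me bytes, get back a signature" over a local socket; the socket path and transport are illustrative, and a real deployment would add authentication, authorization, rate limiting, and audit logging:

# signing_daemon.py: run as a dedicated, unprivileged user that alone can read the key.
import os
import socket
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

SOCKET_PATH = "/run/signer/signer.sock"   # hypothetical path, accessible only to this user
key = Ed25519PrivateKey.generate()        # in practice, loaded from protected storage

if os.path.exists(SOCKET_PATH):
    os.remove(SOCKET_PATH)
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCKET_PATH)
server.listen(1)

while True:
    conn, _ = server.accept()
    with conn:
        data = conn.recv(65536)            # the application sends only the bytes to sign
        conn.sendall(key.sign(data))       # the key material never crosses the boundary

The application that relies on the key connects to the socket and receives signatures; even if that application's memory is dumped, as in the incident above, the attacker walks away with a handle to the socket rather than the key itself.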

Another common problem we see in the industry is that solutions like HashiCorp Vault were designed to centralize key management and provide a one-size-fits-all answer to "Where do I keep my secrets?" Architecturally, these solutions look much like a passively encrypted database: if you have sufficient permissions, you can read the key in the clear and copy it to whatever node or workload needs it. This took us from secret sprawl to secret spray, where we push keys out in environment variables and files on production machines that later get dumped into logs and backups, continuously exposing the keys to users who should never have had access and often leaving key remnants all over the place. This is only marginally better than checking keys into dedicated source control repositories.

The problem here isn't limited to these secret sprawl solutions. Almost every web server's TLS private key is sitting in the file system, often with weak ACLs and without any encryption, and is then loaded into memory in the web server's process. Similarly, most SSH keys sit in a file, usually with a poor ACL, either in the clear or protected by an easily grindable password, so a malicious actor who gains read access to the file system can simply walk away with the key; for example, see this incident from last week.

In both of these cases, we would be much better off if we moved these keys into another user context that is more defensible and constrained.

So how did we end up here, with such abysmal practices for managing keys?

While there is seldom a single reason for such neglect, in this case I think one of the largest is the dogmatic "all keys must be kept in HSMs or smart cards". It is just too easy a get-out-of-jail-free card for a security professional. Instead of thinking about the real risks and operational practices and then designing practical, appropriate strategies to mitigate those threats, those who can afford to complete that checkbox do so, and those who cannot just copy keys around in the clear out of a database.

The reality is we can do a lot better but as they say, the first step is to accept that you have a problem.

In short, as security professionals we need to avoid dogmatic answers to complex questions and spend the time to look more critically at the risks, constraints, obligations, resources, and real-world scenarios that those we work with are operating within, before we throw generic playbook answers at those coming to us for advice.

Effortless Certificate Lifecycle Management for S/MIME

In September 2023, the S/MIME Baseline Requirements (BRs) officially became a requirement for Certificate Authorities (CAs) issuing S/MIME certificates (for more details, visit the CA/Browser Forum S/MIME BRs).

The definition of these BRs served two main purposes. Firstly, they established a standard profile for CAs to follow when issuing S/MIME certificates. Secondly, they detailed the necessary steps for validating each certificate, ensuring a consistent level of validation was performed by each CA.

One of the new validation methods introduced permits mail server operators to verify a user's control over their mailbox. Considering that these services have ownership and control over the email addresses, it seems only logical for them to be able to do this verification on behalf of their users, since they could bypass any individual mailbox control challenge anyway. This approach resembles the HTTP-01 validation used in ACME (RFC 8555), where the server effectively 'stands in' for the user, just as a website does for its domain.

Another new validation method involves delegating the verification of email addresses through domain control, using any of the approved TLS domain control methods. Though all of the domain control methods supported for TLS certificates are allowed, it's easiest to think of the DNS-01 method in ACME here. Again, the idea is straightforward: if someone can modify a domain's TXT record, they can also change MX records or other DNS settings, so giving them this authority suggests they should reasonably be able to handle certificate issuance.

Note: If you have concerns about these two realities, it's a good idea to take a few steps. First, ensure that you trust everyone who administers your DNS and make sure it is securely locked down. Beyond that, the CAA record discussed next can restrict who may issue for your domain.

To control the issuance of S/MIME certificates and prevent unauthorized issuance, the Certification Authority Authorization (CAA) record can be used. Originally developed for TLS, it has recently been enhanced to include S/MIME (read more about CAA and S/MIME).

Here’s how you can create a CAA record for S/MIME: Suppose an organization, let’s call it ‘ExampleCo’, decides to permit only a specific CA, ‘ExampleCA’, to issue S/MIME certificates for its domain ‘example.com’. The CAA record in their DNS would look like this:

example.com. IN CAA 0 issuemail "ExampleCA.com"

This configuration ensures that only ‘ExampleCA.com’ can issue S/MIME certificates for ‘example.com’, significantly bolstering the organization’s digital security.

If you wanted to stop any CA from issuing S/MIME certificates, you would create a record that looks like this:

example.com. IN CAA 0 issuemail ";"

Another addition in this round of changes is the account identifier in the latest CAA specification. This feature allows a CA to link the authorization to issue certificates to a specific account within its system. For instance:

example.com. IN CAA 0 issue "ca-example.com; account=12345"

This means that ‘ca-example.com’ can issue certificates for ‘example.com’, but only under the account number 12345.
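
As a rough sketch of how a CA-side check might read these records, the snippet below uses the dnspython library and the issuemail tag from the examples above; real issuance logic would also climb the DNS tree to parent domains and handle DNSSEC and lookup failures properly:

import dns.resolver

def allowed_smime_issuers(domain):
    # Return the CA identifiers authorized to issue S/MIME certificates for this domain.
    try:
        answers = dns.resolver.resolve(domain, "CAA")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return None  # no CAA records at this label; a real check would walk up the tree
    return [rr.value.decode() for rr in answers if rr.tag.decode() == "issuemail"]

print(allowed_smime_issuers("example.com"))  # e.g. ['ExampleCA.com'], or [';'] to forbid issuance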

This opens up interesting possibilities, such as delegating certificate management for S/MIME or CDNs to third parties. Imagine a scenario where a browser plugin is produced and managed by a SaaS provider on behalf of the organization deploying S/MIME. This plugin takes care of the initial enrollment, certificate lifecycle management, and the S/MIME implementation, acting as a sort of S/MIME CDN.

This new capability, merging third-party delegation with specific account control, was not feasible until now. It represents a new way for organizations to outsource the acquisition and management of S/MIME certificates, simplifying processes for both end-users and the organizations themselves.

To the best of my knowledge, no one is using this approach yet, and although there is no requirement to enforce CAA for S/MIME yet, one is in the works. Regardless, the RFC has been standardized for a few months now, but I suspect CAs that were issuing S/MIME certificates before this new CAA RFC was released are still not respecting the CAA record, even though they should be. If you are a security researcher with spare time, that's probably a worthwhile area to poke around 😉

The Rise of Key Transparency and Its Potential Future in Email Security

Key Transparency has slowly become a crucial part of providing truly secure end-to-end encrypted messaging. Don’t believe me? The two largest providers of messaging services, Apple and Facebook (along with their WhatsApp service), have openly adopted it, and I am hopeful that Google, one of its early advocates, will follow suit.

At the same time, we are on the precipice of interoperable group messaging, as Messaging Layer Security (MLS) was recently standardized. Its contributors included employees of the services mentioned above, among others, which suggests they may adopt it eventually. What does this have to do with Key Transparency? MLS acknowledges the need for secure, privacy-preserving key discovery through the inclusion of Key Transparency in its architecture.

It's also noteworthy that Apple has agreed to support RCS, Android's messaging protocol. While there is no public hint of this yet, since they have positioned themselves as privacy champions over the last decade, frequently touting their end-to-end encryption, we may see them push for MLS to be adopted within RCS. That could net the world its first interoperable cross-network messaging with end-to-end encryption, and it would need a key discovery mechanism.

In that spirit, the Internet Engineering Task Force (IETF) recently established a Working Group on Key Transparency, and based on the participation in that group, it seems likely we will see some standardization around how to do Key Transparency in the future.

What’s Next for Key Transparency Adoption Then?

I suspect the focus now shifts to S/MIME, a standard for public key encryption and signing of emails. Why? Well, over the last several years, the CA/Browser Forum adopted Baseline Requirements (BRs) for S/MIME to help facilitate uniform and interoperable S/MIME, and those became effective on September 1, 2023 – this means CAs that issue these certificates will need to conform to those new standards.

At the same time, both Google and Microsoft have made strides in their implementations of S/MIME for their webmail offerings.

In short, despite my reservations about S/MIME due to its inability to address certain security challenges (such as metadata confidentiality), it looks like it's witnessing a resurgence, particularly fueled by government contracts. But does it deliver on the promise today? In some narrow use cases, like mail signing or closed-ecosystem deployments of encrypted mail where all participants are part of the same deployment, it is probably fair to say yes.

With that said, mail is largely about interoperable communications, and for that to work with encrypted S/MIME, we will need to establish a standard way for organizations and end-users to discover the right keys to use with a recipient outside of their organization. This is where Key Transparency would fit in.

Key Transparency and S/MIME

Today, it is technically possible for two users to exchange certificates via S/MIME, enabling them to communicate through encrypted emails. However, the process is quite awkward and non-intuitive. How does it work? You either provide the certificate out of band to those in the mail exchange, and they add it to their contacts, or some user agents automatically use the keys associated with S/MIME signatures from your contacts to populate the recipient's keys.

This approach is not ideal for several reasons beyond usability. For instance, I regularly read emails across three devices, and the private keys used for signing may not be the same on each device. Since consistent signing across devices isn't required, if I sign an email from my phone and you encrypt your reply to the certificate from that signature, the message won't open when I try to read it on my desktop, which holds a different key.

Similarly, if I roll over my key to a new one because it was compromised or lost, we would need to go through this certificate distribution workflow again. While Key Transparency doesn’t solve all the S/MIME-related problems, it does provide a way to discover keys without the cumbersome process, and at the same time, it allows recipients to know all of my active and published certificates, not just the last one they saw.

One of the common naive solutions to this problem is to have a public directory of keys like what was used for PGP. However, such an approach often becomes a directory for spammers. Beyond that, you have the problem of discovering which directory to use with which certificate. The above Key Transparency implementations are all inspired by the CONIKS work, which has an answer to this through the use of a Verifiable Random Function (VRF). The use of the VRF in CONIKS keeps users’ email addresses (or other identifiers) private. When a user queries the directory for a key, the VRF is used to generate a unique, deterministic output for each input (i.e., the user’s email). This output is known only to the directory and the user, preserving privacy.

The generic identifier-based approach in Key Transparency means it can neatly address the issue of S/MIME certificate discovery. The question then becomes, how does the sender discover the Key Transparency server?

Key Transparency Service Discovery

The answer to that question probably involves DNS resource records (RRs). We use DNS every day to connect domain names with IP addresses. It also helps us find services linked to a domain. For instance, this is how your email server is located. DNS has a general tool, known as an SRV record, which is designed to find other services. This tool would work well for discovering the services we’re discussing.

_sm._keytransparency._https.example.com. 3600 IN SRV 10 5 443 sm-kt.example.com.

In this example, the _sm identifier placed before _keytransparency and _https indicates that this SRV record is specifically for a Key Transparency service for secure messaging. This allows someone to ask DNS for an S/MIME-specific Key Transparency service. It also means we can add other types of identifiers later, allowing for the discovery of various KT services for the same domain.
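
A sender's mail client could then resolve that record with an ordinary SRV lookup. Here is a small sketch using the dnspython library; the record name follows the example above, which is an illustration of the idea rather than a published standard:

import dns.resolver

# Discover the S/MIME Key Transparency service for a recipient's domain.
answers = dns.resolver.resolve("_sm._keytransparency._https.example.com", "SRV")

# Prefer the record with the lowest priority value.
best = min(answers, key=lambda rr: rr.priority)
host = best.target.to_text().rstrip(".")
print(f"query the Key Transparency log at https://{host}:{best.port}")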

Conclusion

S/MIME faces many challenges, such as key roaming, re-encrypting messages on key rollover, and cipher suite discoverability, before it becomes easy to use and widely adopted, and it remains to be seen whether major mail services will invest enough in this technology to make it work. If they do, there's real potential for a discovery directory based on Key Transparency.

Hopefully, the adoption of Key Transparency will happen if this investment in S/MIME continues, as it’s the only large-scale discovery service for user keys we’ve seen in practice. Unlike other alternatives, it’s both privacy-respecting and transparently verifiable, which are important qualities in today’s world. Only time will tell, though.

Exploring the Potential of Domain Control Notaries for MPDV in WebPKI

In an earlier post on the Role of Multiple Perspective Domain Control Validation (MPDV) in the WebPKI, I discussed how there was an opportunity for CAs to work together to reduce the cost of meeting the upcoming requirements while also improving the security of the ultimate solution.

In this post, I want to talk a little about an idea I have been discussing for a while: specifically, Domain Control Notaries.

Before I explain the idea, let's first look at how domain control verification happens. In short, the CA generates a random number and asks the certificate requestor to place that number in a location that only an administrator can access, for example, in a DNS record, as part of the TLS exchange, or at a well-known location.

If the CA is able to fetch the number, and the underlying network is living up to its promises, it can have confidence that the requestor is likely authorized for the given domain.
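
For the well-known-location flavor of this, the core of the check is only a few lines. The sketch below generates the random value, asks the requestor to publish it, and fetches it back; the path and callback are illustrative, not the exact profile any CA uses:

import secrets
import requests

def verify_domain_control(domain, publish_challenge):
    # Ask the requestor to publish a CA-chosen random token, then confirm we can fetch it.
    token = secrets.token_urlsafe(32)
    publish_challenge(token)  # requestor places the token at the agreed location
    url = f"http://{domain}/.well-known/domain-check/{token}"  # illustrative path
    resp = requests.get(url, timeout=10)
    return resp.status_code == 200 and resp.text.strip() == token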

To understand Domain Control Notaries, you will also need a basic understanding of what MPDV means. Thankfully, that's easy: you do that same verification from multiple network locations and require a quorum policy to be met. This basically means an attacker would have to trick enough of the quorum participants to bypass the quorum policy.

So that takes us back to the idea of Domain Control Notaries. As you can see, domain control verification is super simple. That means you could put it on a very small computer that does nothing but perform this one task. For example, imagine a USB Armory fused at manufacturing time with firmware that only does these domain control checks, and that also has a cryptographically unique key, derived from the hardware, fused into the device at manufacturing time.

Now imagine an aggregator handling admissions control for a network of these Domain Control Notaries. This aggregator would only allow devices that were manufactured to meet these basic requirements. It would enforce this because the manufacturer would publish a list of the public keys of each of the devices to the aggregator, which it would use for admission control.

This aggregator would expose a simple REST API that would let the requestor specify some basic policy on how many of these Domain Control Notaries to broadcast their request to, what domain control methods to use, and a callback URL to be used by the aggregator when the verification is complete.

These aggregators would only accept responses from Domain Control Notaries that signed their responses with keys that were on this authorized list and not on a deny list.

This basic framework sets you up with a network of very cheap network endpoints that can be used to perform domain control verification. You could even have a few of these aggregators, each with its own Domain Control Notaries. CAs could use multiple of these aggregator networks to reduce centralization risk.

You might be asking yourself how these tiny computers could deal with the scale and performance of this task! The reality is that in the grand scheme of things, the WebPKI is relatively small: it is responsible for only 257,035 certificates every hour. The real number is actually smaller than this, because that figure includes some pre-certificates, and in the context of domain control verification CAs are also allowed to reuse recent past validations, so we can safely treat it as a worst-case number. Simply put, that works out to roughly 71 certificates every second. That is tiny. If you spread that out over a few hundred Domain Control Notaries, each one handles well under one verification per second.

Then there is the question of who would run these Domain Control Notaries. A lesson learned from Certificate Transparency is that if you make it easy and cheap to participate, and easy to both come and go, most organizations are willing to help. You're basically asking an operator to provide a few lightbulbs' worth of electricity and a tiny amount of network connectivity. This is easy for most organizations to sign up for, since there is little cost to participate, no penalty for leaving, and no impact if there is an outage.

Remember how I said there could be multiple aggregators? These aggregators could also expose more expensive heavy-weight nodes that were not reliant on Domain Control Notaries to provide a more reliable substrate to power such a network.

That’s Domain Control Notaries. Maybe they can be a tool to help us get to this world of MPDV everywhere.

Why crawling is not an adequate measurement methodology for the WebPKI

The answer is simple: it provides an incomplete view of the use of the WebPKI.

There are a number of different methodologies a web crawler-based approach might take in measuring the size of the WebPKI. The most naive approach would be to simply scan all IPv4 address space and log all of the certificates you see during this scan.

The problem is that this only shows a small fraction of the certificates that are out there. When you connect to an IP address and the associated web server doesn't know what host you are trying to connect to, it will return its "default" website and use the associated certificate.

That same IP address may literally be responsible for serving millions of sites based on the client's indicated hostname. With this IP-based enumeration approach, at best you would get one certificate from that host; at worst you wouldn't even get that, because some servers are not configured with a default site. And this is just one problem with the approach; there are many more.
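
You can see this for yourself with a few lines of Python: connecting to the same IP with and without a server name (SNI) can return entirely different certificates. The IP and hostname below are placeholders:

import socket
import ssl

def leaf_certificate(ip, server_name=None):
    # Fetch the DER-encoded certificate a server presents, with or without SNI.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((ip, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=server_name) as tls:
            return tls.getpeercert(binary_form=True)

default_cert = leaf_certificate("192.0.2.10")                           # the "default" site
named_cert = leaf_certificate("192.0.2.10", server_name="example.com")  # one of millions behind it
print(default_cert == named_cert)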

Though most WebPKI market share reports do not document their methodology, anecdotally it appears most rely on this crawler approach, and at least historically some have taken periodic data drops from CAs to make their view "more complete".

Today, though, the only way CA market share should be measured is by relying on the pre-certificate counts in Certificate Transparency logs.

Evolution of privacy governance and how cryptographic controls can help

Unfortunately, most organizations do not have a formal strategy for privacy, and those that do usually design and manage it much like their existing compliance programs.

This is not ideal for several reasons, which I will try to explore in this post. More importantly, I want to explore how the adoption of end-to-end encryption and client-side encryption is changing this compliance-focused model of privacy into one backed by proper technical controls based on modern cryptographic patterns.

If we look at compliance programs, they usually evolve from something ad hoc and largely aspirational (usually greatly overselling what they deliver) and, over time, move to a more structured, process-oriented approach backed by some minimum level of technical controls to ensure each process is followed.

What is usually missing in this compliance journey is an assessment of sufficiency beyond "I completed the audit gauntlet without a finding". This is because compliance is largely focused on one objective: proving conformance to third-party expectations rather than achieving a set of contextually relevant security or privacy objectives. This is the root of why you hear security professionals say "Compliance is not Security".

If you watch tech news as I do, you have surely seen articles calling the cybersecurity market "the new lemon market", and while I agree this is largely true, I also believe there is a larger underlying issue: an overreliance on compliance programs to achieve security and privacy objectives.

To be clear, the point here is not that compliance programs for privacy are unnecessary; rather, they need to be structured differently than general-purpose compliance programs, because the objectives are different. These differences lend themselves to misuse-resistant design and the use of cryptographic controls as an excellent way to achieve those objectives.

Misuse resistance is a concept from cryptography where we design algorithms to make implementation failures harder. This is in recognition of the fact that almost all cryptographic attacks are caused by implementation flaws and not fundamental breaks in the cryptographic algorithms themselves. 

Similarly, in the case of privacy, most companies will not say “we intend to share your data with anyone who asks”. Instead, they talk of their intent to keep your information confidential — the problem is that in the long run everyone experiences some sort of failure and those failures can make it impossible to live up to that intent.

So back to this compliance-focused approach to privacy — it is problematic in several cases, including:

  • Where insiders [1,2,3] are abusing their position within an organization,
  • When configuration mistakes result in leakage of sensitive data [4,5,6], 
  • When service providers fail to live up to customers’ expectations on data handling [7,8], 
  • When technical controls around data segmentation fail [9],
  • and of course when service providers fail to live up to their marketing promises for government access requests for your sensitive information [10,11].

If we move to a model where we approach these problems using engineering principles rather than process and manual controls, we can end up in a world where the data, and access to it, is inherently misuse resistant and hopefully verifiable.

This is exactly what we see in the design patterns adopted by Signal: they gather only the minimal level of data needed to deliver the service, and the data they do gather they encrypt in such a way as to limit what they can do with it.

They are not the only ones, though; you can see similar approaches in how Android handles encrypted backups and in how Apple handles device PIN recovery.

Another great example is how payment providers like Stripe leverage client-side encryption to reduce exposure of payment details to intermediaries or how Square uses client-side encryption in the Cash App to limit adversaries’ access to the data.
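
As a minimal sketch of the underlying pattern, the snippet below uses AES-GCM from Python's cryptography package with a key that never leaves the client; the record contents and identifiers are made up, and the real provider SDKs mentioned above each have their own key management and formats:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The key is generated and held client-side; the service only ever stores ciphertext.
key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

record = b'{"card_number": "4242424242424242", "exp": "12/29"}'
nonce = os.urandom(12)
ciphertext = aead.encrypt(nonce, record, b"customer-123")   # bind to a context via associated data

blob = nonce + ciphertext   # this is what gets sent to and stored by the provider

# Only a holder of the client-side key can recover the plaintext.
plaintext = AESGCM(key).decrypt(blob[:12], blob[12:], b"customer-123")
assert plaintext == record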

While today these patterns are only being applied by the largest and most technically advanced organizations, the reality is that, as Alexander Pope once said, "to err is human". If we are to truly solve for privacy and security, we have to move to models that rely less on humans doing the right thing, and for privacy this means extensive use of cryptographic patterns like those outlined above.

It could be argued that one of the reasons we do not see these patterns applied more broadly is the security nihilist's argument that client-side encryption and cryptographic transparency are exercises in rearranging deck chairs on a sinking ship.

While there is some truth to that argument, you can both limit the amount of trust you have to place in these providers (limiting the amount of trust delegated is the essence of security, after all!) and make elements of what the provider does verifiable, which again furthers this misuse-resistance objective.

If that is the case then why is it that only the largest providers do this today? I would argue it’s just too darn hard right now. For example, when you start applying client-side encryption and cryptographic controls to systems there is still a substantial lift, especially when compared to the blind trust paradigm most systems operate within.

There are a few reasons, but one of the largest in the case of client-side encryption is that you end up having to build the associated key management frameworks yourself, and that takes time and expertise that many projects simply do not have.

With that said, just as modern systems have moved from self-hosted monoliths to globally scaled microservices thanks to solutions like Kubernetes, which give them node-to-node compartmentalization and other by-default security controls, I believe we will see a similar move for sensitive data handling, where these cryptographic controls and usage policies become a key tool in the developer's toolbox.

More work is needed so that smaller organizations can adopt these patterns but, unquestionably, this is where we end up long term. When we do get there, to borrow an over-used marketing term, we end up with what might be called Zero Trust Data Protection in today's market.

Risk variance and managing risk

One of my favorite security sayings is “My threat model is not your threat model”. We broadly accept there are different perspectives for every problem — the same is true with security.

Consider an enterprise IT organization chartered to support and secure a business. You need to meet this charter with a fairly fixed set of resources, but the business requirements you must support are always changing. To deliver a reasonable level of service, enterprises standardize on a core set of ways that certain needs (user and access management, compute, etc.) will be met and force business units down the path of adopting them. But that standardization often results in non-ideal user experiences, disjointed business workflows, slow innovation and, importantly in this case, it also commonly results in either over- or under-mitigating security risks.

Startups, on the other end of the spectrum, are in a race to demonstrate market traction before their funding runs out. As a result, startups either virtually ignore security and privacy altogether, or they reuse a component they do not understand that is not well suited to the business problem they are solving or that simply solves the wrong problems for their business risks.

Both enterprises and startups show the hard reality that we are often so close to our problems, and so set in our ways of thinking, that we simply miss the big picture. This natural bias can hurt our businesses, and in the context of security trade-offs it can result in incidents like the Equifax breach or the ever-growing list of Bitcoin exchange compromises.

The first step in preventing this "missizing" of risk is to make sure you understand what your risks actually are. The right way to do that is to think adversarially: take a step outside of your business process or solution, think about the structure of the system you're protecting, and define a threat model that captures those risks. This is not a one-time exercise; it is something you need to constantly revisit and get new perspectives into.

Consider the typical Bitcoin exchange compromise: exchanges usually start with a basic system limited to "online hot wallets" with weak architectural protections. They probably know better (for humanity's sake I hope they do) but decide that the risk, in the beginning, is low enough, because they have so little to lose, that they go forward. Later they find success, focus on other parts of their business, and never get back to fixing that early trade-off.

As an aside, this Bitcoin example showcases lots of problems, the largest being the asymmetric risk distribution. Specifically, the risk here is borne by the depositor, but the decision to take the risk is made by the exchange. I digress, but this class of problem is a real problem in most startups, and its impact is multiplied 1000x in Bitcoin startups.

In any event, in hindsight we can see how these sorts of things happen, so how can we limit how often these issues occur in the first place? As a technologist it hurts me to say this, but the answer is process.

The good news is that process does not need to be heavy-weight. You need to make sure you approach the problem systematically and regularly, for example:

  • Instead of a threat model, you can do a simple threat tree,
  • You can use your bug tracking system to track the security issues you have found,
    • You can make sure those bugs capture the security decisions you have made,
    • You can capture the consequences of the identified risks for the actors in the system,
    • You can capture the mitigations you have put in place for those issues, along with how effective you think they will be,
  • And importantly, have a plan for how you will respond when things go wrong, because they will, regardless of how well you plan and invest in making the right security decisions.

You can then make sure you are reviewing these and acting accordingly on a regular schedule. This will ensure your organization at least has an inventory of issues that outlasts the individuals on a team, and it will make sure there is a point of conscious risk acceptance by the organization.

This does not replace a full-on security program, but it can at least make sure you are looking at the problem in the context of your business and your users, and not just assuming every system has the same security needs.

On a related note for you startups, especially those operating in murky waters regarding regulation: the current regulations belong in your threat tree, so you can make sure you understand them and begin to understand how they might apply to you, even if you have to squint a bit to do it.

The Evolution of Security Thinking

In design, we sometimes refer to the strategies used during the design process as design thinking. The application of these strategies helps ensure you are solving the right problems and doing so in a repeatable way. You can attribute much of the massive improvement in the usability of software and devices over the last two decades to these strategies.

If we look at how thinking around building secure systems has evolved over the last two decades, we can see that we have developed similar strategies to help ensure positive security outcomes.

If we go back to the late 80s, we see systems that were largely designed for a world of honest actors. There was little real business happening on the Internet at the time, and the hard problems to be solved were all about how to enable a global network of interconnected systems, so that's where efforts were put. These efforts led us to the Internet of today, but they also gave us systems vulnerable to trivial attacks such as the Morris Worm.

By the 90s, the modern "security industry" was born, and products designed to protect these insecure systems from the internet started to come to market. One of the most impactful examples of this was the TIS Firewall Toolkit; other examples of this way of thinking include antivirus products and other agents that promised to keep our applications and operating systems safe from "attackers".

By the late 90s and early 2000s, it was clear that these agents were never going to be effective at keeping the bad guys out and that we needed to be building systems that were Secure by Default, Secure by Design, and Private by Design. This shift in thinking meant that solution developers needed to develop their own strategies and tooling to ensure systems could be built to be inherently resilient to the risks they were exposed to. The concept of Threat Modeling is probably the most concrete example of this; believe it or not, this basic concept was essentially absent from software development up until this point.

By this time, the technical debt in deployed systems was so great that we spent most of a decade just trying to rectify the mistakes of the past. Windows XP SP2 and the Microsoft Security Stand Down are probably the most visible examples of the industry making this shift, and they also led to the Security Development Lifecycle that largely informs how we, as an industry, approach building secure systems today.

During this timeline, cryptography was treated as something that you sprinkled on top of existing systems with the hope to make them more confidential and secure. As an industry, we largely relied on the US Government to define the algorithms we used and to tell us how to use them securely. As a general rule only products designed for government use or for the small group of “cypherpunks” even considered the inclusion of cryptography due to the complexity of “getting it right”.

Things are changing again: we see the IETF, via the CFRG, working to standardize internationally and independently created cryptographic algorithms in lieu of relying exclusively on governments to do this standardization. We also see the concept of formal verification being applied to cryptographic systems (Galois is doing great work here with Cryptol, as are other projects in the verifiable computing space), which is giving us frameworks we can apply to build these concepts into other products securely (check out the Noise Protocol Framework as an example).

I think the Signal Protocol, Roughtime, Certificate Transparency, and even blockchain technologies are examples of the next phase of evolution in our thinking about how we build secure systems. Not because of "decentralization" or some anti-government bent in technologists; rather, these systems were designed with a more complete understanding of the security risks associated with their use.

Trust is a necessary component of human existence. It can give us peace of mind, but it can also give us broken hearts. The same is true in the context of system design. Trust cautiously.

These systems, by design, go to great lengths to limit the need for "trust" for a system to work as intended. They do this by minimizing the dependencies a system takes in its design, because each of those dependencies represents an attack vector, and as we advance technology our attackers become more advanced as well. They also make extensive use of cryptography to make that possible.

This focus on dependency reduction is why we see Blockchain enthusiasts taking the maximalist position of "Decentralize all the Things". In my opinion, centralization is not always a bad thing; over-centralization may be, but centralization can provide value to users, and that value is what we should be focused on as solution developers.

My personal take is that when we look back on the next decade, we will say the trend was not "blockchain"; instead, this is when we evolved our security thinking and tooling to better utilize cryptography. Specifically, this is when we started to use cryptography to make transparency, confidentiality, and verifiability part of the core of the solutions we build, instead of thinking of them as a layer we apply once we are done.