How TLS Certificates Can Authenticate DNS TXT Records

Have you found a use case where you think DANE and DNSSEC might be helpful? For example, the discovery of some configuration associated with a domain? A practically useful DNSSEC deployment requires individual domains (e.g., example.com) to adopt DNSSEC and relevant clients to use fully validating DNSSEC resolvers, neither of which has happened yet at any reasonable scale. Until it does, consider using certificates to sign the values you place in DNS instead.

SXG certificates are, in essence, signing certificates tied to a domain. You could use a regular TLS certificate instead, but for today’s post let’s assume SXG certificates are the path you chose.

You can enroll for SXG certificates for free through Google Trust Services. This lets you benefit from signed, verifiable data in DNS today, rather than waiting for DNSSEC to reach broad deployment.

Assuming a short certificate chain, since DNS does have size restrictions, this could be done as simply as:

Get an SXG certificate that you can use for signing….

sudo certbot certonly \
  --server https://dv-sxg.acme-v02.api.pki.goog/directory \
  -d yourdomain.com \
  --manual \
  --eab-kid YOUR_EAB_KID \
  --eab-hmac-key YOUR_EAB_HMAC_KEY

Sign your data….

openssl smime -sign -in data.txt -out signeddata.pkcs7 -signer mycert.pem -inkey mykey.pem -certfile cacert.pem -outform PEM -nodetach

Then put the data into a DNS TXT record….
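
One way to do that, sketched here with a hypothetical _config.yourdomain.com label (any record name would work), is to strip the PEM armor and break the Base64 payload into strings of 255 characters or fewer, since each individual string in a TXT record is capped at 255 bytes:

# strip the PEM armor, join the Base64 into one blob, and split it into
# 255-character strings (the per-string limit for TXT record data)
grep -v '^-----' signeddata.pkcs7 | tr -d '\n' | fold -w 255 | sed 's/.*/"&"/' | paste -sd ' ' -

# then publish the result, for example in zone-file syntax:
_config.yourdomain.com. 3600 IN TXT "MIIG...first-chunk..." "...last-chunk...=="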

After which you could verify it against the Mozilla trust list using OpenSSL…

openssl smime -verify -in signeddata.pkcs7 -CAfile cacert.pem -inform PEM
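
If you want to sketch the full round trip, you can pull the strings back out of DNS, rebuild the PEM structure, and run the same verification. This assumes the hypothetical _config.yourdomain.com label from above and that cacert.pem is your trust bundle (for example, the Mozilla roots):

# fetch the TXT record, drop the quotes and spaces, and re-wrap the Base64 as PEM
{
  echo '-----BEGIN PKCS7-----'
  dig +short TXT _config.yourdomain.com | tr -d '" ' | fold -w 64
  echo '-----END PKCS7-----'
} > fetched.pkcs7

openssl smime -verify -in fetched.pkcs7 -CAfile cacert.pem -inform PEM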

In practice, due to DNS size constraints, you would likely use a simpler signature format, such as encoding just the signing certificate and the signature in Base64 as a type-length-value structure. With a certificate using a 256-bit ECC key and its accompanying signature, the record would total around 1152 bytes, which comfortably fits inside a DNS TXT record thanks to EDNS(0), which supports responses of up to 4096 bytes.

This does not provide the authenticated denial-of-existence property that DNSSEC has, which exists to address downgrade attacks, so relying parties could not detect if the record were removed, nor does it provide security for DNS overall. But if we are just talking about configuration data, it might be a viable approach to consider.

On the other hand, signing a TXT record with a certificate is never going to take down your domain, and the same cannot be said of a DNSSEC deployment!

Restoring Memories

As the old saying goes, “You can take the boy out of the farm, but you can’t take the farm out of the boy.” Although I was raised in metro Seattle, my father grew up on a farm in Eastern Washington, in the city of Walla Walla. We made regular trips there during my childhood, especially when my great-grandmother lived there by herself. These visits were more than just familial obligations; they were my introduction to values like hard work, family, and the joy of being close to the earth—values that have profoundly influenced who I am today.

I also fondly recall visits and weekend trips to my uncle’s, where my cousins and I would ride in the bed of his Chevy 3100, sliding around as we drove down the road, laughing and jostling around — back when the world was less concerned about safety regulations. Those moments of freedom are treasures I still carry.

This sense of nostalgia may explain why, after a career in information security, I felt compelled to restore several late 19th and early 20th-century safes. A few years ago, I embarked on a project to restore a Dodge Power Wagon, which encapsulates the strength, reliability, and spirit of those farmstead adventures.

The Power Wagon’s Origin Story

The Dodge Power Wagon earned its legendary status on American farmlands shortly after World War II. Returning servicemen recognized the potential of the Dodge WCs they had used in the war. These vehicles could navigate the rugged farm terrain much like the battlefields they’d left behind. Equipped with a Power Take Off (PTO) and winch, the Dodge WC was not just a means of transport; it transformed into a tool that could till the fields or haul away a fallen tree. Recognizing its demand, Dodge released a civilian version—the Dodge Power Wagon.

The Power Wagon was the first mass-produced civilian 4×4 vehicle, ultimately symbolizing an era when durability and utility were paramount in vehicle design. Its introduction led to the widespread adoption of 4×4 capabilities by nearly every truck manufacturer.

Power Wagons also played a vital role in developing early infrastructure, aiding transportation and communication networks for rail and telephone companies. Coachbuilders would modify these trucks, combining two Power Wagons to create multi-door vehicles that could transport crews to remote or difficult-to-access sites. It wasn’t until International Harvester introduced the Travelette in 1957 that a production truck with three or more doors became available.

Anyone who has ever done a high-end restoration of a vehicle will tell you it takes way longer than you expect, and my project is no different. We are getting close to the end; after all, it runs, drives, stops, has been put back together and painted, and is now getting its interior done. Even so, I would surely be wrong with whatever completion date I guessed.

My Power Wagon Restoration

Restoring this piece of history isn’t just about reviving a classic vehicle; in a way, it’s a tribute to my father, my family’s legacy. It’s a pilgrimage back to my roots, a way to share my family’s story with my children and, eventually, my grandchildren.

Navigating Content Authentication In the Age of Generative AI

In 1995, SSL was introduced, and it took 21 years for 40% of web traffic to become encrypted. This rate changed dramatically in 2016 with Let’s Encrypt and the adoption of ACME, leading to an exponential increase in TLS usage. In the next 8 years, adoption nearly reached 100% of web traffic. Two main factors contributed to this shift: first, a heightened awareness of security risks due to high-profile data breaches and government surveillance, creating a demand for better security. Second, ACME made obtaining and maintaining TLS certificates much easier.

Similarly, around 2020, the SolarWinds incident highlighted the issue of software supply chain security. This, among other factors, led to increased adoption of code signing technologies, an approach in use at least since 1995, when Microsoft applied it to the problem of authenticity as software distribution shifted from CDs and floppy disks to the network. However, the complexity and cost of code signing severely limited its widespread use, and where it was used, poor tooling meant key compromises often kept deployments from achieving the promised security properties. Decades later, projects like Binary Transparency started popping up, and in the wake of the SolarWinds incident, efforts that grew out of that work, such as the Go checksum database, Sigstore, and Sigsum, led to more usage of code signing.

Though the EU’s digital signature laws in 1999 expressed a strong preference for cryptography-based document signing technologies, their adoption was very limited, in part due to the difficulty of using the associated solutions. In the US, the lack of a mandate for cryptographic signatures resulted in even more limited adoption of this more secure approach to signing documents; most signing instead relied on font-based signatures. However, during the COVID-19 pandemic things started changing; in particular, most states adopted remote online notary laws mandating the use of cryptographic signatures, which quickly accelerated the adoption of this capability.

The next shift in this story started around 2022, when generative AI began to take off like no other technology in my lifetime. This resulted in a rush to create tools to detect generated content but, as I mentioned in previous posts [1,2], this is at best an arms race and, more practically, an intractable problem over the moderate to long term.

So, where does this take us? If we take a step back, we see that society is becoming increasingly aware of the need to authenticate the integrity and origin of digital artifacts, just as it became aware of the need for encryption a decade ago. In part, this is why we already see content authentication initiatives and discussions geared toward different artifact types, such as documents, pictures, videos, code, web applications, and others. What is not talked about much is that each of these use cases often involves solving the same core problems, such as:

  • Verifying entitlement to acquire the keys and credentials to be used to prove integrity and origin.
  • Managing the logical and physical security of the keys and associated credentials.
  • Managing the lifecycle of the keys and credentials.
  • Enabling the sharing of credentials and keys across the teams that are responsible for the objects in question.
  • Making the usage of these keys and credentials usable by machines and integrating naturally into existing workflows.

This problem domain is particularly timely in that the rapid growth of generative AI has the common technology user asking: how can I tell if this is real or not? The answer, unfortunately, will not be in detecting the fakes, because generative AI can create content that is indistinguishable from human-generated work. Rather, organizations will need to adopt practices, across all modalities of content, to not only sign these objects but also make verifying them easy, so these questions can be answered by everyday users.

This is likely to be accelerated once the ongoing shifts take place in the context of software and service liability for meeting security basics. All of this seems to suggest we will see broader adoption of these content authentication techniques over the next decade if the right tools and services are developed to make adoption, usage, and management easy.

While no crystal ball can tell us exactly what the progression will look like, in an increasingly digital world where the lines between real and synthetic content continue to blur, this outcome seems not only plausible but necessary.

Update: Just saw this while checking out my feed on X and it seems quite timely 🙂

Tenement Farming and Cloud HSMs

While it’s fair to say that using a Cloud HSM means your keys are protected by a device meeting FIPS 140-3 standards (assuming the HSM in use has this certification), it’s important to realize this doesn’t guarantee the security you might expect. The security model of HSMs was built for the threats of the 1980s. These devices were not network-connected and were single-tenant; if they were “online” at all, it was usually as an HSM attached to a physical computer running an application on a private network, not exposed at a globally reachable endpoint.

At their core, these devices were designed to protect keys from physical theft, or more precisely, to slow down and increase the cost of theft, much like safe ratings (UL TL-15, TL-30, TL-30X6) indicate how resistant the associated safes are to attack. For example, early in my career I worked on a project where we built attacks to extract non-exportable keys from one HSM and import them into another vendor’s HSM because the original vendor had gone out of business. There have also been a number of key exfiltration bugs in these devices over the years.

We didn’t see network-connected HSMs until around 1999, but even then, these devices were single-tenant, essentially just a network-connected Linux or BSD box containing fundamentally the same hardware as years earlier. While this change did allow a single company to share an HSM across different application workloads, the assumption was still that this HSM was managed by the company in charge of all of these applications.

Why is this important today? Most computing is now done in shared cloud infrastructure, administered by someone else, with your competitor or an attacker potentially on the same hardware as you. This presents a very different set of security considerations and design constraints than the ones these devices were originally built for. You are now exposed to the risks of the physical and logical administrators of these Cloud HSMs, the services they depend on, and the other tenants of the Cloud HSM.

Consider that the compute operator can usually access the handle the application uses to talk to the HSM, and likely the secret used to authenticate to the HSM as well, meaning they, or an attacker, could potentially use that handle or secret to sign or decrypt data as they wish. You might find that an acceptable risk, but did you know some HSMs allow the administrator to blindly add users as operators of the “virtual HSMs” within them? Yup, they do.

What about when keys are stored in a KMS and the key policy dictates that the key be managed by an HSM? If the HSM hardware attests that the key is stored in the HSM, and that attestation is verified, the threat profile is nearly the same as the one we just discussed. In some cases it could be argued it is better: access to the HSM can be governed by traditional user and service RBAC controls and rate limiting, and keys can be replicated to many other HSMs without any administrative burden for you. That protects you from a common disaster recovery scenario while normalizing the management of these devices so it fits into your normal operational practices, which hopefully are well managed and monitored.

Regardless of the approach, the bigger question is whether your provider’s operational and security practices are up to your specific threat model. Imagine a Bitcoin wallet worth 100 million dollars. Has your cloud provider proportionally invested enough into controls and tests around their system to prevent a motivated attacker from using your key to sign a transaction that moves all that to another wallet? Probably not.

The fundamental issue is that today’s HSMs were largely designed for a different era, with different security concerns than we typically have today, mainly protecting against physical theft of keys in environments where data centers were effectively closets in dedicated office space. That doesn’t reflect the scale of today’s cloud computing.

It is worth noting that there are a few HSM solutions on the market making efforts to tackle some of these issues, but they still fall short; that is a topic for another post.

In essence, Cloud HSMs are to HSMs what Tenement Farming is to Farming.

That’s not to say there’s no value in these offerings, but as built today, they often fail to deliver the value they are assumed to provide. And if the regulations mandating their use were written before, say, 2010, chances are they’re not delivering the value those regulations had in mind.

So, how should we be protecting keys now?

To be clear, this is not a case against Cloud HSMs; it is an argument for thinking about the threat model and use case you are solving for. For example, consider Storm-0558, where Microsoft appears to have been using the private key material inside the process of their IDP. The attacker was able to cause a memory dump to be created, then gather that dump via another attack vector, and with it the private key. We can take away at least one solid lesson: do not load keys into the process of the applications that rely on them. In this case, the least costly way to have prevented the key theft would have been simply moving the key to another process running in a different user context, behind a very simple API that is easy to defend and at least limits the attacker to a handle, versus what happened here, where the attacker was able to use the key with impunity for years. This approach is the rough equivalent of a workload- or node-specific software HSM, similar in spirit to the original HSMs.
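
As a minimal sketch of that idea (the user name, paths, and sudo-based interface here are all hypothetical; a real deployment would use a small signing daemon or an OS keystore), you could keep the key under a dedicated local user and expose only a narrow signing operation to the application:

# create a dedicated, non-login user that owns the signing key
sudo useradd --system --shell /usr/sbin/nologin signer
sudo mkdir -p /etc/signer
sudo openssl genpkey -algorithm EC -pkeyopt ec_paramgen_curve:P-256 -out /etc/signer/key.pem
sudo chown -R signer:signer /etc/signer && sudo chmod 600 /etc/signer/key.pem

# the application user is only allowed to request signatures, never to read the key,
# e.g. via a tightly scoped sudoers rule such as:
#   appuser ALL=(signer) NOPASSWD: /usr/bin/openssl dgst -sha256 -sign /etc/signer/key.pem *
sudo -u signer openssl dgst -sha256 -sign /etc/signer/key.pem request.bin > request.sig

The specific mechanism matters less than the outcome: the application ends up holding a handle to a signing operation rather than the key itself.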

Another common problem we see in the industry is that solutions like HashiCorp Vault were designed to centralize key management and provide a one-size-fits-all answer to “Where do I keep my secrets?” Architecturally, these solutions look much like a passively encrypted database: if you have sufficient permissions, you can read the key in the clear and then copy it to whatever node or workload needs to use it. This took us from secret sprawl to secret spray, where we push keys out in environment variables and files on production machines that later get dumped into logs and backups, continuously exposing the keys to users who should never have had access and often leaving key remnants all over the place. This is only marginally better than checking keys into dedicated source control repositories.

The problem isn’t limited to these secret-sprawl solutions. Consider that almost every web server’s TLS private key sits in the file system, often with weak ACLs and without any encryption, and is then loaded into memory in the web server’s own process. Similarly, most SSH keys sit in some file, usually with a poor ACL, either in the clear or protected by an easily grindable password, so a malicious actor who gains read access to the file system can walk away with the key; for example, see this incident from last week.

In both of these cases, we would be much better off moving these keys into another user context that is more defensible and constrained.
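
For the SSH case specifically, the platform already ships a simple version of this pattern: an agent process holds the decrypted key and hands applications only a socket. The destination host below is just a placeholder:

# keep the key encrypted at rest (choose a strong passphrase when prompted)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519

# load it into an agent process; consumers are handed only the SSH_AUTH_SOCK socket
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519

# the client authenticates through the agent without ever reading the key file itself
ssh deploy@host.example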

So how did we end up here, with such abysmal practices for managing keys?

While there is seldom a single reason for such neglect, in this case I think one of the largest is the dogmatic “all keys must be kept in HSMs or smart cards.” It is just too easy a get-out-of-jail-free card for a security professional. Instead of thinking about the real risks and operational practices and then designing practical, appropriate strategies to mitigate those threats, those who can afford to check that box do so, and those who cannot simply copy keys around in the clear out of a database.

The reality is we can do a lot better, but as they say, the first step is to accept that you have a problem.

In short, as security professionals we need to avoid dogmatic answers to complex questions and spend the time to look more critically at the risks, constraints, obligations, resources, and real-world scenarios that those we work with are operating within, before we throw generic playbook answers at those coming to us for advice.

Evolving Challenges in Software Security

In 2023, we observed an average month-to-month increase in CVEs of approximately 1.64%, with this rate accelerating as the year progressed. At the same time, several trends emerged that are associated with this increase. These include a heightened focus on supply chain security by governments and commercial entities, intensified regulatory discussions around how to roll out concepts of software liability, and the expanded application of machine learning technologies in software security analysis.

Despite the broad use of open source, the large majority of software is still delivered and consumed in binary form. There are a few reasons for this, but the most obvious is that the sheer size and complexity of code bases, combined with the limited availability of expertise and time within consuming organizations, makes using source code to manage risk impractical.

At the same time, it’s clear this issue is not new. In 1984, Ken Thompson noted in his Turing Award lecture, “No amount of source-level verification or scrutiny will protect you from using untrusted code.” This statement has been partially vindicated recently: intelligent code-analysis agents, although they offer faster ways to produce code, have been found to exert downward pressure on code quality while also reducing developers’ understanding of the code they produce, a bad combination.

To the extent these problems are resolved we can expect the attackers to be using the same tools to more rapidly identify new and more complex attack chains. In essence, it has become an arms race to build and apply these technologies to both offensive and defensive use cases.

It is this reality that led to the creation of DARPA’s Artificial Intelligence Cyber Challenge and its various projects on using AI to both identify and fix security defects at scale.

The saying “In the middle of difficulty lies opportunity” aptly describes the current situation, where numerous security-focused startups claim to offer solutions to our problems. However, the truth is often quite different.

Some of those racing to take advantage of this opportunity are focusing on software supply chain security, particularly software composition analysis. This is largely driven by regulatory pressure to adopt the Software Bill of Materials concept. Yet most tools that generate these documents only examine interpreted code and declared dependencies. As previously mentioned, the majority of code is delivered and consumed in compiled form, leaving customers without enough data to assess the correctness and completeness of these documents. As a result, although these tools may help with compliance, they can inadvertently cause harm by giving a false sense of security.

Other vendors are essentially scaling up traditional source code reviews using large language models (LLMs). But as we’ve discussed, these tools are currently showing signs of reducing code quality and developers’ understanding of their own code. At the same time, given the limited context available to this analysis, these tools produce such a high volume of false positives that triaging the output can turn into a full-time job. This suggests that negative outcomes could ensue over time if we don’t adjust how we apply this technology or see significant improvements in the underlying technology itself.

These efforts all concentrate on software creators, but if we expand the problem domain to include software consumers, we see that outside of cloud environments, where companies like Wiz and Aqua Security provide vulnerability assessments at scale, there are hardly any resources helping software consumers make informed decisions about the risks posed by the software they use. A big part of this is the sheer amount of noise even these products produce, combined with the lack of actionability of that data for the consumer. With that said, these are tractable problems if we choose to invest in new solutions rather than apply the same old approaches.

As we look toward the next decade, it is clear that software security is at a pivotal point, and navigating it goes beyond just technology; it requires a change in mindset towards more holistic security strategies that consider both technical and human factors. The next few years will be critical as we see whether the industry can adapt to these challenges.

Echoes of the Past and Their Impact on Security Today

When I was a boy, my parents often made me read books they thought were important. One of these was “The Republic” by Plato, written around 380 BC. After reading each book, they’d ask me to talk about what I learned. Reading this one, I realized that politics haven’t changed much over time and that people always seem to believe their group should be the ones making the big decisions. This was the first time I truly understood the saying “History doesn’t repeat itself, but it often rhymes.” As someone who works in security, I think it’s important we all remember this. For example, these days, there’s a huge focus on Supply Chain Security in software, almost like it’s a brand-new idea. But if we look back to 1984, Ken Thompson talked about this very concept in his Turing Award lecture where he said, “No amount of source-level verification or scrutiny will protect you from using untrusted code.”

This is a common thread in information security in general. Take, for example, the original forged-message attack on RSA known as the Bleichenbacher Oracle attack, published at the CRYPTO ’98 conference; nearly two decades later we saw the Return Of Bleichenbacher’s Oracle Threat. Or the recent key recovery attack on SIDH, one of the NIST PQC candidates, which was found to be vulnerable to an attack based on a “glue-and-split” theorem developed in 1997!

While there is certainly an element of human nature involved here, there are also contributing factors, like the sheer amount of knowledge that we as a society have amassed. One of the exciting things about large language models and AI more generally is that these techniques have the potential to harness that entire body of knowledge, and to do so with far fewer mistakes, enabling us to advance even faster.

With that said, there is a larger problem here: especially as security practitioners, we often frame our problem wrong. Back in 1998, when Dan Geer was at CertCo (I worked at a competitor called ValiCert back then), he wrote an excellent essay, “Risk Management is Where the Money Is.” In it, he argued that the security industry as it was would be transformed into a risk management industry, something that has certainly happened. He also eloquently framed how customers look at risk-reward trade-offs, how the internet would evolve into a data center (e.g., the Cloud as we know it today), and more.

The reality is there is a lot to learn from our predecessors; by understanding historical patterns and better applying the lessons of the past, we can better prepare for and address the security issues we face today.

Challenges in Digital Content Authentication and the Persistent Battle Against Fakes

Efforts have been made for years to detect modified content by enabling content-creation devices, such as cameras, to digitally sign or watermark the content they produce. Significant efforts in this area include the Content Authenticity Initiative and the Coalition for Content Provenance and Authenticity. However, these initiatives face numerous issues, including privacy concerns and fundamental flaws in their operation, as discussed here.

It is important to understand that detecting fakes differs from authenticating originals. This distinction may not be immediately apparent, but it is essential to realize that without 100% adoption of content authentication technology (an unachievable goal), the absence of a signature or watermark does not mean a piece of content is fake. To give that some color, just consider that photographers to this day love antique Leica cameras; despite modern alternatives, these are still often their go-to cameras.

Moreover, even the presence of a legitimate signature on content does not guarantee its authenticity. If the stakes are high enough, it is certainly possible to extract signing key material from an authentic device and use it to sign AI-generated content. For instance, a foreign actor attempting to influence an election may find the investment of time and money to extract the key from a legitimate device worthwhile. The history of DVD CSS demonstrates how easy it is for these keys to be extracted from devices and how even just being able to watch a movie on your favorite device can provide enough motivation for an attacker to extract keys. Once extracted, you cannot unring this bell.

This has not stopped researchers from developing alternative authenticity schemes. For example, Google recently published a new scheme they call SynthID. That said, this approach faces the same fundamental problem: authenticating content produced by trustworthy generative AI systems isn’t the same thing as detecting fakes.

It may also be interesting to note that the problem of authenticating digital content isn’t limited to generative AI. For example, the Costco virtual member card uses a server-generated QR code that rotates periodically to limit the risk of that QR code being shared via screenshots.

This does not mean that the approach of signing or watermarking content to make it authenticatable lacks value; rather, it underscores the need to recognize that detecting fakes is not the same as authenticating genuine content — and even then we must temper the faith we put in those claims.

Another use case for digital signatures and watermarking techniques involves their utility in combating the use of generative AI to create realistic-looking fake driver’s licenses and generative AI videos capable of bypassing liveness tests. There have also been instances of generative AI being used in real-time to impersonate executives in video conferences, leading to significant financial losses.

Mobile phones, such as iPhones and Android devices, offer features that help remote servers authenticate the applications they communicate with. While not foolproof, assuming a hardened and unmodified mobile device, these features provide a reasonable level of protection against specific attacks. However, if a device is rooted at the kernel level, or physically altered, these protective measures become ineffective. For instance, attaching an external, virtual camera could allow an attacker to input their AI-generated content without the application detecting the anomaly.

There have also been efforts to extend similar capabilities to browsers, enabling modern web applications to benefit from them. Putting aside the risks of abuse of these capabilities to make a more closed web, the challenge here, at least in these use cases, is that browsers are used on a wider range of devices than just mobile phones, including desktops, which vary greatly in configuration. A single driver update by an attacker could enable AI-generated content sources to be transmitted to the application undetected.

This does not bode well for the future of remote identification on the web, as these problems are largely intractable. In the near term, the best option that exists is to force users from the web to mobile applications where the server captures and authenticates the application, but even this should be limited to lower-value use cases because it too is bypassable by a motivated attacker.

In the longer term, it seems this will fuel the fire for governments to become de facto authentication service providers, a role they have demonstrated they are ineffective at. Beyond that, if these solutions do become common, we can certainly expect their use to be mandated in cases that create long-lasting privacy problems for our children and grandchildren.

UPDATE: A SecurityWeek article came out today that has some interesting figures on this topic.

UPDATE: Another SecurityWeek article on this came out today.

Gov ID: If at First You Don’t Succeed, Try, Try Again

In the eIDAS 2.0 framework, the identity wallet is central to its expanded scope, mirroring early European government efforts at smart card-based national identity cards as well as subsequent identity wallet attempts. These efforts saw limited adoption, except for a few cases such as the Estonian national identity card,  the Swedish e-identification, and the Dutch eID schemes. It seems that this part of eIDAS 2.0 is an effort to blend the best aspects of these projects with elements of Web3 in an attempt to achieve a uniform solution.

A significant shift from these past identity wallet efforts is the government’s role in identity verification, reminiscent of the earlier smart card national ID initiatives. This approach diverges from the prior identity wallet models, where external entities such as banks, telecoms, and commercial identity verification companies were responsible for verification. This combination potentially helps pave the way for holistic public sector adoption, similar to what was seen with the success of Estonia’s national ID project, just on a much larger scale.

With that said it is important to remember that the majority of past efforts have struggled to achieve broad adoption. For example, the GOV.UK Verify platform encountered substantial usability issues, leading to resistance and eventually discontinued use by organizations that were mandated to use it. While the software-based nature of identity wallets may reduce deployment costs relative to smart cards, and government mandates could kick-start some level of adoption, the challenge of achieving widespread acceptance does not go away.

As it stands, it does seem that European CAs are betting on this to bootstrap a larger market for themselves. However, in a system as described above, this raises questions about the broader value and future role of third-party trust providers especially in a world where HTTPS on the web is protected with domain-validated certificates that these CAs have largely ignored or resisted.

This brings us to the contentious issue of the eIDAS 2.0 framework’s push for Qualified Web Authentication Certificates (QWACs) and the enforced support by browsers. While it is tempting to look at these two parts of the effort in isolation, it is important to remember that regulations like these are the product of horse trading, so it is not surprising to see how clumsily this has all progressed.

As an aside, if you have not seen it, there was an interesting talk at the Chaos Computer Club last month about how badly these identity schemes have been executed that is worth watching. Only time will tell how effectively eIDAS 2.0 navigates these challenges and whether it can achieve the broad adoption that has eluded past initiatives.

Rethinking How We Assess Risk in the Software We Rely On

Despite today’s widespread use of open-source software, most software is still delivered in binary form. This includes everything from the foundational firmware of our computers to the applications we use for work, extending all the way to the containers running our server software in the cloud.

A significant challenge arises because, even when the source code of the software is available, reproducing the exact binary from it is often impossible. Consequently, companies and users are essentially operating on blind faith regarding any qualitative or quantitative assurances received from software suppliers. This stark reality played a critical role in the rapid and broad spread of the SolarWinds incident across the industry.

The SolarWinds Wake-Up Call

The SolarWinds attack underscored the risks inherent in placing our trust in software systems. In this incident, attackers infiltrated build systems, embedding malware into the legitimate SolarWinds software. Customers updating to the latest software version unwittingly became victims in this attack chain. It’s crucial to acknowledge that targeting a software supply chain for widespread distribution is not a new tactic. Ken Thompson, in his 1984 Turing Award Lecture, famously stated, “No amount of source-level verification or scrutiny will protect you from using untrusted code.” Regrettably, our approaches to this challenge haven’t significantly evolved since then.

Progress in the domain of supply chain security was initially slow. In 1996, Microsoft began promoting the concept of code signing with its Authenticode support, allowing customers to verify that their software hadn’t been altered post-distribution. Subsequently, the open-source movement gained traction, particularly following the release of Netscape Navigator’s source code. Over the next two decades, the adoption of open source, and to a lesser extent, code signing increased. The use of interpreted languages aided in understanding software operations, but as software grew in size and complexity, the demand for software engineers began to outstrip the supply. The adage “Given enough eyeballs, all bugs are shallow” suggests that greater openness can enhance security, yet the industry has struggled to develop a talent pool and incentive models robust enough to leverage source code availability effectively.

Before the SolarWinds incident, the industry, apart from some security engineers advocating for practices like reproducible builds, memory-safe languages, and interpreted languages, largely overlooked the topic of supply chain security. Notable initiatives like Google’s work on Binary Transparency, which predates SolarWinds, began to create an environment for broader adoption of code signing-like technologies through efforts like the Go checksum database (SumDB), Sigstore, and Android’s Binary Transparency (each of which I had the opportunity to contribute to). However, even these solutions don’t fully address the challenge of understanding the issues within a binary, a problem that remains at the forefront of security.

The industry’s response to SolarWinds also included embracing the concept of Software Bill of Materials (SBOM). These artifacts, envisioned to be produced by the build system, document the, often third-party, components used in software. However, this approach faces challenges, such as the possibility of attackers manipulating SBOMs if they compromise the build system.

The complexity of compiled software adds another layer of difficulty. Each compiled dependency has its own dependencies, not all of which are publicly declared, as is the case with static dependencies. When software is compiled, only portions of the dependencies that are used get included, potentially incorporating multiple versions of a single dependency into the final binary. This complexity makes simple statements about software components, like “I use OpenSSL 1.0,” inaccurate for even moderately complex code. Moreover, the information derived from SBOMs is often not actionable. Without access to all sources or the ability to build binaries independently, users are left with CVE lists that provide more noise than actionable insight.

To make things worse, compilers, through build optimizations, can even remove security fixes that developers carefully put in to mitigate known issues, for example, code that clears memory to keep cryptographic keys and passwords from getting paged to disk.

The Critical Role of Binary Analysis

If all we have is a binary, the only way to understand the risks it represents is to analyze it in the same way an attacker would. However, doing this at scale and making the analysis actionable is challenging. Recent advancements in machine learning and language development are key to addressing this challenge.

Currently, tools that operate on binaries alone fall into two categories. The first are solutions akin to 1990s antivirus programs – matching binaries to known issues. The second category helps skilled professionals reverse engineer the binary’s contents more quickly.

Both categories have struggled to keep pace with the rapid changes in software over the past few decades. A new category of tools is emerging, led by companies like Binarly, which I advise. Binarly’s approach to automated binary analysis began with key goals such as achieving processor architecture independence and language independence. This enables the analysis of binaries across different architectures without duplicating threat intelligence and identifying insecure patterns stemming from ported code or common insecure Stack Overflow examples. Identifying static dependencies and which parts of them are used in a binary is both challenging and crucial for understanding the security issues that lie beneath the surface.

Their approach is remarkable in its ability to detect “known unknowns,” enabling the identification of classes of security vulnerabilities within a binary alone. Furthermore, through symbolic execution, they can perform reachability analysis, ensuring that flagged issues are not just theoretical but can potentially be exploited by attackers.

Though their approach is not firmware-specific, firmware is a great example of the problems that come from binary-only distributions and customers’ reliance on blind faith that their vendors are making the right security investments. It is this unique approach to binary analysis that has enabled them to file and report more CVEs in the last two years than had ever been reported before.

Binary analysis of this kind is crucial as it scrutinizes software in its final, executable form—the form in which attackers interact with it.

Conclusion

The lesson from the SolarWinds attack is clear: no build system-based approach to articulate dependencies is entirely secure. Ken Thompson’s 1984 assertion about the limitations of trusting any code you didn’t produce yourself remains relevant. In a world where software vulnerabilities have extensive and far-reaching impacts, binary analysis is indispensable. Binarly’s approach represents a paradigm shift in how we secure software, offering a more robust and comprehensive solution in our increasingly connected world.

Effortless Certificate Lifecycle Management for S/MIME

In September 2023, the S/MIME Baseline Requirements (BRs) officially took effect for Certificate Authorities (CAs) issuing S/MIME certificates (for more details, visit the CA/Browser Forum S/MIME BRs).

The definition of these BRs served two main purposes. Firstly, they established a standard profile for CAs to follow when issuing S/MIME certificates. Secondly, they detailed the necessary steps for validating each certificate, ensuring a consistent level of validation was performed by each CA.

One of the new validation methods introduced permits mail server operators to verify a user’s control over their mailbox. Considering that these services have ownership and control over the email addresses they host, it seems only logical for them to perform this verification on behalf of their users, since they could bypass any individual mailbox control challenge anyway. This approach resembles the HTTP-01 validation used in ACME (RFC 8555), where the web server effectively ‘stands in’ for the user, just as a website does for its domain.

Another new validation method involves delegating the verification of email addresses through domain control, using any of the approved TLS domain control methods. Though all of the domain control methods supported for TLS certificates are allowed, it’s easiest to think of ACME’s DNS-01 method here. Again, the idea is straightforward: if someone can modify a domain’s TXT records, they can also change MX records or other DNS settings, so giving them this authority suggests they should reasonably be able to handle certificate issuance.
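
For intuition, an ACME DNS-01 proof is just a CA-supplied value published under a well-known label; the token below is a placeholder:

_acme-challenge.example.com. 300 IN TXT "CA-supplied-validation-token"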

Note: If you have concerns about these two realities, there are a few steps you can take. First, ensure that you trust everyone who administers your DNS and that it is securely locked down. Second, use CAA records, discussed next, to restrict which CAs can issue for your domains.

To control the issuance of S/MIME certificates and prevent unauthorized issuance, the Certification Authority Authorization (CAA) record can be used. Originally developed for TLS, it has recently been enhanced to cover S/MIME (read more about CAA and S/MIME).

Here’s how you can create a CAA record for S/MIME: Suppose an organization, let’s call it ‘ExampleCo’, decides to permit only a specific CA, ‘ExampleCA’, to issue S/MIME certificates for its domain ‘example.com’. The CAA record in their DNS would look like this:

example.com. IN CAA 0 issuemail "ExampleCA.com"

This configuration ensures that only ‘ExampleCA.com’ can issue S/MIME certificates for ‘example.com’, significantly bolstering the organization’s digital security.

If you wanted to stop any CA from issuing an S/MIME certificate, you would create a record that looks like this:

example.com. IN CAA 0 issuemail ";"

Another addition in this round of changes is the account identifier in the latest CAA specification. This feature allows a CA to link the authorization to issue certificates to a specific account within their system. For instance:

example.com. IN CAA 0 issue "ca-example.com; account=12345"

This means that ‘ca-example.com’ can issue certificates for ‘example.com’, but only under the account number 12345.

This opens up interesting possibilities, such as delegating certificate management for S/MIME or CDNs to third parties. Imagine a scenario where a browser plugin is produced and managed by a SaaS provider on behalf of the organization deploying S/MIME. This plugin takes care of the initial enrollment, certificate lifecycle management, and the S/MIME implementation itself, acting as a sort of S/MIME CDN.

This new capability, merging third-party delegation with specific account control, was not feasible until now. It represents a new way for organizations to outsource the acquisition and management of S/MIME certificates, simplifying processes for both end-users and the organizations themselves.

To the best of my knowledge, no one is using this approach yet, and although there is no requirement yet to enforce CAA for S/MIME, one is in the works. Regardless, the RFC has been standardized for a few months now, but I’d bet that CAs that were issuing S/MIME certificates before this new CAA RFC was released are not respecting the CAA record yet, even though they should be. If you are a security researcher with spare time, that’s probably a worthwhile area to poke around 😉