Category Archives: Uncategorized

Timeless Farm Wisdom

My father grew up on a small farm in eastern Washington. They say you can take the boy out of the farm, but you can’t take the farm out of the boy. As a kid, I was always hearing farm life sayings from my grandfather and father. To this day, I think they are some of the best pieces of advice I have ever received. Here are a few:

  • Don’t wrestle with pigs. You both get filthy and the pig likes it: Avoid pointless arguments with people who thrive on conflict. It only wastes your time and energy.
  • Don’t count your chickens before they hatch: Don’t assume a deal is closed until the contract is signed.
  • Make hay while the sun shines: Capitalize on opportunities while they are available.
  • Don’t put all your eggs in one basket: Diversify your investments and strategies to mitigate risk.
  • The early bird catches the worm: Being proactive and early can lead to the best opportunities.
  • Every cloud has a silver lining: Look for opportunities in every challenge or setback.
  • Don’t cry over spilled milk: Learn from mistakes, but don’t dwell on them; move forward.
  • When the rooster crows, it’s time to get up: Start your day early to maximize productivity.
  • You can’t make an omelet without breaking a few eggs: Achieving success often requires taking some risks and making sacrifices.
  • You reap what you sow: Your results are a reflection of the effort and quality you put in.
  • Don’t look a gift horse in the mouth: Be grateful for opportunities and advantages you receive.
  • When in doubt, let your horse do the thinking: Trust the expertise of your team when you’re unsure.
  • Never approach a bull from the front, a horse from the rear, or a fool from any direction: Avoid unnecessary risks and confrontations in business.
  • Don’t close the barn door after the horse has bolted: Take preventive measures rather than waiting until you need to react.
  • A stitch in time saves nine: Address small issues before they become large problems.
  • The squeaky wheel gets the grease: The most noticeable or pressing issues are the ones that receive the most attention and resources first.

Groundhog Day: Learning from Past Key and Credential Compromises

As they say, “Those who cannot remember the past are condemned to repeat it.” Looking back at the last decade, it seems we are caught in our own little Groundhog Day, reexperiencing the consequences of weak authentication and poor key management over and over.

It is not that we don’t know how to mitigate these issues; it’s more that, as organizations, we tend to avoid making uncomfortable changes. For example, the recent spate of incidents involving Snowflake customers being compromised appears to be related to Snowflake not mandating multi-factor authentication or a strong hardware-backed authenticator like Passkeys for its customers.

At the same time, not all of these key and credential thefts are related to users; many involve API keys, signing keys used by services, or code signing keys: keys we have been failing to manage in a way commensurate with the risks they represent to our systems and customers.

Timeline of Notable Incidents

Table of Incidents

Incident | Type of Compromise | Description
Stuxnet (2010) | Code Signing Keys | Utilized stolen digital certificates from Realtek and JMicron to authenticate its code, making it appear legitimate.
Adobe (2012) | Code Signing Keys | Attackers compromised a build server and signed malware with valid Adobe certificates.
Target (2013) | Developer Credentials | Network credentials stolen from a third-party HVAC contractor via malware on an employee’s home computer.
Neiman Marcus (2013) | Developer Credentials | Malware installed on systems, possibly through compromised credentials or devices from third-party vendors.
Home Depot (2014) | Developer Credentials | Malware infected point-of-sale systems, initiated by stolen vendor credentials, potentially compromised via phishing.
Equifax (2017) | API Keys | Exploitation of a vulnerability in Equifax’s website application exposed API keys used for authentication.
CCleaner (2017) | Code Signing Keys | Attackers inserted malware into CCleaner’s build process, distributing malicious versions of the software.
Ticketmaster (2018) | Developer Credentials | Malware from a third-party customer support product led to the compromise of payment information.
ASUS Live Update (2018) | Code Signing Keys | Attackers gained access to ASUS’s code signing keys and distributed malware through the update mechanism.
Google Cloud Key Leak (2019) | API Keys | Internal tool inadvertently exposed customer API keys on the internet, leading to potential data exposure.
Facebook Instagram API Breach (2019) | API Keys | Plaintext passwords were accessible to employees due to misuse of an internal API.
DoorDash Data Breach (2019) | API Keys | Unauthorized access to systems included the compromise of API keys, exposing sensitive customer data.
Mission Lane (2020) | Developer Credentials | Malware introduced into the development environment from a developer’s compromised personal device.
BigBasket (2020) | Developer Credentials | Data breach with over 20 million customer records exposed, suspected initial access through compromised developer credentials.
Twitter API Key Compromise (2020) | API Keys | Attackers gained access to internal systems, possibly compromising API keys or administrative credentials.
Amazon AWS Key Exposure (2021) | API Keys | Misconfigured AWS S3 bucket led to the exposure of API keys and sensitive customer data.
Nvidia (2022) | Code Signing Keys | Stolen code signing certificates were used to sign malware, making it appear legitimate.
Microsoft Storm 0558 (2023) | Signing Keys | Attackers gained access to email accounts of government agencies and other organizations by forging authentication tokens using a stolen Microsoft signing key.

When we look at these incidents, we see a few common themes, including:

  1. Repetitive Failures in Security Practices:
    • Despite awareness of the issues, organizations continue to face the same security breaches repeatedly. This suggests a reluctance or failure to implement effective long-term solutions.
  2. Resistance to Necessary Changes:
    • Organizations often avoid making uncomfortable but necessary changes, such as enforcing multi-factor authentication (MFA) or adopting stronger authentication methods like Passkeys. This resistance contributes to ongoing vulnerabilities and compromises.
  3. Diverse Sources of Compromise:
    • Key and credential thefts are not solely due to user actions but also involve API keys, service signing keys, and code signing keys. This highlights the need for comprehensive key management and protection strategies that cover all types of keys.
  4. Broader Key Management Practices:
    • Moving from shared secrets to asymmetric credentials for workloads and machines can significantly enhance security. Asymmetric credentials, especially those backed by hardware, are harder to steal and misuse compared to shared secrets. Techniques like attestation, which is the device equivalent of MFA, can provide additional security by verifying the authenticity of devices and systems.
  5. Third-Party Risks:
    • Even if we deploy the right technologies in our own environment, we often ignore the security practices of our upstream providers. Several incidents involved third-party vendors or services (e.g., Target, Home Depot), highlighting the critical need for comprehensive third-party risk management. Organizations must ensure that their vendors adhere to strong security practices and protocols to mitigate potential risks.
  6. Mismanagement of API and Signing Keys:
    • Incidents such as the Google Cloud Key Leak and Amazon AWS Key Exposure point to misconfigurations and accidental exposures of API keys. Proper configuration management, continuous monitoring, and strict access controls are essential to prevent these exposures.
  7. Importance of Multi-Factor Authentication (MFA):
    • The absence of MFA has been a contributing factor in several breaches. Organizations need to mandate MFA to enhance security and reduce the risk of credential theft.
  8. Need for Secure Code Signing Practices:
    • The use of stolen code-signing certificates in attacks like Stuxnet and Nvidia underscores the importance of securing code-signing keys. Implementing hardware security modules (HSMs) for key storage and signing can help mitigate these risks.
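To make that last point concrete, here is a minimal sketch of detached signing and verification with OpenSSL. In a real pipeline the private key would live in an HSM (for example, accessed through a PKCS#11 provider) rather than in a file on disk; all file names here are illustrative:

```shell
# Create a toy artifact to sign (stand-in for a real build output).
printf 'example firmware image' > artifact.bin

# Generate a P-256 signing key and extract its public half.
openssl ecparam -name prime256v1 -genkey -noout -out signing.key
openssl pkey -in signing.key -pubout -out signing.pub

# Produce a detached signature over the artifact, then check it.
openssl dgst -sha256 -sign signing.key -out artifact.sig artifact.bin
openssl dgst -sha256 -verify signing.pub -signature artifact.sig artifact.bin
```

The final command prints "Verified OK" when the signature matches; the point of moving the key into an HSM is that only the signing operation, not the key itself, is ever exposed to the build environment.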

Conclusion

Looking back at the past decade of key and credential compromises, it’s clear that we’re stuck in a loop, facing the same issues repeatedly. It’s not about knowing what to do—we know the solutions—but about taking action and making sometimes uncomfortable changes.

Organizations need to step up and embrace multi-factor authentication, adopt strong hardware-backed authenticators like Passkeys, and move workloads from shared secrets to hardware-backed asymmetric credentials for better security. We also can’t overlook the importance of managing third-party risks and ensuring proper configuration and monitoring to protect our API and signing keys.

Breaking free from this cycle means committing to these changes. By learning from past mistakes and taking a proactive approach to key and credential management, we can better protect our systems and customers – It’s time to move forward.

The Rebirth of Network Access Protection with Microsoft’s Zero Trust DNS

The other day Microsoft announced something it calls Zero Trust DNS. At a high level, it is leveraging clients’ underlying name resolution capabilities to establish and enforce security controls below the application.

In design, it is quite similar to what we did in Windows 2008 with Network Access Protection (NAP), a now-deprecated solution that, if released today, would be considered some flavor of “Zero Trust” network access control.

NAP supported several different enforcement approaches for network policy. One of the less secure methods assessed a client’s posture; if found to be non-compliant with organizational requirements, the DHCP server would assign a restricted IP address configuration. The most secure approach relied on certificate-based IPSEC and IPv6. Healthy clients were provisioned with what we called a health certificate. All participating devices would then communicate only with IPSEC-authenticated traffic and drop the rest.

If Zero Trust DNS had been released in 2008, it would have been another enforcement mode for Network Access Protection. It operates very similarly, essentially functioning on the basis that:

  1. The enterprise controls the endpoint and, through group policy, dictates DNS client behavior and network egress policy,
  2. Mutual TLS and DNS over HTTPS (DoH) are used to authenticate clients, and
  3. The DNS server is turned into a policy server for network access.

I was always disappointed to see NAP get deprecated, especially the 802.1X and IPSEC-based enforcement models. Don’t get me wrong, there were many things we should have done differently, but the value of this kind of enforcement and visibility in an enterprise is immense. This is why, years later, the patterns in NAP were reborn in solutions like BeyondCorp and the myriad of “Zero Trust” solutions we see today.

So why might this Zero Trust DNS be interesting? One of the more common concerns I have heard from large enterprises is that the mass adoption of encrypted communications has made it hard for them to manage the security posture of their environment. This is because many of the controls they have historically relied on for security were designed around monitoring cleartext traffic.

This is how we ended up with abominations like Enterprise Transport Security which intentionally weakens the TLS 1.3 protocol to enable attackers—erm, enterprises—to continue decrypting traffic. 

One of the theses of this Zero Trust DNS solution appears to be that by turning DNS into what we used to call a Policy Enforcement Point for the network, enterprises get some of that visibility back. While they do not get cleartext traffic, they do get to reliably control and audit what domain names you resolve. When you combine that with egress network filtering, it has the potential to create a closed loop where an enterprise can have some confidence about where traffic is going and when. While I would not want my ISP to do any of this, I think it’s quite reasonable for an enterprise to do so; it’s their machine, their data, and their traffic. It also has the potential to make lateral movement harder when a compromise takes place and maybe, in some cases, even make exfiltration harder.

Like all solutions that try to deliver network isolation properties, the sticking point comes back to how you create useful policies that reduce your risk but still let work happen as usual. Basing the rules on high-level concepts like DNS names should make this easier than, for example, IPSEC-based isolation models, but it still won’t be trivial to manage; that is true of all network segmentation approaches.

What I appreciate most about this is its potential to reduce an organization’s perceived need to deploy MiTM solutions. From a security protocol design perspective, what is happening here is a trade-off between metadata leakage and confidentiality. The MiTM solutions in use today cause numerous problems; they hinder the adoption of more secure protocol variants and objectively reduce enterprise security in at least one dimension. They also make rolling out new versions of TLS and more secure cryptography that much harder. Therefore, in my opinion, this is likely a good trade-off for some organizations.

To be clear, I do not believe that host-based logging of endpoint communications will lead all enterprises to abandon these MiTM practices. For example, some MiTM use cases focus on the intractable problems of Data Leak Protection, network traffic optimization, or single sign-on and privilege access management through protocol-level manipulation. These solutions clearly require cleartext access, and name-based access control and logging won’t be enough to persuade enterprises that rely on these technologies to move away from MiTM. However, there are some use cases where it might.

So, is this a good change or a bad change? I would say on average it’s probably good, and with some more investment from Microsoft, it could be a reasonable pattern to adopt for more granular network segmentation while giving enterprises more visibility in this increasingly encrypted world without needing to break TLS and other encryption schemes.

How TLS Certificates Can Authenticate DNS TXT Records

Have you found a use case where you think DANE and DNSSEC might be helpful? For example, the discovery of some configuration associated with a domain? A practically useful DNSSEC deployment requires individual domains (e.g., example.com) to adopt DNSSEC and relevant clients to use a fully validating DNSSEC resolver, neither of which has happened yet at any reasonable scale. In the meantime, consider using certificates to sign the values you place in DNS instead.

SXG certificates are, in essence, signing certificates tied to a domain. While you could use a regular TLS certificate, for today’s post let’s assume that SXG certificates are the path you chose.

You can enroll for SXG certificates for free through Google Trust Services. This would allow you to benefit from signed data in DNS that is verifiable and deployable today, not only once DNSSEC gets broad deployment.

Assuming a short certificate chain, since DNS does have size restrictions, this could be done as simply as:

Get an SXG certificate that you can use for signing….

sudo certbot certonly \
  --server https://dv-sxg.acme-v02.api.pki.goog/directory \
  -d yourdomain.com \
  --manual \
  --eab-kid YOUR_EAB_KID \
  --eab-hmac-key YOUR_EAB_HMAC_KEY

Sign your data….

openssl smime -sign -in data.txt -out signeddata.pkcs7 \
  -signer mycert.pem -inkey mykey.pem -certfile cacert.pem \
  -outform PEM -nodetach

Then put the data into a DNS TXT record….
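As a sketch of that step (the record name and TTL are illustrative, and a placeholder blob stands in for the signeddata.pkcs7 produced above so the example is self-contained), you would base64-encode the blob and split it into 255-byte chunks, since each TXT character-string is limited to 255 bytes:

```shell
# Placeholder for the real PKCS#7 signature blob from the previous step.
printf 'placeholder signature blob' > signeddata.pkcs7

# Base64-encode, then split into quoted 255-byte character-strings.
B64=$(base64 -w0 signeddata.pkcs7)
CHUNKS=$(printf '%s' "$B64" | fold -w 255 | sed 's/^/"/;s/$/"/' | tr '\n' ' ')

# Emit a zone-file style TXT record carrying all the chunks.
printf '_sig.yourdomain.com. 300 IN TXT %s\n' "$CHUNKS"
```

A resolver-side consumer would simply concatenate the character-strings back together and base64-decode before verifying.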

After which you could verify it against the Mozilla trust list using OpenSSL…

openssl smime -verify -in signeddata.pkcs7 -CAfile cacert.pem -inform PEM

In practice, due to DNS size constraints, you would likely use a simpler signature format, such as encoding just the signing certificate and the signature in Base64 as a type-length-value representation. With a 256-bit ECC certificate and its signature, this record would total around 1152 bytes, which comfortably fits inside a DNS TXT record thanks to EDNS, which allows messages of up to 4096 bytes.

This does not provide the authoritative denial of existence property that DNSSEC has, which exists to address downgrade attacks; as a result, relying parties could not detect if the record was removed. Nor does it provide security to DNS overall. But if we are just talking about configuration data, it might be a viable approach to consider.

On the other hand, this form of signing a TXT record using a certificate is never going to take down your domain, and DNSSEC adoption cannot say that!

Effortless Certificate Lifecycle Management for S/MIME

In September 2023, the S/MIME Baseline Requirements (BRs) officially became a requirement for Certificate Authorities (CAs) issuing S/MIME certificates (for more details, visit the CA/Browser Forum S/MIME BRs).

The definition of these BRs served two main purposes. Firstly, they established a standard profile for CAs to follow when issuing S/MIME certificates. Secondly, they detailed the necessary steps for validating each certificate, ensuring a consistent level of validation was performed by each CA.

One of the new validation methods introduced permits mail server operators to verify a user’s control over their mailbox. Considering that these services have ownership and control over the email addresses, it seems only logical for them to be able to complete this verification on behalf of their users, since they could bypass any individual mailbox control challenge anyway. This approach resembles the HTTP-01 validation used in ACME (RFC 8555), where the server effectively ‘stands in’ for the user, just as a website does for its domain.

Another new validation method involves delegating the verification of email addresses through domain control, using any of the approved TLS domain control methods. Though all of the domain control methods allowed for TLS certificates are supported, it’s easiest to think of the DNS-01 method in ACME here. Again, the idea is straightforward: if someone can modify a domain’s TXT record, they can also change MX records or other DNS settings. So, giving them this authority suggests they should reasonably be able to handle certificate issuance.

Note: If you have concerns about these two realities, ensure that you trust everyone who administers your DNS, and make sure your DNS is securely locked down.

To control the issuance of S/MIME certificates and prevent unauthorized issuance, the Certification Authority Authorization (CAA) record can be used. Originally developed for TLS, it has recently been enhanced to cover S/MIME (Read more about CAA and S/MIME).

Here’s how you can create a CAA record for S/MIME: Suppose an organization, let’s call it ‘ExampleCo’, decides to permit only a specific CA, ‘ExampleCA’, to issue S/MIME certificates for its domain ‘example.com’. The CAA record in their DNS would look like this:

example.com. IN CAA 0 issuemail "ExampleCA.com"

This configuration ensures that only ‘ExampleCA.com’ can issue S/MIME certificates for ‘example.com’, significantly bolstering the organization’s digital security.

If you wanted to stop any CA from issuing an S/MIME certificate, you would create a record that looks like this:

example.com. IN CAA 0 issuemail ";"

Another addition in this round of changes is the concept of an account identifier in the latest CAA specification. This feature allows a CA to link the authorization to issue certificates to a specific account within their system. For instance:

example.com. IN CAA 0 issue "ca-example.com; account=12345"

This means that ‘ca-example.com’ can issue certificates for ‘example.com’, but only under the account number 12345.

This opens up interesting possibilities, such as delegating certificate management for S/MIME or CDNs to third parties. Imagine a scenario where a browser plugin is produced and managed by a SaaS provider on behalf of the organization deploying S/MIME. This plug-in takes care of the initial enrollment, certificate lifecycle management, and the S/MIME implementation, acting as a sort of S/MIME CDN.

This new capability, merging third-party delegation with specific account control, was not feasible until now. It represents a new way for organizations to outsource the acquisition and management of S/MIME certificates, simplifying processes for both end-users and the organizations themselves.

To the best of my knowledge, no one is using this approach yet, and although there is no requirement yet to enforce CAA for S/MIME, one is in the works. The RFC has been standardized for a few months now, but despite that, I bet that CAs that were issuing S/MIME certificates before this new CAA RFC was released are not respecting the CAA record yet, even though they should be. If you are a security researcher with spare time, that’s probably a worthwhile area to poke around 😉

Raising the Bar: The Urgent Need for Enhanced Firmware Security and Transparency

Firmware forms the foundation of all our security investments. Unfortunately, firmware source code is rarely available to the public, and as a result firmware is one of the least understood (and least secure) classes of software we depend on today.

Compounding this, the hardware industry is known for its lack of transparency and inadequate planning for the entire lifecycle of its products. This lack of planning and transparency makes it hard to defend against and respond to both known and unknown vulnerabilities, especially when the industry often knows about issues for ages, but customers do not.

In today’s world, where automation allows builders, defenders, and attackers to identify zero-day vulnerabilities with just a binary, it has become increasingly important that embargo times for vulnerabilities be as short as possible, allowing for quick defense and, when possible, remediation.

Despite this, organizations like the UEFI Forum are proposing extended disclosure periods, suggesting a 300-day wait from initial reporting to the vendor before notifying customers. During this nearly year-long waiting period, customers are exposed to risks without defense options. The longer the period, the more likely it is that automation enables attackers to identify the issue in parallel, giving them a safe window to exploit the zero-day without detection.

Simply put, this duration seems far too long, considering the ease of proactively catching issues now, especially given the industry’s overall underinvestment in product security. It would be a different case if these organizations had a history of handling issues effectively, but the reality is far from this. Their apparent neglect, demonstrated by unreliable update mechanisms, continuously shipping models with the same issues that have been resolved in other models, and the frequency of industry-wide issues, highlights this reality. More often than in any other industry, we see hardware manufacturers reintroducing previously resolved security issues due to poor security practices and poor management of their complex supply chains. This makes their position highly irresponsible. We must do better. Concealing vulnerabilities like this is no longer viable, if it ever was.

It is possible we will see changes as a result of shifts in software liability and regulatory changes, like those in White House Executive Order 14028. This order demands that organizations responsible for “critical software” comply with long-standing best practices. Although “critical software” lacks a clear definition, firmware’s role in underpinning all security investments suggests it likely falls into this category. The executive order starts with basics like publishing known dependencies, which is useful but insufficient, especially in this segment, given the prevalence of shared reference code and static dependencies that are not expressed as library dependencies. Its language also includes the adoption of formal vulnerability management practices, bug bounties, and more. This and the EU Cyber Resilience Act are all efforts to get these and other vendors to align with long-standing security best practices, like those captured by NIST’s vulnerability management recommendations.

This landscape will likely shift once we see enforcement cases emerge, but customers must insist on higher standards from hardware manufacturers and their suppliers, or nothing will change in the near term.

Words matter in cryptography, or at least they used to

I was listening to Security Cryptography Whatever today, and they were discussing a topic that has been bothering me for a while.

A common theme in post-quantum cryptography is its pairing with classical cryptography. This “belts and suspenders” approach seems sensible as we transition to relatively new ways to authenticate and protect data. We have already seen some of these new post-quantum methods fail, which underscores the importance of agility in these systems.

However, merging two approaches like this introduces complexity, which matters because, as a general rule, complexity is the root of all security issues. Another concern is the labeling of various strategies for doing this as “Hybrid.” This wording makes it challenging to understand what the different approaches are doing and why.

With this background in mind, let’s explore three different “Hybrid” approaches to PQC and classical cryptography. By giving each a unique name and using simple examples, we can see how they differ: Nested Hybrid Signatures, Side-by-Side Hybrid Protocols, and the proposed Merged Hybrid Signatures.

Nested Hybrid Signatures: A box within a box

In this approach, imagine verifying the authenticity of a letter. The nested hybrid signature method is like putting this letter within a secure box, protected by a classical signature scheme like ECDSA. But we don’t stop there. This box is then placed within another, even stronger box, strengthened with a post-quantum signature scheme like Dilithium. This nested structure creates a situation where even if one layer is broken, the inner core remains trustworthy.
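A rough sketch of this nesting with OpenSSL, using a second EC key as a stand-in for Dilithium (a post-quantum-capable OpenSSL build is not assumed, and the file names are illustrative):

```shell
# The letter we want to protect.
printf 'the letter to protect' > letter.txt

# Two keys: "inner" plays the classical role, "outer" stands in for
# the post-quantum scheme.
openssl ecparam -name prime256v1 -genkey -noout -out inner.key
openssl ecparam -name prime256v1 -genkey -noout -out outer.key

# Inner signature over the letter itself.
openssl dgst -sha256 -sign inner.key -out inner.sig letter.txt

# The outer signature covers the letter plus the inner signature, so
# stripping or swapping the inner layer invalidates the outer box.
cat letter.txt inner.sig > wrapped.bin
openssl dgst -sha256 -sign outer.key -out outer.sig wrapped.bin
```

Verification runs in the opposite order: check the outer signature over the wrapped blob first, then the inner signature over the letter.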

Side-by-Side Hybrid Protocols: Simultaneous and Independent

In this method, imagine two separate safes, each protecting a part of your secret message. One safe has a classical lock, while the other has a modern, quantum-resistant lock. To see the entire message, one must unlock both safes, as the full message remains trustable unless both safes are broken into. 
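One way to picture the “both safes” property is a simple concatenate-and-hash combiner. This is a sketch only: real protocols use a proper KDF, and the two random values here stand in for the classically- and post-quantum-established shared secrets.

```shell
# Stand-ins for the two independently negotiated shared secrets.
SS_CLASSICAL=$(openssl rand -hex 32)
SS_PQ=$(openssl rand -hex 32)

# Derive the final key from both inputs; an attacker must recover
# both secrets to compute it.
FINAL_KEY=$(printf '%s%s' "$SS_CLASSICAL" "$SS_PQ" \
  | openssl dgst -sha256 -r | awk '{print $1}')
echo "$FINAL_KEY"
```

Because the hash mixes both inputs, breaking only the classical exchange (or only the post-quantum one) reveals nothing about the derived key.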

Merged Hybrid Signatures: Holding onto the past

This method tries to mix the elements of classical and post-quantum signature schemes into a single, unified signature format. The goal is to enable minimal changes to existing systems by maintaining a single field that combines a classical signature with a post-quantum signature. This has several issues and seems misguided to me. Firstly, this mixing of PQC and classical cryptography is a temporary problem; eventually, we should have enough confidence that post-quantum cryptography alone is sufficient, at which point this complexity won’t be needed. It also messes with the current assumptions associated with existing signatures, and while it’s not clear what the issues may be, keeping each of the signatures isolated seems less risky. To stick with the lock analogy, it’s somewhat like designing a safe with two different locks on the same door, which must be unlocked at the same time with the same key.

Conclusion

While it’s tough to find the right words to describe new developments as they happen, we can do better than using the same term for different approaches. This will make it easier for everyone to understand what’s being discussed without having to study each protocol in detail.

Using Caddy with Google Trust Services

Caddy is a powerful and easy-to-use web server that can be configured to use a variety of certificate authorities (CAs) to issue SSL/TLS certificates. One popular CA is Google Trust Services, which offers an ACME endpoint that is already compatible with Caddy because it implements the industry-standard ACME protocol (RFC 8555).

This means that Caddy can automatically handle the process of certificate issuance and renewal with Google Trust Services, once the External Account Binding (EAB) credentials required have been configured. 

How do I use it? 

Using global options

To use the Google Trust Services ACME endpoint, you will need an API key so you can use a feature of ACME called External Account Binding. This enables us to associate your certificate requests with your Google Cloud account and allows us to impose rate limits on a per-customer basis. You can easily get an API key using the following commands:

$ gcloud config set project <project ID>
$ gcloud projects add-iam-policy-binding <project ID> \
    --member=user:[email protected] \
    --role=roles/publicca.externalAccountKeyCreator
# Request a key:
$ gcloud alpha publicca external-account-keys create

You will need to add this API key and specify the Google Trust Services ACME directory endpoint along with your email address in your Caddyfile:

{
    acme_ca https://dv.acme-v02.api.pki.goog/directory
    email [email protected]
    acme_eab {
        key_id  <key_id>
        mac_key <mac_key>
    }
}

It is important to remember that when you use this configuration, Google Trust Services is explicitly used as the only CA.

If you want to use multiple CAs for redundancy, which is recommended, the configuration would look something like this:

{
    cert_issuer acme https://dv.acme-v02.api.pki.goog/directory {
        eab <key_id> <key>
    }
    cert_issuer acme
}

In this example, Google Trust Services will be tried first, and if there is a problem, Caddy will fall back to Let’s Encrypt.

It is also worth noting that the Google Trust Services EAB key is one-time use only. This means that once Caddy has created your ACME account, the EAB values can be safely removed from your configuration.

Using the tls directive

If you want to use Google Trust Services for only some of your sites, you can use the tls directive in your Caddyfile like you’re used to:

tls [email protected] {
    ca  https://dv.acme-v02.api.pki.goog/directory
    eab <key_id> <mac_key>
}

The email address in this example identifies the ACME account to be used when doing enrollment.

In conclusion, using Caddy with Google Trust Services is a simple and secure way to issue and manage SSL/TLS certificates for your websites. With the easy-to-use Caddyfile configuration, you can quickly configure your server to use Google Trust Services for all of your sites or just a select few. With Google Trust Services, you can trust that your websites will be secure and your visitors’ data will be protected.

Top #5 CAs by issuance volume as of 09/19/22

As Google Trust Services has been available for a few weeks I thought it would be interesting to look at where it stands relative to other CAs based on its issuance volume.

# | CA Owner | Certificates (All) | Certificates (Unexpired) | Pre-Certificates (All) | Pre-Certificates (Unexpired)
1 | Internet Security Research Group | 2,972,072,131 | 270,986,317 | 2,689,923,862 | 233,570,311
2 | DigiCert | 115,808,406 | 7,603,151 | 443,129,508 | 138,144,685
3 | Sectigo | 580,982,481 | 45,262,868 | 517,794,477 | 46,204,659
4 | Google Trust Services LLC | 13,909,548 | 467,650 | 120,070,013 | 21,232,287
5 | Microsoft Corporation | 17,567 | 170 | 32,448,453 | 16,959,805

For more on this methodology of counting see this post.

Information system security and how little things have changed

When I was a boy my father had me read Plato’s Republic – he wanted me to give an oral report on what the key points of the book were and what my personal takeaways were after reading it.

The first question was easy to answer from the dust jacket, or maybe CliffsNotes (for those of you who have not read the book, it is an exploration of the ideas of justice and the ideal government).

With that said, I knew from experience that those personal takeaways are buried in the nuance and no shortcut would satisfy him, so off to read I went. What were those takeaways? According to him, what I said was:

  1. The nature of people has not changed much,
  2. The problems we have in government have not changed much.

Why do I bring this up in the context of security? Unfortunately, it is because I do not think things have changed much in security either! I’ll give two examples that stand out to me:

Every program and every privileged user of the system should operate using the least amount of privilege necessary to complete the job.

— Jerome Saltzer, 1974, Communications of the ACM

The moral is obvious. You can’t trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code.

— Ken Thompson, 1984, Reflections on Trusting Trust

The first quote is the seminal reference for the term “least privilege” – a concept we still struggle to see deployed nearly 50 years later. The term is old enough now that marketers have latched onto it, so when you speak to many enterprises they talk about it in the scope of group management and not the more fundamental design paradigm it actually represents.
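To show what I mean by a design paradigm rather than group management, least privilege can be illustrated in a few lines of code: instead of handing a component the whole object, hand it only the one capability it needs. (The class and method names below are hypothetical, purely for illustration.)

```python
class SecretStore:
    """Holds secrets; full access allows both reading and writing."""

    def __init__(self):
        self._data = {}

    def write(self, key, value):
        self._data[key] = value

    def read(self, key):
        return self._data[key]


class ReadOnlyView:
    """A capability granting only the ability to read, nothing more."""

    def __init__(self, store):
        self._read = store.read  # capture just the one method we need

    def read(self, key):
        return self._read(key)


store = SecretStore()
store.write("api_token", "s3cr3t")

# The reporting component only needs to read, so that is all it gets.
reporting_view = ReadOnlyView(store)
print(reporting_view.read("api_token"))  # prints "s3cr3t"
print(hasattr(reporting_view, "write"))  # prints "False"
```

The point of the sketch is that the reporting component simply has no write capability to abuse or leak, regardless of how group membership is managed around it.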

To put this concept in the context of the network: in the 90s we talked about how firewalls, however necessary, were a bit of an antipattern, since they represented “the hard candy shell” containing the “soft gooey sweet stuff” the attacker wants to get at, and that as a result it was better to design security into each endpoint.

A decade later we were talking about network-level enforcement via “Network Admission Control” at the switch; later yet, via DirectAccess and Network Access Protection, we were pushing those same decisions down as close to the end device as we could, in some cases making each of those endpoints capable of enforcing these access requests.

Today we call this pattern ZeroTrust networking; a leading example of it is BeyondCorp. But again, marketers have latched onto ZeroTrust, and as a result it seems almost every enterprise product I hear about these days claims to offer some sort of ZeroTrust story, yet few objectively meet the criteria I would define for such a lofty term.

Similarly, if we look at the second quote, all we have to do is look at the recent SolarWinds debacle to realize that almost nothing has changed since Ken Thompson wrote that paper. We also have dozens of examples of compromised keys being used to attack the software supply chain, and of package repositories and open source dependencies being used as attack vectors. Despite knowing how significant these issues can be for nearly 40 years, we have made very little progress in mitigating them.

As they say, there is nothing new under the sun, and this appears to be especially true with security. If so, why is this the case? How is it we have made so little progress on these fundamental problems as an industry?

Unfortunately, I think it boils down to the fact that customers don’t care until it is too late, and this makes it hard for the industry to justify the kinds of fundamental investments necessary to protect the next generation from these decades-old problems.

How do we improve the state of affairs here? That’s really the question, and one I don’t have a good answer to.