Category Archives: Standards

The Evolution of Security Thinking

In design, we sometimes refer to the strategies used during the design process as design thinking. Applying these strategies helps ensure you are solving the right problems and doing so in a repeatable way. Much of the massive improvement in the usability of software and devices over the past two decades can be attributed to these strategies.

If we look at how thinking about building secure systems has evolved over the last two decades, we can see that we have developed similar strategies to help ensure positive security outcomes.

If we go back to the late 80s, we see systems that were largely designed for a world of honest actors. There was little real business happening on the Internet at the time, and the hard problems to be solved were all about enabling a global network of interconnected systems, so that is where efforts were put. Those efforts led us to the Internet of today, but they also gave us systems vulnerable to trivial attacks such as the Morris Worm.

By the 90s the modern “security industry” was born, and products designed to protect these insecure systems from the Internet started to come to market. One of the most impactful examples of this was the TIS Firewall Toolkit; other examples of this way of thinking include antivirus products and other agents that promised to keep our applications and operating systems safe from “attackers”.

By the late 90s and early 2000s, it was clear that these agents were never going to be effective at keeping the bad guys out and that we needed to be building systems that were Secure by Default, Secure by Design and Private by Design. This shift in thinking meant that solution developers needed to develop their own strategies and tooling to ensure systems could be built to be inherently resilient to the risks they were exposed to. The concept of Threat Modeling is probably the most concrete example of this; believe it or not, this basic concept was essentially absent from software development up until that point.

By this time the technical debt in deployed systems was so great that we spent most of a decade just trying to rectify the mistakes of the past. Windows XP SP2 and the Microsoft security stand-down are probably the most visible examples of the industry making this shift; they also led to the Security Development Lifecycle that largely informs how we, as an industry, approach building secure systems today.

During this timeline, cryptography was treated as something that you sprinkled on top of existing systems in the hope of making them more confidential and secure. As an industry, we largely relied on the US Government to define the algorithms we used and to tell us how to use them securely. As a general rule, only products designed for government use or for the small group of “cypherpunks” even considered the inclusion of cryptography, due to the complexity of “getting it right”.

Things are changing again: we see the IETF, via the CFRG, working to standardize internationally and independently created cryptographic algorithms instead of relying exclusively on governments to do this standardization. We also see the concept of formal verification being applied to cryptographic systems (Galois is doing great work here with Cryptol, as are other great projects in the verifiable computing space), which is giving us frameworks we can apply to build these concepts into other products securely (check out the Noise Protocol Framework as an example).

I think the Signal Protocol, Roughtime, Certificate Transparency and even blockchain technologies are examples of the next phase of evolution in our thinking about how we build secure systems. Not because of “decentralization” or some anti-government bent in technologists; instead, these systems were designed with a more complete understanding of the security risks associated with their use.

Trust is a necessary component of human existence. It can give us peace of mind, but it can also give us broken hearts. The same is true in the context of system design. Trust cautiously.

These systems, by design, go to great lengths to limit the need for “trust” in order for a system to work as intended. They do this by minimizing the dependencies a system takes in its design, because each of those dependencies represents an attack vector, and as we advance technology our attackers become more advanced as well. They also make extensive use of cryptography to make that possible.

This focus on dependency reduction is why we see blockchain enthusiasts taking the maximalist position of “decentralize all the things”. In my opinion, centralization is not always a bad thing. Over-centralization may be, but centralization can provide value to users, and that value is what we should be focused on as solution developers.

My personal take is that when we look back on the next decade we will say the trend was not “blockchain” but rather that this is when we evolved our security thinking and tooling to better utilize cryptography. Specifically, this is when we started to use cryptography to make transparency, confidentiality and verifiability part of the core of the solutions we build instead of thinking of it as a layer we apply once we are done.

Certificate based Encryption in PDFs

The PDF format is one of the most used file formats on the Internet, but unfortunately the specification that documents it leaves a lot to be desired when it comes to producing signed and encrypted documents.

PDF is still the only truly cross-platform “paper-like” experience available to users. It also has a great number of features that many are not aware of, one of which is the ability to encrypt PDFs so they are not readable without access to the appropriate secrets.

It supports two approaches to this encryption: one based on passwords and one based on digital certificates. In a later post I will discuss the issues with password-based encryption, but here I want to talk about the second approach, as it offers the potential for the most security.

When looking at our findings it’s important to keep in mind the history and timeline of the PDF format. It was released as a free-to-implement specification in 1993 and then standardized by ISO in 2008. It has been left largely unchanged since then. This is important because many of the practices and approaches in the standard were considered state-of-the-art in the 90s but are no longer considered strong today.

NOTE: The below is based on our findings while reading a pre-release of ISO 32000-2 that was approximately one year old. It is notably more readable than its predecessor, 32000-1, in many areas, but it seems little has changed in how signing and encryption are handled. Unfortunately the ISO standardization process does not produce public “intermediate” documents, and this was the freshest document we could find.

Message Format

The granddaddy of signature formats is something called PKCS #7. This was defined by RSA in the mid-90s and later handed off to the IETF, who republished it as Cryptographic Message Syntax (CMS) in 1999. CMS is now a superset of PKCS #7 in which additional attributes and cryptographic algorithms are also supported. The PDF specification, however, still refers to the signature format as PKCS #7, yet its references point to at least one CMS RFC and not to the PKCS #7 one. Most implementations, such as Adobe Acrobat, have added support for algorithms and options that are available in the latest CMS specifications, so we can assume this is what is meant. This is not a security issue, but it does create a mess when it comes to interoperability.

This inconsistency in the specification makes it harder for an implementer to know which bits and pieces to implicitly pull in from different unreferenced specifications. This not only makes interoperability more challenging, but it also results in a lack of common capabilities across implementations.  
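
As a rough illustration of what producing such a signature looks like with a modern library, here is a minimal sketch using Python’s cryptography package; the self-signed certificate is a throwaway stand-in for a real signing credential.

# Minimal sketch: producing a CMS (PKCS #7 superset) detached signature.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.serialization import pkcs7
from cryptography.x509.oid import NameOID

# A throwaway self-signed certificate stands in for a real signing credential.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Example Signer")])
now = datetime.datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=30))
    .sign(key, hashes.SHA256())
)

# The builder emits CMS SignedData; options such as SHA-256 go beyond the
# original PKCS #7 v1.5 document even though the result is often called "PKCS #7".
signature = (
    pkcs7.PKCS7SignatureBuilder()
    .set_data(b"content to be signed")
    .add_signer(cert, key, hashes.SHA256())
    .sign(serialization.Encoding.DER, [pkcs7.PKCS7Options.DetachedSignature])
)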

Padding

The most popular asymmetric cryptographic algorithm of all time is clearly RSA; it serves as the foundation of most key management and distribution solutions in use today. When encrypting data with RSA you need to “pad” the data you are encrypting. In the 90s you would pad using a scheme called RSA PKCS #1 v1.5. The problem is that in the late 90s it became clear this scheme was attackable.

These attacks are most relevant to systems that are online, for example a website or API verifying electronic signatures or decrypting documents. As a result of these weaknesses, in 2001 the world started moving to something called OAEP, which addressed the identified risks in PKCS #1 v1.5. It takes a decade or more to deprecate a cryptographic technique that is broadly in use, and thankfully we now see cryptographic libraries (such as the new WebCrypto) deprecating these weaker constructs to prevent future scenarios from accidentally supporting them.

Unfortunately, it seems the PDF specification was never updated to use the more secure OAEP padding scheme. While there is nothing stopping a client that implements the standard from also using the more modern padding algorithm (just as many have adopted features in the latest CMS specification even though they are not part of the PDF specification), it seems the most popular client, Adobe Acrobat, has not decided to do that. This means that if people want to encrypt documents using the more secure approach, those documents won’t work with Adobe Acrobat.

It is about time the standard incorporated OAEP and thankfully doing so is almost as easy as “search and replace”.
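
For illustration, here is a minimal sketch of wrapping a content-encryption key with RSA-OAEP using Python’s cryptography package; this is a generic example, not the PDF encryption dictionary itself.

# Minimal sketch: wrapping a symmetric key with RSA-OAEP instead of PKCS #1 v1.5.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
cek = os.urandom(32)  # a 256-bit content-encryption key

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

wrapped = recipient_key.public_key().encrypt(cek, oaep)
unwrapped = recipient_key.decrypt(wrapped, oaep)
assert unwrapped == cek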

Content Encryption

The PDF specification states that AES should be used in Cipher Block Chaining (CBC) mode. There is nothing wrong with CBC per se; with that said, it is easy to get wrong, and for this reason most practitioners will tell you to use a mode called Galois/Counter Mode (GCM). This is an “authenticated” mode of encryption in which you can easily tell, when decrypting, whether a message has been modified.

It is true that the PDF format includes MD5 as an integrity mechanism, but unless special care is taken one can still easily be exposed to the associated attack vectors.

The ability to verify the integrity of an encrypted message before decryption is materially important to systems that handle lots of documents. Doing so as part of the encryption mechanism, as is done with GCM, ensures this happens. GCM has been around since 2005, three years before the ISO standard for PDF was published; it is about time the standard incorporated it.
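
As a sketch of what authenticated encryption looks like in practice, again using Python’s cryptography package rather than anything PDF-specific:

# Minimal sketch: AES-256-GCM authenticated encryption.
# Tampering with the ciphertext or the associated data makes decryption fail.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)          # 96-bit nonce; must be unique per key
plaintext = b"document content"
associated_data = b"document metadata covered by the authentication tag"

ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data)
recovered = aesgcm.decrypt(nonce, ciphertext, associated_data)
assert recovered == plaintext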

Key Strength

In the certificate-based PDF encryption scheme there are two “secrets” that need to be protected; the first is called a “seed” by the specification. This seed is used to derive the content encryption key (CEK) that is used to encrypt the actual PDF content.

This “seed” is a 20-byte random value. Twenty bytes was more than sufficient when working with shorter key lengths like those used in 3DES, but with AES-256 it is not. When deriving the key in accordance with the specification, this “seed” is mixed with a hash of portions of the document. It is possible the authors thought the hash was sufficient to provide the additional entropy, but it is not. These inputs are not random; they may be unique to a given document, but all instances of that document would have the same values. As a result, there is insufficient entropy to get the full security benefits of AES-256.

The specification should be updated to indicate that for a 128-bit AES key you use at least 16 bytes of entropy, and for a 256-bit AES key at least 32 bytes.
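
A minimal sketch of the point being made here; the derivation shown is purely illustrative and is not the exact construction the PDF specification defines:

# Minimal sketch: the random seed should carry at least as much entropy
# as the content-encryption key that will be derived from it.
import hashlib
import os

KEY_BITS = 256
seed = os.urandom(KEY_BITS // 8)   # 32 random bytes for an AES-256 key

# Illustrative derivation only; the PDF specification defines its own
# construction, and the extra inputs it mixes in are not secret.
document_data = b"non-secret, per-document values"
cek = hashlib.sha256(seed + document_data).digest()[: KEY_BITS // 8]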

ECC Support

ECC represents an important cryptographic tool with lots of great security properties. Several PDF clients, including Acrobat Reader, support signing and verifying signatures based on ECC. There would probably be even more, but again, the PDF specification we have states the format is PKCS #7 even though it links to an older copy of the CMS standard.

Those clients that do support ECC do so via ECDSA (this was specified for CMS in 2002 and later updated in 2010).

It is also possible to encrypt with ECC. This would be done using ECDH and is also documented in the same RFCs. Adding support to the specification would make sense, since many cryptographic guidelines around the world mandate the use of ECC-based algorithms. Additionally, signing and encryption go hand in hand: if one signs with ECC, one surely would also like to encrypt the same document, and today that is not possible, at least in Adobe Reader, without also being enrolled for an RSA certificate.
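
To make the encryption side concrete, here is a minimal sketch of the general ECDH-plus-KDF pattern using Python’s cryptography package; the actual CMS KeyAgreeRecipientInfo structure and its specified KDF carry more detail than shown here.

# Minimal sketch: ECDH key agreement followed by a KDF to get a wrapping key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# The recipient's long-term EC key (normally taken from their certificate).
recipient_key = ec.generate_private_key(ec.SECP256R1())

# The sender generates an ephemeral key and derives a shared secret.
ephemeral_key = ec.generate_private_key(ec.SECP256R1())
shared_secret = ephemeral_key.exchange(ec.ECDH(), recipient_key.public_key())

# HKDF here stands in for the KDF the CMS RFCs actually specify.
wrapping_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"content-encryption key wrapping",
).derive(shared_secret)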

Long story short, it is about time both ECC signing and encryption are supported by the PDF standard.

Implementation Guidance

The specification as written offers essentially no implementation guidance. There are numerous cases of this, but one of the more glaring examples comes up when we think about the key hierarchy used in encryption.

While the specification’s use of two keys in the key hierarchy is a convoluted approach, if done correctly it can work fine. One of the larger issues an implementer needs to be mindful of here is the “effective key strength” they are offering the users of their products. For example, if you encrypt the “seed” using 3DES and the content with AES, then the content is only as secure as the 3DES key. We have encountered at least one client that does exactly this. Another variant we have seen is a client that encrypts the “seed” with AES-128 and the content with AES-256; of course, the client tells its users the file was protected with the larger key length.
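
A rough way to think about it is that the hierarchy is only as strong as its weakest link. The sketch below uses approximate, commonly cited security levels; the exact numbers are assumptions for illustration.

# Minimal sketch: the effective strength of a key hierarchy is the minimum
# strength of any algorithm used along the way (numbers are approximate).
SECURITY_BITS = {
    "3DES": 112,
    "AES-128": 128,
    "AES-256": 256,
    "RSA-2048": 112,
}

def effective_strength(*algorithms):
    return min(SECURITY_BITS[a] for a in algorithms)

# Seed wrapped with 3DES, content encrypted with AES-256:
print(effective_strength("3DES", "AES-256"))     # 112, not 256
# Seed wrapped with AES-128, content encrypted with AES-256:
print(effective_strength("AES-128", "AES-256"))  # 128, not 256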

The specification should be updated to make it clear that both keys need to be protected with algorithms that offer the same effective security, and in fact, should simply be protected using the same algorithm.

Backward Compatibility

Another example of missing implementation guidance is backwards compatibility. While I am sure there are examples outside of how documents are signed and encrypted, what is directly obvious to us is that the specification includes support for many algorithms that are weak and/or broken (for example MD2, MD5, DES, and 3DES).

The text really should be updated to make it clear that no new documents should be created using these algorithms, and to recommend that clients warn when viewing documents produced with them, since the guarantees of privacy, integrity and authentication the user expects are likely not being met.
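
A trivial sketch of the kind of check a viewer could perform (the algorithm names and sets here are illustrative):

# Minimal sketch: warn when a document declares a weak or broken algorithm.
DEPRECATED = {"MD2", "MD5", "DES", "3DES"}

def algorithm_warnings(declared):
    """Return a warning for each deprecated algorithm the document uses."""
    return [
        "WARNING: %s is deprecated; its guarantees should not be relied upon" % alg
        for alg in sorted(set(declared) & DEPRECATED)
    ]

for warning in algorithm_warnings({"AES-256", "MD5"}):
    print(warning)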

Conclusion

The PDF format uses cryptographic approaches from the 90s, and implementers have pulled in more modern cryptographic approaches on an as-needed basis. Today the algorithms that would be used to build such a standard would be AES-GCM, ECDSA, ECDH, and RSA-OAEP, almost none of which are supported by PDF as specified.

Thankfully the ISO 32000-2 specification is not yet complete. It is my hope that it will be soon, and that the editors of the specification read this and update the draft to incorporate this feedback. If they do, we will all be better off.

Ryan & Yury

Uniform Electronic Legal Material Act and Digital Signatures

Apparently one of the reasons states have been reluctant to publish legal material online is that there is a concern over how relying parties can tell if the material is authentic and has not been tampered with.

In an attempt to address this concern, a law called the Uniform Electronic Legal Material Act (UELMA) has been proposed, the text of which states at a high level that this must be addressed:

“An official publisher of legal material in an electronic record that is designated as official under Section 4 shall authenticate the record. To authenticate an electronic record, the publisher shall provide a method for a user to determine that the record received by the user from the publisher is unaltered from the official record published by the publisher.”

UELMA, which was proposed in 2011, has been enacted into law in 12 states (California, Colorado, Connecticut, Delaware, Hawaii, Idaho, Illinois, Minnesota, Nevada, North Dakota, Oregon, and Pennsylvania). With that said, it looks like it may be yet another unfunded mandate, in that there doesn’t appear to be much activity in the way of publishing signed data.

As with most US laws, UELMA doesn’t specify how one would meet this requirement, but the most obvious way would be to publish these documents as PDF files and sign them using PAdES. In many cases (especially legal text) this would be the ideal solution given how easy it is to both apply and verify signatures, thanks to the broad support for the standard.

But why is there such broad support for this standard? It’s simple: the EU takes a totally different approach to specifying what makes a “legal” electronic signature than we do. The US basically doesn’t specify any format or requirements for signatures, while the EU specifies four allowable formats (each with different use cases), of which PAdES is one.

But why did they choose four formats and not just one? That is easy. While a signed PDF may be a great way to make content accessible and verifiable to people, it is not a good solution for structured data that will be parsed by machines. In these cases the Europeans rely on CAdES, XAdES and ASiC, which are better signature formats for machine-readable data.

Since the US doesn’t specify how one should address this problem, a non-profit called US Open Data is advocating a solution they helped develop called Data Seal, a web application that sits on top of PGP to verify files, to be used for all of the above cases.

In my opinion this is a bad approach; here are just a few reasons:

  • PGP is 24 years old and has a wonderful mix of usability and interoperability issues that have not been solved in a meaningful way (there are many who are trying, like Data Seal, but even many of these supporters now see it as a lost cause).
  • Dependency on what is today, in essence, a single-vendor commercial solution, even if it is based on an open standard and is open sourced, means that if these tiny vendors go out of business there is no practical way for “real users” to verify the authenticity of the documents/data.
  • Legal documents need to be verifiable long into the future, and this approach does not consider the concept of long-term signature verification (time-stamping, crypto-periods, etc.).
  • Pushing for the adoption of a single machine-readable signature format (PGP) across the board, at the expense of providing an easy-to-use and easy-to-verify human-readable solution, is a short-sighted and bad tradeoff.
  • The world is getting smaller and interoperability is more important today than ever. If we’re going to adopt a different way of solving the same problem than a large majority of the globe, it should provide sufficient material benefits to offset the interoperability and accessibility impacts such a decision carries with it.

I could even argue that, as architected, Data Seal doesn’t actually meet the UELMA requirements, in that UELMA requires the solution to preserve the data and make it permanently available, but the solution does not provide a way for the signatures themselves to be verified long term.

Anyway, all of this is an interesting side effect of the US approach to legislation. We try to allow innovation by not overly specifying how the market solves a problem, while the EU tends to be overly specific and restrictive, which tends to hurt innovation. I am almost always a fan of the US approach, as governments move much, much slower than the market and tend to create structural barriers to innovation. With that said, I think interoperability is a case where standards are needed, and when it comes to how governments publish and authenticate documents there should be a standard.

The (soon to be) not-so Common Name

If you are reading this post you are probably familiar with the use of digital certificates in SSL, but even so you may not be familiar with their history. Before we go there, though, we should start with what a digital certificate actually is at its core.

Fundamentally, a digital certificate is a binding of entitlements and constraints to a key; in other words, it says things like “The holder of the private key associated with this certificate can rightfully use the name Ryan Hurst when signing emails”.

When originally conceived, they were to be used to help bind subjects (people and resources) to their representations in directories. This is why the Subject Name in a certificate is structured as a Distinguished Name (DN), as this is how a directory uniquely identifies a subject.

This works well when looking up an encryption key for a user in an enterprise directory, but not so well on the Internet, where there is no global directory of users.

This brings us to SSL. It was introduced in the mid-1990s, and at the time nearly every large enterprise was already deploying directories and Certificate Authorities as part of their identity management frameworks. The technology of X.509 was tested, broadly accepted and fit the bill for the problem the designers of the protocol had, so they included it as is.

At the time there was only one way to represent the concept of the subject of a certificate, and that was the Common Name (CN), so they chose to put the DNS name of the SSL server there. This was technically acceptable but was a re-purposing of a field that was really intended for a user’s actual name.

After SSL was finalized, the IETF released its profile of X.509 for use on the Internet. This standard introduced the concept of the Subject Alternative Name (SAN), where you can put names that are not associated with a directory. The problem is that the ship had sailed; by the late 90s, when this was standardized, everyone had already settled on using the Common Name.

This led us down a bad path. First of all, many servers (especially today) have multiple DNS names, and applications that supported only the Common Name field couldn’t work with a single certificate containing more than one DNS name. This was addressed in the short term by using a separate certificate for each DNS name, but that came at a high cost: we also needed to use a separate IP address for each domain name.

Another problem with this approach is that applications never really knew what to expect in the Common Name field. Is the value in that field a person’s name, or is it a DNS name? This matters because there are often rules you need to apply to validate a piece of data before using it, and this is especially true for DNS names.

For these reasons (and more), since at least 1999 (when RFC 2459 was standardized) we have been on a slow path of moving away from using Common Names for domain names toward using Subject Alternative Names.

Fast forward to 2012: some Stanford researchers published a paper titled “The most dangerous code in the world: validating SSL certificates in non-browser software”, which identifies a number of applications that fail to do the most basic certificate validation tasks correctly and, as a result, are the source of a number of security vulnerabilities.

These applications gave their users a false sense of security, not out of malice but as a result of a lack of understanding of the technology they used to deliver on that promise. A big part of that is the complexity that 18 years of technological evolution carries with it.

To address this, a number of things need to change, but one of the most immediate is that the definition of what constitutes a “valid” SSL certificate is changing, to make the rule set a little simpler for the application developer and to rule out options that are no longer considered good practice.

We see this happening in a few ways. First, the CA/Browser Forum has worked with browsers to define a set of Baseline Requirements that all certificates must meet; we are also seeing browsers doing sanity checks to ensure these practices are in fact followed.

These Baseline Requirements mandate that certificate authorities always include at least one Subject Alternative Name in the SSL certificates they issue. This means that today an application doesn’t need to look in both the Common Name and the Subject Alternative Name; it only needs to check the latter.
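
For what that looks like in practice, here is a minimal sketch using Python’s cryptography package to read the DNS names from the Subject Alternative Name extension; the certificate file name is hypothetical.

# Minimal sketch: read DNS names from the Subject Alternative Name extension
# instead of parsing them out of the Common Name.
from cryptography import x509

with open("server.pem", "rb") as f:    # hypothetical certificate file
    cert = x509.load_pem_x509_certificate(f.read())

san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
dns_names = san.value.get_values_for_type(x509.DNSName)
print(dns_names)    # e.g. ['example.com', 'www.example.com']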

Currently most Certificate Authorities will also include the first DNS name from the Subject Alternative Name in the Common Name field, but this is done primarily for legacy reasons and at some point in the not-so-distant future it will stop.

When it does, certificates will be a little smaller and developers’ lives will be a little easier.

Ryan

Resources

· Baseline Requirements

· Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile

· Microsoft Security Advisory: Update for minimum certificate key length

Abstract: Using least privilege design principles to improve trust in the online marketplace

Weekends, they are overrated 🙂

Tomorrow is the cut-off for submissions for the NIST “Workshop on Improving Trust in the Online Marketplace” to be held in April 2013, and I have spent part of the day thinking about what talks might be interesting. I have already submitted one on “Revocation reality and the path to becoming effective”, but I also wrote this one up and may submit it as well; I am posting it here so I can get a little feedback before the submission deadline.

In 2010 security researchers with the EFF collected all of the publicly visible SSL certificates on the IPv4 Internet and published their analysis and data sets from that research. This work made it clear to the world how extensively PKI is used to facilitate commerce on the web, but it also raised the concern that there were as many as 650 organizations capable of issuing publicly trusted certificates on the Internet.

While this number is exaggerated, as many of those certificates and keys are in fact operated by the same organizations whose roots ultimately issued them, the conclusion that there has been an unneeded expansion of the number of keys technically trusted to issue SSL certificates for the entire Internet is sound.

To address this problem, one of the steps needed is the application of least privilege principles to how one designs and manages publicly trusted keys and certificates. Thankfully, in the late 90s the foundation for addressing much of this problem was developed as a means to enable the Federated PKI in use by the U.S. Federal Government.

For the last year we have been working to broadly deploy X.509 Name Constraints, along with other least privilege design principles, to our customers’ PKIs, both those managed internally by our own staff and those on premise. This talk will explore these concepts and the client support for them, discuss the challenges we have experienced in their deployment, and identify the remaining issues that must be addressed to obtain the full benefits of this approach.
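
As an illustration of the kind of constraint involved, here is a minimal sketch of issuing a subordinate CA certificate with a Name Constraints extension using Python’s cryptography package; the names, keys and validity period are placeholders.

# Minimal sketch: constrain a subordinate CA to a single DNS namespace.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

issuer_key = ec.generate_private_key(ec.SECP256R1())
subordinate_key = ec.generate_private_key(ec.SECP256R1())

subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Example Issuing CA")])
issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Example Root CA")])

now = datetime.datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(issuer)
    .public_key(subordinate_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365 * 5))
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    .add_extension(
        # Only names under example.com may appear in certificates it issues.
        x509.NameConstraints(
            permitted_subtrees=[x509.DNSName(u"example.com")],
            excluded_subtrees=None,
        ),
        critical=True,
    )
    .sign(issuer_key, hashes.SHA256())
)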

Abstract: Revocation reality and the path to becoming effective

Just submitted my first abstract for the NIST “Workshop on Improving Trust in the Online Marketplace” in April. The title of the talk is “Revocation reality and the path to becoming effective”, the abstract of which is:

 

The concept of certificate revocation is core to the X.509 trust model; however, 18 years after its introduction, the reality is that as implemented and deployed it falls short of its promise to enable certificate issuers to protect relying parties from malicious actors and mis-issuance.

This talk will discuss the findings of a project (https://revocation-report.x509labs.com) in which I have observed the behavior, uptime and performance of the revocation repositories of a number of commercial Certificate Authorities over a period of more than six months.

Additionally, I will give an overview of the revocation behavior of the most common browsers, identifying the gaps that exist in those implementations.

Finally, I will provide a set of recommendations that I believe, if followed, can address the current gaps and move us to a world where revocation checking is an effective means of protecting relying parties from known bad actors and mis-issuance.

Using ECC keys in X509 certificates

Recently the CAB Forum published a document called the Baseline Requirements for the Issuance and Management of Publicly Trusted Certificates.

This document was authored by both browsers and public CAs and is used by the browser vendors to mandate what minimum technical requirements need to be met for inclusion into their “Root Programs”.

One of the changes specified in this document is that subscriber certificates (aka SSL certificates) containing RSA keys must have a key length of at least 2048 bits. This is a change for a lot of CAs (GlobalSign made this change some time ago), and one that has implications for server operators.

Just take a look at the Crypto++ benchmarks to see how much more expensive 2048-bit RSA is. For most users this additional computational cost won’t be an issue, but in some cases customers may need to increase the computing power they allocate for SSL establishment.

But what alternatives do you have? Well, there is one: certificates with ECC keys. Using these has the potential to significantly decrease the computational cost of SSL negotiations (even compared to your old 1024-bit RSA certificate), but they come with a significant penalty: compatibility.
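
If you want to experiment, here is a minimal sketch of generating an EC key and a certificate signing request with Python’s cryptography package; the names are placeholders and your CA’s enrollment process will dictate the rest.

# Minimal sketch: generate a P-256 key and a CSR to submit to a CA.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"www.example.com")]))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName(u"www.example.com")]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

with open("server.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))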

ECC was not supported in Windows until Vista, which was released in 2007. This basically means 100% of the XP clients out there (around 29% of the browsers on the Internet as of July 2012) would be unable to establish a session with your website if you switched exclusively to ECC.

This is important for more than just Internet Explorer users since even Chrome and Safari use CryptoAPI for certificate validation when on Windows.

This would mean these users would see something like this:

[Screenshot: browser error shown to users whose clients cannot handle the certificate]

That is pretty scary, so how long until we can use this more broadly? It’s hard to say. There is a good article titled “The developer’s guide to browser adoption rates” that sheds some light, as do the historic gs.statcounter.com results. Based on these, unless there is a sudden change (which is possible, since these machines are getting pretty old), I would assume that we have around 4-5 years of XP out there yet.

Hope this helps,

Ryan

Using SHA2 based signatures in X509 certificates

It’s been an exciting decade for cryptography; as a result we see smaller key sizes and weaker algorithms getting deprecated.

One driver of such things is the U.S. Federal Government, specifically NIST.

One example of this would be NIST Special Publication 800-131A, which disallows the use of SHA-1 after December 2013. What this means is that if you are in the U.S. Federal Government, or you work with them, you may have to revise your technology strategy to use SHA-2 in its place.
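
If you want to check whether a certificate you already have is affected, here is a minimal sketch using Python’s cryptography package; the file name is hypothetical.

# Minimal sketch: check which hash algorithm was used to sign a certificate.
from cryptography import x509
from cryptography.hazmat.primitives import hashes

with open("server.pem", "rb") as f:    # hypothetical certificate file
    cert = x509.load_pem_x509_certificate(f.read())

if isinstance(cert.signature_hash_algorithm, hashes.SHA1):
    print("Signed with SHA-1; plan a move to a SHA-2 based signature.")
else:
    print("Signed with " + cert.signature_hash_algorithm.name + ".")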

But what if you don’t have any policy mandate forcing you to make this switch? Well, it’s still a good idea, but it has consequences too, namely compatibility.

You see, SHA-2 was published in 2001, so anything produced before then will not support it. The most notable example is Windows XP, which as of July 2012 has about a 29% presence on the Internet.

This is important for more than just Internet Explorer users since even Chrome and Safari use CryptoAPI for certificate validation when on Windows.

The good news is that XP SP3, which was released in 2008, added support for this new suite of hash algorithms, which begs the question: how many of those XP machines have SP3?

Unfortunately I don’t have any public references that can answer this question, but let’s say that 85% of all XP machines on the Internet have gotten this update (I have good confidence in this number); that means 15% of that 29% would not be able to connect to your server over SSL if you used SHA-2.

This would mean these users would see something like this:

[Screenshot: browser error shown to users whose clients cannot handle the certificate]

That is pretty scary, so how long until we can use this more broadly? It’s hard to say. There is a good article titled “The developer’s guide to browser adoption rates” that sheds some light, as do the historic gs.statcounter.com results. Based on these, unless there is a sudden change (which is possible, since these machines are getting pretty old), I would assume that we have around 4-5 years of XP out there yet.

Hope this helps,

Ryan

How to get your own OID arc

X.509 uses Object Identifiers (OIDs) to uniquely identify things; for example, one assigns an OID to their Certificate Practice Statement (CPS) so that it is possible to programmatically detect whether a certificate meets a specific policy.

OIDs are managed as a namespace; this prevents “collisions”. As such, one needs to request that an OID be assigned to them.

The “arc” part comes in once you get your OID: you can assign any numbers you want underneath it. For example, one might be assigned 1.1.1.1 and decide to “break” it up into chunks as follows:

  1. 1.1.1.1.2 – Documents
  2. 1.1.1.1.3 – Certificate Extensions
  3. 1.1.1.1.4 – Resource Identifiers

Underneath each of these you would assign unique values by appending a new number; for example, 1.1.1.1.2.1 might be your CA’s Certificate Practice Statement (CPS).
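
For example, once you have a Private Enterprise Number you can reference your CPS from a Certificate Policies extension; the sketch below uses Python’s cryptography package and a placeholder arc (1.3.6.1.4.1.99999 is not a real assignment).

# Minimal sketch: reference an OID under your own arc from a Certificate
# Policies extension. 1.3.6.1.4.1.99999 is a placeholder, not a real PEN.
from cryptography import x509

# IANA delegates 1.3.6.1.4.1.<PEN> to you; append your own numbers below it.
CPS_OID = x509.ObjectIdentifier("1.3.6.1.4.1.99999.2.1")

policies = x509.CertificatePolicies([
    x509.PolicyInformation(CPS_OID, ["https://example.com/repository/cps.pdf"])
])
# `policies` can then be added to a certificate via CertificateBuilder.add_extension().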

So how do you get one of these OIDs? That’s easy: the Internet Assigned Numbers Authority (IANA) assigns these; they call them Private Enterprise Numbers. Getting one is easy enough, just fill out a web application form. To do that you will only need seven pieces of information:

  1. Organization Name
  2. Organization Address
  3. Organization Phone
  4. Contact Name
  5. Contact Address
  6. Contact Phone
  7. Contact Email

Remember, the idea is that the information you provide here will be used by people to reach you if they want to ask questions about the things you have uniquely identified, so choose the values wisely.

It can take up to 60 days to get one of these (although usually the application is processed in about one week).

Once you have your object identifier you should register it on www.oid-info.com and/or www.alvestrand.no; this way it will be easily accessible to anyone seeking information about the owner of an object identifier.

Hope this helps.

Ryan

MSRC 2718704 and Nested EKU enforcement

There are a number of technical constraints a Certificate Authority can put into place on a subordinate Certificate Authority; the general concept is referred to as Qualified Subordination.

One of the most important ways to constrain a certificate is by restricting what it can be good for.

The foundation for such a constraint is provided by PKIX in the Extended Key Usage extension (RFC 5280). This extension can be put into a certificate to restrict what it is trusted for; for example, a certificate might be OK for SSL server authentication but not for S/MIME.

The problem is that the RFC provides no practical guidance on how to act when this extension is encountered in a CA certificate; all it says is:

In general, this extension will appear only in end entity certificates.

One can interpret this to mean that its semantics are the same in issuer and subscriber certificates. This makes sense but isn’t very useful, as a CA is not very likely to ever perform “application tasks” like S/MIME or SSL server authentication with its signing key, so why would you put it in a CA certificate?

Also, if you look back at the history, this extension was really one of the first to be introduced; it came into existence at a time when PKIs were only one level deep, so the absence of guidance on how to handle it could easily be seen as an omission.

Microsoft saw it this way and decided to have their implementation treat this extension as a constraint; in other words, if no EKU is present in the chain then the chain is considered good for all usages, but once a single EKU is added into the path nothing below it can be considered good for a non-listed EKU.

In Windows, applications validate certificates using the CertGetCertificateChain API, which takes a number of control parameters via the PCERT_CHAIN_PARA structure; one can specify which EKUs a certificate must be good for via the RequestedUsage parameter.

This logic (frankly almost all of the certificate validation) is all wrapped into this one call.

So what does this have to do with MSRC 2718704? Well, this behavior has reduced the risk of this mess-up in a meaningful way, and I thought I would explain how. Before I do, let me be clear that I am not trying to downplay the significance of this issue; I am just trying to clarify where the risks are.

As we now know, the “licensing solution” deployed for terminal services has put a signing CA that is trusted for Code Signing in every enterprise that uses the product. But how is it restricted to just Code Signing? That is really what this post is about.

Let’s look at the EKUs in the chain of the offending “MS” certificate; in that chain we see:

  • Microsoft Root Authority
    • No EKUs
  • Microsoft Enforced Licensing Intermediate PCA
    • EKUs = Code Signing, Key Pack Licenses, License Server Verification
    • Effective EKUs = Code Signing, Key Pack Licenses, License Server Verification
  • Microsoft Enforced Licensing Certificate Authority CA
    • EKUs = Code Signing, License Server Verification
    • Effective EKUs = Code Signing, License Server Verification
  • Microsoft LSRA PA
    • EKUs = None
    • Effective EKUs = Code Signing, License Server Verification
  • MS
    • EKUs = None
    • Effective EKUs = Code Signing, License Server Verification

You will notice that the “Microsoft LSRA PA” certificate lists no EKUs, but its effective EKUs are “Code Signing” and “License Server Verification”; this is because of the nested EKU behavior I describe above.

The same thing happens in the end “MS” certificate; even though it has no EKUs listed, it can only be used to validate licenses and sign code, because that is all its issuers are entitled to bestow onto their subordinates.
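
A minimal sketch of that intersection logic, applied to the chain above (my own illustration of the behavior described, not Microsoft’s actual implementation):

# Minimal sketch: compute "effective EKUs" down a chain, root first.
# None means a certificate carries no EKU extension (i.e., it is unconstrained).
def effective_ekus(chain):
    effective = None                      # unconstrained so far
    for ekus in chain:                    # iterate root -> ... -> end entity
        if ekus is None:
            continue                      # inherits whatever applies above it
        effective = ekus if effective is None else effective & ekus
    return effective                      # None would mean good for all usages

chain = [
    None,                                                                  # Microsoft Root Authority
    {"Code Signing", "Key Pack Licenses", "License Server Verification"},  # Enforced Licensing Intermediate PCA
    {"Code Signing", "License Server Verification"},                       # Enforced Licensing Certificate Authority CA
    None,                                                                  # Microsoft LSRA PA
    None,                                                                  # MS
]
print(sorted(effective_ekus(chain)))  # ['Code Signing', 'License Server Verification']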

OK, so what does all of this mean to you and me? It basically means that as long as an application uses CryptoAPI in the intended way (and all that I am aware of in this context do), those CAs cannot be used to issue SSL certificates (or certificates for any other usages not listed) that would be considered “valid”. They can, of course, sign code as Microsoft, which is a larger issue in my book.

Anyway, over the years I have proposed in the IETF that this same behavior be adopted. It was always rejected as an evil Microsoft conspiracy (I was at Microsoft at the time); it was of course nothing of the sort, but in the end I gave up. Recently I have started trying to convince the browsers directly to implement this same behavior, as I feel it is beneficial. For example, here is an NSS bug tracking the same request; if implemented, that would take care of Chrome and Firefox. That still leaves Safari and Opera, but it’s a step in the right direction.

 

Additional Resources

http://lists.randombit.net/pipermail/cryptography/2012-June/002966.html