
MSRC 2718704 and Nested EKU enforcement

There are a number of technical constraints a Certificate Authority can put into place on a subordinate Certificate Authority; the general concept is referred to as Qualified Subordination.

One of the most important ways to constrain a certificate is by restricting what it can be good for.

The foundation for such a constraint is provided by PKIX in the Extended Key Usage extension (RFC 5280); this extension can be put into a certificate to restrict what it is trusted for – for example, a certificate might be OK for SSL Server Authentication but not for S/MIME.

The problem is that the RFC provides no practical guidance on how to act when this extension is encountered in a CA certificate; all it says is:

In general, this extension will appear only in end entity certificates.

One can interpret this to mean that its semantics are the same in issuer and subscriber certificates. This makes sense but isn’t very useful, as a CA is not very likely to ever perform “application tasks” like S/MIME or SSL Server Authentication with its signing key – so why would you put it in a CA certificate at all?

Also, if you look back at the history, this extension was one of the first to be introduced; it came into existence at a time when PKIs were only one level deep, so the absence of guidance on how to handle it in CA certificates could easily be seen as an omission.

Microsoft saw it this way and decided to have their implementation treat this extension as a constraint; in other words, if no EKU is present in the chain then the chain is considered good for all usages, but once a single EKU is added into the path nothing below it can be considered good for a usage that is not listed.

In Windows, applications validate certificates using the CertGetCertificateChain API, which takes a number of control parameters via the PCERT_CHAIN_PARA structure; one of these, the RequestedUsage parameter, lets the caller specify which EKUs a certificate must be good for.

This logic (frankly, almost all of the certificate validation logic) is wrapped up in this one call.
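To make that concrete, here is a minimal sketch of how an application might ask CryptoAPI to build a chain that must be good for Code Signing. It is an illustration only: error handling and revocation checking are trimmed, and pCertContext is assumed to have been obtained elsewhere (for example from CertFindCertificateInStore).

```c
/* Minimal sketch: ask CryptoAPI whether pCertContext chains up in a way that
 * is valid for Code Signing. pCertContext is assumed to have been obtained
 * elsewhere (e.g. CertFindCertificateInStore); error handling is trimmed and
 * revocation checking is omitted. Link against crypt32.lib. */
#include <windows.h>
#include <wincrypt.h>

BOOL IsGoodForCodeSigning(PCCERT_CONTEXT pCertContext)
{
    LPCSTR rgUsages[] = { szOID_PKIX_KP_CODE_SIGNING };   /* 1.3.6.1.5.5.7.3.3 */

    CERT_CHAIN_PARA chainPara = { 0 };
    chainPara.cbSize = sizeof(chainPara);
    chainPara.RequestedUsage.dwType = USAGE_MATCH_TYPE_AND;
    chainPara.RequestedUsage.Usage.cUsageIdentifier = 1;
    chainPara.RequestedUsage.Usage.rgpszUsageIdentifier = (LPSTR *)rgUsages;

    PCCERT_CHAIN_CONTEXT pChainContext = NULL;
    if (!CertGetCertificateChain(NULL,          /* default chain engine      */
                                 pCertContext,
                                 NULL,          /* validate as of "now"      */
                                 NULL,          /* no additional stores      */
                                 &chainPara,
                                 0,             /* no revocation checking    */
                                 NULL,
                                 &pChainContext))
        return FALSE;

    /* If the requested EKU is not satisfied anywhere in the path,
     * dwErrorStatus will include CERT_TRUST_IS_NOT_VALID_FOR_USAGE. */
    BOOL fOk = (pChainContext->TrustStatus.dwErrorStatus == CERT_TRUST_NO_ERROR);

    CertFreeCertificateChain(pChainContext);
    return fOk;
}
```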

So what does this have to do with MSRC 2718704? Well, this behavior has reduced the risk of this mess-up in a meaningful way, and I thought I would explain how. But before I do, let me be clear that I am not trying to downplay the significance of this issue; I am just trying to clarify where the risks are.

As we now know, the “licensing solution” deployed for Terminal Services has put a signing CA that is trusted for Code Signing in every enterprise that uses the product. But how is it restricted to just Code Signing? That’s really what this post is about.

Let’s look at the EKUs included in the chain of the offending “MS” certificate; in that chain we see:

  • Microsoft Root Authority
    • No EKUs
  • Microsoft Enforced Licensing Intermediate PCA
    • EKUs = Code Signing, Key Pack Licenses, License Server Verification
    • Effective EKUs = Code Signing, Key Pack Licenses, License Server Verification
  • Microsoft Enforced Licensing Certificate Authority CA
    • EKUs = Code Signing, License Server Verification
    • Effective EKUs = Code Signing, License Server Verification
  • Microsoft LSRA PA
    • EKUs = None
    • Effective EKUs = Code Signing, License Server Verification
  • MS
    • EKUs = None
    • Effective EKUs = Code Signing, License Server Verification

You will notice that the “Microsoft LSRA PA” certificate lists no EKUs, but its effective EKUs are “Code Signing” and “License Server Verification”; this is because of the nested EKU behavior I described above.

The same thing happens with the end “MS” certificate; even though it has no EKUs listed, it can only be used to validate licenses and sign code, because that’s all its issuers are entitled to bestow onto their subordinates.
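To make the rule concrete, here is a tiny sketch in C – my own illustration, not the actual CryptoAPI logic – that computes the effective EKUs for the chain above by walking it root-to-leaf, intersecting each EKU extension it finds and letting certificates without the extension inherit their issuer’s effective set:

```c
/* Sketch only (not the actual CryptoAPI logic): compute a chain's effective
 * EKUs by walking root-to-leaf, intersecting each EKU extension found and
 * treating a certificate without the extension as "inherit from the issuer". */
#include <stdio.h>
#include <string.h>

#define MAX_EKUS 8

typedef struct {
    const char *name;
    int         has_eku_ext;        /* 0 = no EKU extension (unconstrained) */
    const char *ekus[MAX_EKUS];     /* NULL-terminated when present         */
} Cert;

/* Fills eff with the leaf's effective EKUs; returns 1 if still unconstrained. */
int effective_ekus(const Cert *chain, int n, const char **eff)
{
    int unconstrained = 1;
    eff[0] = NULL;

    for (int c = 0; c < n; c++) {               /* root first, leaf last */
        if (!chain[c].has_eku_ext)
            continue;                           /* inherit current set   */

        if (unconstrained) {                    /* first EKU extension seen */
            int i = 0;
            for (; chain[c].ekus[i]; i++)
                eff[i] = chain[c].ekus[i];
            eff[i] = NULL;
            unconstrained = 0;
        } else {                                /* intersect with it        */
            int out = 0;
            for (int i = 0; eff[i]; i++)
                for (int j = 0; chain[c].ekus[j]; j++)
                    if (strcmp(eff[i], chain[c].ekus[j]) == 0)
                        eff[out++] = eff[i];
            eff[out] = NULL;
        }
    }
    return unconstrained;
}

int main(void)
{
    /* The chain above, root first. */
    Cert chain[] = {
        { "Microsoft Root Authority", 0, { NULL } },
        { "Microsoft Enforced Licensing Intermediate PCA", 1,
          { "Code Signing", "Key Pack Licenses", "License Server Verification", NULL } },
        { "Microsoft Enforced Licensing Certificate Authority CA", 1,
          { "Code Signing", "License Server Verification", NULL } },
        { "Microsoft LSRA PA", 0, { NULL } },
        { "MS", 0, { NULL } },
    };

    const char *eff[MAX_EKUS];
    if (effective_ekus(chain, 5, eff))
        printf("good for all usages\n");
    else
        for (int i = 0; eff[i]; i++)
            printf("effective EKU: %s\n", eff[i]);
    return 0;
}
```

Run against the chain above it prints “Code Signing” and “License Server Verification” for the leaf, matching the effective EKUs listed.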

OK, so what does all of this mean to you and me? It basically means that as long as an application uses CryptoAPI in the intended way (and all that I am aware of in this context do), those CAs out there cannot be used to issue SSL certificates (or certificates for any other usage not listed) that would be considered “valid”. They can of course sign code as Microsoft, which is a larger issue in my book.

Anyway, over the years I have proposed in the IETF that this same behavior be adopted; it was always rejected as an evil Microsoft conspiracy (I was at Microsoft at the time). It was of course nothing of the sort, but in the end I gave up. Recently I have started trying to convince the browsers directly to implement this same behavior, as I feel it is beneficial; for example, here is an NSS bug tracking the same request. If implemented, that would take care of Chrome and Firefox; that still leaves Safari and Opera, but it’s a step in the right direction.

 

Additional Resources

http://lists.randombit.net/pipermail/cryptography/2012-June/002966.html

Getting beyond 1% — How do we increase the use of SSL?

Today about 1% of the traffic on the Internet is protected with SSL (according to Sandvine); there are a few key issues keeping this number so small, and I thought I would put together a quick post on what I think those issues are.

 

Interoperability

For over a decade we have been working towards migrating to IPv6, yet we have made little progress; in fact, they say that by the end of 2012 we will run out of IPv4 addresses.

As far as I know, not one of the top 10 CAs supports IPv6 yet (yes, not even GlobalSign, though we’re working on it). This means it is impossible to host a pure IPv6 SSL solution today (because of the need for revocation data).

This is also interesting because today many sites are hosted on virtual hosting solutions that share the same IP address – primarily because IP addresses are a scarce resource – and this has the side effect of making it hard (sometimes impossible) to deploy SSL on these hosts.

In 2003 an extension to TLS was proposed to address this problem; it’s called Server Name Indication (SNI – now defined in RFC 6066).

Today server support for this extension is quite good, but the same cannot be said for client support (due to the lingering XP population and the influx of mobile devices).
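For the curious, here is roughly what the client side of SNI looks like. This is a sketch assuming a reasonably current OpenSSL (1.1.0 or later for TLS_client_method and SSL_set1_host), with a placeholder hostname and most error handling omitted; the one line that matters is SSL_set_tlsext_host_name, which puts the hostname into the ClientHello so a server sharing one IP across many sites can pick the right certificate.

```c
/* Sketch of a TLS client that sends SNI, assuming OpenSSL 1.1.0+.
 * "www.example.com" is a placeholder; error handling is mostly omitted. */
#include <stdio.h>
#include <openssl/ssl.h>
#include <openssl/bio.h>

int main(void)
{
    const char *host = "www.example.com";

    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    SSL_CTX_set_default_verify_paths(ctx);          /* trust the system roots */
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);

    BIO *bio = BIO_new_ssl_connect(ctx);
    BIO_set_conn_hostname(bio, "www.example.com:443");

    SSL *ssl = NULL;
    BIO_get_ssl(bio, &ssl);
    SSL_set_tlsext_host_name(ssl, host); /* SNI: name goes into the ClientHello */
    SSL_set1_host(ssl, host);            /* check the returned cert against it  */

    if (BIO_do_connect(bio) <= 0 || BIO_do_handshake(bio) <= 0) {
        fprintf(stderr, "connection/handshake failed\n");
        return 1;
    }

    printf("connected; server presented its certificate for %s\n", host);
    /* ...send the HTTP request over bio here... */

    BIO_free_all(bio);
    SSL_CTX_free(ctx);
    return 0;
}
```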

In my opinion this is the #1 issue holding back the adoption of SSL everywhere.

 

Complexity

It is amazing to me, but very little has changed in the CA industry since its birth in the mid-90s; certificates are still requested and managed in essentially the same way – it’s a shame, and it’s wasteful.

One of the reasons I joined GlobalSign is that they have been trying to address this issue by investing in both clients and APIs (check out OneClick SSL and CloudSSL); with that said, there is still a lot more that can be done in this area.

Then there is the problem of managing and deploying SSL; the SSL Pulse data shows us it’s hard to get SSL configured right. We are getting better tools for this, but again there is still a ton of room for improvement.

 

Performance

There has been a bunch of work done in this area over the years; the “solutions” relating to the performance of SSL seem to break down into:

  1. Protocol improvements (SPDY, False Start, OCSP Stapling, etc.)
  2. Using different cryptography to make it faster (smaller keys, DSA, ECDSA, etc.)
  3. Using accelerator products (F5 BIG-IP, NetScaler, SSL accelerators, etc.)

I won’t spend much time on protocol improvements, as I think they get a ton of coverage from the likes of Google, who have made several proposals in this area over the last few years. I do have concerns about these protocol changes introducing interoperability issues, but I can’t argue with the performance benefits they offer.

You will notice I also included OCSP Stapling in this group; I think it is a great way to improve revocation checking, but it’s not about security – it’s about performance and reliability. You should just use this today; it’s safe and very likely supported by your servers already.
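If you are curious whether a server is actually stapling, here is a small OpenSSL sketch showing the two calls involved on the client side; the connection setup from the SNI example earlier is assumed, and error handling is omitted.

```c
/* Sketch (OpenSSL, client side): ask for a stapled OCSP response and check
 * whether one arrived. Connection setup is assumed to look like the SNI
 * example earlier; error handling is omitted. */
#include <stdio.h>
#include <openssl/ssl.h>

/* Call this before the handshake. */
void request_stapled_ocsp(SSL *ssl)
{
    SSL_set_tlsext_status_type(ssl, TLSEXT_STATUSTYPE_ocsp);
}

/* Call this after the handshake completes. */
int have_stapled_ocsp(SSL *ssl)
{
    unsigned char *resp = NULL;
    long len = SSL_get_tlsext_status_ocsp_resp(ssl, &resp);

    if (len > 0) {
        printf("server stapled a %ld-byte OCSP response\n", len);
        return 1;
    }
    printf("no stapled response; the client has to fetch OCSP/CRLs itself\n");
    return 0;
}
```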

The use of different cryptography is an interesting one; however, again the issue of compatibility rears its ugly head. Though every implementation of an algorithm will perform differently, the Crypto++ benchmarks are a nice way to get a high-level understanding of an algorithm’s performance characteristics.

There is a lot of data in there, not all of it related to SSL, but one thing that definitely is relevant is the performance characteristics of RSA vs. DSA:

Operation                 Milliseconds/Operation    Megacycles/Operation
RSA 1024 Signature        1.48                      2.71
RSA 1024 Verification     0.07                      0.13
DSA 1024 Signature        0.45                      0.83
DSA 1024 Verification     0.52                      0.94

 

You will notice that with RSA it is more expensive to sign than it is to verify; with DSA the opposite is true (and its signing is also faster in this sampling).

Since in the case of SSL it is the server doing the signing and the client doing the verifying, this is an important fact; it means a server using a DSA certificate will spend less time doing crypto and more time doing other things, like serving content.
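As a rough back-of-the-envelope from the table above: at 1.48 ms per operation a single core tops out at around 1000 / 1.48 ≈ 675 RSA 1024 signatures per second, while at 0.45 ms per operation the same core can do roughly 1000 / 0.45 ≈ 2,200 DSA 1024 signatures per second – about three times the handshake-signing capacity before you touch anything else in the stack.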

On the surface this sounds great, but there are of course problems. For one, because of the work researchers have done to “break” RSA over the last few years, the browsers are pushing CAs to stop issuing 1024-bit RSA keys (by 2013), an effort which the CAs have also applied to DSA.

Another not-so-trivial factor is that Microsoft only supports DSA keys up to 1024 bits in length, which means the larger DSA keys are not viable on these platforms.

So what of the new ciphers like AES and ECDH-ECDSA? These represent a very large performance boon for web server operators, but they too, like SNI, are not supported by legacy browsers.

What this means for you is that for the next few years we have to make do with the “legacy cipher suites” as a means to facilitate TLS sessions.

Miscellaneous

Not everything fits neatly into the above taxonomy; here are a few common topics that don’t:

  1. Increased cost of operation
  2. Inability to do “legitimate” packet inspection

Increased cost of operation can be summarized as needing more servers for the same load due to the increased SSL computational costs.

Inability to do “legitimate” packet inspection can be summarized as limiting the practical value of existing security investments in technologies like Intrusion Detection and Network Optimization, since once the traffic is encrypted they become totally ineffective. To work around this issue, networks need to be designed with encryption and these technologies in mind.

 

Summary

I personally think the biggest barrier is interoperability, the biggest part of this being the lingering XP installations. The silver lining is that over the last few years XP has lost market share at about 10% per year; at the current rate we are about three years from these issues being “resolved”.

In the meantime there is a lot the industry can do on the topic of complexity; I will write more on that another time.

Generic Identity Device Specification Published

In the PC ecosystem, when a new device technology (say, mass storage) is introduced, there is commonly little standardization: vendors produce proprietary software stacks for interacting with the device, custom hardware interfaces, custom software for managing those devices, and so on.

As a device picks up in popularity, common platform programming interfaces typically emerge; sometimes these are commercial in nature, other times they are standards-based. In either case the goal of these interfaces is simple: abstract out the variety in the hardware ecosystem so that application developers can write software that runs on any machine, regardless of which vendor manufactured a given device. These abstractions also commonly allow devices to be shared, so that multiple applications can use them at the same time.

The next phase in a device’s maturity is normally the definition of a class interface for interacting with the hardware; it’s this last phase that enables the “no driver needed” story that users like so much. We all reap the rewards of this with flash drives today – plug in the device and it just works (the same is true for display technologies like VGA).

These class drivers commonly cater to the lowest common denominator when it comes to functionality, but vendors are always able to add additional capabilities that are exposed when their drivers and custom software are present (again think about display technologies here as a good example).

There is one device in particular that has not entirely followed this flow that I wanted to talk about, and that is smart cards. As a concept they emerged in the 1970s, and the first cards went into production in the late 70s. Here we are roughly 40 years later and there is still no clear “class driver” for these devices; that is not to say there have not been attempts, some even with success, but those that have succeeded have been closed-system solutions – for example the PIV interfaces used within the US Federal Government.

In the commercial space, however, no class specification that has been attempted has really been viable; there are lots of reasons for this, but I am cautiously optimistic that there is now a candidate.

One of the projects I worked on over the last few years was the Generic Identity Device Specification; it attempts to build on the success of the government card specifications and extend it to commercial applications as well.

I had the opportunity to work with some great folks on this effort; we all had the same goal: make smart cards as reliable, cost-effective, and accessible as possible. I believe this work does just that.

This specification has now been released by Microsoft under the Microsoft Community Promise, which means it is available royalty-free for anyone to adopt; this is a big win for our partners and, above all, the customers who will benefit the most from it.

So what does this mean for you? Well, if you’re a customer looking to deploy smart cards, you should seriously look for vendors who produce cards compliant with this specification; it means a lower cost of deployment, makes it easier for you to multi-source cards, and in the end will likely reduce the overall cost of cards as volumes go up as a function of scale.

For a card manufacturer there are a number of benefits as well. It is possible to develop a GIDS card that is compatible with the PIV card-edge, which means you can develop a single card stock, get it evaluated for FIPS (or whatever other standard), and sell it into both commercial and government applications (reducing cost) – and these cards will have a great experience in Windows.

If you are a platform or operating system developer, you now have a specification you can use as a baseline for testing card scenarios and a way to (hopefully) support a large number of “real” cards that will exist on the market (soon, I hope); if this happens, we can see driver coverage numbers similar to those of other device classes.

For those of you not in this segment, this last point is super important: there is so much fragmentation in the market that no solution has more than a couple percent of card coverage in-box. If this specification gets adopted, that number can start to look more like other device classes, where coverage is in the 90-percent range.

In any event, I am pleased to see this out there; here’s hoping it gets adopted broadly…