Microsoft Root Program and Excel

The other day I was studying up on my Excel so I could help someone with a project; today I met with a friend to discuss the composition of root programs.

Since when you’re a plumber you fix everything with a wrench, out came Excel, and since I sometimes can’t let a problem sit still, I spent far too much time slicing and dicing the Microsoft Root Program membership list.

There is a ton more that can be done, for example:

  1. Root CA adoption relative to UN membership.
  2. Root CA certificates based on validity dates.
  3. Comparing the Microsoft Root Program membership to the Mozilla Root Program membership.
  4. CAs per network (using AIA:OCSP URLs as an indicator).
  5. CA adoption of CDNs for OCSP and CRLs.
  6. A look at how many operational facilities are associated with each organization (for example, Symantec has 66 root CAs for their various brands; are they all in the same facilities?).

 

Anyway, it was a fun exercise and I thought I would share the results with you; here is the XLSX.

For those who do not want to look at the XLSX, here are some statistics you may find interesting:

  1. 31.30% of the CAs were owned explicitly by governments.
  2. 64.35% of the CAs were owned by commercial entities in the business of being third-party trust providers.
  3. The 352 certificates are owned by 115 organizations: 36 government, 74 commercial and 5 enterprise.
  4. The USA has the most organizations that own CAs, coming in at 13.04%.
  5. Spain is next, with 12.17% of the organizations owning CAs being based there.
  6. France is number three, with 5.22% of the organizations owning CAs being based there.
  7. 17.33% of the roots use 1024-bit keys.
  8. 60.23% of the roots use 2048-bit keys.
  9. 18.47% of the roots use 4096-bit keys.
  10. 1.70% of the roots use ECDSA P-384 keys.
  11. 115 of the 206 sovereign nations have CAs within their borders.

Advanced Troubleshooting of Certificate Validation Related Problems on Windows Part 2

One of the most complicated things to troubleshoot in X.509 is failures related to Name Constraints handling. There are a few ways to approach this, but one of the easiest is to use the Extended Error Information in the Certificate viewer.

Let’s walk through an exercise so you can try this on your own. First download this script; if you extract its contents and run makepki.bat you get a multi-level PKI that looks something like this:

[Diagram: the multi-level test PKI]

Before we begin it’s useful to understand that Name Constraints are applied at an issuing CA level. In this case the constraints say that the CA is authoritative for any DNS or RFC 822 name within the example.com domain; they also allow a specific base distinguished name (DN).

In our example these constraints are applied by the “Test Partner CA” onto the “Customer CA”; you can see the restrictions in its certificate:

[Screenshot: the Name Constraints extension in the “Customer CA” certificate]
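
For reference, in OpenSSL such a restriction is expressed with the nameConstraints extension in the issuing CA’s extension section. Here is a minimal sketch; the section name is illustrative and not necessarily what the downloadable script uses:

[ constrained_ca_ext ]
# Hypothetical extension section for a constrained issuing CA
basicConstraints = critical, CA:TRUE
# Permit only names under example.com for DNS and email (RFC 822) names;
# a permitted;dirName entry can similarly pin the allowed base DN.
nameConstraints = critical, permitted;DNS:example.com, permitted;email:example.com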

This script creates two certificates, one for email (user.cer) and another for SSL (www.cer). Normally the script makes both of these certificates fall within the example.com domain space, but for the purpose of this post I have modified the openssl.cfg to put the email certificate in the acme.com domain. The idea here is that since this domain is not included in the constraints, an RFC 3280 compliant chain engine will reject it and we have an error to diagnose.
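
In the openssl.cfg the change amounts to something like the following in the section used for the email certificate (a sketch; the exact address and section name in the script will differ):

subjectAltName = email:user@acme.com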

Normally we don’t already know what the failure associated with a certificate chain is (after all, that’s why we are debugging it), but the process we use here will help us figure out the answer; applications are notorious for not sharing why a certificate chain was rejected.

Since we can’t rely on the applications, one way to figure this out on our own is to look at the chain outside of the application, in the Windows Certificate viewer, and see if we can tell what the issue is there.
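
If you prefer the command line, certutil can dump the same chain-building results, including name constraint failures; a quick sketch using the file names the script produces:

certutil -verify -urlfetch user.cer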

Let’s start by looking at the “good” certificate:

[Screenshot: the “good” certificate showing no errors]

What’s important here is that we see no errors, but how do we know that? Well let’s look at the bad certificate and see what the difference is:

[Screenshot: the “bad” certificate showing an error]

The certificate clearly has a problem, and Windows has done a decent job of telling us what the problem is: there is a name in this certificate that is inconsistent with the Name Constraints associated with this certificate chain. The question is: which name?

To answer that question we need to look at the Name Constraints. Remember that these live in the CA certificates; they can be in any or all of them, which means we need to first figure out where the constraints are.

To do that we start at the “Certification Path” tab, where you see something like this:

[Screenshot: the Certification Path tab with a warning on one certificate]

Notice the little yellow flag; that’s the certificate viewer saying this is where the problem is. Let’s take a look at that certificate:

[Screenshot: the flagged CA certificate]

But wait! There isn’t a problem with this one. That’s because this certificate is actually fine; it’s the certificate it issued that is the problem. Here is the non-obvious part: let’s look at the Details tab:

[Screenshot: the Details tab showing the Extended Error Information]

Here we see Windows telling us exactly what the problem is with the certificate this CA issued: it is the email address and its inclusion of the acme.com domain.

Remember that it was restricted, via the Name Constraints extension, to the example.com domain when issuing email certificates. The chain engine would be non-compliant with the associated RFC if it trusted this certificate.

But why did we see this Extended Error Information in this certificate and not the leaf itself? This is actually by design and makes sense when you think about it. Name Constraints can exist in each issuing CA certificate in the chain and what is happening here is that CryptoAPI is telling us that the issuer certificate that was flagged is the offending actor.

This Extended Error Information is available in the Certificate Viewer in a number of cases (for example, revocation), though most of the interesting cases require the application to launch the certificate viewer, since that way you get to view the certificate chain (and all of the associated state) it was using when it encountered the problem.

Advanced Troubleshooting of Certificate Validation Related Problems on Windows Part 1

The Windows platform for validating X.509 certificates has a feature I don’t see discussed much: its robust logging subsystem.

This allows a non-developer (and developers) to get insight into how applications interact with CryptoAPI 2 and, to some degree, what is happening inside those APIs.

To use this feature you must first enable it. The easiest way to do that is via the Event Viewer Management Console (eventvwr.msc); once in there you must navigate to the CryptoAPI 2 (CAPI2) node of the viewer:

[Screenshots: navigating Event Viewer to the CAPI2 node]

 

Once you get there you select Properties on the operational log, which will give you a dialog that looks something like this:

[Screenshot: the Operational log properties dialog]

 

Once you “Enable Logging” and “Apply” the changes, all calls to CryptoAPI will immediately be logged. CryptoAPI is used all the time, so by the time you close that dialog you will have some events you can look at:

[Screenshot: CAPI2 events in Event Viewer]
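
If you would rather script this, the same log can be enabled and queried with wevtutil. Here is a minimal sketch; the channel name below is what current Windows versions use, and you can confirm it with “wevtutil el”:

wevtutil sl Microsoft-Windows-CAPI2/Operational /e:true
wevtutil qe Microsoft-Windows-CAPI2/Operational /c:5 /rd:true /f:text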

 

The high-level view of each event doesn’t tell you much; for example, in the above picture we really only know that the action being performed was a “Build Chain”. This particular event corresponds to a call to CertGetCertificateChain.

To really understand what is going on you need to look at the Details tab:

[Screenshot: the Details tab of a Build Chain event]

 

Here we can see what was passed into that API call: which certificates were passed in by the calling application and what settings it chose when making the call:

[Screenshot: the certificates and flags passed to CertGetCertificateChain]

 

Some other things you can see in this particular API call include what the disposition of the call was and of course what certificate chain was built.

Every major API in CryptoAPI has logging similar to this; you get to see what was passed in and what came out. Additionally, major “objects” are sometimes logged as well; for example, here is an event showing a certificate that CryptoAPI was operating against:

[Screenshot: a certificate object logged by CryptoAPI]

 

Another very useful API to be able to look at is CertVerifyRevocation, which is logged in this fashion:

[Screenshot: a CertVerifyRevocation event]

 

Some of these events include references to temporary files, for example in this case you see:

[Screenshot: an event referencing a cached file on disk]

This is a CRL for the “GlobalSign PersonalSign 1 CA – G2”; it’s stored in the Time Valid Object Cache (TvoCache), and you can look at this cache with the “certutil -urlcache” command.
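
For example, here is a quick sketch of listing the cached CRL entries and, if needed, clearing the cache; note the second command deletes all cached entries, so use it with care:

certutil -urlcache crl
certutil -urlcache * delete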

 

The events will all tell you which application was the caller and if they are part of a sequence of tasks you get enough information to correlate them and put them in order.

[Screenshot: correlated events showing the calling process]

 

I won’t go through all of the events, but as you can see this is super valuable when trying to figure out: why did that application do that?!

As you might imagine this logging can slow things down and produces a bunch of data so be sure to turn it off when you are done.

Abstract: Using least privilege design principles to improve trust in the online marketplace

Weekends, they are overrated 🙂

Tomorrow is the cut-off for submissions for the NIST “Workshop on Improving Trust in the Online Marketplace”, to be held in April 2013, and I have spent part of the day thinking about what talks might be interesting. I have already submitted one on “Revocation reality and the path to becoming effective”, but I also wrote this one up and might submit it as well; I’m posting it here so I can get a little feedback before the submission deadline.

In 2010 security researchers with the EFF collected all of the publicly visible SSL certificates on the IPv4 internet and published their analysis and data-sets from their research. This work made it clear to the world how extensively PKI is used to facilitate commerce on the web, but it also raised the concern that there were as many as 650 organizations capable of issuing publicly trusted certificates on the internet.

While this conclusion is exaggerated, since many of those certificates and keys are in fact operated by the same organizations that issued them, the conclusion that there has been an unneeded expansion of the number of keys that are technically trusted to issue certificates for SSL for the entire Internet is sound.

To address this problem, one of the steps that is needed is the application of least privilege principles to how one designs and manages publicly trusted keys and certificates. Thankfully, in the late 90s the foundation for addressing much of this problem was developed as a means to enable the Federated PKI in use by the U.S. Federal Government.

For the last year we have been working to broadly deploy X.509 Name Constraints along with other least privilege design principles to our customers’ PKIs, both those managed by our own staff and those on premises. This talk will explore these concepts, the client support for them, and the challenges we have experienced in their deployment, and identify the remaining issues that must be addressed to obtain the full benefits of this approach.

Abstract: Revocation reality and the path to becoming effective

Just submitted my first abstract for the NIST “Workshop on Improving Trust in the Online Marketplace” in April; the title of the talk is “Revocation reality and the path to becoming effective”, and the abstract is:

 

The concept of certificate revocation is core to the X.509 trust model; however, 18 years after its introduction the reality is that, as implemented and deployed, it falls short of its promise to enable certificate issuers to protect relying parties from malicious actors and mis-issuance.

This talk will discuss the findings of a project in which I have observed the behavior (https://revocation-report.x509labs.com), uptime and performance of the revocation repositories of a number of commercial Certificate Authorities over a period of more than six months.

Additionally, I will review the revocation behavior of the most common browsers, identifying the gaps in those implementations.

And finally, I will provide a set of recommendations that I believe, if followed, can address the current gaps and move us to a world where revocation checking is an effective means of protecting relying parties from known bad actors and mis-issuance.

Is SSL Broken?

[This is a re-post of an article I wrote for the GlobalSign corporate blog; you can find it here.]

It seems every month a new flaw is identified in SSL, and while that’s a slight exaggeration, after a while one starts to ask the question: is SSL broken? My answer to that question would be no, but the protocol is nearly twenty years old, and even though it now carries a new name (TLS) it also carries much of the baggage of the past in its design.

Despite this fact, my faith in TLS is stronger today than it ever was. My reasoning is simple – today we understand the strengths and weaknesses of this protocol better than we ever have. It is continuously reviewed by the world’s best engineers and cryptographers, trying to find the bad assumptions their predecessors made, strengthening it in response to identified weaknesses, and modernizing it to use the strongest forms of cryptography available.

This continuous investment in this foundational technology gives me faith.

Today another attack on TLS was made public. “Lucky Thirteen” is a derivative of the work of the French cryptographer Serge Vaudenay (padding oracles against CBC-based ciphers), though unlike Vaudenay’s attack, Lucky Thirteen uses a timing attack previously believed to be impractical. A successful application of this attack enables an attacker to decrypt your SSL communications.

Unlike other recent attacks, such as BEAST,  Lucky Thirteen requires a server-side fix. This means that complete and effective protection against this attack will require all webservers to be updated or patched.

That said, it is possible to mitigate the attack by removing CBC cipher suites, since the attack is against SSL/TLS’s use of CBC. But what to use in its place? The consensus of security researchers is to adopt suites based on AES-GCM, and while I agree, this has one problem – the large population of clients that do not yet support it.

This recommendation is complicated slightly by the BEAST attack from last year, the resolution of which required a client-side fix that has, in all likelihood, not yet been deployed ubiquitously. As such, I still recommend prioritizing the older and less secure RC4-based suites above AES-GCM, since doing so addresses both issues.
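
On Apache, for example, that prioritization looks roughly like the following; this is a sketch of the guidance described above rather than a drop-in configuration:

SSLHonorCipherOrder on
SSLCipherSuite RC4-SHA:HIGH:!aNULL:!MD5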

But should you be worried? It depends. If you are using TLS (and not its little brother DTLS) I would say your best bet is to walk calmly to the nearest exit and use this as an excuse to ensure you are following industry best practices when deploying SSL; if you’re not, this attack is the least of your worries. Specifically, I would recommend visiting the SSL Configuration Checker and making the critical (red) and important (yellow) configuration changes it suggests.

I would also encourage you to deploy HTTP Strict Transport Security on your site, since the attack this mitigates (SSL stripping) is much easier for an attacker to execute.
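
Enabling it is just a matter of returning the header over HTTPS; a typical value (the max-age here is only an example) looks like this:

Strict-Transport-Security: max-age=31536000; includeSubDomains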

The good news is that if you were already following the advice of the SSL Configuration Checker you were prioritizing RC4 over other ciphers and most sessions to your server were resistant to this attack. This doesn’t mean you should not be deploying the patch for this issue; you just don’t need to do so in a crazed rush.

So are there any lessons we can take away from this? Of course there are. As a server operator, I would say this finding underscores the importance of regularly reviewing your server configuration to ensure that it follows industry best practices, and that you are always operating the most recent and stable release of your web server.

If you want a more technical walk-through of this attack, I highly recommend this post by Matthew Green on TLS timing oracles or this one by Adam Langley.

SSL 3.0 Usage in the Wild

Recently I had an opportunity to look at some logs that showed the cipher suites and protocol versions being negotiated for a large cross-section of websites.

I have always wanted to look at data like this, and as such have instrumented my own sites to collect it, but let’s face it: some uber-geek blog or security product company website just isn’t going to have representative traffic for the internet at large.

One of the easiest and most useful things to glean from this data is that the impact of disabling SSL 3.0 is actually quite small.

In this sampling, 2.48% of all SSL/TLS sessions were done with SSL 3.0. If we look at (and believe) the User Agents that negotiated these sessions, we see that 74.98% of them were Windows clients; the next biggest chunk was Gecko at 16.39%.

Browser              %
Internet Explorer    74.98%
Gecko                16.39%
Apple                 4.12%
Playstation           2.85%
Chrome                1.36%
Other                 0.30%
Total               100.00%

 

Of these Windows clients, 45.45% were Windows 2000 or XP, but only 6.67% were running versions of Internet Explorer that did not support TLS 1.0; this basically boils down to the IE versions before version 7, as that was the first to enable TLS by default. So why did we see the remaining 68.31% of that 2.48% negotiating SSL when they support TLS?

There are a few possible explanations:

  1. Some TLS implementations will fall back to SSL in the event of a failure; one common example of a failure would be an intermittent TCP connection problem. Basically, the client had a problem reaching the server, thought it might be related to TLS, and so tried again with SSL. In this case it’s likely that a retry with TLS would have succeeded as well. It also seems likely that the user did not get a working experience in any case; the assumption here is that the TCP problems they were experiencing were not a one-time thing.
  2. Some old TLS implementations had problems with TLS extensions; as a result, some newer TLS implementations added logic to fall back to SSL when they encountered this extension intolerance. Again, retrying with TLS (without extensions) would likely also have worked.
  3. Some enterprises may have used group policy to disable the use of TLS due to the TLS extension intolerance problems (see #2).
  4. Some clients are lying; they may be crawlers, bots and other such automated agents looking to profile these websites.

So what can we do with this data?

Well, for one we can understand what interoperability implications we may encounter by disabling SSL 3.0 on our servers; on the surface the answer is that up to 2.48% of clients will not be able to reach our servers.

The real answer is that the figure is likely much smaller, probably half that if not less.

OK, so we understand the interoperability impact but why should I care? Well there are a few reasons:

  1. FIPS 140-2 compliance requires disabling SSL 3 ciphers, and by disabling SSL 3 you do just that.
  2. The browsers that only support this decade-old protocol are nearly as old and have a litany of issues of their own.
  3. TLS has a number of security, performance and deployability enhancing features, such as stronger cipher suites, Session Tickets and SNI, that you will benefit from.

Another thing you should ask yourself is whether you designed your site for these old browsers. If not, by leaving SSL 3 enabled you really are not getting much, if any, benefit, since those users who require it would likely not be able to use your site effectively anyway.

When we consider this data, I believe the natural conclusion is that disabling SSL 3.0 is the right thing to do.
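
On Apache, for example, doing so is a one-line change; a sketch, to be adjusted for your existing configuration and server software:

SSLProtocol All -SSLv2 -SSLv3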

Ryan

Understanding Windows Automatic Root Update

Windows has a feature called Automatic Root Update: when CryptoAPI does a chain build and exhausts the locally installed root certificates, it downloads (if it has not already done so) a list of certificates it should trust.

This list contains attributes about those certificates (hashes of their subject name and keys, what Microsoft believes it should be trusted for, etc.).

If it finds the certificate it needs in that list it downloads it and installs it.

So how does this technically work?

There is a pre-configured location where Windows looks for this Certificate Trust List (CTL): http://www.download.windowsupdate.com/msdownload/update/v3/static/trustedr/en/authrootstl.cab.

NOTE: Technically they do use different language codes in the URLs based on the client locale, but to the best of my knowledge they all map to the same files today.

This list is downloaded and de-compressed; you can do the same thing on Windows (with curl) like this:

mkdir authroot

cd authroot

curl http://www.download.windowsupdate.com/msdownload/update/v3/static/trustedr/en/authrootstl.cab>authrootstl.cab

expand authrootstl.cab .\authroot.stl

You can open authroot.stl with explorer and look at some of its contents but for the purpose of our exercise that won’t be necessary.

This file is a signed PKCS #7 file; Windows expects it to be signed with key material they own (and protect dearly).

It contains a Microsoft-specific ContentInfo structure for CTLs and a few special attributes. WinCrypt.h has the OIDs for these special attributes defined in it, but for brevity’s sake here are the ones you will need for this file:

1.3.6.1.4.1.311.10.1 OID_CTL

1.3.6.1.4.1.311.10.3.9 OID_ROOT_LIST_SIGNER

1.3.6.1.4.1.311.10.11.9 OID_CERT_PROP_ID_METAEKUS

1.3.6.1.4.1.311.10.11.11 CERT_FRIENDLY_NAME_PROP_ID

1.3.6.1.4.1.311.10.11.20 OID_CERT_KEY_IDENTIFIER_PROP_ID

1.3.6.1.4.1.311.10.11.29 OID_CERT_SUBJECT_NAME_MD5_HASH_PROP_ID

1.3.6.1.4.1.311.10.11.83 CERT_ROOT_PROGRAM_CERT_POLICIES_PROP_ID

1.3.6.1.4.1.311.10.11.98 OID_CERT_PROP_ID_PREFIX_98

1.3.6.1.4.1.311.10.11.105 OID_CERT_PROP_ID_PREFIX_105

1.3.6.1.4.1.311.20.2 szOID_ENROLL_CERTTYPE_EXTENSION

1.3.6.1.4.1.311.21.1 szOID_CERTSRV_CA_VERSION

1.3.6.1.4.1.311.60.1.1 OID_ROOT_PROGRAM_FLAGS_BITSTRING

Copy this list of OIDs and constants into a file called authroot.oids and put it where we extracted the STL file.

We can now use the following OpenSSL command to inspect the contents in more detail:

openssl asn1parse -oid authroot.oids -in authroot.stl -inform DER

There is lots of stuff in here, but for this exercise we will just download the certificates referenced in this list.

To do that we need to understand which of the attributes are used to construct the URL we will use to download the actual certificate.

The list values we want are just after OID_ROOT_LIST_SIGNER; the first one is “CDD4EEAE6000AC7F40C3802C171E30148030C072”, and its entry will look like this:

SEQUENCE

  128:d=8  hl=2 l=  20 prim:         OCTET STRING      [HEX DUMP]:CDD4EEAE6000AC7F40C3802C171E30148030C072

  150:d=8  hl=3 l= 246 cons:         SET

  153:d=9  hl=2 l=  30 cons:          SEQUENCE

  155:d=10 hl=2 l=  10 prim:           OBJECT            :OID_CERT_PROP_ID_PREFIX_105

  167:d=10 hl=2 l=  16 cons:           SET

  169:d=11 hl=2 l=  14 prim:            OCTET STRING      [HEX DUMP]:300C060A2B0601040182373C0302

  185:d=9  hl=2 l=  32 cons:          SEQUENCE

  187:d=10 hl=2 l=  10 prim:           OBJECT            :OID_CERT_SUBJECT_NAME_MD5_HASH_PROP_ID

  199:d=10 hl=2 l=  18 cons:           SET

  201:d=11 hl=2 l=  16 prim:            OCTET STRING      [HEX DUMP]:F0C402F0404EA9ADBF25A03DDF2CA6FA

  219:d=9  hl=2 l=  36 cons:          SEQUENCE

  221:d=10 hl=2 l=  10 prim:           OBJECT            :OID_CERT_KEY_IDENTIFIER_PROP_ID

  233:d=10 hl=2 l=  22 cons:           SET

  235:d=11 hl=2 l=  20 prim:            OCTET STRING      [HEX DUMP]:0EAC826040562797E52513FC2AE10A539559E4A4

  257:d=9  hl=2 l=  48 cons:          SEQUENCE

  259:d=10 hl=2 l=  10 prim:           OBJECT            :OID_CERT_PROP_ID_PREFIX_98

  271:d=10 hl=2 l=  34 cons:           SET

  273:d=11 hl=2 l=  32 prim:            OCTET STRING      [HEX DUMP]:885DE64C340E3EA70658F01E1145F957FCDA27AABEEA1AB9FAA9FDB0102D4077

  307:d=9  hl=2 l=  90 cons:          SEQUENCE

  309:d=10 hl=2 l=  10 prim:           OBJECT            :CERT_FRIENDLY_NAME_PROP_ID

  321:d=10 hl=2 l=  76 cons:           SET

  323:d=11 hl=2 l=  74 prim:            OCTET STRING      [HEX DUMP]:4D006900630072006F0073006F0066007400200052006F006F007400200043006500720074006900660069006300610074006500200041007500740068006F0072006900740079000000

We can download this certificate with the following command:

curl http://www.download.windowsupdate.com/msdownload/update/v3/static/trustedr/en/CDD4EEAE6000AC7F40C3802C171E30148030C072.crt> CDD4EEAE6000AC7F40C3802C171E30148030C072.crt

If we look at the contents of the certificate we can see this is the 4096-bit “Microsoft Root Certificate Authority”.

The next in the list is 245C97DF7514E7CF2DF8BE72AE957B9E04741E85, and then the next is 18F7C1FCC3090203FD5BAA2F861A754976C8DD25 and so forth.
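
If you just want a handful of them, a simple loop in a batch file will do the trick; here is a minimal sketch using the three thumbprints mentioned above (the %%t syntax is for .bat files, use %t at an interactive prompt):

for %%t in (CDD4EEAE6000AC7F40C3802C171E30148030C072 245C97DF7514E7CF2DF8BE72AE957B9E04741E85 18F7C1FCC3090203FD5BAA2F861A754976C8DD25) do (
  curl http://www.download.windowsupdate.com/msdownload/update/v3/static/trustedr/en/%%t.crt > %%t.crt
)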

We could go through and parse the ASN.1 to get these values, iterate over them, and download them all if we wanted. Of course, if we wanted to get all of the root certificates we might as well just download the most recent root update package here: http://www.microsoft.com/en-us/download/details.aspx?id=29434

It contains the same information, just in a self-contained format (this is what’s used by down-level clients that do not have Automatic Root Update).

Anyhow hopefully this will be useful for someone,

Ryan

How Facebook can avoid losing $100M in revenue when they switch to always-on SSL

Recently Facebook announced that they will be moving to Always-On SSL, and I for one am thrilled to see this happen, especially given how much personal data can be gleaned from observing a Facebook session.

When they announced this change they mentioned that users may experience a small performance tax due to the addition of SSL. This is unfortunately true, but when a server is well configured that tax should be minimal.

This performance tax is particularly interesting when you put it in the context of revenue, especially when you consider that Amazon found that every 100ms of latency cost them 1% of sales. What if the same holds true for Facebook? Their last quarter’s revenue was 1.23 billion dollars, so I wanted to take a few minutes and look at their SSL configuration to see what this tax might cost them.

First I started with WebPageTest; this is a great resource for the server administrator to see where time is spent when viewing a web page.

The way this site works is it downloads the content twice, using real instances of browsers; the first time should always be slower than the second, since the second gets to take advantage of caching and session re-use.

The Details tab on this site gives us a breakdown of where the time is spent (for the first-use experience). There’s lots of good information here, but for this exercise we are interested only in the “SSL Negotiation” time.

Since Facebook requires authentication to see the “full experience” I just tested the log-on page; it should accurately reflect the SSL performance “tax” for the whole site.

I ran the test four times, each time summing the total number of milliseconds spent in “SSL Negotiation”; the average of these runs was 4.111 seconds (4,111 milliseconds).

That’s quite a bit, but can we reduce it? To find out we need to look at their SSL configuration; when we do, we see a few things they could do to improve matters.

Let’s explore one of them in more detail: the status check the browser does is called an OCSP request. Over the last 24 hours their current CA had an average worldwide OCSP response time of 287 ms; if they used OCSP Stapling the browser would need to do only one OCSP request, and even with that optimization that request could be up to 7% of the SSL performance tax.
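
You can check whether a given server staples OCSP responses with openssl s_client; a quick sketch, using the obvious host as the example:

openssl s_client -connect www.facebook.com:443 -status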

GlobalSign’s average worldwide OCSP response time for the same period was 68 milliseconds, which in this case could have saved 219 ms. To put that in context, Facebook gets 1.6 billion visits each week. If you do the math (219 * 1.6 billion / 1000 / 60 / 24), that’s 12.7 million days’ worth of time saved every year. Or put another way, it’s a lifetime’s worth of time people would have otherwise spent waiting for Facebook pages to load, saved every two and a half hours!

If we consider that in the context of the Amazon figure, simply changing their CA could be worth nearly one hundred million dollars a year.

Before you start to pick apart these numbers, let me say this is intended to be illustrative of how performance can affect revenue, not a scientific exercise, so to save you the trouble, some issues with these assumptions include:

  • Facebook’s business is different than Amazon’s, and the impact on their business will be different.
  • I only did four samples of the SSL negotiation, and a scientific measurement would need more.
  • The performance measurement I used for OCSP was an average and not what was actually experienced in the sessions I tested; it would be awesome if WebPageTest could include a more granular breakdown of the SSL negotiation.

With that said, clearly even without switching CAs there are a few things Facebook can still do to improve how they are deploying SSL.

Regardless, I am still thrilled Facebook has decided to go down this route; the change to deploy Always-On SSL will go a long way to help the visitors to their sites.

Ryan

Making a Windows smartcard login certificate with OpenSSL.

I use OpenSSL for testing certificate-related stuff all the time, and while using its test clients as an administrative tool can require contortions, sometimes it’s a very useful thing to have in my toolbox.

Today I needed to throw together a certificate for Windows smartcard login; a valid Windows Smart Card Login certificate has the following attributes:

  1. Is issued by a CA that is trusted as an Enterprise CA
  2. Is issued by a CA that has the “Smartcard Logon” EKU (1.3.6.1.4.1.311.20.2.2)
  3. Has the “Smartcard Logon” EKU
  4. Has the “Digital Signature” “Key Usage”
  5. Has the principal name of the subscriber in the SubjectAltName extension as a UPN (1.3.6.1.4.1.311.20.2.3)

With that background, how does one do this in OpenSSL? Let’s focus on the last three (3, 4, 5), as they are about the subscriber certificate.

To create this certificate you would create an OpenSSL section that looks something like this:

[ v3_logon_cert ]
# Typical end-user certificate profile

keyUsage = critical, nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = critical, clientAuth, emailProtection, msSmartcardLogin
basicConstraints = critical, CA:FALSE

subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer

authorityInfoAccess = @customerca_aia

subjectAltName = otherName:msUPN;UTF8:[email protected], email:[email protected]

certificatePolicies = ia5org, @rootca_polsect

There are a few other “reference” sections; you can find the INF file I used these additions with in my script for testing Qualified Subordination.
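
To actually issue a certificate with this profile you reference the section when signing; here is a minimal sketch, where the file names, key size and CA files are placeholders rather than the ones from my script:

openssl req -new -nodes -newkey rsa:2048 -keyout user.key -out user.csr -subj "/CN=Test User"
openssl x509 -req -in user.csr -CA ca.cer -CAkey ca.key -CAcreateserial -days 365 -extfile openssl.cfg -extensions v3_logon_cert -out user.cer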

Hope this helps you too,

Ryan