Category Archives: Programming

Generating signed messages using CMS and PKI.js

One of the most common signature formats on the web is CMS SignedData; it is the signature format used in PDF files, CAdES, S/MIME and several other digital signature solutions.

As a signature format it has a few notable features:

  1. It supports multiple signers.
  2. It can include meta-data that is signed along with the data being signed.
  3. It can include meta-data that is outside the scope of the signature.
  4. It can sign data contained within the signature itself or data referenced by it.

These traits mean you can do some interesting things, such as implementing counter-signing, which in turn enables notarization scenarios.

Utilizing PKI.js you can now create and verify this signature format; below is an example of how creating one of these messages looks using this library:

// #region Put a static values 
var sample_data = new ArrayBuffer(5); // buffer holding the content to be signed
var data_view = new Uint8Array(sample_data);
data_view[0] = 0x00;
data_view[1] = 0x01;
data_view[2] = 0x02;
data_view[3] = 0x03;
data_view[4] = 0x04;

var cms_signed_simpl = new org.pkijs.simpl.CMS_SIGNED_DATA({
    digestAlgorithms: [
        new org.pkijs.simpl.ALGORITHM_IDENTIFIER({ algorithm_id: "1.3.14.3.2.26" }) // SHA-1
    ],
    encapContentInfo: new org.pkijs.simpl.cms.EncapsulatedContentInfo({
        eContentType: "1.2.840.113549.1.7.1", // "data" content type
        eContent: new org.pkijs.asn1.OCTETSTRING({ value_hex: sample_data })
    }),
    signerInfos: [
        new org.pkijs.simpl.CMS_SIGNER_INFO({
            sid: new org.pkijs.simpl.cms.IssuerAndSerialNumber({
                issuer: cert_simpl.issuer,
                serialNumber: cert_simpl.serialNumber
            }),
            digestAlgorithm: new org.pkijs.simpl.ALGORITHM_IDENTIFIER({ algorithm_id: "1.3.14.3.2.26" }), // SHA-1
            signatureAlgorithm: new org.pkijs.simpl.ALGORITHM_IDENTIFIER({ algorithm_id: "1.2.840.113549.1.1.5" }), // RSA + SHA-1
        })
    ],
    certificates: [cert_simpl]
});
// #endregion 

return cms_signed_simpl.sign(privateKey, 0);

In this sample you can see we are putting our content to be signed within the SignedData message and then signing it with RSA and SHA-1; this is exactly what is needed to implement what is called opaque signed email in S/MIME.
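
Verification works much the same way in reverse. The sketch below is illustrative rather than authoritative: it assumes you have the BER/DER encoding of a signed message in an ArrayBuffer named buffer, unpacks it with the library's CMS_CONTENT_INFO wrapper, and then verifies the first signer (the exact shape of the verify() arguments may differ between releases of the library):

// #region Parse an existing CMS SignedData message ("buffer" is assumed to hold the BER/DER data)
var asn1 = org.pkijs.fromBER(buffer);
var cms_content_simpl = new org.pkijs.simpl.CMS_CONTENT_INFO({ schema: asn1.result });
var cms_signed_verify = new org.pkijs.simpl.CMS_SIGNED_DATA({ schema: cms_content_simpl.content });
// #endregion

// #region Verify the signature made by the first signer (index 0)
cms_signed_verify.verify({ signer: 0 }).then(
    function(result)
    {
        alert("Verification result: " + result);
    },
    function(error)
    {
        alert("Error during verification: " + error);
    }
);
// #endregion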

 

Frankencerts, new extensions and PKI.js

One of the things we wanted you to be able to use the PKI.js libraries for is working with X.509 certificates, including those with new extensions (for example, testing for OCSP Must Staple, or testing other certificate processing libraries as was done in the Frankencert paper).

Here is an example of what that might look like:

function create_cert()
{
    // #region Initial variables 
    var sequence = Promise.resolve();

    var cert_simpl = new org.pkijs.simpl.CERT();

    var publicKey;
    var privateKey;
    // #endregion 

    // #region Get a "crypto" extension 
    var crypto = org.pkijs.getCrypto();
    if(typeof crypto == "undefined")
    {
        alert("No WebCrypto extension found");

        return;
    }
    // #endregion 

    // #region Put a static values 
    cert_simpl.serialNumber = new org.pkijs.asn1.INTEGER({ value: 1 });
    cert_simpl.issuer.types_and_values.push(new org.pkijs.simpl.ATTR_TYPE_AND_VALUE({
        type: "2.222.333",
        value: new org.pkijs.asn1.PRINTABLESTRING({ value: "RU" })
    }));
    cert_simpl.issuer.types_and_values.push(new org.pkijs.simpl.ATTR_TYPE_AND_VALUE({
        type: "2.222.444",
        value: new org.pkijs.asn1.PRINTABLESTRING({ value: "Test" })
    }));
    cert_simpl.subject.types_and_values.push(new org.pkijs.simpl.ATTR_TYPE_AND_VALUE({
        type: "2.222.333",
        value: new org.pkijs.asn1.PRINTABLESTRING({ value: "RU" })
    }));
    cert_simpl.subject.types_and_values.push(new org.pkijs.simpl.ATTR_TYPE_AND_VALUE({
        type: "2.222.444",
        value: new org.pkijs.asn1.PRINTABLESTRING({ value: "Test" })
    }));
    cert_simpl.notBefore.value = new Date();
    cert_simpl.notAfter.value = new Date(2016, 01, 01);

    cert_simpl.extensions = new Array(); // Extensions are not a part of certificate by default, it's an optional array

    // #region "BasicConstraints" extension
    var basic_constr = new org.pkijs.simpl.x509.BasicConstraints({
        cA: true,
        pathLenConstraint: 3
    });

    cert_simpl.extensions.push(new org.pkijs.simpl.EXTENSION({
        extnID: "2.5.29.19",
        critical: false,
        extnValue: basic_constr.toSchema().toBER(false),
        parsedValue: basic_constr // Parsed value for well-known extensions
    }));
    // #endregion 

    // #region "KeyUsage" extension 
    var bit_array = new ArrayBuffer(1);
    var bit_view = new Uint8Array(bit_array);

    bit_view[0] = bit_view[0] | 0x02; // Key usage "cRLSign" flag
    bit_view[0] = bit_view[0] | 0x04; // Key usage "keyCertSign" flag

    var key_usage = new org.pkijs.asn1.BITSTRING({ value_hex: bit_array });

    cert_simpl.extensions.push(new org.pkijs.simpl.EXTENSION({
        extnID: "2.5.29.15",
        critical: false,
        extnValue: key_usage.toBER(false),
        parsedValue: key_usage // Parsed value for well-known extensions
    }));
    // #endregion 

    cert_simpl.signatureAlgorithm.algorithm_id = "1.2.840.113549.1.1.5"; // RSA + SHA-1
    cert_simpl.signature.algorithm_id = cert_simpl.signatureAlgorithm.algorithm_id; // Must be the same value
    // #endregion 

    // #region Create a new key pair 
    sequence = sequence.then(
        function()
        {
            return crypto.generateKey({ name: "RSASSA-PKCS1-v1_5", modulusLength: 2048, publicExponent: new Uint8Array([0x01, 0x00, 0x01]), hash: { name: "SHA-1" } }, true, ["sign", "verify"]);
        }
    );
    // #endregion 

    // #region Store new key in an interim variables
    sequence = sequence.then(
        function(keyPair)
        {
            publicKey = keyPair.publicKey;
            privateKey = keyPair.privateKey;
        },
        function(error)
        {
            alert("Error during key generation: " + error);
        }
    );
    // #endregion 

    // #region Exporting public key into "subjectPublicKeyInfo"  
    sequence = sequence.then(
        function()
        {
            return cert_simpl.subjectPublicKeyInfo.importKey(publicKey);
         }
    );
    // #endregion 

    // #region Signing final certificate 
    sequence = sequence.then(
       function()
       {
           return cert_simpl.sign(privateKey);
       },
       function(error)
       {
           alert("Error during exporting public key: " + error);
       }
    );
    // #endregion 

    sequence = sequence.then(
        function()
        {
            alert("Good result");
        },
        function(error)
        {
            alert("Error during signing: " + error);
        }
    );

    sequence.then(
        function()
        {
            return cert_simpl.verify();
        }
    ).then(
        function(result)
        {
            alert("Verification passed: " + result);
        },
        function(error)
        {
           alert("Verification failed: " + eror);
        }
    );
}

As you can see, the library is designed in such a way that you are not limited to creating a few static, pre-conceived layouts of these structures; you can fairly easily construct any type of certificate (or any of the other supported message types).

At a later date (if it makes sense to do so) we may also decide to add a simple layer on top of this that abstracts away the need to understand encoding concepts as well.

This layered approach ensures the library can be used to create real applications without hacking up the underlying APIs, enabling developers to avoid having to understand low-level ASN.1 formats in detail.

Potential projects for PKI.JS

This is a list of potential projects that would benefit from PKI.js; can you think of any others that would be interesting?

  1. Adding signing, time-stamping and verifying support to pdf.js.
  2. Creating an smime.js on top of email.js that provides signing, verifying, encryption and decryption of S/MIME.
  3. Creating a CAdES library for web pages.
  4. Adding signature support to web document management solutions.
  5. Adding encrypted mail support to webmail offerings.
  6. Building a usable and modern certificate-based authentication solution using web markup for credential selection.
  7. Creating a rich user experience for short-lived certificates that provide temporary access via SSH and TLS.

With some changes the libraries could be used in server side projects:

  1. Creating an OCSP responder based on node.js.
  2. Creating a simple CA on node.js

WebCrypto and PKI

Like it or not here it comes — within the next few months WebCrypto will be supported in various degrees across all mainstream browsers. There are plenty of posts out there talking about the security concerns of performing cryptography in the browser, so I won't go into those here.

What I wanted to talk about was: now that it’s here (mostly), what can we do with it? There will be those who say you do harm by making any claims about the assurances a web application makes relating to identity and confidentiality when WebCrypto is involved. The reality though is it’s happening, and we should think about how we enable applications to use it responsibly.

This is why I started work on PKI.JS and ASN1.JS with Yuri Strozhevsky. Now that browsers have these basic crypto primitives available to them, it is possible to build web applications that are interoperable with the security services used off the web; it is also possible to build new services on the web that simply were not possible before.

Now there have been libraries that offered ASN.1 and PKI related capabilities (for example jsRSAsign, Forge and Lapo-asn1js), but none of these were complete and none were built around WebCrypto as the source of crypto.

What Yuri and I set out to do is create a set of libraries that addressed these gaps and used public test suites (when available) to ensure conformance with the associated standards, including:

  1. X.509 and CRLs – RFC 5280
  2. CMS / PKCS 7 (Signed & EnvelopedData) – RFC 5652
  3. PKCS10 – RFC 2986
  4. PKCS8 – RFC 5208
  5. OCSP – RFC 6960
  6. Time-stamping – RFC 3161

For example, for ASN.1 Yuri used his freely available test suite, and for path building he tested against the PKITS test suite.

This of course does not mean the libraries are 100% compliant or defect free; in fact I can promise you they are not. But where test suites were clearly available we tried to utilize them, so we would end up with a highly stable and standards-compliant library.

At this point the libraries work in all modern browsers, but signing, verifying, encrypting and decrypting are only supported in the Chrome dev-channel (in theory they should work on Firefox nightlies as well). Unfortunately the profile and version of WebCrypto supported by Internet Explorer is outdated enough at this point that these features do not work there at all yet.
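
To give a sense of what supported "in various degrees" means in practice, here is a rough feature-detection sketch. The prefixed locations below are my assumptions about where individual browser builds of the time exposed their implementations; in the PKI.js samples themselves this is handled by the org.pkijs.getCrypto() helper, as in the certificate example above.

// A rough sketch of WebCrypto feature detection; the prefixed names below are
// assumptions about specific browser builds of the time, not a definitive list.
function getWebCrypto()
{
    // Standard location (Chrome dev-channel, Firefox nightlies)
    if(window.crypto && window.crypto.subtle)
        return window.crypto.subtle;

    // Safari exposed a prefixed implementation
    if(window.crypto && window.crypto.webkitSubtle)
        return window.crypto.webkitSubtle;

    // Internet Explorer 11 exposed an older draft of the API
    if(window.msCrypto && window.msCrypto.subtle)
        return window.msCrypto.subtle;

    return null;
}

var subtle = getWebCrypto();
if(subtle === null)
{
    alert("No WebCrypto implementation found");
}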

These libraries have not yet been published to their public repositories, but I expect them to be within the week under a BSD-style license. To give some perspective on the size of this project, I expect it to be just under 20,000 lines of code when released. It’s my hope that other people take this and build upon them so that the Internet has a browser-friendly way to interact with these technologies.

NOTE: While I hate disclaimers like this, these libraries have not undergone any significant review; please do not consider them production ready, more work is needed before that’s the case.

NOTE: It’s also worth noting that until at least two browsers release their WebCrypto implementations as final products, these libraries may stop working or may not work uniformly across browsers. For example, at this time the nightly Chrome builds do not support key export, which prevents implementation of the key storage structures.

P.S. I actually misspoke earlier; we did not end up including PKCS #12 in this version, but most of the base structures are supported.

The origin of the Windows Biometric Framework

One of the projects I was responsible for when I was at Microsoft was the design and delivery of the Windows Biometric Framework. This was actually one of my favorite projects during my entire tenure at the company.

What most people don’t know is that the feature was a very late addition to Windows 7; an analysis of the crash data sent by Windows clients showed that, right after video cards, these fingerprint readers were one of the largest sources of the “Blue Screen of Death”.

Something else most people don’t know is that this was not the first attempt to provide a platform for biometrics in Windows; the prior attempts failed for various reasons, but they did not have the same data we had – we knew these devices, which were barely used, were creating a bad experience for everyone simply because the OEM decided to include these sensors as a differentiator.

To fix this a stellar team of senior engineers was re-assigned from other projects and began work almost halfway through the Windows 7 release cycle. Our goal was to make it possible for these devices (and in the future other forms of biometrics) to work well with the minimal amount of vendor code.

You see, these devices need to respond really quickly, especially swipe sensors, because they take a picture of your finger as you drag it across the sensor and that data needs to get to a number of components very quickly to make a workable user experience. For this reason the sensor manufacturers all thought they needed to be in the kernel, and since most of these driver developers had only been doing driver development for a few years they did a really bad job of it, which resulted in the blue screens.

The value proposition of this project for the vendors was that ultimately they wouldn’t need to write so much software to get their sensors to work, for some this was seen as a positive – especially the new entrants since they didn’t have as much software. For those vendors with complete software suites, they were less thrilled in that a driver framework of any kind places architectural restrictions on the way you build things and since they didn’t get the support calls they didn’t really care that much about the crashes their software caused.

Despite this, nearly every biometric vendor began work on new drivers and client software based on this platform, which was being developed at almost the same time they were writing their components. Every vendor who was in the first round of development made significant time investments to make the platform a reality.

The software team at Microsoft also executed amazingly; I am confident that I will always think of this group of individuals as one of the best I worked with at the company.

In the end the platform made it out the door with more features than we had originally hoped, with all the major vendors on board, and by the time it was generally available to OEMs people were ready to support real customers.

The late start did mean OEMs had to work very hard to get the new device drivers into their initial images; they had very little runway to do so, but several did, and those that did not at first got their revised images out shortly after.

This new model delivered kernel-mode and user-mode driver models that significantly reduced the amount of software needed to get one of these devices working in Windows, and we delivered a test suite that helped ensure the devices and their drivers worked reliably. We were even able to deliver a framework for use, management, group policy, and a hook for enrollment.

The combination of the above meant that we essentially eliminated the historic problems these devices caused, improved the security of how they worked when integrating with Windows, made them more supportable, and simply made them work well.

The team went on to work on some very cool additions; many of the team changed but some remained, and the work that we began after the “freeze” in the Windows 7 release cycle ultimately found its way out in Windows 8/8.1 – I hope the rest of the things this team was doing find their way out someday soon also.

Windows Certificate Enrollment Protocols

Windows has two sets of certificate enrollment protocols; these protocols are used to enable a client to acquire and maintain certificates with little to no manual interaction.

The first implementation of certificate enrollment in Windows was introduced with Windows XP and was based on DCOM; this DCOM dependency limited where you could deploy it. The solution could never practically or securely be used over the internet, and in segmented corporate networks it in essence required each island of trust to deploy its own certificate authority, which was both costly and impractical.

The second implementation was introduced in Windows 7; it is based on SOAP web services and has two core APIs: one for determining which certificates a client should enroll for and another for acquiring those certificates. This newer approach addresses the security and deployability concerns of its predecessor and enables the deployment of a CA as a pure Software as a Service (SaaS) solution.

Normally I don’t write about GlobalSign products here, but when we decided how we would build our own solution in this space we concluded the security and deployment limitations of the DCOM approach were too limiting to enable our goals of reducing TCO and enabling broader use of certificates, so we went with the more flexible and secure approach.

Enterprise logon with challenge response (OTP) tokens

So from time to time I am asked how to add support for a new authentication method to Windows logon; the answer to that question is buried in what authentication methods Windows natively supports.

For the purpose of this blog post let’s scope that question to the Windows Kerberos implementation, as this is the “modern” authentication platform in Windows (in other words Kerberos, as they do not want you to be using NTLM any further); additionally, it supports most authentication methods.

So what are those methods? Essentially there are two: passwords and public/private keys. What about the Windows native implementation of biometrics and picture passwords? These, like nearly every other modality of authentication in Windows, are simply a layer on top of the native support for passwords that is built into Windows.

I should note I was the Lead Program Manager for the Windows Biometric Framework and I do think it is a good solution for what it was designed for.

Let me explain; let’s use biometrics as an example. In this solution there is a service that works with the biometric sensors to perform the biometric match. When a match occurs this service releases a clear-text copy of your password, which in turn is stuffed into the Windows logon path just as if the user had entered the password by hand.

This means that somewhere in the file system there is a clear-text copy of the password stored in a reversibly encrypted form. Specifically, the encryption in this case is performed using a Windows subsystem called the Data Protection API (DPAPI). DPAPI is used by applications when they need to keep a secret — secret. You see, this is a Catch-22 scenario: unless a password (or key) is entered there is no secret to encrypt with, which means ultimately there is a secret that is stored in the clear. This means if you can get at the root secret you can get at the clear-text values encrypted with DPAPI.

There are ways to mitigate some of the associated risks, one example being the use of BitLocker and a TPM protector, but unfortunately this is not broadly deployed and doesn’t address the full risk profile; this is why Microsoft positions the biometric feature in Windows as one designed for convenience. That is not to say it cannot be used in a secure way, or to suggest it cannot be used in an enterprise, but unless thought is given to the threat model and the right mitigations are deployed it actually weakens your security.

So what about those enterprise OTP and challenge-response solutions that we keep hearing about, surely they must be different, right? Unfortunately, no, they are not. In fact they are probably worse, because unlike the Windows biometric solution they create a central repository of all clear-text passwords.

You see, for them to work they introduce a client (once called a GINA, now called a Logon Provider) that collects the challenge, and likely a service component or SSPI provider that implements a protocol that interacts with their product’s service, which then validates the challenge and returns a clear-text password that they then stuff in on the user’s behalf.

No matter how good the design and implementation of this system is, they are reliant on that singular repository of clear-text / reversibly encrypted passwords. Things get even worse when you look at how these systems have to deal with offline and local logons, you know, the scenario where a user takes their laptop on a plane and still needs to do work? For this to work they need to store the clear-text password in the clear on the local machine, just like the biometrics solution.

What about the public/private key approach? There are essentially two variants of this in Windows: PKINIT and PKU2U. The most common example is the use of a smart card; this is what is used by many Fortune 500s and governments to secure their logons. It is also possible to add these other modalities into Windows by emulating this approach with a software virtual smartcard, but they too fall prey to the same attack vectors, namely what key do you use to keep the private key protected (the answer is you use DPAPI).

So does this mean OTP is a bad solution? No, in fact it’s much better than passwords if a system was designed to use it, and it makes a great additional factor; the thing is, Windows was not.

Deploying forward secrecy on RedHat, Centos or Fedora based systems

If you want to deploy perfect forward secrecy and you use a RedHat, Centos or Fedora based system, you likely won’t be able to do so without building your own OpenSSL. This is because by default the OpenSSL packages for these systems do not include ECC or ECDH, and when web servers like Apache and Nginx are built against libraries that do not support them they obviously omit support for the algorithms.

Hopefully these distributions and the community repositories that support them will start including support for these algorithms by default soon, but in the meantime you can add support by building your own.

You can check whether your OpenSSL supports these algorithms easily enough; just run this command:

> openssl ciphers

If you see ciphers like “ECDHE-RSA-AES256-GCM-SHA384” then you have a version of OpenSSL that was built with ECC and ECDHE support enabled, which is required if you want forward secrecy today. If you don’t see such a suite don’t worry; just check out this blog post and it will step you through how to build a new OpenSSL.

Unfortunately building a software package is slightly different on each Linux distribution; I will share what worked for me on Centos 6.3.

1. We don’t want to break your current Nginx deployment, so let’s start by finding out what options your Nginx was built with. You do this with the following command:

> nginx -V

This will return something like this:

nginx version: nginx/1.5.1

built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC)

TLS SNI support enabled

configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic'

To ensure we don’t omit something you’re using, we will use the same options when we build our Nginx.

 

2. Next let’s make a backup of everything you have in your current Nginx installation just in case something doesn’t go right:

root> cp /usr/sbin/nginx /usr/sbin/nginx.orig

root> cp -r /etc/nginx /etc/nginx.orig

3. Now let’s make sure we get our dependencies installed, download our source and extract it so we can do our build.

root> cd /tmp

root> yum install pcre

root> yum install pcre-devel

root> yum install zlib

root> yum install zlib-devel

root> wget http://nginx.org/download/nginx-1.5.1.tar.gz

root> tar -xvzf nginx-1.5.1.tar.gz

root> cd nginx-1.5.1

4. Then we need to configure our build. Let’s start by taking the options you saw when you ran “nginx -V” and add the following to the end of them:

--with-ld-opt='-L/usr/local/ssl/lib' --with-cc-opt='-I/usr/local/ssl/include'

This is where your OpenSSL libraries and headers should be. Just append the combined set of parameters to the “./configure” command; this will look like this:

root> ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-ld-opt='-L/usr/local/ssl/lib' --with-cc-opt='-I/usr/local/ssl/include'

5. Make and install your Nginx

root> make
root> make install

6. Restart Nginx

root> service nginx restart

Now, if you are using the SSL configuration I recommend here, you have already set your server to prefer the commonly supported forward-secret algorithms, and if you visit https://sslcheck.x509labs.com you will see these ECDH-based algorithms at the top of the list of negotiated suites.

 

For more information see:

1. How to botch TLS forward secrecy, AGL
2. Getting the Most Out of SSL Part 1: Choose the Right Certificate, CA Security
3. SSL Configuration Checker, X509 Labs
4. High Performance Browser Networking, Ilya Grigorik
5. Bulletproof SSL/TLS and PKI, Ivan Ristic

How to Build Your Own OpenSSL

So you have been reading all the press on forward secrecy and want to deploy it? But does your OpenSSL support it? Thankfully it is easy to tell; just run this command:

> openssl ciphers

If you see ciphers like “ECDHE-RSA-AES256-GCM-SHA384” then you have a version of OpenSSL that was built with ECC and ECDHE support enabled, which is required if you want forward secrecy today.

So how do you go about doing this? Thankfully you don’t need to be a developer or cryptographer; with the following commands you should be able to get the latest (as of the time of this post) OpenSSL with ECC and ECDH enabled.

root> cp /usr/bin/openssl /usr/bin/openssl.orig
root> cd /tmp
root> wget http://www.openssl.org/source/openssl-1.0.1e.tar.gz
root> tar -xvzf openssl-1.0.1e.tar.gz
root> cd openssl-1.0.1e
root> ./config no-shared no-threads 
root> make depend
root> make
root> make install

You may also need to re-build your web server. You see, even though the latest versions of Nginx and Apache include the necessary changes to enable ECDH, if the version you are running was built against a version of OpenSSL that did not include support you’re going to have to rebuild it as well. Here is a quick post on how to do that for Nginx.

 

Good luck!

 

Ryan

Certificate-based Mozilla Persona IdP

My name is David Margrave, I am a guest author on unmitigatedrisk.com.  I have worked in the security sphere for 20 years at various U.S. federal agencies, financial institutions, and retailers.  My interests include improving the state of client authentication on the Internet, which is an area that saw robust developments in the 1990s, then languished for a number of years as the Internet at large seemed content with reusable passwords and form-based authentication over SSL/TLS, but has received renewed scrutiny because of recent large scale data breaches and the veiled promise from the Federal government to ‘fix this mess or we will fix it for you’.

 

The Mozilla Persona project is a recent initiative to improve and standardize browser-based authentication.  For a long time (over 10 years) the most widely-used form of browser-based authentication has been based on HTML forms.  At its most basic level, a user will enter an identifier and reusable password into an HTML form, and submit the form in an HTTPS request to access a protected resource.  The server will receive these values, validate them, and typically return state information in an encrypted and encoded HTTP cookie.  Subsequent visits to the protected resource will send the cookie in the HTTP request, and the server will decrypt and validate the cookie before returning the protected content.   This entire exchange usually takes place over HTTPS, but in many instances the authentication cookie is used over an HTTP connection after initial authentication has completed successfully.  There are other forms of HTTP authentication and other previous attempts at standardization, but a quick survey of the largest retailers and financial institutions will show that HTML form-based authentication is still the most common by far.

 

Assuming that the implementers of these cookie schemes are competent amateur cryptographers and avoided the most glaring mistakes (see this paper by MIT researchers), all of these authentication schemes which rely on HTTP cookies suffer from the same critical flaw:  An attacker who obtains the cookie value can impersonate the user.  The crucial problem is that HTML form-based authentication schemes have not been capable of managing cryptographic keying material on the client side.  More secure schemes such as Kerberos V5 use a ticket in conjunction with an accompanying session key, both of which are stored in a credentials cache.  In contrast to flawed cookie-based schemes, in the Kerberos V5 protocol, a service ticket is useless to an adversary without the accompanying service ticket session key.  An authentication exchange in Kerberos V5 includes the service ticket, and a value encrypted with the service ticket session key, to prove possession.

There are some proprietary or enterprise-level solutions to this situation.  For example, Microsoft Internet Explorer and IIS have long had (for over 10 years) the capability to use HTTP Negotiate authentication and to use GSS-API with Kerberos V5 as the underlying mechanism.  The Apache web server has had the capability to accept HTTP Negotiate authentication for several years as well, but the adoption of these solutions on the Internet at large has not been widespread.  At a high level, the Mozilla Persona project improves this situation by bringing the credentials cache and cryptographic capabilities into the browser, and doing so in a standardized manner.  Although the underlying cryptographic algorithms may differ from the Kerberos V5 example, the importance of this project can’t be overstated.

 

Persona introduces the concept of the Identity Data Provider (IdP).  The basic idea is that a domain owner is responsible for vouching for the identity of email addresses in that domain.  This could involve whatever scheme the domain owner wishes to implement.  If a domain does not implement an IdP, the Persona system will use its own default IdP, which uses the email verification scheme that all Internet users are familiar with:  you prove your ability to receive email at a particular address.  When signing in to a website which uses Persona authentication, the user will be presented with a dialog window asking for the email address to use.

[Screenshot: the Persona dialog prompting the user for the email address to use]

Behind the scenes, the Persona system determines which IdP to use to verify the address.  A domain implementing an IdP must publish some metadata (the public key, and the provisioning and authentication URLs), in JSON format, at the URL https://domain/.well-known/browserid.  The server at the URL must have a certificate from a trusted certificate authority, and the returned value must be properly-formatted JSON with certain required metadata information (described here).

 

The author implemented an IdP at the domain margrave.com as a research exercise, borrowing from the NodeJS browserid-certifier project.  This particular IdP was written to accept X.509 client certificates issued by a commercial certificate authority, to extract the email address from the X.509 certificate, and issue a persona certificate with that email address. The .well-known/browserid file for node.margrave.com is shown here:

{
    "public-key": {"algorithm":"DS",
        "y":"aab45377fa024964a6b3339d107b91887adf85b96649b5b447a7ac7390866c92d88ed101f6525e717c0d703d5fd8727e0d1d8adb60bb80c7123730616c197326f1eed326fdfc136d7594ffce39a05005a433add8d3344813ea89f6e426d8f5b0bc0d3fdb59c8ec7c19583ba7f14d3636713b84c1ebe62a6866e9c2091def5c25aba967670eabc4591ee3f536006ce5c550265d4b2264c5a989abf908763b41014f35eb2949a0b027a1a1054203a3e13eeb1f16ffb171d6942405546a8407c3fb7e73227e432d150834054edc379de8f8988a8e3b102b70fe5b1164a28a4a453310313e00de1aa177f5ac2b73ef31670e16914607ba4196c06e57f7e5209bc7e4",
        "p":"d6c4e5045697756c7a312d02c2289c25d40f9954261f7b5876214b6df109c738b76226b199bb7e33f8fc7ac1dcc316e1e7c78973951bfc6ff2e00cc987cd76fcfb0b8c0096b0b460fffac960ca4136c28f4bfb580de47cf7e7934c3985e3b3d943b77f06ef2af3ac3494fc3c6fc49810a63853862a02bb1c824a01b7fc688e4028527a58ad58c9d512922660db5d505bc263af293bc93bcd6d885a157579d7f52952236dd9d06a4fc3bc2247d21f1a70f5848eb0176513537c983f5a36737f01f82b44546e8e7f0fabc457e3de1d9c5dba96965b10a2a0580b0ad0f88179e10066107fb74314a07e6745863bc797b7002ebec0b000a98eb697414709ac17b401",
        "q":"b1e370f6472c8754ccd75e99666ec8ef1fd748b748bbbc08503d82ce8055ab3b",
        "g":"9a8269ab2e3b733a5242179d8f8ddb17ff93297d9eab00376db211a22b19c854dfa80166df2132cbc51fb224b0904abb22da2c7b7850f782124cb575b116f41ea7c4fc75b1d77525204cd7c23a15999004c23cdeb72359ee74e886a1dde7855ae05fe847447d0a68059002c3819a75dc7dcbb30e39efac36e07e2c404b7ca98b263b25fa314ba93c0625718bd489cea6d04ba4b0b7f156eeb4c56c44b50e4fb5bce9d7ae0d55b379225feb0214a04bed72f33e0664d290e7c840df3e2abb5e48189fa4e90646f1867db289c6560476799f7be8420a6dc01d078de437f280fff2d7ddf1248d56e1a54b933a41629d6c252983c58795105802d30d7bcd819cf6ef"
    },
    "authentication": "/persona/sign_in.html",
    "provisioning": "/persona/provision.html"
}

 

The public key from the browserid file is the public portion of the key pair used by the IdP to certify users in the domain.  The fact that it must be served over a URL protected with a certificate issued from a trusted CA is how the Persona system builds on the existing trust infrastructure of the Internet, instead of attempting to re-implement its own from scratch, or requiring operators of websites relying on Persona authentication to establish shared secrets out-of-band.  The authentication and provisioning URLs are how browsers interact with the IdP.

 

In the Certificate-based IdP implemented at margrave.com, the page located at /persona/provision.html includes some javascript which does the following things:  calls an AJAX API to get the email address from the certificate, receives the email address that the user entered in the Persona login dialog via a javascript callback, validates that they match, and calls another AJAX API to issue the certificate.  Note that the email address comparison performed in client-side javascript is purely for UI and troubleshooting purposes, the actual issuance of the Persona certificate uses the email address from the X.509 certificate (if the provisioning process progresses to that point), irrespective of what username was entered in the Persona login dialog.  The client-side validation of the email address is to provide the ability to troubleshoot scenarios where a user may choose the wrong certificate from the browser certificate dialog box, etc.  The client-side provisioning source code is shown below (ancillary AJAX error handling code is omitted).

 

function provision() {

  // Get the email from the cert by visiting a URL that requires client cert auth and returns our cert's email in a json response.
  // This is not strictly necessary, since the server will only issue persona certificates for the email address from the X.509 certificate,
  // but it is useful for troubleshooting, helping the user avoid choosing the wrong certificate from the browser dialog, etc.
  getEmailFromCert(function(emailFromCert) {
      if (emailFromCert) {
          navigator.id.beginProvisioning(function(emailFromPersona, certDuration) {
              if (emailFromPersona===emailFromCert) {
                  navigator.id.genKeyPair(function(publicKey) {
                      // generateServerSide makes an AJAX call to a URL that also requires client cert auth
                      generateServerSide(publicKey, certDuration, function (certificate) {
                          if (navigator.id && navigator.id.registerCertificate) {
                              //alert('registering certificate: ' + certificate);
                              navigator.id.registerCertificate(certificate);
                          }
                      });
                  });
              } else {
                  navigator.id.raiseProvisioningFailure('user is not authenticated as target user');
              }
          });
      } else {
          navigator.id.raiseProvisioningFailure('user does not have a valid X.509 certificate');
      }
  });
}

function generateServerSide(pubkey, duration, cb) {
    $.ajax({
        // Note that this URL requires SSL client certificate authentication,
        // and performs its own authorization on the email address from the certificate,
        // (for example, based on issuing CA or email address domain),
        // and so does not need the email address as an explicit input parameter
        url: "https://node.margrave.com/cert_key",
        type: "POST",
        global: false,
        data: {pubkey: pubkey,
               duration: duration},
        dataType: "json",
        success: function(response) {
                if (response.success && response.certificate) {
                    cb(response.certificate);
                }
            }
    });
    return false;
}

function getEmailFromCert(cb) {
        $.ajax({
            // Note that this URL requires SSL client certificate authentication,
            // and performs its own authorization on the email address from the certificate.
            url: "https://node.margrave.com/email",
            type: "POST",
            global: false,
            dataType: "json",
            success: function(response) {
                cb(response.email);
            }
        });
}

 

The other portion of a Persona IdP, the authentication URL, turned out not to be necessary in this case, because the authentication is implicit in the use of X.509 client certificate-authenticated AJAX calls.  Once the Persona certificate has been provisioned, the user is able to access the protected resource.  If things don’t work as expected, the error messages do not seem to bubble up to the UI dialog, and I had to resort to tracing XHR calls with Firebug to determine what went wrong.  In one case, it was a clock skew error that was corrected by installing ntpd on my IdP server.   In another case, one of my IdP AJAX calls may return an error but this error gets masked by a vague UI message.  It may be helpful to standardize HTTP return code and JSON field names to return descriptive error text to the Persona UI.

 


 

 

In its current form, this concept could be useful for enterprises, but not really for the Internet at large, since it requires a) that you have a client cert and b) that the IdP for your email domain is certificate-aware.  However, if the Persona default IdP were certificate-aware, or CAs were Persona-aware, then there are some interesting possibilities.

  1. The Persona default IdP could skip the verification email if a trusted X.509 client certificate is provided.  Possession of a certificate from a trusted CA implies the email address has already been verified, at a minimum.  The Persona system already accepts the CA’s trust when retrieving .well-known/browserid; this idea extends CA trust to clients.
  2. Going the other direction: a CA could accept a Persona certificate, from either a domain’s IdP or the Persona default IdP, and use it to issue X.509 client certificates, or as one part of the client certificate enrollment process (higher-assurance certificates may verify more information than email, such as state-issued identification).  This idea seems promising because the email verification scheme is the wheel that everyone on the Internet has reimplemented, in many cases with security flaws.