
Certificate Path Building in PKI.js

Now that it's possible to decode and verify the signature on X.509 certificates within the browser, the natural question to ask is: what can I do with that?

Well, first off, to build an interesting application you will need the ability to validate that a certificate is trusted, and the first step in doing that is building the certificate path associated with the certificate.

The de facto standard for path building libraries is the NIST PKITS test suite; our goal is to create a library that will be able to pass the sane tests from this suite (some are odd, for sure).

This is a pretty high bar and will take some time. At the time of writing this blog post we pass tests 1-33 of this suite with flying colors; these tests cover all of the basic certificate validation rules. We also think the library will pass all of the Policy Constraints and Name Constraints tests, but more testing is needed to confirm that.

So what does building a chain look like today with this library?

var certs = [];

// Load cert to be validated, its intermediates and root
for(var i = 0; i < cert_buffers.length; i++)
{
    var asn1 = org.pkijs.fromBER(cert_buffers[i]);
    certs.push(new org.pkijs.simpl.CERT({ schema: asn1.result }));
}

var crls = [];

// Load any CRLs we have
for(var i = 0; i < crl_buffers.length; i++)
{
    var asn1 = org.pkijs.fromBER(crl_buffers[i]);
    crls.push(new org.pkijs.simpl.CRL({ schema: asn1.result }));
}

var cert_chain_simpl = new org.pkijs.simpl.CERT_CHAIN({
    certs: certs,
    crls: crls
});

cert_chain_simpl.verify().then(
    function(result)
    {
        alert("Good result");
    },
    function(error)
    {
        alert("Error: " + error);
    }
);
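The sample above assumes cert_buffers and crl_buffers already hold the DER bytes of your certificates and CRLs as ArrayBuffers. How you obtain them is up to you; as one sketch (readFiles is a hypothetical helper, not part of PKI.js), here is how you might load user-selected files with the File API:

// Hypothetical helper: read each user-selected file into an ArrayBuffer
function readFiles(fileList)
{
    var promises = [];

    for(var i = 0; i < fileList.length; i++)
    {
        promises.push(new Promise(function(resolve, reject)
        {
            var reader = new FileReader();
            reader.onload = function(event) { resolve(event.target.result); };
            reader.onerror = function() { reject(reader.error); };
            reader.readAsArrayBuffer(fileList[i]); // The executor runs synchronously, so "i" is still current here
        }));
    }

    return Promise.all(promises); // Resolves to an array of ArrayBuffers
}

// Usage: readFiles(document.getElementById("cert_files").files).then(function(buffers) { /* use as cert_buffers */ });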

The current incarnation of the API expects that the bag of certificates passed in will include all intermediates as well as all trust anchors. We will be changing this in a future release so that trust anchors are passed in a separate bag.

This will help ensure the certificate inputs being validated don't contain anything that might accidentally result in a certificate being treated as valid when it should not be. With that said, as currently structured we can begin developing automated testing, which is great.

Note: I updated this post to indicate the goal is to pass the sane PKITS tests; some of them are not sane, and some are not possible to pass in a web environment.

Generating signed messages using CMS and PKI.js

One of the most common signature formats on the web is CMS SignedData; this is the signature format used in PDF files, CAdES, S/MIME and several other digital signature solutions.

As a signature format it has a few notable features:

  1. Support for multiple signers.
  2. The ability to include meta-data that is signed along with the data being signed.
  3. The ability to include meta-data that is outside the scope of the signature.
  4. The ability to sign data contained within the signature or data referenced by it.

These traits mean you can do some interesting things, like implementing counter-signing, which in turn enables notarization scenarios.

Utilizing PKI.js you can now create and verify this signature format; below is an example of how creating one of these messages looks using this library:

// #region Put static values 
var sample_data = new ArrayBuffer(5); // Content to be signed
var data_view = new Uint8Array(sample_data);

data_view[0] = 0x00;
data_view[1] = 0x01;
data_view[2] = 0x02;
data_view[3] = 0x03;
data_view[4] = 0x04;

var cms_signed_simpl = new org.pkijs.simpl.CMS_SIGNED_DATA({
    digestAlgorithms: [
        new org.pkijs.simpl.ALGORITHM_IDENTIFIER({ algorithm_id: "1.3.14.3.2.26" }) // SHA-1
    ],
    encapContentInfo: new org.pkijs.simpl.cms.EncapsulatedContentInfo({
        eContentType: "1.2.840.113549.1.7.1", // "data" content type
        eContent: new org.pkijs.asn1.OCTETSTRING({ value_hex: sample_data })
    }),
    signerInfos: [
        new org.pkijs.simpl.CMS_SIGNER_INFO({
            sid: new org.pkijs.simpl.cms.IssuerAndSerialNumber({
                issuer: cert_simpl.issuer,
                serialNumber: cert_simpl.serialNumber
            }),
            digestAlgorithm: new org.pkijs.simpl.ALGORITHM_IDENTIFIER({ algorithm_id: "1.3.14.3.2.26" }), // SHA-1
            signatureAlgorithm: new org.pkijs.simpl.ALGORITHM_IDENTIFIER({ algorithm_id: "1.2.840.113549.1.1.5" }) // RSA + SHA-1
        })
    ],
    certificates: [cert_simpl]
});
// #endregion 

cms_signed_simpl.sign(privateKey, 0).then( // Sign as the signer at index 0
    function()
    {
        alert("Message signed");
    },
    function(error)
    {
        alert("Error: " + error);
    }
);

In this sample you can see we are putting the content to be signed within the SignedData message and then signing it with RSA and SHA-1; this is exactly what is needed to implement what is called opaque signed email in S/MIME.
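Verifying one of these messages follows the same pattern; below is a rough sketch based on the library's samples, assuming the DER message arrives wrapped in a ContentInfo (option names may differ slightly between releases):

// cms_buffer is an ArrayBuffer holding the DER-encoded CMS message
var asn1 = org.pkijs.fromBER(cms_buffer);
var cms_content_simpl = new org.pkijs.simpl.CMS_CONTENT_INFO({ schema: asn1.result });
var cms_signed_simpl = new org.pkijs.simpl.CMS_SIGNED_DATA({ schema: cms_content_simpl.content });

// Verify the signature made by the first signer (index 0)
cms_signed_simpl.verify({ signer: 0 }).then(
    function(result)
    {
        alert("Verification result: " + result);
    },
    function(error)
    {
        alert("Error: " + error);
    }
);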


Frankencerts, new extensions and PKI.js

One of the things we wanted you to be able to use the PKI.js libraries for is creating X.509 certificates, including those with new extensions (like OCSP Must-Staple), or for testing other certificate processing libraries, as was done in the Frankencert paper.

Here is an example of what that might look like:

function create_cert()
{
    // #region Initial variables 
    var sequence = Promise.resolve();

    var cert_simpl = new org.pkijs.simpl.CERT();

    var publicKey;
    var privateKey;
    // #endregion 

    // #region Get a "crypto" extension 
    var crypto = org.pkijs.getCrypto();
    if(typeof crypto == "undefined")
    {
        alert("No WebCrypto extension found");

        return;
    }
    // #endregion 

    // #region Put static values 
    cert_simpl.serialNumber = new org.pkijs.asn1.INTEGER({ value: 1 });
    cert_simpl.issuer.types_and_values.push(new org.pkijs.simpl.ATTR_TYPE_AND_VALUE({
        type: "2.222.333",
        value: new org.pkijs.asn1.PRINTABLESTRING({ value: "RU" })
    }));
    cert_simpl.issuer.types_and_values.push(new org.pkijs.simpl.ATTR_TYPE_AND_VALUE({
        type: "2.222.444",
        value: new org.pkijs.asn1.PRINTABLESTRING({ value: "Test" })
    }));
    cert_simpl.subject.types_and_values.push(new org.pkijs.simpl.ATTR_TYPE_AND_VALUE({
        type: "2.222.333",
        value: new org.pkijs.asn1.PRINTABLESTRING({ value: "RU" })
    }));
    cert_simpl.subject.types_and_values.push(new org.pkijs.simpl.ATTR_TYPE_AND_VALUE({
        type: "2.222.444",
        value: new org.pkijs.asn1.PRINTABLESTRING({ value: "Test" })
    }));
    cert_simpl.notBefore.value = new Date();
    cert_simpl.notAfter.value = new Date(2016, 1, 1);

    cert_simpl.extensions = []; // Extensions are not part of the certificate by default; it's an optional array

    // #region "BasicConstraints" extension
    var basic_constr = new org.pkijs.simpl.x509.BasicConstraints({
        cA: true,
        pathLenConstraint: 3
    });

    cert_simpl.extensions.push(new org.pkijs.simpl.EXTENSION({
        extnID: "2.5.29.19",
        critical: false,
        extnValue: basic_constr.toSchema().toBER(false),
        parsedValue: basic_constr // Parsed value for well-known extensions
    }));
    // #endregion 

    // #region "KeyUsage" extension 
    var bit_array = new ArrayBuffer(1);
    var bit_view = new Uint8Array(bit_array);

    bit_view[0] = bit_view[0] | 0x02; // Key usage "cRLSign" flag
    bit_view[0] = bit_view[0] | 0x04; // Key usage "keyCertSign" flag

    var key_usage = new org.pkijs.asn1.BITSTRING({ value_hex: bit_array });

    cert_simpl.extensions.push(new org.pkijs.simpl.EXTENSION({
        extnID: "2.5.29.15",
        critical: false,
        extnValue: key_usage.toBER(false),
        parsedValue: key_usage // Parsed value for well-known extensions
    }));
    // #endregion 

    cert_simpl.signatureAlgorithm.algorithm_id = "1.2.840.113549.1.1.5"; // RSA + SHA-1
    cert_simpl.signature.algorithm_id = cert_simpl.signatureAlgorithm.algorithm_id; // Must be the same value
    // #endregion 

    // #region Create a new key pair 
    sequence = sequence.then(
        function()
        {
            return crypto.generateKey({ name: "RSASSA-PKCS1-v1_5", modulusLength: 2048, publicExponent: new Uint8Array([0x01, 0x00, 0x01]), hash: { name: "SHA-1" } }, true, ["sign", "verify"]); // RSASSA-PKCS1-v1_5 keys are for sign/verify only
        }
    );
    // #endregion 

    // #region Store new key in an interim variables
    sequence = sequence.then(
        function(keyPair)
        {
            publicKey = keyPair.publicKey;
            privateKey = keyPair.privateKey;
        },
        function(error)
        {
            alert("Error during key generation: " + error);
        }
    );
    // #endregion 

    // #region Exporting public key into "subjectPublicKeyInfo"  
    sequence = sequence.then(
        function()
        {
            return cert_simpl.subjectPublicKeyInfo.importKey(publicKey);
         }
    );
    // #endregion 

    // #region Signing final certificate 
    sequence = sequence.then(
       function()
       {
           return cert_simpl.sign(privateKey);
       },
       function(error)
       {
           alert("Error during exporting public key: " + error);
       }
    );
    // #endregion 

    sequence = sequence.then(
        function()
        {
            alert("Good result");
        },
        function(error)
        {
            alert("Error during signing: " + error);
        }
    );

    sequence.then(
        function()
        {
            return cert_simpl.verify();
        }
    ).then(
        function(result)
        {
            alert("Verification passed: " + result);
        },
        function(error)
        {
           alert("Verification failed: " + eror);
        }
    );
}

As you can see, the library is designed in such a way that you are not limited to creating static, pre-conceived layouts of these structures; you can fairly easily construct any type of certificate (or any of the other supported message types).

At a later date (if it makes sense to do so) we may also decide to add a simple layer on top of this that abstracts away the need to understand encoding concepts as well.

This layered approach ensures the library can be used to create real applications without the need to hack up the underlying APIs, enabling developers to avoid having to understand low-level ASN.1 formats in detail.

Potential projects for PKI.js

This is a list of potential projects that would benefit from PKI.js; can you think of any others that would be interesting?

  1. Adding signing, time-stamping and verifying support to pdf.js.
  2. Creating a smime.js on top of email.js that provides signing, verifying, encryption and decryption of S/MIME.
  3. Creating a CAdES library for web pages.
  4. Adding signature support to web document management solutions.
  5. Adding encrypted mail support in webmail offerings.
  6. Building a usable and modern certificate based authentication solution using web markup for credential selection.
  7. Creating a rich user experience for short-lived certificates providing temporary access via SSH and TLS.

With some changes the libraries could also be used in server-side projects:

  1. Creating an OCSP responder based on node.js.
  2. Creating a simple CA on node.js.

WebCrypto and PKI

Like it or not, here it comes: within the next few months WebCrypto will be supported to varying degrees across all mainstream browsers. There are plenty of posts out there talking about the security concerns of performing cryptography in the browser, so I won't go into those here.

What I wanted to talk about is: now that it's here (mostly), what can we do with it? There will be those who say you do harm by making any claims about the assurances a web application makes relating to identity and confidentiality when WebCrypto is involved. The reality, though, is that it's happening, and we should think about how we enable applications to use it responsibly.

This is why I started work on PKI.js and ASN1.js with Yuri Strozhevsky. Now that browsers have these basic crypto primitives available to them, it is possible to build web applications that are interoperable with the security services used off the web; it is also possible to build new services on the web that simply were not possible before.
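To give a sense of what those primitives look like, here is a minimal digest call using the WebCrypto API in its unprefixed form (at the time of writing some browsers still exposed it behind vendor prefixes):

// Hash a small buffer with SHA-256 using the browser's native crypto
var data = new Uint8Array([0x00, 0x01, 0x02, 0x03, 0x04]);

crypto.subtle.digest({ name: "SHA-256" }, data).then(
    function(digest)
    {
        alert("SHA-256 digest is " + digest.byteLength + " bytes"); // Always 32 bytes
    },
    function(error)
    {
        alert("Digest failed: " + error);
    }
);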

Now, there have been libraries that offered ASN.1 and PKI related capabilities (for example jsRSAsign, Forge and Lapo-asn1js), but none of these were complete and none were built around WebCrypto as the source of crypto.

What Yuri and I set out to do is create a set of libraries that addressed these gaps and used public test suites (when available) to ensure conformance with the associated standards, including:

  1. X.509 and CRLs – RFC 5280
  2. CMS / PKCS #7 (SignedData & EnvelopedData) – RFC 5652
  3. PKCS #10 – RFC 2986
  4. PKCS #8 – RFC 5208
  5. OCSP – RFC 6960
  6. Time-stamping – RFC 3161

For example, for ASN.1 Yuri used his freely available test suite, and for path building he tested against the PKITS test suite.

This of course does not mean the libraries are 100% compliant or defect free; in fact I can promise you they are not. But where test suites were available we tried to utilize them, so we would end up with a highly stable and standards compliant library.

At this point the libraries work in all modern browsers, but signing, verifying, encrypting and decrypting only work in the Chrome dev channel (in theory they should work on Firefox nightlies as well). Unfortunately the profile and version of WebCrypto supported by Internet Explorer is outdated enough that these features do not work there at all yet.

These libraries have not yet been published to their public repositories, but I expect them to be within the week under a BSD-style license. To give some perspective on the size of this project, I expect it to be just under 20,000 lines of code when released. It's my hope that other people take these libraries and build upon them so that the Internet has a browser-friendly way to interact with these technologies.

NOTE: While I hate disclaimers like this, these libraries have not undergone any significant review; please do not consider them production ready, as more work is needed before that's the case.

NOTE: It's also worth noting that until at least two browsers release their WebCrypto implementations as final products, these libraries may stop working or may not work uniformly across browsers; for example, at this time the nightly Chrome builds do not support key exports, which prevents implementation of the key storage structures.
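For reference, the key export in question is the standard WebCrypto call shown below (unprefixed form; publicKey is a CryptoKey produced by generateKey()):

// Attempt to export a public key as a SubjectPublicKeyInfo (SPKI) blob
crypto.subtle.exportKey("spki", publicKey).then(
    function(spki)
    {
        alert("Exported " + spki.byteLength + " bytes of SPKI");
    },
    function(error)
    {
        alert("Key export not supported: " + error); // This is what fails on those builds
    }
);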

P.S. I actually misspoke earlier: we did not end up including PKCS #12 in this version, but most of the base structures are supported.

The origin of the Windows Biometric Framework

One of the projects I was responsible for when I was at Microsoft was the design and delivery of the Windows Biometric Framework. This was actually one of my favorite projects during my entire tenure at the company.

What most people don't know is that the feature was a very late addition to Windows 7; an analysis of the crash data reported by Windows clients showed that, right after video cards, these fingerprint readers were one of the largest sources of the “Blue Screen of Death”.

Something else most people don't know is that this was not the first attempt to provide a platform for biometrics in Windows; the prior attempts failed for various reasons, but they did not have the same data we had – we knew these devices, which were barely used, were creating a bad experience for everyone just because the OEM decided to include these sensors as a differentiator.

To fix this, a stellar team of senior engineers was reassigned from other projects and began work almost halfway through the Windows 7 release cycle. Our goal was to make it possible for these devices (and, in the future, other forms of biometrics) to work well with a minimal amount of vendor code.

You see, these devices need to respond really quickly, especially swipe sensors, because they take a picture of your finger as you drag it across the sensor and that data needs to get to a number of components very quickly to make a workable user experience. For this reason the sensor manufacturers all thought they needed to be in kernel mode, and since most driver developers have only been doing driver development for a few years, they did a really bad job of it, which resulted in the blue screens.

The value proposition of this project for the vendors was that ultimately they wouldn't need to write so much software to get their sensors to work. For some this was seen as a positive – especially the new entrants, since they didn't have as much software. Those vendors with complete software suites were less thrilled, in that a driver framework of any kind places architectural restrictions on the way you build things, and since they didn't get the support calls they didn't really care that much about the crashes their software caused.

Despite this, nearly every biometric vendor began work on new drivers and client software based on this platform, which was being developed at almost the same time they were writing their components. Every vendor who was in the first round of development made significant time investments to make the platform a reality.

The software team at Microsoft also executed amazingly; I am confident that I will always think of this group of individuals as one of the best I worked with at the company.

In the end the platform made it out the door with more features than we had originally hoped, with all the major vendors on board, and by general availability OEMs were ready to support real customers.

The late start did mean OEMs had to work very hard to get the new device drivers into their initial images; they had very little runway to do so, but several did, and those that did not at first got their revised images out shortly thereafter.

This new model delivered a kernel-mode and user-mode driver model that significantly reduced the amount of software needed to get one of these devices working in Windows, and we delivered a test suite that helped ensure the devices and their drivers worked reliably. We were even able to deliver a framework for use, management, group policy, and a hook for enrollment.

The combination of the above meant that we essentially eliminated the historic problems these devices caused, improved the security of how they integrated with Windows, made them more supportable, and made them simply work well.

The team went on to work on some very cool additions; many of the team members changed but some remained, and the work we began after the “freeze” in the Windows 7 release cycle ultimately found its way out in Windows 8/8.1 – I hope the rest of the things this team was doing find their way out someday soon also.

Windows Certificate Enrollment Protocols

Windows has two sets of certificate enrollment protocols; these protocols are used to enable a client to acquire and maintain certificates with little to no manual interaction.

The first implementation of certificate enrollment in Windows was introduced with Windows XP and was based on DCOM; this DCOM dependency limited where you could deploy it. The solution could never practically or securely be used over the internet, and in segmented corporate networks it in essence required each island of trust to deploy its own certificate authority, which was both costly and impractical.

The second implementation was introduced in Windows 7. It is based on SOAP web services and has two core APIs: one for determining what certificates a client should enroll for and another for acquiring those certificates. This newer approach addresses the security and deployability concerns of its predecessor and enables the deployment of a CA as a pure Software as a Service (SaaS) solution.

Normally I don't write about GlobalSign products here, but when we decided how we would build our own solution in this space, the security and deployment limitations of the DCOM approach were too limiting to enable our goals of reducing TCO and enabling broader use of certificates, so we went with the more flexible and secure approach.

Enterprise logon with challenge response (OTP) tokens

From time to time I am asked how to add support for a new authentication method for Windows logon; the answer to that question is buried in what authentication methods Windows natively supports.

For the purpose of this blog post, let's scope that question to the Windows Kerberos implementation, as this is the “modern” authentication platform in Windows (in other words Kerberos, as Microsoft does not want you to be using NTLM any further); additionally, it supports most authentication methods.

So what are those methods? Essentially there are two: passwords and public/private keys. What about the Windows native implementations of biometrics and picture passwords? These, like nearly every other modality of authentication in Windows, are simply layers on top of the native support for passwords that is built into Windows.

I should note I was the Lead Program Manager for the Windows Biometric Framework and I do think it is a good solution for what it was designed for.

Let me explain; let's use biometrics as an example. In this solution there is a service that works with the biometric sensors to perform the biometric match. When a match occurs, this service releases a clear text copy of your password, which in turn is stuffed into the Windows logon path just as if the user had entered the password by hand.

This means that somewhere in the file system there is a copy of the password stored in a reversibly encrypted form. Specifically, the encryption in this case is performed using a Windows subsystem called the Data Protection API (DPAPI). DPAPI is used by applications when they need to keep a secret, well, secret. You see, this is a Catch-22 scenario: unless a password (or key) is entered, there is no secret to encrypt with, which means ultimately there is a root secret that is stored in the clear. This means if you can get at that root secret you can get at the clear text values encrypted with DPAPI.

There are ways to mitigate some of the associated risks, one example being the use of BitLocker and a TPM protector, but unfortunately this is not broadly deployed and doesn't address the full risk profile; this is why Microsoft positions the biometric feature in Windows as a convenience. That is not to say it cannot be used in a secure way, or to suggest it cannot be used in an enterprise, but unless thought is given to the threat model and the right mitigations are deployed, it actually weakens your security.

So what about those enterprise OTP and challenge response solutions we keep hearing about? Surely they must be different, right? Unfortunately no, they are not. In fact they are probably worse, because unlike the Windows biometric solution they create a central repository of all clear text passwords.

You see, for them to work they introduce a client (once called a GINA, now called a Credential Provider) that collects the challenge, and they likely have a service component or SSPI provider that implements a protocol that interacts with their product's service, which then validates the challenge and returns a clear text password, which they then stuff into the logon path on the user's behalf.

No matter how good the design and implementation of such a system is, it is reliant on that singular repository of clear text / reversibly encrypted passwords. Things get even worse when you look at how these systems have to deal with offline and local logons (you know, the scenario where a user takes their laptop on a plane and still needs to do work). For this to work, they need to store the clear text password in the clear on the local machine, just like the biometrics solution.

What about the public/private key approach? There are essentially two variants of this in Windows: PKINIT and PKU2U. The most common example is the use of a smart card; this is what many Fortune 500s and governments use to secure their logons. It is also possible to add other modalities to Windows by emulating this approach with a software virtual smart card, but these too fall prey to the same attack vectors; namely, what key do you use to keep the private key protected? (The answer is that you use DPAPI.)

So does this mean OTP is a bad solution? No; in fact it's much better than passwords in a system that was designed to use it, and it makes a great additional factor. The thing is, Windows was not designed that way.

Deploying forward secrecy on RedHat, CentOS or Fedora based systems

If you want to deploy perfect forward secrecy and you use a RedHat, CentOS or Fedora based system, you likely won't be able to do so without building your own OpenSSL. This is because, by default, the OpenSSL packages for these systems do not include ECC or ECDH support, and when web servers like Apache and Nginx are built against libraries that do not support these algorithms they obviously omit support for them as well.

Hopefully these distributions and the community repositories that support them will start including support for these algorithms by default soon but in the meantime you can add support by building your own.

You can check if your OpenSSL supports these algorithms easily enough; just run this command:

> openssl ciphers

If you see ciphers like “ECDHE-RSA-AES256-GCM-SHA384” then you have a version of OpenSSL that was built with ECC and ECDHE support enabled, which is required if you want forward secrecy today. If you don't see such a suite, don't worry; just check out this blog post and it will step you through how to build a new OpenSSL.
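The full cipher list is long; to quickly filter it down to just the forward-secret suites you can use something like this (assuming the usual tr and grep utilities):

> openssl ciphers | tr ':' '\n' | grep ECDHE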

Unfortunately, building a software package on each Linux distribution is slightly different; I will share what worked for me on CentOS 6.3.

1. We don't want to break your current Nginx deployment, so let's start by finding out what options your Nginx was built with; you do this with the following command:

> nginx -V

This will return something like this:

nginx version: nginx/1.5.1

built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC)

TLS SNI support enabled

configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic'

To ensure we don't omit something you're using, we will use the same options when we build our Nginx.


2. Next let’s make a backup of everything you have in your current Nginx installation just in case something doesn’t go right:

root> cp /usr/sbin/nginx /usr/sbin/nginx.orig

root> cp -r /etc/nginx /etc/nginx.orig

3. Now let’s make sure we get our dependencies installed, download our source and extract it so we can do our build.

root> cd /tmp

root> yum install pcre

root> yum install pcre-devel

root> yum install zlib

root> yum install zlib-devel

root> wget http://nginx.org/download/nginx-1.5.1.tar.gz

root> tar -xvzf nginx-1.5.1.tar.gz

root> cd nginx-1.5.1

4. Then we need to configure our build. Let's start by taking the options you saw when you ran “nginx -V” and adding the following to the end of them:

--with-ld-opt='-L/usr/local/ssl/lib' --with-cc-opt='-I/usr/local/ssl/include'

This is where your OpenSSL libraries and headers should be. Just append the combined set of parameters to the “./configure” command; it will look like this:

root> ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-ld-opt='-L/usr/local/ssl/lib' --with-cc-opt='-I/usr/local/ssl/include'

5. Make and install your Nginx:

root> make
root> make install

6. Restart Nginx

root> service nginx restart

Now, if you are using the SSL configuration I recommend here, you have already set your server to prefer the commonly supported forward-secret algorithms, and if you visit https://sslcheck.x509labs.com you will see these ECDHE-based algorithms at the top of the list of negotiated suites.
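For reference, a server configuration that prefers these suites looks something like the following. This is an illustrative sketch rather than the exact configuration from the linked post, so adjust the cipher list to your own requirements:

# Inside the server block of /etc/nginx/nginx.conf (illustrative values)
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:AES128-SHA:DES-CBC3-SHA';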


For more information see:

1. How to botch TLS forward secrecy, AGL
2. Getting the Most Out of SSL Part 1: Choose the Right Certificate, CA Security
3. SSL Configuration Checker, X509 Labs
4. High Performance Browser Networking, Ilya Grigorik
5. Bulletproof SSL/TLS and PKI, Ivan Ristic

How to Build Your Own OpenSSL

So you have been reading all the press on forward secrecy and want to deploy it? But does your OpenSSL support it? Thankfully it is easy to tell; just run this command:

> openssl ciphers

If you see ciphers like “ECDHE-RSA-AES256-GCM-SHA384” then you have a version of OpenSSL that was built with ECC and ECDHE support enabled, which is required if you want forward secrecy today.

So how do you go about doing this? Thankfully you don't need to be a developer or cryptographer; with the following commands you should be able to get the latest (as of the time of this post) OpenSSL with ECC and ECDH enabled.

root> cp /usr/bin/openssl /usr/bin/openssl.orig
root> cd /tmp
root> wget http://www.openssl.org/source/openssl-1.0.1e.tar.gz
root> tar -xvzf openssl-1.0.1e.tar.gz
root> cd openssl-1.0.1e
root> ./config no-shared no-threads 
root> make depend
root> make
root> make install
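When the build completes, you can confirm the new OpenSSL landed where expected and includes the ECDHE suites (assuming the default /usr/local/ssl install prefix):

root> /usr/local/ssl/bin/openssl version
root> /usr/local/ssl/bin/openssl ciphers | grep ECDHE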

You may also need to rebuild your web server. You see, even though the latest versions of Nginx and Apache include the necessary changes to enable ECDHE, if the version you are running was built against a version of OpenSSL that did not include support, you are going to have to rebuild it also. Here is a quick post on how to do that for Nginx.


Good luck!


Ryan