Category Archives: Security

Strategies for Deploying 2FA

 

Two Factor Authentication (2FA) enables applications and services to tie two things (typically something you know and something you have) to a user account so that both factors must be proven before the service can be used. Normally the second factor is a device (either physical or virtual); a great example of this is how Google and Microsoft have added support for One Time Passwords (OTP) in their online services.

With that said, there are many forms of two factor authentication and OTP is simply one of them. When we look at deploying 2FA, before choosing what kind of technology to adopt you first need to understand how you will use it and where. The most common answers to the “how” question are:

  • Optional usage.
  • Conditional usage.
  • Mandatory usage.
  • Account / password recovery.

You will notice I have crossed out the first option (optional usage); the reasoning is that optional usage provides no security benefit, because an attacker can simply choose to attack the weaker mechanism.

Next is conditional usage; an example of this model is how some solutions prompt you to authenticate with a stronger mechanism when performing a privileged operation such as modifying another user’s account.

Then there is mandatory usage; this is of course what we all envision when we think of using Two Factor Authentication. Unfortunately it is seldom used because it has significant barriers to adoption. The three most common issues I see preventing this model from being used are:

  1. The usability of these solutions is normally considered too poor for the user community being served to be expected to use them all the time.
  2. Accessing the same account from multiple devices and locations where the second factor is not available or a viable option.
  3. Not all services can be enabled for multi-factor authentication. The canonical example is the POP/IMAP services offered by Google, which by default allow the user to log in over POP/IMAP with just a password.

This leaves many sites coming to the conclusion that leveraging 2FA for account recovery (a special case of conditional usage) is the most deployable solution for their user bases.

But what if you want to actually achieve a world where the usage of 2FA is mandatory? The answer is buried in risk assessment, planning and having a technology strategy that includes acceptable authentication technologies.

Organizations should pick which technologies they will utilize for authentication and incorporate them as requirements into their procurement and technology adoption processes. While it’s natural (and even beneficial) to standardize on a single technology, business requirements and technological realities mean you will end up with a suite of standards you need to support.

Usually the process of establishing those standards begins with an inventory of the technologies you are already using. If you are an enterprise running Windows, that list will include Kerberos with passwords as well as NTLM.

Next, assess how extensively each of these protocols is used in your environment, which platforms/applications support them, how business critical/sensitive those systems are, which users rely on them and, finally, what risks those protocols represent in your environment.

Armed with this, you now have enough information to build a plan; most of the time the conclusion will be something along these lines:

  1. You are leveraging some legacy authentication mechanism (possibly NTLM) that you need to aggressively deprecate.
  2. There is a (relatively speaking) small set of users who perform business critical / sensitive operations, and if those users were to step up to a stronger authentication mechanism the business would benefit.
  3. It will take numerous budgetary cycles to standardize the organization on the desired core authentication technologies.

Armed with this information you are now prepared to evaluate the various approaches to Two Factor Authentication and build a practical plan for getting it ubiquitously deployed. That deployment will likely involve all of the approaches called out above:

  1. Optional usage – Pilot the chosen technologies with users across your targeted user segments, asking them to use the new authentication solution as their primary authentication technique. Also ask them to track their experiences and report their problems and recommendations to increase the chances of successful adoption. During this time it is very valuable to measure their actual usage of the technology, using audit logs if available.
  2. Conditional usage – Usually it is not possible to jump directly to mandatory usage, so the most sensitive users and scenarios are tackled first; for the reasons called out above this also typically means starting by requiring usage only where possible or appropriate. This allows you to quickly begin realizing the benefits of deploying the technology while not negatively affecting productivity.
  3. Mandatory usage – Once you feel prepared to support the technology as the primary authentication approach for your identified applications and scenarios, you move to making its usage mandatory for those users / scenarios. This is typically done one user segment at a time; each business has different operational requirements, and as you deploy to each community of users you learn something new that can be used to ensure the project is a business success.

Hopefully you found this post useful. Let me know if you have any questions and would like to discuss your particular situation and how one can approach eliminating, or at least reducing, your organization’s dependency on passwords.

Windows Certificate Enrollment Protocols

Windows has two sets of certificate enrollment protocols; these protocols are used to enable a client to acquire and maintain certificates with little to no manual interaction.

The first implementation of certificate enrollment in Windows was introduced with Windows XP and was based on DCOM; this DCOM dependency limited where you could deploy it. The solution could never practically or securely be used over the internet, and in segmented corporate networks it essentially required each island of trust to deploy its own certificate authority, which was both costly and impractical.

The second implementation was introduced in Windows 7; it is based on SOAP web services and has two core APIs: one for determining which certificates a client should enroll for and another for acquiring those certificates. This newer approach addresses the security and deployability concerns of its predecessor and enables deployment of the CA as a pure Software as a Service (SaaS) solution.

Normally I don’t write about GlobalSign products here, but when we looked at how we would build our own solution in this space we decided the security and deployment limitations of the DCOM approach were too limiting to meet our goals of reducing TCO and enabling broader use of certificates, so we went with the more flexible and secure approach.

DCOM and your Firewall

DCOM (Distributed Component Object Model) is a framework used by Windows to allow COM components to work over the network.

Unlike traditional TCP/IP and UDP/IP services, where a given protocol has a fixed port, DCOM dynamically assigns ports to the COM objects it remotes.

Once a port has been assigned to an object, any client wishing to communicate with that object will always connect to that same TCP or UDP port. Clients discover the port associated with a particular object by connecting to and using the services provided by DCOM’s Service Control Manager (SCM).

The SCM always operates at a fixed network port on every computer; this is always port 135 for both TCP and UDP.
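If you ever need to confirm that the SCM endpoint is reachable from a client, a quick check is possible from PowerShell on newer Windows versions (Test-NetConnection ships with Windows 8 / Server 2012 and later; the hostname below is just a placeholder):

> Test-NetConnection -ComputerName server01.example.com -Port 135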

DCOM is a great technology for what it was designed for but unfortunately that design did not include the internet or firewalls.

If a firewall exists between the client and the DCOM endpoint you must open up a large range of ports that can then be used for virtually any traffic between those two endpoints. By default, that port range is 1024 to 5000.

The nature of this model means that one port is allocated per process, so the number of available ports equates to the number of simultaneous DCOM processes allowed through the firewall.

This design essentially negates the value proposition of the firewall; as such, deploying services based on DCOM across firewall boundaries is not something that should ever be done.

Enterprise logon with challenge response (OTP) tokens

From time to time I am asked how to add support for a new authentication method to Windows logon; the answer to that question is buried in which authentication methods Windows natively supports.

For the purpose of this blog post let’s scope that question to the Windows Kerberos implementation, as this is the “modern” authentication platform in Windows (in other words Kerberos, since Microsoft does not want you using NTLM any further); additionally it supports most authentication methods.

So what are those methods? Essentially there are two: passwords and public/private keys. What about the Windows native implementations of biometrics and picture passwords? These, like nearly every other modality of authentication in Windows, are simply layers on top of the native password support built into Windows.

I should note I was the Lead Program Manager for the Windows Biometric Framework and I do think it is a good solution for what it was designed for.

Let me explain, using biometrics as an example. In this solution there is a service that works with the biometric sensors to perform the biometric match. When a match occurs, this service releases a clear text copy of your password, which in turn is stuffed into the Windows logon path just as if the user had entered the password by hand.

This means that somewhere in the file system there is a copy of the password stored in a reversibly encrypted form. Specifically, the encryption in this case is performed using a Windows subsystem called the Data Protection API (DPAPI). DPAPI is used by applications when they need to keep a secret secret. You see, this is a Catch-22 scenario: unless a password (or key) is entered there is no secret to encrypt with, which means ultimately there is a root secret stored in the clear. If you can get at that root secret, you can get at the clear text values encrypted with DPAPI.
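To make the DPAPI point concrete, here is a minimal PowerShell sketch using the .NET ProtectedData wrapper over DPAPI. Note that the caller supplies no secret of its own; the protection is rooted in material derived from the user’s logon credential, which is exactly the Catch-22 described above:

# Protect and then unprotect a value with DPAPI (illustrative only; the string is a placeholder)
Add-Type -AssemblyName System.Security
$secret = [System.Text.Encoding]::UTF8.GetBytes("example password")
$protected = [System.Security.Cryptography.ProtectedData]::Protect($secret, $null, [System.Security.Cryptography.DataProtectionScope]::CurrentUser)
# Anything running as this user can reverse the operation without supplying any key
$recovered = [System.Security.Cryptography.ProtectedData]::Unprotect($protected, $null, [System.Security.Cryptography.DataProtectionScope]::CurrentUser)
[System.Text.Encoding]::UTF8.GetString($recovered)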

There are ways to mitigate some of the associated risks; one example is the use of BitLocker with a TPM protector, but unfortunately this is not broadly deployed and doesn’t address the full risk profile. This is why Microsoft positions the Windows biometric feature as a convenience. That is not to say it cannot be used in a secure way, or to suggest it cannot be used in an enterprise, but unless thought is given to the threat model and the right mitigations are deployed it actually weakens your security.

So what about those enterprise OTP and challenge response solutions that we keep hearing about; surely they must be different, right? Unfortunately, no, they are not. In fact they are probably worse because, unlike the Windows biometric solution, they create a central repository of all clear text passwords.

You see, for them to work they introduce a client (once called a GINA, now called a Credential Provider) that collects the challenge, and likely a service component or SSPI provider that implements a protocol for interacting with their product’s service, which then validates the challenge and returns a clear text password that they stuff into the logon path on the user’s behalf.

No matter how good the design and implementation of such a system is, it is reliant on that singular repository of clear text / reversibly encrypted passwords. Things get even worse when you look at how these systems have to deal with offline and local logons; you know, the scenario where a user takes their laptop on a plane and still needs to do work. For this to work they need to store the password in the clear (or reversibly encrypted) on the local machine, just like the biometrics solution.

What about the public / private key approach? There are essentially two variants of this in Windows: PKINIT and PKU2U. The most common example is the use of a smart card; this is what many Fortune 500s and governments use to secure their logons. It is also possible to add other modalities to Windows by emulating this approach with a software virtual smart card, but they too fall prey to the same attack vectors, namely: what key do you use to keep the private key protected? (The answer is DPAPI.)

So does this mean OTP is a bad solution? No; in fact it’s much better than passwords if the system was designed to use it, and it makes a great additional factor. The thing is, Windows was not.

Deploying forward secrecy on RedHat, Centos or Fedora based systems

If you want to deploy perfect forward secrecy and you use a RedHat, CentOS or Fedora based system, you likely won’t be able to do so without building your own OpenSSL. This is because by default the OpenSSL packages for these systems do not include ECC or ECDH support, and when web servers like Apache and Nginx are built against libraries that lack them they naturally omit support for those algorithms.

Hopefully these distributions and the community repositories that support them will start including support for these algorithms by default soon but in the meantime you can add support by building your own.

You can check if your OpenSSL supports these algorithms easily enough; just run this command:

> openssl ciphers

If you see ciphers like “ECDHE-RSA-AES256-GCM-SHA384” then you have a version of OpenSSL that was built with ECC and ECDHE support enabled, which is required if you want forward secrecy today. If you don’t see such a suite, don’t worry; just check out this blog post and it will step you through how to build a new OpenSSL.
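If you want to see exactly which forward-secret suites your build offers, a quick way (nothing special, just standard OpenSSL and shell tools) is:

> openssl ciphers | tr ':' '\n' | grep ECDHE

An empty result means your OpenSSL was built without the ECDHE suites.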

Unfortunately building a software package is slightly different on each Linux distribution; I will share what worked for me on CentOS 6.3.

1. We don’t want to break your current Nginx deployment, so let’s start by finding out what options your Nginx was built with; you do this with the following command:

> nginx -V

This will return something like this:

nginx version: nginx/1.5.1

built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC)

TLS SNI support enabled

configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic'

To ensure we don’t omit something you’re using, we will use the same options when we build our Nginx.

 

2. Next let’s make a backup of everything you have in your current Nginx installation just in case something doesn’t go right:

root> cp /usr/sbin/nginx /usr/sbin/nginx.orig

root> cp -r /etc/nginx /etc/nginx.orig

3. Now let’s make sure we get our dependencies installed, download our source and extract it so we can do our build.

root> cd /tmp

root> yum install pcre

root> yum install pcre-devel

root> yum install zlib

root> yum install zlib-devel

root> wget http://nginx.org/download/nginx-1.5.1.tar.gz

root> tar -xvzf nginx-1.5.1.tar.gz

root> cd nginx-1.5.1

4. Then we need to configure our build. Let’s start by taking the options you saw when you ran “nginx -V” and adding the following to the end of them:

--with-ld-opt='-L/usr/local/ssl/lib' --with-cc-opt='-I/usr/local/ssl/include'

This is where your OpenSSL libraries and headers should be; just append the combined set of parameters to the “./configure” command, which will look like this:

root> ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-ld-opt='-L/usr/local/ssl/lib' --with-cc-opt='-I/usr/local/ssl/include'

5. Make and install your Nginx

root> make
root> make install

6. Restart Nginx

root> service nginx restart

Now if you are using the SSL configuration I recommend here, you have already set your server to prefer the commonly supported forward-secret cipher suites, and if you visit https://sslcheck.x509labs.com you will see these ECDHE based suites at the top of the list of negotiated suites.
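You can also verify a specific suite directly from the command line; for example (replace example.com with your host), the handshake below will only succeed if the server is willing to negotiate the named ECDHE suite:

> openssl s_client -connect example.com:443 -cipher ECDHE-RSA-AES256-GCM-SHA384 < /dev/null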

 

For more information see:

1. How to botch TLS forward secrecy, AGL
2. Getting the Most Out of SSL Part 1: Choose the Right Certificate, CA Security
3. SSL Configuration Checker, X509 Labs
4. High Performance Browser Networking, Ilya Grigorik
5. Bulletproof SSL/TLS and PKI, Ivan Ristic

How to Build Your Own OpenSSL

So you have been reading all the press on forward secrecy and want to deploy it? But does your OpenSSL support it? Thankfully it is easy to tell; just run this command:

> openssl ciphers

If you see ciphers like “ECDHE-RSA-AES256-GCM-SHA384” then you have a version of OpenSSL that was built with ECC and ECDHE support enabled which is required if you want forward secrecy today.

So how do you go about doing this? Thankfully you don’t need to be a developer or cryptographer; with the following commands you should be able to get the latest (as of the time of this post) OpenSSL built with ECC and ECDH enabled.

root> cp /usr/bin/openssl /usr/bin/openssl.orig
root> cd /tmp
root> wget http://www.openssl.org/source/openssl-1.0.1e.tar.gz
root> tar -xvzf openssl-1.0.1e.tar.gz
root> cd openssl-1.0.1e
root> ./config no-shared no-threads 
root> make depend
root> make
root> make install
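To confirm the build you just installed actually includes the ECDHE suites, you can point at the new binary directly (a quick check, assuming the default /usr/local/ssl install prefix):

root> /usr/local/ssl/bin/openssl version
root> /usr/local/ssl/bin/openssl ciphers | tr ':' '\n' | grep ECDHE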

You may also need to rebuild your web server. You see, even though the latest versions of Nginx and Apache include the necessary changes to enable ECDHE, if the version you are running was built against a version of OpenSSL that did not include support you’re going to have to rebuild it as well. Here is a quick post on how to do that for Nginx.

 

Good luck!

 

Ryan

Example IIS SSL / TLS configuration

Configuring your web server for SSL can be a little overwhelming. In the case of IIS (in this post I am assuming you are running 2008 R2 or later) it often requires navigating a myriad of screens and sometimes editing the registry.

In this post we will configure SSL via a simple .reg file that configures the Windows SSL implementation (SCHANNEL) directly and makes the following changes:

  1. We have chosen SSL cipher suites to offer a blend of performance and security.
  2. We have disabled SSL v2 and v3 and enabled all versions of TLS.
  3. OCSP stapling is on by default.
  4. We have enabled SSL session caching.

Here is the .reg file:

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Client]
"DisabledByDefault"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server]
"DisabledByDefault"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server]
"DisabledByDefault"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"DisabledByDefault"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL]
"AllowInsecureRenegoClients"=dword:00000000
"DisableRenegoOnServer"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL]
"ServerCacheTime"=dword:00600000
"MaximumCacheSize"=dword:00020000
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\00010002]
"Functions"="TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P521,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256_P256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256_P384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256_P521,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256_P256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256_P384,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256_P521,TLS_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA256,TLS_DHE_DSS_WITH_AES_128_CBC_SHA256,TLS_DHE_DSS_WITH_AES_256_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P521,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P522,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P521,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P521,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA"

 

WARNING: As always when working with .reg files, be sure to take a backup of the current registry keys so you can return to the previous values if things do not work as expected.
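A simple way to take that backup is to export the relevant keys before importing the .reg file; for example (the file names are just suggestions):

reg export "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL" schannel-before.reg /y
reg export "HKLM\SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\00010002" ciphersuites-before.reg /y

Note that the second key may not exist yet on a clean system, in which case the export will simply report that it was not found.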

In addition to the above changes we also want to add some headers that will help secure the site. In the case of IIS, setting these headers may require the addition of modules; as such I won’t document how to set the headers here and will instead provide links to authoritative resources that will help you do this part:

  1. X-Frame-Options header to help protect from Click-Jacking. See KB2694329.
  2. X-Content-Security-Policy header to help protect from Cross-Site-Scripting. See Security through HTTP response headers.
  3. Strict-Transport-Security header instructs the browser to always start with the https site. See the Strict Transport Security Module on CodePlex.

We also want to configure IIS to issue a permanent redirect for all HTTP requests to their HTTPS equivalents; for more information on how to do this I recommend reading HTTP to HTTPS redirects on IIS 7.x and higher.
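One common way to do this is a URL Rewrite rule; a sketch of what that looks like in web.config is below (this assumes the IIS URL Rewrite module is installed, see the linked article for the full details):

<system.webServer>
  <rewrite>
    <rules>
      <rule name="Redirect to HTTPS" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTPS}" pattern="off" ignoreCase="true" />
        </conditions>
        <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>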

Deploying SSL – Beyond the certificate and cipher suites

If you were to search the internet for “configuring SSL” you would find a ton of references on configuring your favorite web server for SSL, some of it good and some of it not so good. But what you don’t see a lot of content on is how to deploy it successfully.

What do I mean by successfully? These articles ignore the larger picture, for example:

  1. Are there changes to your content you will need to make?
  2. What about external content and script references?
  3. Are there any SEO considerations?
  4. Are there other related considerations?

To some these things may be common sense, but even for those a refresher never hurts, so let’s go over them briefly.

 

Are there changes to your content you will need to make?

Probably. Lots of content I encounter explicitly references the protocol serving it (e.g. href="http://…" and src="http://…"), and if that’s the way your content looks then yes, you will want to update it to use protocol-relative references, for example:

href="//{hostname}/{uri}"

src="/{uri}"

This way your content is independent of the protocol used to transport it; this will also help prevent your users from encountering “mixed content” warnings.

 

What about external content and script references?

Another scenario that causes mixed content warnings is when sites use scripts and content hosted on other servers that are explicitly referenced over HTTP. The two most common I encounter are YouTube embeds and Google Analytics, but there are lots of different third-party content and scripts out there, and each one you embed will also need to support SSL.

Thankfully I have never encountered one that does not support SSL, and in most cases you will just need to make the reference relative (“//”) and let the browser decide what protocol to use to fetch the reference. In the very rare cases where this does not work, a quick email to support at the content/script provider will get you the URL to the SSL version of the content/script.

Though this has always been the case, one thing to keep in mind is that the perceived performance and actual security of your site depend on the performance and security of the providers you include in it. I strongly recommend you check their performance and SSL configuration and ask them to make any changes necessary to address issues this might identify.

 

Are there any SEO considerations?

Aren’t there always? To achieve all of the security benefits of SSL you have to deploy it across your entire site (this is commonly referred to as Always On SSL). This means that, as far as a search engine is concerned, there could be two copies of the same content. This is treated as a negative condition in most page ranking schemes; we address it in a few ways:

1. Tell the search engine which content is authoritative (aka which one we want them to index); we do this by:

    • Updating <link rel="canonical"> to point to the HTTPS version.
    • Updating the XML Sitemap to refer to the HTTPS version of the content.

Making these two changes ensures the search engine will index the SSL version of the site so the first link the user visits will be your HTTPS version.
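For example, a minimal sketch of what those two changes look like (the hostname and path are placeholders):

<!-- In the <head> of each page -->
<link rel="canonical" href="https://www.example.com/some-page/" />

<!-- In the XML sitemap -->
<url>
  <loc>https://www.example.com/some-page/</loc>
</url>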

These changes not only improve the user’s experience by getting them to the content quicker (instead of relying on a rewrite rule to get them to the HTTPS content) but also help mitigate MITM attacks that would otherwise be possible for organic traffic arriving at your HTTP URLs.

2. Ensure the robots.txt is available over SSL.

3. Redirect all HTTP requests for your site to the HTTPS version using a permanent redirect (an HTTP 301); this will transfer your PageRank to the SSL URL (see the Nginx sketch after this list).

4. Update the search engine webmaster tools to refer to the HTTPS URL instead of the HTTP URL.
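As referenced in item 3, a minimal Nginx sketch of the permanent redirect looks like this (the hostname is a placeholder; on IIS a URL Rewrite rule achieves the same thing):

server {
    listen 80;
    server_name www.example.com;
    # Send all HTTP requests to the HTTPS equivalent with a 301
    return 301 https://$host$request_uri;
}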

 

Are there other considerations?

There are a few; for one there is performance. There is a myth that SSL is computationally expensive; it’s simply not true (at least today), but that doesn’t mean you don’t need to be concerned with performance.

There are several settings you care about. For example, it’s common for websites to use domain sharding, which means that when you’re using SSL each of those domains represents a new SSL negotiation, and the negotiation is the most costly part of the SSL session. While we can’t eliminate this cost, we can ensure that the servers terminating our SSL sessions implement session caching and reuse to reduce the impact of the SSL overhead. We can also try to limit the number of domains we use when sharding to reduce the number of SSL sessions needed to finish rendering the site.

You may also want to look at deploying a reverse proxy in front of your web servers where all SSL is terminated; this can give you performance benefits beyond SSL and can simplify key and SSL management in your environment at the same time.

Then there is the question of cookies. While all sensitive cookies should already be marked “secure” so they won’t get sent over non-secure sessions, you should consider marking all cookies as “secure” since the whole site is now supposed to be served over SSL.
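For example, the difference is just the Secure attribute on the Set-Cookie header (the cookie name and value here are placeholders); adding HttpOnly as well is a good habit for cookies that scripts don’t need to read:

Set-Cookie: session=abc123; Path=/; Secure; HttpOnly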

Depending on how you have authored your rewrite rules, there may be static references to HTTP buried in there; you will want to review your rewrite rules to ensure they are protocol independent (where appropriate) so that you don’t end up forcing users through an unnecessary redirect.

And finally, setting the HTTP Strict Transport Security header means browsers will visit you over HTTPS every time, even when not coming from search results; this will improve perceived performance and help protect from MITM attacks.
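If you are on Nginx, sending this header is a single directive in the server block serving HTTPS (the max-age below is a common one-year value, not a requirement):

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";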

 

Ryan

 

Resources

1. Choose the Right Certificate, CA Security

2. Deploying SSL – How to get your server configuration right, Ryan Hurst

3. SSL Configuration Checker, X509 Labs

4. SSL Pulse, Trustworthy Internet Movement

5. Bulletproof SSL/TLS and PKI, Ivan Ristic

6. High Performance Browser Networking, Ilya Grigorik

7. How to get the latest stable OpenSSL, Apache and Nginx, Ryan Hurst

8. Always On SSL, OTA

9. Revocation Report, X509 Labs

10. SSL/TLS Deployment Best Practices, Qualys Labs

11. Transport Layer Security, WikiPedia

12. How to botch TLS forward secrecy, AGL

Deploying SSL – How to get your server configuration right

They say the most complicated skill is to be simple; despite SSL and HTTPS having been around for a long time, they still are not as simple as they could be.

One of the reasons for this is that the security industry is constantly learning more about how to design and build secure systems; as a result, the protocols and software used to secure online services need to continuously evolve to keep up with the latest risks.

This situation creates a moving target for server administrators, creating a situation where this year’s “best practice” may not meet next year’s. So how is a web server administrator to keep up with the ever-changing SSL deployment best practices?

There is, of course, a ton of great resources on the web that you can use to follow industry trends and recent security research, but it’s often difficult to distill this information into actionable and interoperable SSL configuration choices.

To help manage this problem there are tools like the X509 Labs SSL Configuration Checker, which looks at your server’s configuration and makes recommendations on what you should change to address current industry best practices. This tool makes recommendations based on current and past security research, trends, and both client and server behavior and capability.

The tool performs over 33 different tests on your server configuration and, based on the results, recommends specific changes you should make to address its findings.

In general, the guidance the tool provides can be categorized as follows:

 

Support latest versions of TLS protocol

Often organizations are slow to pick up newer versions of their web server and SSL implementations. This is normally a conscious decision attributed to the old adage of “if it’s not broken don’t fix it.”

The problem is that these older versions are plagued with security issues. In many cases, these organizations pick up security patches, but these patches do not include the more recent (and more secure) versions of the protocols.

It is important that all sites add support for TLS 1.2 as this new version of the protocol offers security improvements over its predecessors and lays the groundwork for addressing future security concerns.

Disable older known insecure versions of the SSL protocol

SSL was defined in 1995 and has evolved significantly since then; SSL 2.0 in particular has been found to have a number of vulnerabilities. Thankfully these issues have been resolved in later versions of the protocol.

Unfortunately at least 28% of sites today still support it (based on SSL Pulse data); when I speak to server administrators about why they enable this older version they commonly mention concerns over client interoperability. Thankfully browser statistics show us that TLS 1.0 support is ubiquitous, so it is no longer necessary to support the older insecure version of the protocol.

 

Choose secure and modern cipher suites

This is one of the more confusing parts of configuring SSL; it’s also one of the most important. No matter how strong the cryptographic key material that goes into your certificate, your SSL is only as secure as the cryptography used to encrypt the session.

You don’t need to be a cryptographer or security researcher to make the right choices though; the X509 Labs SSL Configuration Checker will help you keep on top of current recommendations. Based on current research, the following would be solid choices to go with:

Apache

SSLProtocol -ALL +SSLv3 +TLSv1 +TLSv1.1 +TLSv1.2
SSLCipherSuite "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS"
SSLHonorCipherOrder on

Nginx

ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
ssl_prefer_server_ciphers on;

These settings were chosen based on several factors including strength of the cryptography, interoperability and support for forward secrecy whenever it is supported by both the client and the server.

What is forward secrecy? You have forward secrecy when an attacker needs more than the encrypted traffic from your server and its private key to decrypt the traffic.

For you to be able to use the cipher suites that support forward secrecy listed here, you will need to be using a version of OpenSSL and of your web server that was built with ECDHE support. If you’re not, you can still use these settings; you just won’t offer forward secrecy to your users.

 

Disable insecure options in SSL and HTTP

As a general rule, protocols have options; these options can have unforeseen side-effects.

A great example of this is the option of SSL compression. Compression was added to SSL to improve the performance of the protocol, but it had a side effect: it enabled attackers to recover secrets (such as session cookies) transmitted within the encrypted session. This attack was called CRIME (Compression Ratio Info-leak Made Easy) and, as such, this option is disabled today in secure SSL configurations.

Ensuring your configuration does not enable any such options is key to having a secure SSL configuration.
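On Apache, for example, turning this option off is a single directive (available in Apache 2.4.3 and later, and backported to 2.2.24; many newer OpenSSL builds also disable compression on their own):

SSLCompression off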

 

Enable performance optimizing options in SSL

To truly benefit from deploying SSL you need to apply it to your whole site—not doing so exposes sessions to attacks. The most common reason I hear from organizations as to why they are not deploying SSL across their whole site concerns performance.

This is a legitimate concern; according to Forrester Research, “The average online shopper expects your pages to load in two seconds or less, down from four seconds in 2006; after three seconds, up to 40% will abandon your site.”

And while it is true that an improperly configured web server will perform notably differently than a properly configured one, it’s not difficult to configure your servers so that performance is not a major concern.

 

Ryan

 

Resources

  1. Getting the Most Out of SSL Part 1: Choose the Right Certificate, CA Security
  2. SSL Configuration Checker, CA Security
  3. SSL Pulse, Trustworthy Internet Movement
  4. Bulletproof SSL/TLS and PKI, Ivan Ristic
  5. High Performance Browser Networking, Ilya Grigorik
  6. How to get the latest stable OpenSSL, Apache and Nginx, Ryan Hurst
  7. Always On SSL, OTA
  8. Revocation Report, X509 Labs
  9. Transport Layer Security, WikiPedia
  10. Perfect forward secrecy, Wikipedia
  11. SSL Labs: Deploying Forward Secrecy, Qualys
  12. Intercepted today, decrypted tomorrow, Netcraft
  13. How to Build Your Own OpenSSL, Ryan Hurst
  14. Deploying forward secrecy on RedHat, Centos or Fedora based systems, Ryan Hurst

How to do a quick and dirty benchmark on a smart card

So you have to make a decision about which smart card or crypto token you’re going to use on a given project. There are lots of things to consider, including price, platform support, certifications, build and software quality, the number of certificates and keys it can hold, and what algorithms and key sizes it implements, amongst numerous others.

But even with those things well understood, you need to understand the performance characteristics of the device; the two most common operations your users will likely perform in an authentication deployment are signing and certificate requests.

Thankfully in Windows it’s fairly easy to get an idea of how these operations will perform for users with just a little bit of PowerShell.

In both cases we will use the measure_runs script that can be found here.

Measuring Signing Performance

To do this we will use the support PowerShell has for code signing. First you need a code signing certificate whose key is on your smart card; I won’t cover how to create one in this post, but if you need help just ping me on Twitter (@rmhrisk).

1. Make sure you have at least one code signing certificate by running this command:

gci cert:\CurrentUser\My -CodeSigningCert

 

Assuming you do you should see something like this:

 

Thumbprint Subject

———- ——-

EEB2729E922E72E9DCC03000129795939F194358 CN=PowerShell User

 

2. Make sure you can sign with that certificate:

$cert = @(gci cert:\currentuser\my -CodeSigningCert)[0]; Set-AuthenticodeSignature measure_runs.ps1 $cert

 

If things work you will see something like this:

SignerCertificate Status Path

—————– —— —-

EEB2729E922E72E9DCC03000129795939F194358 Valid measure_run..

 

3. Sign 100 times averaging the results:

.\measure_runs.ps1 -numberRuns 100 -run { Set-AuthenticodeSignature measure_runs.ps1 $cert} -before {$cert = @(gci cert:\currentuser\my -CodeSigningCert)[0]}

 

If things work you will see something like this:

[ 1 / 100] (Preparing… ) Running… 0.1874 seconds

[ 2 / 100] (Preparing… ) Running… 0.1700 seconds

[ 3 / 100] (Preparing… ) Running… 0.1316 seconds

[ 4 / 100] (Preparing… ) Running… 0.1426 seconds

[ 5 / 100] (Preparing… ) Running… 0.1468 seconds

Average

——-

0.262226129

 

A few things to keep in mind: you will have been prompted for a PIN once in this exercise. Why only once? Because Windows caches the handle to the card for the PowerShell process; this means anything in the context of that PowerShell session will be able to sign with the key unless you pull the token.

Measuring Certificate Request Generation

To do this we will use a script that generates a self-signed certificate, which I stumbled on while searching the internet; you can find it here (http://poshcode.org/1793). You will need to modify it to refer to the appropriate CSP/KSP for your smart card so we are testing the right device/software.

While creating a self-signed certificate isn’t exactly the same as creating a certificate request, both generate a key pair and sign a message, so it’s going to be representative of the CSR generation time as well.

To do this test run this command:

.\measure_runs.ps1 -numberRuns 10 -run { .\createselfsigned.ps1}

 

If things work you will see something like this:

[ 1 / 100] Running… 11.9023 seconds

[ 2 / 100] Running… 9.8994 seconds

[ 3 / 100] Running… 9.9996 seconds

[ 4 / 100] Running… 10.7412 seconds

[ 5 / 100] Running… 8.9172 seconds

Average

——-

9.0738614

 

Hope this helps; if you have any questions don’t hesitate to ask.

 

Ryan