Tag Archives: SSL

A quick look at SSL performance

When people think about SSL performance they are normally concerned with the impact on the server, specifically the computational and memory costs of negotiating the SSL session and maintaining the encrypted link. Today, though, it’s rare for a web server to be CPU or memory bound, so this really shouldn’t be a concern; with that said, you should still care about SSL performance.

Did you know that at Google SSL accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead?

Why? Because studies have shown that the slower your site is the less people want to use it. I know it’s a little strange that they needed to do studies to figure that out, but the upside is we now have some hard figures we can use to put this problem in perspective. One such study was done by Amazon in 2008; in it they found that every 100ms of latency cost them 1% in sales.

That should be enough to get anyone to pay attention so let’s see what we can do to better understand what can slow SSL down.

Before we go much further on this topic we have to start with what happens when a user visits a page; the process looks something like this:

  1. Lookup the web server’s IP address with DNS
  2. Create a TCP socket to the web server
  3. Initiate the SSL session
  4. Validate the certificates provided by the server
  5. Establish the SSL session
  6. Send the request content

What’s important to understand is that to a great extent the steps described above happen serially, one right after another – so if they are not optimized they result in a delay to first render.
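If you want to see how long each of these phases takes for your own site, curl can report per-phase timings; here is a rough sketch (the hostname is just a placeholder):

curl -o /dev/null -s -w "dns: %{time_namelookup}\ntcp: %{time_connect}\ntls: %{time_appconnect}\nfirst byte: %{time_starttransfer}\ntotal: %{time_total}\n" https://www.example.com/

The gap between time_connect and time_appconnect is roughly what the SSL handshake is costing you before the first byte of content can even be requested.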

To make things worse this set of tasks can happen literally dozens if not hundreds of times for a given web page; just imagine that process being repeated for every resource (images, JavaScript, etc.) listed in the initial document.

Web developers have made an art out of optimizing content so that it can be served quickly but often forget about the impact of the above. There are lots of things that can be done to reduce the time users wait to get to your content, and I want to spend a few minutes discussing them here.

First (and often forgotten) is that you are dependent on the infrastructure of your CA partner. You can make your DNS as fast as possible, but you’re still dependent on theirs; you can minify your web content, but the browser still needs to validate the certificate you use with the CA you got it from.

These taxes can be quite significant and add up to 1000ms or more.
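If you are curious how large this tax is for your CA, you can time a revocation check by hand; the sketch below assumes your certificate and its issuer’s certificate are in the placeholder files mysite.cer and issuer.cer, and that the responder URL plugged into the second command comes from the first:

openssl x509 -in mysite.cer -noout -ocsp_uri

time openssl ocsp -noverify -no_nonce -issuer issuer.cer -cert mysite.cer -url "http://ocsp.example-ca.com"

The first command pulls the OCSP responder URL out of your certificate’s AIA extension; the second times a request against it.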

Second, a mis- (or lazily) configured web server is going to result in a slower user experience. There are lots of options that can be configured in TLS that will have a material impact on TLS performance; these range from simple certificate-related choices to more advanced SSL options and configuration tweaks.

Finally, simple networking concepts and configuration can have a big impact on your SSL performance, from the basic, like using a CDN to get the SSL session to terminate as close as possible to the user of your site, to the more advanced, like tuning TLS record sizes to be more optimal.

Over the next week or so I will be writing posts on each of these topics but in the meantime here are some good resources available to you to learn about some of these problem areas:

How to tell DV and OV certificates apart

Introduction

There are in essence three kinds of SSL certificates: Domain Validated, Organization Validated and Extended Validation. I am not going to write about the differences here; it seems that there are already hundreds of articles on this topic on the Internet.

What I think has not been given sufficient coverage is how one is able to look at a certificate and determine what type it is.

One would think that this would be easy; in theory if nothing was explicitly stated it would be a Domain Validated certificate (since it is the weakest validation), otherwise someone would put something in the certificate making it clear that it was either Organization Validated or Extended Validation.

Unfortunately it’s not this simple, the main issue being the historic lack of coordination within the CA industry.

Each Certificate Authority (CA) has its own unique practices relating to how they mark their certificates, so with the existing deployed certificates there is no single rule or approach that can be used to definitively know what level of validation was done for a given certificate.

Thankfully it looks like this problem is getting better thanks to the adoption of the Baseline Requirements, but in the meantime we have to make do with heuristics.

Deterministic Approach

Today the only way to know with confidence that a certificate is of a specific type is to know the practices of each CA.

In X.509 the way an issuer is supposed to express something like this is via the Certificate Policies extension which is defined in RFC 5280.

This allows a CA to express a unique identifier (an OID) in their certificates that maps to a document describing the practices associated with that certificate. This identifier can be used programmatically to make trust decisions about a certificate or to differentiate the user interface in an application based on what type of certificate is being used.

This is exactly how browsers today can tell if a certificate is an Extended Validation (EV) certificate. In essence they have some configuration that says “I trust GlobalSign to issue EV certificates, when a certificate is presented to me from them that has this policy OID show the EV user experience”.

The Baseline Requirements use the same approach, defining identifiers for Domain Validated and Organization Validated certificates; these are:

Type                      Policy Identifier
Domain Validated          2.23.140.1.2.1
Organization Validated    2.23.140.1.2.2

 

Having these identifiers takes us a long way towards our goal of deterministic evaluation of certificate issuance policy — that said, not all CAs have adopted them, which is technically alright since the Baseline Requirements do allow them to use their own Policy Identifiers instead.
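To see which policy identifiers a given certificate asserts, you can dump its Certificate Policies extension; for example with OpenSSL (example.cer here is just a placeholder for a PEM-encoded certificate):

openssl x509 -in example.cer -noout -text | grep -A 4 "Certificate Policies"

If the output contains Policy: 2.23.140.1.2.1 the issuer is asserting Domain Validation, and 2.23.140.1.2.2 asserts Organization Validation.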

Heuristic Approach

Since the Baseline Requirements were only established this year it will take some time for the existing install base of certificates to be re-issued to use these Policy Identifiers called out above. This doesn’t mean you can’t tell the certificates apart today; it is just quite a bit messier.

Here is some pseudo-code provided to me as an example from a friend that they used in one of their projects:

type = null;

if (cert is self-signed) then
     type = SS;        /* SS = Self-signed */
else if (cert was issued by a known “CA”) then
     type = DV;        /* DV = Domain Validation */
else if (cert contains a known EV Policy OID) then
     type = EV;        /* EV = Extended Validation */
else if (cert “Subject O” and “Subject CN” are the same or
         “Subject OU” contains “Domain Control Validated”) then {
     if (cert contains no Subject L, St or PostalCode) then
         type = DV;
}
else if (cert “Subject O” is “Persona Not Validated” and the cert’s issuer was StartCom) then
     type = DV;

if (type is null) then
     type = OV;

This logic is not comprehensive but should work well enough for most uses.

Summary

Unfortunately today there is not a deterministic way to tell if a certificate was Domain or Organization Validated; that said, things are changing and within a few years hopefully it will be possible.

In the meantime there are heuristics you can use that help tell these types of certificates apart.

Algorithms, key size and digital certificates

Introduction

On the surface digital certificates are not complicated — a third party (a certificate authority) verifies some evidence and produces a piece of identification that can be presented at a later date to prove that the verification has taken place.

As is usually the case, when we look a little deeper things are not that simple. In this case we have to care about a few other things, for example: what are the qualifications of the third party, what are their practices, and what cryptographic algorithms did they use to produce the digital certificate?

As an administrator using digital certificates, as in the case of SSL, these things can also have an impact on your operational environment – by using a certificate from a certificate authority you take a dependency on their practices and operational environment.

This is especially true when it comes to decisions relating to what cryptographic algorithms and key lengths are accepted and used by that third-party.

Thankfully you do not need to be a cryptographer to make good decisions on this topic; we just need to start with an understanding of the history, the future, and then the considerations.

History

In recent history the industry has relied on two algorithms, the first being an encryption algorithm called RSA and the second being a hash algorithm called SHA-1. Both are now considered weaker due to advances in cryptanalysis.

RSA’s strength and performance is based on the size of the key used with it: the larger the key, the stronger and slower it is.

These advances in cryptanalysis have driven the increase in key size used with this algorithm which in turn has increased the amount of computing power necessary to maintain the same effective strength.

The problem with this is that every time we double the size of an RSA key the decryption operations with that key become 6-7 times slower.
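You can get a feel for this on your own hardware with OpenSSL’s built-in benchmark; the absolute numbers will vary by machine, but the relative cost of the private key (sign) operations as the key size doubles is what matters:

openssl speed rsa1024 rsa2048 rsa4096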

As a result of all of this, as of January 2011 trustworthy Certificate Authorities have aimed to comply with NIST (National Institute of Standards and Technology) recommendations by ensuring that all new RSA certificates have keys of 2048 bits in length or longer.

Unfortunately this ever increasing key size game cannot continue forever, especially if we ever intend to see SSL make up the majority of traffic on the internet – the computational costs are simply too great.

That takes us to SHA-1. Hash algorithms take a variable amount of input and reduce it to a typically shorter, fixed-length output, the goal being to provide a unique identifier for that input. The important thing to understand is that hash algorithms are always susceptible to collisions, and advances in cryptanalysis have made it more likely that such a collision can be made.

The problem here is that there is no parameter to tweak that makes this problem harder for an attacker; the only way to address this issue is to change to a stronger algorithm to produce the hash.

Future

For the last decade or so there has been slow and steady movement towards using two new algorithms to address these advances — SHA-2 and ECC.

ECC has the potential for significant performance benefits over RSA without reducing security, and SHA-2 has three versions, each with progressively longer output lengths, which help it both address the current risks and give it some longevity.

Considerations

Our goal in configuring SSL is enabling users to communicate with us securely; to accomplish this we need to do it with the fewest hassles and lowest costs while complying with any associated standards.

Interoperability is the key that ensures the fewest hassles — if it were not for this we would simply switch to these new algorithms and be done with it. As is normally the case when it comes to security, this is where Windows XP rears its ugly head: SHA-2 support was only added to Windows XP in Service Pack 3, and ECC support arrived in Windows Vista.

These facts set the adoption clock for these new algorithms; if you care about XP (about 30% of the Internet today) you can’t adopt ECC and SHA-2 in full for about 5 years.

This leaves us with RSA 2048 and SHA-1 which thankfully is broadly considered sufficient for the next decade.

Performance is always a concern as well — an RSA 2048-bit certificate used in SSL will result in around a 10% CPU overhead; not huge, but something to keep in mind.

As mentioned previously we can’t forget compliance — whether it is the Payment Card Industry / Data Security Standards (PCI/DSS), Federal Information Processing Standards (FIPS) 140-2 or some other set of criteria you need to meet, this always needs to be considered.

Conclusion

The decision of what algorithms and key lengths to use in your digital certificates depends on a number of factors including security, interoperability, performance and compliance. Each situation may require a different trade-off to be made; however, as a rule of thumb, if you stick with SHA-1 and RSA 2048-bit certificates and keys you should be fine for now.

 

Resources

[1]   BlueKrypt Cryptographic Key Length Recommendations

[2]   Recommendation for Key Management, Special Publication 800-57 Part 1 Rev. 3, NIST, 05/2011

[3]   Fact Sheet Suite B Cryptography, NSA, 11/2010

[4]   Worldwide Operating System Statistics, Stat Counter, 9/2012

[5]   RSA Algorithm, Wikipedia

[6]   RSA Key Lengths, Javamex

[7]   ECC Algorithm, Wikipedia

[8]   Performance Analysis of Elliptic Curve Cryptography for SSL, Sun

[9]   Using ECC keys in X509 certificates, UnmitigatedRisk

[10] Using SHA2 based signatures in X509 certificates, UnmitigatedRisk

[11] Payment Card Industry / Data Security Standards – PCI

[12] Federal Information Processing Standards 140-2 – NIST

Using ECC keys in X509 certificates

Recently the CAB Forum published a document called the Baseline Requirements for the Issuance and Management of Publicly Trusted Certificates.

This document was authored by both browsers and public CAs and is used by the browser vendors to mandate what minimum technical requirements need to be met for inclusion into their “Root Programs”.

One of the changes specified in this document is that subscriber certificates (aka SSL certificates) containing RSA keys must have a bit length of at least 2048. This is a change for a lot of CAs (GlobalSign made this change some time ago) and one that has implications for server operators.

Just take a look at the Crypto Plus Plus Benchmarks to see how much more expensive 2048 bit RSA is. For most users this additional computational cost won’t be an issue, but in some cases customers may need to increase the computing power they allocate for SSL establishment.

But what alternatives do you have? Well there is one: certificates with ECC keys. Using these has the potential to significantly decrease the computational cost of SSL negotiations (even compared to your old 1024-bit RSA certificate), but they come with a significant penalty – compatibility.
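If you want to experiment, generating an ECC key and a certificate request for it is straightforward with OpenSSL, and its built-in benchmark gives a feel for the performance difference; the file names and subject below are just placeholders:

openssl ecparam -name prime256v1 -genkey -out ecc.key

openssl req -new -key ecc.key -out ecc.csr -subj "/CN=www.example.com"

openssl speed ecdsap256 rsa2048

Keep in mind your CA has to support issuing certificates for ECC keys for that CSR to be of any use.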

ECC was not supported in Windows until Vista, which was released in 2007; this basically means 100% of the XP clients out there (around 29% of the browsers on the internet as of July 2012) would be unable to establish a session with your website if you switched exclusively to ECC.

This is important for more than just Internet Explorer users since even Chrome and Safari use CryptoAPI for certificate validation when on Windows.

This would mean these users would see a certificate error instead of your site.

That is pretty scary, so how long until we can use this more broadly? It’s hard to say; there is a good article titled “The developers guide to browser adoption rates” that sheds some light, as do the historic gs.statcounter.com results. Based on these, unless there is a sudden change (which is possible, these machines are getting pretty old) I would assume that we have around 4-5 years of XP out there yet.

Hope this helps,

Ryan

Using SHA2 based signatures in X509 certificates

It’s been an exciting decade for cryptography; as a result we see smaller key sizes and weaker algorithms getting deprecated.

One driver of such things is the U.S. Federal Government, specifically NIST.

One example of this would be NIST Special Publication 800-131A, which disallows the use of SHA1 after December 2013. What this means is that if you are in the U.S. Federal Government, or you work with them, you may have to revise your technology strategy to use SHA2 in its place.
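If you want to check which signature algorithm your current certificate uses, or mint a SHA2-signed test certificate to see how your clients handle it, OpenSSL makes both easy; the file names and subject below are only placeholders:

openssl x509 -in mysite.cer -noout -text | grep "Signature Algorithm"

openssl req -x509 -newkey rsa:2048 -sha256 -nodes -keyout test.key -out test.cer -days 90 -subj "/CN=sha2test"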

But what if you don’t have any policy mandate forcing you to do this switch? Well it’s a good idea but it has consequences too, namely compatibility.

You see, SHA2 was published in 2001, so anything produced before then will not support it. The most notable example is Windows XP, which as of July 2012 has about a 29% presence on the Internet.

This is important for more than just Internet Explorer users since even Chrome and Safari use CryptoAPI for certificate validation when on Windows.

The good news is that XP SP3, which was released in 2008, added support for this new suite of hash algorithms; that raises the question of how many of those XP machines have SP3.

Unfortunately I don’t have any public references that can answer this question, but let’s say that 85% of all XP machines on the Internet have gotten this update (I have good confidence in this number); that means 15% of that 29% (roughly 4% of all clients) would not be able to connect to your server over SSL if you used SHA2.

This would mean these users would see a certificate error instead of your site.

That is pretty scary, so how long until we can use this more broadly? It’s hard to say; there is a good article titled “The developers guide to browser adoption rates” that sheds some light, as do the historic gs.statcounter.com results. Based on these, unless there is a sudden change (which is possible, these machines are getting pretty old) I would assume that we have around 4-5 years of XP out there yet.

Hope this helps,

Ryan

A revised look at the new Windows Update SSL configuration

The other day I did a quick post about how SSL was configured for Windows Update; I thought I would double back and see if they had made any changes, and it looks like they have.

From looking at the SSL Labs results I notice a few changes:

  1. The servers are now indicating a cipher suite preference.
  2. The servers are now putting TLS suites above the SSL suites.
  3. The servers no longer support TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA.

Since they made some changes after that post we can assume the goal was to improve things, and I think they have, but some of the choices they have made don’t entirely make sense to me, for example:
  1. Why continue to support SSL 2.0 when no client that only supports SSL 2.0 has the WU client software on it?
  2. Why include more than one SSL cipher suite? If the goal is to enable the use of SSL 3.0 and the only clients of this server are based on SCHANNEL and CryptoAPI, a single suite would be enough.
  3. If you’re going to support the RC4 cipher suites, why not prioritize them above the other suites so that the server is resistant to BEAST?
  4. Why remove TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA? I am guessing it’s performance, but that is surprising since it’s not that much more expensive.
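If you want to check protocol support on a server yourself, openssl s_client can be forced to use a specific protocol version; note that the -ssl2 option only works if your OpenSSL build still includes SSL 2.0 support:

openssl s_client -connect www.update.microsoft.com:443 -ssl2

openssl s_client -connect www.update.microsoft.com:443 -ssl3

A successful handshake means the server accepted that protocol version.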

We also see a few changes when browsing to https://www.update.microsoft.com; unlike before, when we browse from a client that doesn’t have a WU agent available on it (like XP with IE6 and no service pack) we no longer get instructions on how to get patched, we now get a blank page.

Anyhow, that’s what I see today.

Ryan

Setting HTTP headers with OpenSSL and the OCSP test client

So I have been doing some OCSP performance benchmarking over the last few weeks; one of the things we noticed while doing this is that OpenSSL does not set the HOST header when sending requests.

This is a problem when interacting with an OCSP responder that is on a shared host (a virtual host); without it the web server doesn’t know which instance to send the request to. You can work around this by using the “-header” switch, for example:

openssl ocsp -noverify -no_nonce -respout ocspglobalsignca.resp -reqout ocspglobalsignca.req -issuer globalsigng2.cer -cert globalsign.com.cer -url "http://ocsp2.globalsign.com/gsextendvalg2" -header "HOST" "ocsp2.globalsign.com"

You need to ensure the HOST header is consistent with the host contained in the -url switch, but it works.

Ryan

Pulse data publicly available

SSL Pulse is an initiative run by Qualys to monitor the overall health of SSL deployments on the Internet. It is based on the SSLLABS work Ivan Ristić has done; he has recently published the data used to derive these reports.

There are some interesting findings buried in the raw data, for example:

  • Most of the certificates (85%?) are expired.
  • Most of the certificates are self-signed or from internal PKIs.
  • Of the 5 “shorter than 1024bit” keys in the Pulse Dashboard, only 3 (based on manual verification) are time-valid certificates from public CAs; two expire this year and the last in 2014:

Host                  Issuance Date    Expiration Date    Key Size    Issuer
www.pysoft.com        01/25/2009       02/24/2014         512         Equifax Secure Global eBusiness CA-1
www.comlink.com.br    10/13/2009       11/07/2012         512         Thawte Premium Server CA
www.rtp.com           04/13/2009       06/04/2012         512         Thawte Premium Server CA

 

  • There are 2,472 RSAwithMD5 certificates in the 215,607 sample set, or around 1.1% of the hosts.

This last point gives us some context to some numbers Venafi published recently; they indicated 17.4% of the certificates they see are signed using MD5, a rate significantly higher than what we see here. It’s probably safe to assume the difference is that their sample is primarily derived from intranets, where private CAs are commonly set up and forgotten.
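If you want to check whether one of your own hosts is serving a small key or an MD5-signed certificate, something like this will tell you (swap in your own host name):

openssl s_client -connect www.example.com:443 < /dev/null 2>/dev/null | openssl x509 -noout -text | grep -E "Signature Algorithm|Public.Key"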

For me the largest conclusion we can take from this data is that there are a large number of organizations that set up SSL (and PKIs) and simply forget about them – this is of course not a surprise but it’s neat to see it backed up with numbers.

Ryan

 

A look at the new Windows Update SSL certificates

This morning I noticed a tweet by Mikko about the Windows Update certificate chain looking odd so I decided to take a look myself.

I started with the web server configuration using SSLLABS; unfortunately it did not fare well.

Looking a little closer we see a few things of interest:

  • SSLLABS is unable to validate the certificate
  • The server is using weak ciphers
  • The server is vulnerable to the BEAST attack
  • The server is not using an Extended Validation (EV) Certificate
  • The server is supporting SSL 2.0

To understand the specifics here we needed to look a little deeper, the OpenSSL s_client is a great tool for this:

openssl s_client -showcerts -status -connect www.update.microsoft.com:443

Loading ‘screen’ into random state – done

CONNECTED(0000017C)

OCSP response: no response sent

depth=1 C = US, ST = Washington, L = Redmond, O = Microsoft Corporation, CN = Microsoft Update Secure Server CA 1

verify error:num=20:unable to get local issuer certificate

verify return:0

Certificate chain

0 s:/C=US/ST=Washington/L=Redmond/O=Microsoft/OU=WUPDS/CN=www.update.microsoft.com

i:/C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Update Secure Server CA 1

1 s:/C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Update Secure Server CA 1

i:/DC=com/DC=microsoft/CN=Microsoft Root Certificate Authority

Server certificate

—–BEGIN CERTIFICATE—–

MIIF4TCCA8mgAwIBAgITMwAAAAPxs7enAjT5gQAAAAAAAzANBgkqhkiG9w0BAQUF

—–END CERTIFICATE—–

1 s:/C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Update Secure Server CA 1

i:/DC=com/DC=microsoft/CN=Microsoft Root Certificate Authority

—–BEGIN CERTIFICATE—–

MIIGwDCCBKigAwIBAgITMwAAADTNCXaXRxx1YwAAAAAANDANBgkqhkiG9w0BAQUF

—–END CERTIFICATE—–

subject=/C=US/ST=Washington/L=Redmond/O=Microsoft/OU=WUPDS/CN=www.update.microsoft.com issuer=/C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Update Secure Server CA 1

No client certificate CA names sent

SSL handshake has read 3403 bytes and written 536 bytes

New, TLSv1/SSLv3, Cipher is AES128-SHA

Server public key is 2048 bit

Secure Renegotiation IS supported

Compression: NONE

Expansion: NONE

SSL-Session:

Protocol  : TLSv1

Cipher    : AES128-SHA

Session-ID: 33240000580DB2DE3D476EDAF84BEF7B357988A66A05249F71F4B7C90AB62986

 

Session-ID-ctx:

Master-Key: BD56664815654CA31DF75E7D6C35BD43D03186A2BDA4071CE188DF3AA296B1F9674BE721C90109179749AF2D7F1F6EE5

Key-Arg   : None

PSK identity: None

PSK identity hint: None

Start Time: 1339954151

Timeout   : 300 (sec)

Verify return code: 20 (unable to get local issuer certificate)

read:errno=10054

 

With this detail we can also look at the certificates with the Windows Certificate viewer; we just extract the server certificate Base64, put it into a text file with a .cer extension and open it with Explorer.

From these we see a few additional things:

  • OCSP Stapling is not enabled on the server
  • The issuing CA was created on 5/30/2012 at 8:49pm
  • The issuing CA was issued by the 2001 SHA1 “Microsoft Root Authority”

So with this extra information let’s tackle each of these observations and see what conclusions we come to.

 

SSLLABS is unable to validate the certificate; there are two possible reasons:

a. The server isn’t including the intermediate certificates (it is) and SSLLABS doesn’t chase intermediates specified in the AIA:IssuerCert extension (doubt it does) or that extension isn’t present (it is).

b. The Root CA isn’t trusted by SSLLABS (which appears to be the case here).

My guess based on this is that Ivan only included the certificates in the “Third-Party Root Certification Authorities” store and did not include those in the “Trusted Root Certification Authorities” which are required for Windows to work.

Basically he never expected these Roots to be used to authenticate a public website.

[2:00 PM 6/18/2012] Ivan has confirmed he currently only checks the Mozilla trusted roots, therefore this root wouldn’t be trusted by SSLLABS.

Microsoft’s decision to use this root means that any browser that doesn’t use the CryptoAPI certificate validation functions (Safari, Opera, Chrome on non-Windows platforms, Firefox, etc.) will fail to validate this certificate.

This was probably done to allow them to do pinning using the “Microsoft” policy in CertVerifyCertificateChainPolicy.

I believe this was not the right approach since I think it’s probably legitimate to use another browser to download patches.

[2:00 PM 6/18/2012] The assumption in this statement (and it may turn out I am wrong) is that it is possible for someone to reach a path where from a browser they can download patches; it’s my understanding this is the experience that XP machines using a different browser have when visiting this URL — I have not verified this.

[3:00 PM 6/18/2012] Harry says that you have not been able to download from these URLs without IE ever, so this would be a non-issue if that is the case.

To address this Microsoft would need to either:

  • Have their PKI operate in accordance with the requirements that other CAs have to meet and be audited and be found to meet the requirements of each of the root programs that are out there.
  • Have two separate URLs and certificate chains: one for the website anchored under a publicly trusted CA and another under this private “Product” root. The manifests would be downloaded from the “Product”-root-backed host and the web experience would be served from the “Public”-root-backed host.
  • Cross certify the issuing CA “Microsoft Update Secure Server CA 1” under a public CA as well (cross certification), for example under their IT root that is publicly trusted, and include that intermediate in the web server configuration too. Then have a CertVerifyCertificateChainPolicy implementation that checks for that CA instead of the “Product” roots.

 

The server is using weak ciphers; several of the cipher suites it offers are based on MD5.


I see no reason to support the MD5 based ciphers as I find it hard to believe that there are any clients that can communicate with this site that do not support their SHA1 equivalents.

 

[2:00 PM 6/18/2012] I have been told I am too critical in calling these MD5 based ciphers weak, in that MD5 is used here as an HMAC; it is true that when used with a key, as is the case with HMAC, the current attacks are not relevant. With that said, any client that supports these suites will also support their SHA1 counterparts, and there is no reason to support the weaker suites that use MD5.

 

The server is vulnerable to the BEAST attack; SSLLABS isn’t able to tell if the server is specifying a cipher suite preference, which means it probably is not.

It is the cipher suite ordering issue that is actually resulting in the warning about the BEAST attack though. It is addressed by putting RC4 cipher suites at the top of the cipher suite order list.

[2:00 PM 6/18/2012] It’s been argued the BEAST attack isn’t relevant here because the client is normally not a browser, but the pages that are returned do contain JS and there are cases where users visit the site via a browser — otherwise there would not be HTML and JS in them. As such an attacker could use the attack to influence you to install malicious content as if it came from Microsoft. Maybe it’s not a leakage of personal information initially, but it’s an issue.

 

It is not using an Extended Validation (EV) Certificate; this is an odd one: is an EV certificate necessary when someone is attesting to their own identity? Technically I would argue no; however, no one can reasonably expect a user to go and look at a certificate chain and be knowledgeable enough to know that this is what is going on.

The only mechanism that communicates the identity to the user clearly enough is to make the certificate an EV certificate.

Microsoft really should re-issue this certificate as an EV certificate – if there was ever a case to be sure who you are talking to it would certainly include when you are installing kernel mode drivers.

 

The server is supporting SSL 2.0; this also has to be an oversight in the server’s configuration, as SSL 2.0 has been known to have numerous security issues for some time.

They need to disable this weak version of SSL.
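Since, as noted below, Windows Update is running on IIS, SSL 2.0 would normally be turned off through the SCHANNEL registry keys; a rough sketch of the change (a reboot is required for it to take effect) looks like this:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server" /v Enabled /t REG_DWORD /d 0 /f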

 

OCSP Stapling is not enabled on the server; OCSP stapling allows a web server to send its own revocation status along with its certificate, improving performance, reliability and privacy for revocation checking. According to Netcraft, Windows Update is running on IIS 7, which supports it by default.

This means Microsoft is either not allowing these web servers to make outbound connections or they have explicitly disabled this feature (login.live.com has it enabled and working). While it is not a security issue per se, enabling it certainly is a best practice, and since it’s on by default it seems they are intentionally not doing it for some reason.

 

The issuing CA was created on 5/30/2012 at 8:49pm; this isn’t a security issue but it’s interesting that the issuing CA was created four days before the Flame Security advisory. It was a late night for the folks operating the CA.

 

That’s it for now,

 

Ryan

What is your organization’s policy on SSL?

In other posts I discussed how to redirect the initial request to a website from the HTTP version to the HTTPS version (for Apache and IIS).

By following those steps your website will no longer serve HTTP content, but users will still be able to get to your site without having to know to type https:// before they browse to it.

This is an important part of making your site reachable by users over SSL because:

  1. Most users do not type the URL moniker (http:// or https://) at all when entering a URL.
  2. Since 99% of the traffic on the Internet is not available over HTTPS, browsers default to HTTP.
  3. Existing HTTP URLs that have been indexed by search engines, embedded in documents, passed off in emails can continue to work.

This approach isn’t perfect, for example:

  1. An attacker can perform a Man-In-The-Middle on the initial request and bypass the SSL protection (see sslstrip).
  2. It has the potential to “train users” to not navigate to the HTTPS version of your site initially.

The problem is of course that the alternative of returning an error when a user requests the HTTP version of the website (say a 403.4 – SSL required) or simply not having a server listening on the HTTP port is almost the same as saying your site isn’t accessible to the mainstream users.

So how can you manage these problems? There are a few things you can do:

For sensitive services, like login and those that collect personal information or credit cards, actually use 403.4 errors. This tells the user in no uncertain terms that SSL is required for that task, and since the browsing experience does not typically “start” there you do not break the user experience for your users.

You may also want to consider hosting the most sensitive content, like login and account details, on a separate virtual host that does not have an HTTP listener (for example login.example.com or accounts.example.com).

Next you should communicate your policy on SSL to the web browsers so they can do the right thing for your users; there are several ways for you to do this:

  1. Set the HTTP Strict-Transport-Security (HSTS) header for your pages; this will tell browsers to require SSL on your site (see the example after this list).
  2. Request that your site be added to the HTTPS Everywhere Rule list.
  3. Request that your site be added to the Preloaded HSTS list in Google Chrome.
  4. If you are a larger site you can also request that Google “pin” your web server’s public keys to your domain.
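As a sketch of what that first item looks like in practice, the header your server needs to return has this form (the max-age value, one year in seconds, is just an example):

Strict-Transport-Security: max-age=31536000; includeSubDomains

and you can verify your site is sending it with curl (the hostname is a placeholder):

curl -sI https://www.example.com/ | grep -i Strict-Transport-Security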

These things will not eliminate these risks, but they do help, especially for those using browsers that support HSTS (Chrome and Firefox as of today) or those that are using plugins like HTTPS Everywhere and NoScript.

As for the last two, they are clearly Chrome specific, but Chrome represents about 32% of the browser market today and as such is worth paying attention to.

Ryan

Additional Resources

How to Deploy HTTPS Correctly