Today only about 1% of the traffic on the Internet is protected with SSL (according to Sandvine). There are a few key issues keeping this number so small, and I thought I would put together a quick post on what I think those issues are.
Interoperability
For over a decade we have been working towards migrating to IPv6, yet despite that we have made little progress; in fact, the projections say we will run out of IPv4 addresses by the end of 2012.
As far as I know, not one of the top 10 CAs supports IPv6 yet (no, not even GlobalSign, though we are working on it). This means it is impossible to host a pure IPv6 SSL solution today (because the revocation data the CAs publish is only reachable over IPv4).
This is also interesting because today many sites are hosted on virtual hosting solutions that share the same IP address. This is done primarily because IP addresses are a scarce resource, but it has the side effect of making it hard (sometimes impossible) to deploy SSL on these hosts.
In 2003 an extension to TLS was proposed to address this problem; it is called Server Name Indication (SNI, now defined in RFC 6066).
Today server support for this extension is quite good, but the same cannot be said for client support (due to the lingering XP population and the influx of mobile devices).
In my opinion this is the #1 issue holding back the adoption of SSL everywhere.
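To make the problem concrete, here is a minimal sketch of SNI-based virtual hosting using Python's standard ssl module; the hostnames, ports, and certificate paths are placeholders, and a real deployment would of course sit behind a proper web server. The point is that the server only knows which certificate to present because the client volunteers the hostname during the handshake, which is exactly what legacy clients fail to do.

```python
# Minimal sketch of SNI-based virtual hosting with Python's ssl module.
# Hostnames, port, and certificate paths are hypothetical placeholders.
import socket
import ssl

# One context per certificate; the default context serves example.com.
default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
default_ctx.load_cert_chain("example.com.crt", "example.com.key")

other_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
other_ctx.load_cert_chain("example.net.crt", "example.net.key")

def pick_certificate(ssl_obj, server_name, context):
    # server_name is the host the client asked for via the SNI extension;
    # a legacy client (e.g. an XP-era stack) sends nothing, so it simply
    # gets the default certificate, right or wrong.
    if server_name == "example.net":
        ssl_obj.context = other_ctx

default_ctx.sni_callback = pick_certificate

listener = socket.create_server(("0.0.0.0", 443))
with default_ctx.wrap_socket(listener, server_side=True) as tls_listener:
    conn, addr = tls_listener.accept()  # handshake selects the cert via SNI
```

On the client side, Python sends SNI automatically whenever wrap_socket is given a server_hostname, which is also what modern browsers do.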
Complexity
It is amazing to me, but very little has changed in the CA industry since its birth in the mid-90s; certificates are still requested and managed in essentially the same way. It's a shame, and it's wasteful.
One of the reasons I joined GlobalSign is that they have been trying to address this issue by investing in both clients and APIs (check out OneClick SSL and CloudSSL). With that said, there is still a lot more that can be done in this area.
Then there is the problem of managing and deploying SSL; the SSL Pulse data shows us it is hard to get SSL configured right. We are getting better tools for this, but again there is still a ton of room for improvement.
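As a small illustration of the kind of check those tools automate, here is a quick probe (Python, standard ssl module) that reports what a given server actually negotiates; the hostname is a placeholder.

```python
# Quick sanity check of a server's SSL/TLS configuration: connect and
# report what actually gets negotiated. "www.example.com" is a placeholder.
import socket
import ssl

host = "www.example.com"
ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print("protocol:", tls.version())        # e.g. "TLSv1.2"
        print("cipher suite:", tls.cipher())      # (name, protocol, secret bits)
        print("cert subject:", tls.getpeercert().get("subject"))
```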
Performance
There has been a bunch of work done in this area over the years; the “solutions” relating to performance of SSL seem to be broken up into:
- Protocol improvements (SPDY, FalseStart, OCSP Stapling, etc.)
- Using different cryptography to make it faster (Smaller keys, DSA, ECDSA, etc.)
- Using accelerator products (F5 BigIP, NetScaler, SSL Accelerators, etc.)
I won’t spend much time on protocol improvements, as I think they get a ton of coverage from the likes of Google, who have made several proposals in this area over the last few years. I do have concerns about these protocol changes introducing interoperability issues, but I can’t argue with the performance benefits they offer.
You will notice I also included OCSP Stapling in this group. I think it is a great way to improve revocation checking, but it is not really about security; it is about performance and reliability. You should just enable it today: it is safe and very likely supported by your servers already.
The use of different cryptography is an interesting one; however, once again the issue of compatibility rears its ugly head. Though every implementation of an algorithm will perform differently, the Crypto++ benchmarks are a nice way to get a high-level understanding of an algorithm’s performance characteristics.
There is a lot of data in there, and not all of it is related to SSL, but one thing that definitely is relevant is the performance of RSA vs. DSA:
| Operation | Milliseconds/Operation | Megacycles/Operation |
| --- | --- | --- |
| RSA 1024 Signature | 1.48 | 2.71 |
| RSA 1024 Verification | 0.07 | 0.13 |
| DSA 1024 Signature | 0.45 | 0.83 |
| DSA 1024 Verification | 0.52 | 0.94 |
You will notice that with RSA it is more expensive to sign than it is to verify; with DSA the opposite is true (and its signing is also faster in this sampling).
Since in SSL it is the server doing the signing and the client doing the verification, this is an important fact: it means a server using a DSA certificate will spend less time doing crypto and more time doing other things, like serving content.
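If you want to reproduce this kind of comparison on your own hardware, here is a rough micro-benchmark sketch using the third-party Python "cryptography" package. The key sizes, round counts, and absolute numbers are illustrative only; I have used 2048-bit keys given the move away from 1024-bit discussed below, and thrown in ECDSA P-256 since it also comes up below.

```python
# Rough micro-benchmark of sign/verify cost for RSA, DSA, and ECDSA,
# loosely mirroring the Crypto++ comparison above. Requires the
# third-party "cryptography" package; numbers will vary by machine.
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, dsa, ec, padding

MESSAGE = b"x" * 64
ROUNDS = 200

def bench(label, sign, verify):
    sig = sign(MESSAGE)
    start = time.perf_counter()
    for _ in range(ROUNDS):
        sign(MESSAGE)
    sign_ms = (time.perf_counter() - start) * 1000 / ROUNDS
    start = time.perf_counter()
    for _ in range(ROUNDS):
        verify(sig, MESSAGE)
    verify_ms = (time.perf_counter() - start) * 1000 / ROUNDS
    print(f"{label:12s} sign {sign_ms:6.3f} ms   verify {verify_ms:6.3f} ms")

rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bench("RSA 2048",
      lambda m: rsa_key.sign(m, padding.PKCS1v15(), hashes.SHA256()),
      lambda s, m: rsa_key.public_key().verify(s, m, padding.PKCS1v15(), hashes.SHA256()))

dsa_key = dsa.generate_private_key(key_size=2048)
bench("DSA 2048",
      lambda m: dsa_key.sign(m, hashes.SHA256()),
      lambda s, m: dsa_key.public_key().verify(s, m, hashes.SHA256()))

ec_key = ec.generate_private_key(ec.SECP256R1())
bench("ECDSA P-256",
      lambda m: ec_key.sign(m, ec.ECDSA(hashes.SHA256())),
      lambda s, m: ec_key.public_key().verify(s, m, ec.ECDSA(hashes.SHA256())))
```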
On the surface this sounds great, but there are of course problems. For one, because of the work researchers have done to “break” RSA over the last few years, the browsers are pushing CAs to stop issuing certificates for 1024-bit RSA keys (by 2013), an effort the CAs have also applied to DSA.
Another not-so-trivial factor is that Microsoft only supports DSA keys up to 1024 bits in length, which means larger DSA keys are not viable on those platforms.
So what of the newer cipher suites based on AES and ECDH-ECDSA? These represent a very large performance boon for web server operators, but they too, like SNI, are not supported by legacy browsers.
What this means for you is that, for the next few years, we have to make do with the “legacy cipher suites” as the means to facilitate TLS sessions.
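In practice that means offering the newer suites to clients that can use them while keeping the legacy ones available. As a sketch, something like the following Python server context does that, assuming your OpenSSL build supports installing both an ECDSA and an RSA certificate on one context (modern versions do); the certificate paths are placeholders and the cipher names are standard OpenSSL ones.

```python
# Sketch: offer ECDHE/ECDSA suites to modern clients while keeping a
# plain-RSA suite as a fallback for legacy clients. Certificate paths
# are placeholders; cipher names use standard OpenSSL notation.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# Load one ECDSA and one RSA certificate; the handshake picks whichever
# matches the suites the client offers.
ctx.load_cert_chain("example-ecdsa.crt", "example-ecdsa.key")
ctx.load_cert_chain("example-rsa.crt", "example-rsa.key")
ctx.set_ciphers(
    "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:AES128-SHA"
)
```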
Miscellaneous
Not everything fits neatly into the above taxonomy; here are a few common topics that don’t:
- Increased cost of operation
- Inability to do “legitimate” packet inspection
Increased cost of operation can be summarized as needing more servers to handle the same load due to the added computational cost of SSL.
Inability to do “legitimate” packet inspection can be summarized as limiting the practical value of existing investments in technologies like Intrusion Detection and Network Optimization, since once the traffic is encrypted they become totally ineffective. To work around this issue, networks need to be designed with both encryption and these technologies in mind.
Summary
I personally think the biggest barrier is interoperability, the biggest part of which is the lingering XP installations. The silver lining is that over the last few years XP has lost market share at about 10% per year; at the current rate we are about three years from these issues being “resolved”.
In the meantime there is a lot the industry can do on the topic of complexity; I will write more on that another time.