Category Archives: Security

Wet, Dry, Electronic, Digital and Hybrid Signatures

When talking about signatures there are several different styles people refer to. The first is the one we are all familiar with – wet signatures.

A wet signature is created when a person physically puts their mark on a document. In some cultures this is done by writing a name in a stylized cursive format or by using a seal. The name “wet” implies that the signature was made with ink or wax; it may also indicate that the signature is “fresh” and the ink has not yet dried. Probably the most recognizable wet signature is that of John Hancock.

[Image: John Hancock’s signature]

These sorts of signatures have been in use for as long as we have had written language (and maybe even before). We do know that forensic document analysis has been used to verify the authenticity of such signatures since the sixth century.

“Dry signature” is a term used both to describe a wet signature where the “ink has dried” and as a higher-level description that captures many other forms of non-ink-based signatures (such as electronic and digital signatures).

Electronic signatures can, for the most part, be thought of as “synthetic wet signatures”. These signatures are, as their name implies, produced electronically and most commonly try to look as much like a wet signature as possible. HelloSign and Pandadoc are examples of services that leverage these synthetic wet signatures. With these services you upload a document, they convert it to a PDF, and then you insert what is ultimately a picture of something that resembles your wet signature. These pictures are typically produced by digitizing your signature, uploading a copy of it, or using stylized cursive typography.

With electronic signatures this “picture” is intended to make both the signer and the recipient of a signed document “feel” like the ritual they are undergoing is equivalent to the traditional paper process.

That said, one’s synthetic wet signature very rarely reflects one’s real wet signature, so this is really more about symbolism than anything else. One’s ability to prove that it was really “you” who signed with an electronic signature is limited to a statement from the facilitator of the signing that essentially says:

“I saw someone on this IP address who was able to access this email address and they asked us to insert this picture in this document – trust us.”

There is no concept of legal identity involved. For most “electronic signatures” there is also no verifiable proof of the claims from the facilitator about the signature. Anyone could trivially re-create a document or log that says something entirely different and it would be very difficult to prove which one represented the truth.

In such a log the question of what was signed is captured by embedding a hash of the document that is being “signed”. It is important to understand that this hash alone does not capture what was seen by the user; it simply captures a fingerprint of a binary file. To understand this point just consider how the same website renders differently on Chrome vs Internet Explorer.
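For illustration, here is a minimal sketch (TypeScript, using the WebCrypto API) of what such a fingerprint is: a SHA-256 digest computed over the document’s raw bytes. Nothing about the user’s rendered view is captured, only the bytes of the file.

```typescript
// Minimal sketch: compute a SHA-256 fingerprint of a document's raw bytes.
// This identifies the exact binary file only, not what the signer actually saw rendered.
async function fingerprintDocument(documentBytes: ArrayBuffer): Promise<string> {
  const digest = await crypto.subtle.digest("SHA-256", documentBytes);
  // Hex-encode the digest so it can be embedded in a log entry.
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```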

If the document were to be modified by someone after the fact one would need to rely on the database of the facilitator to determine what really happened.

In the event such a signature were to be questioned in a court of law it is for the most part left to a case of he-said-she-said. At best you could ask the facilitator to be a witness in the court case to attest to their operational practices and why their logs associated with the activity are most likely true.

Digital signatures are also technically “electronic signatures” but they are notably different in that they leverage strong cryptographic techniques to make it so that any changes to the document are detectable. If only the signer holds the private key that is used to sign the document it is mathematically provable that only the signer could have placed that signature on the document.
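To make that tamper-evidence property concrete, here is a hedged TypeScript sketch using the WebCrypto API: a key pair is generated, the document bytes are signed with the private key, and verification with the public key fails if even a single byte of the document changes. This is a bare illustration of the primitive, not how a full document-signing product packages its signatures.

```typescript
// Illustrative sketch of the core digital signature property using WebCrypto ECDSA.
async function demoDigitalSignature(documentBytes: ArrayBuffer): Promise<boolean> {
  // Generate a signing key pair; only the holder of privateKey can produce valid signatures.
  const { privateKey, publicKey } = await crypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    false, // non-exportable: the raw private key cannot be extracted from the browser
    ["sign", "verify"]
  );

  // Sign the document bytes with the private key.
  const signature = await crypto.subtle.sign(
    { name: "ECDSA", hash: "SHA-256" },
    privateKey,
    documentBytes
  );

  // Anyone holding the public key can check the signature; any change to the
  // document bytes (or the signature) makes this return false.
  return crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    publicKey,
    signature,
    documentBytes
  );
}
```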

For the same symbolic reasons described above these signatures will often also be accompanied by a synthetic wet signature.

The question of identity in digital signatures is most commonly handled via X.509 certificates, where a certificate authority goes through a process to verify the identity of the signer and issues them a digital certificate that states “I verified the following information about the holder of this private key”. The information in the certificate may be as little as their email address or as much as their legal identity and physical address.

The nice thing about this approach is that neither the document signing facilitator nor the certificate issuer can pretend to have signed a document — they do not have the private key.

It is still important to ensure adequate logs are maintained to prove what was presented to the user when they placed their digital signature on the document, but defending such a signature is much easier given that less trust is being placed in the facilitator to act responsibly.

Hybrid signatures or notarized electronic signatures represent a mix of “electronic signatures” and “digital signatures”. This is what DocuSign and EchoSign do. They apply the synthetic wet signature for the user and append a log saying “trust us, this is what we saw happen”, but they sign the document and that log with their own digital signature, acting as a notary of sorts.

This is far superior to what the pure electronic signature providers offer because, in the event there is a question about the validity of the signature, there is less question about the integrity of the logs.

For example consider the case where a pure electronic signature was put into question; one could simply argue the service provider’s database was compromised and any data within it was suspect.

With that said it is far better to use a pure digital signature approach as it removes even more arguments about the validity of the signature.

Browser Bound Certificates

The addition of WebCrypto to the browser enables a number of interesting client-server opportunities that did not exist before. One that I think is particularly interesting is what I have been calling browser bound certificates.

In fact, at least two such scenarios were included in the charter of the W3C WebCrypto working group – Document Signing and Encrypted Mail.

Now neither of these scenarios necessarily prescribes the use of X.509 certificates, but considering signed PDFs are the de facto standard for signed documents and S/MIME is supported by Android, iOS, Windows Phone and Outlook, it seems it’s not totally silly to say this approach has at least some merit.

To implement both of these one needs support for X.509 and its concepts within the browser; this is where Browser Bound certificates and PKIjs come in. Imagine a client authenticating a user and, over that authenticated session, submitting a certificate request bound to that session to a server-side API that issues the client an X.509 certificate.
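A simplified sketch of that enrollment flow follows; the /api/issue-certificate endpoint name is invented for illustration, and a real implementation would submit a signed PKCS#10 request (which PKIjs can build) rather than a bare public key.

```typescript
// Sketch of browser bound certificate enrollment over an already-authenticated session.
// "/api/issue-certificate" is a hypothetical endpoint used only for illustration.
async function enrollBrowserBoundCertificate(): Promise<ArrayBuffer> {
  // Generate a key pair whose private key never leaves the browser.
  const keyPair = await crypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    false, // non-exportable private key
    ["sign", "verify"]
  );

  // Export only the public key (SPKI) to send to the issuance API.
  const spki = await crypto.subtle.exportKey("spki", keyPair.publicKey);

  // Submit it over the authenticated session; the server binds the issued
  // certificate to that session's identity.
  const response = await fetch("/api/issue-certificate", {
    method: "POST",
    credentials: "include",
    headers: { "Content-Type": "application/octet-stream" },
    body: spki,
  });

  // The returned DER-encoded certificate can then be used with PKIjs for
  // document signing or S/MIME style encryption on the client.
  return response.arrayBuffer();
}
```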

With that the client now has all the material that is necessary to sign and/or encrypt messages on the client side using the formats already in use. The web can interoperate with the desktop.

In our theoretical application we need to take all the traditional precautions for both web and crypto-aware applications, some of which include:

  1. Not mixing content from other domains,
  2. Loading the site and all of its resources over SSL,
  3. Segmenting the signing and verification code with postMessage,
  4. Using crypto primitives in safe ways,
  5. Using non-exportable keys,
  6. Keeping the keys short-lived.

But with these Browser Bound certificates we can build modern PKI-aware applications with a great user experience that can even work without the server being present once provisioned.

A look at short lived certificates, keys and the relevance of FIPS 140-2

Today the de facto standard purchasing criterion for a cryptographic component is a US Federal Standard called FIPS 140-2. This is a set of assurance levels the US Federal Government uses to ensure that government agencies purchase cryptographic products that are interoperable and address threat-specific risks; Europe has a similar set of guidelines called Common Criteria.

These standards were adopted by the security industry because, in the beginning, the only purchasers of its products were government agencies, and if you did not design your products to meet these requirements they wouldn’t even be considered by your only customer segment.

As the security industry began selling outside of government agencies they started with the Fortune 50 because they were the only ones who understood the risks their businesses were exposed to. This was a time when information security was in-essence a new discipline and the only tried and true examples these organizations had to learn from were from the government and military. For this reason the solutions that were sold and deployed were watered down versions of what they sold to governments.

As awareness of security risks spread to the rest of the corporate world these same foundational standards continued to be used — in many respects without question. In fact I am always surprised how many customers I encounter who have mandated that a product support a specific FIPS assurance level yet have no understanding of what protection each level provides.

With the Snowden revelations people are now starting to question these standing assumptions. Should we be using cryptography that is specified by governments at all? Is our adoption of government-approved cryptography making us more secure or is it exposing us to new risks?

The real questions we must be asking ourselves are:

  1. What is the actual (vs perceived) threat model?
  2. Where are the assets that are valuable to the attacker in my system?
  3. Are we applying security technology and approaches in a balanced way relative to the risks?
  4. What are the consequences of each of the design decisions we are making?

Our reliance on blanket adoption of standards like FIPS 140-2 is in many respects a way to make ourselves feel better about not spending the time to answer the first two questions, and the last two questions represent areas where most organizations fall down.

First let me temper what I am about to say: I still believe FIPS 140-2 and Common Criteria have value and are good solutions for what they were designed for, but in many cases they are a round peg in a square hole.

Let’s start this by first understanding the claims and the values of each:

Third-party evaluated – An organization deemed knowledgeable and capable by the government has reviewed the design relative to the stated requirements and found no unresolved issues.

Approved Algorithms – Supports a set of algorithms that the government has decided are necessary for interoperability. The selection of these algorithms by the government is plausibly a result of a rigorous process that determined they are sufficiently secure for their needs. Ex: RSA, ECC with secp256r1, SHA2, etc.

Uses Approved Algorithms and Methods to Protect Keys – Uses a set of algorithms and approaches the government has decided are sufficient to keep keys of the types specified in approved algorithms secure. Ex: Use crypto and methods at least as strong as the keys being protected.

Production-Grade Components – An attempt to specify a qualitative set of requirements that are intended to ensure there is adequate quality in the solution to be used in production.

Tamper Evidence – Implements mechanisms such as seals and manufacturing techniques that make it visibly obvious that the device has been physically compromised.

Protects Once Compromised – Implements mechanisms that make it difficult to extract the keys from the device once it is physically compromised.

Tamper resistant – Implements mechanisms to destroy the protected keys when a compromise is attempted.

Here is how these traits map across the various FIPS 140-2 assurance levels:

Level 1: Third-party evaluated, Approved Algorithms, Uses Approved Algorithms and Methods to Protect Keys, Production-Grade Components
Level 2: everything in Level 1, plus Tamper Evidence
Level 3: everything in Level 2, plus Protects Once Compromised
Level 4: everything in Level 3, plus Tamper Resistant

Now each of these traits is desirable, but they may also have consequences, for example:

Third-party evaluated – These audits take up to a year to prepare for and complete. Due to the specialized nature of the work and the near-monopoly the approved testers have, the tests are incredibly expensive. Additionally, these testing agencies perform their tasks based on guidelines published by governments, which are very slow to adapt and change and focused on their own immediate needs; this restricts innovation.

This all becomes very complicated when you need to respond to security issues in short periods of time, and many have come to the conclusion that the bureaucracy associated with completing these audits reduces security.

Approved Algorithms – While I am pleased that NIST runs crypto competitions, in some cases the results are not used, and in others their choices may not be right for you. Additionally there are questions about some of their decisions and what they mean for the security of the algorithms involved.

In other cases the requirements may actually hamper adoption of your solution; while the product may be “more secure” it will not be usable in many cases. A great example: a software-only solution can only be evaluated to FIPS 140-2 Level 1, so if you specify anything higher you may significantly reduce the usability and applicability of your solution.

The important thing to remember is that there are many ways to mitigate a risk, and if we are not careful to take a step back and look at the problem and goals as a whole we might, as they say, miss the forest for the trees.

For example, if we come to the conclusion that we require the use of a FIPS 140-2 Level 4 device, we preclude the un-augmented use of every Windows or ChromeOS computer that has a TPM, when arguably supporting those devices would expose the product to hundreds of millions more users. Is the increased security of that choice worth it?

Also, if we look at the Tamper Evidence, Protects Once Compromised and Tamper Resistant goals, we can mitigate these risks significantly if we simply generate new keys every 15 minutes. By doing this we reduce the risk of compromise to a very small window and we reduce the value of the key to the attacker.
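As a rough sketch of what that rotation could look like in a WebCrypto-capable environment (the fifteen-minute interval is just the example above, and re-certifying or re-distributing the new key is left out):

```typescript
// Sketch: rotate a signing key on a short interval so any single key is only
// valuable to an attacker for a brief window.
const ROTATION_INTERVAL_MS = 15 * 60 * 1000; // 15 minutes, per the example above

let currentKeyPair: CryptoKeyPair | null = null;

async function rotateKey(): Promise<void> {
  // Generate a fresh, non-exportable key pair; the previous key simply ages out.
  currentKeyPair = await crypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    false,
    ["sign", "verify"]
  );
  // In a real system you would also re-enroll/re-certify the new public key here;
  // signing code elsewhere would always read currentKeyPair.
}

// Rotate immediately, then on every interval.
rotateKey();
setInterval(rotateKey, ROTATION_INTERVAL_MS);
```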

It’s this last approach I think we as an industry should apply more now; we no longer live in a world of disconnected systems. We are dynamically deploying services using technologies like Docker, Chef, and Puppet, and there is no reason we cannot deploy our keys to systems and users dynamically as well.

Key management and key lifetime

One of my favorite quotes about cryptography is this one from Bruce Schneier where he says:

“If you think cryptography can solve your problem, then you don’t understand your problem and you don’t understand cryptography.”

The point he is getting at is that oftentimes the introduction of cryptography carries its own baggage, which can itself be a problem to manage. One of the larger issues one is exposed to is that of key management.

Many of the key management practices we use today were actually designed around the concept of offline keys. You see, exchanging keys securely is hard, and it’s human nature to avoid hard things, so we (either explicitly or implicitly) choose to do them infrequently. For example, take a look at TLS private keys: the single most prominent “upgrade” on most CA websites is a longer-lived certificate (as much as 3 years per certificate).

People just don’t want to hassle with the idea of getting a new certificate and renewing it. The lifetimes of these certificates are well within the current guidance for crypto effectiveness but there are other factors to be considered when looking at cryptoperiods beyond how strong the cryptography is.

The reality is that crypto itself is seldom the direct attack vector; it is application logic, coding defects and operational practices that prove to be the source of most vulnerabilities.

For this reason, how that key is protected is surely the most important factor. If “anyone” can access a key, encrypting or signing data with that key is nothing more than security theater. Remember also that today, for keys to be used, they must be accessible to application logic; the key is exposed to the risks of the full software and hardware stack that supports that service. As a result, a key on a system exposed to the internet should be changed more frequently than one sitting in a locked box at a bank.

The key itself doesn’t actually have to be exposed in its raw form either; for example, if malware can turn the software that has access to the key into a signing oracle, it doesn’t need raw access to the key. This is actually what happened to DigiNotar, the Dutch CA that was compromised: the attacker got into the system that had access to the HSM containing the CA keys and was able to sign virtually anything they wanted.

So what do we do about this? Of course one needs to build systems using a process that incorporates security into all aspects of product development and operations but above and beyond that you really should change your keys as often as possible.

Fundamentally the longer a key is trusted the more valuable it is to an attacker and the more opportunity an attacker has had to compromise that key.

It is this paradigm that necessitates the existence of revocation protocols like OCSP in X.509. The CABFORUM allows these revocation messages to be good for up to a week. This is important to understand because it limits a CA’s ability to revoke a certificate effectively in the event a compromise is identified. If the CA instead issued certificates that were good for no longer than a week, there would in essence be no need for revocation checking at all.
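A back-of-the-envelope way to see this is to compare the worst-case window during which a compromised certificate keeps being accepted under each model:

```typescript
// Worst-case number of days a compromised certificate keeps being accepted.
// The 7-day OCSP figure comes from the discussion above: a revoked certificate
// can still be accepted until the last pre-revocation "good" response expires.
function maxMisuseWindowDays(
  certLifetimeDays: number,
  usesRevocation: boolean,
  ocspMaxAgeDays: number = 7
): number {
  if (usesRevocation) {
    // Bounded by how long stale "good" OCSP responses remain acceptable.
    return Math.min(certLifetimeDays, ocspMaxAgeDays);
  }
  // No revocation checking: bounded only by the certificate's own lifetime.
  return certLifetimeDays;
}

console.log(maxMisuseWindowDays(3 * 365, true)); // 7 -- long-lived cert with OCSP
console.log(maxMisuseWindowDays(7, false));      // 7 -- one-week cert, no OCSP needed
```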

If you can issue certificates that are good for a week and change them reliably each week, why not go shorter? What about certificates and keys that are trusted for only a few hours or minutes? Surely this would be better; it significantly reduces the value to the attacker and increases the amount of trust one can place in that certificate.

The same holds true for certificates that are stored on smartcards and Hardware Security Modules; the more recently the key was created and the crypto operator authenticated, the more trustworthy the key is.

If that’s the case, why is it we still manage keys like they are on hardened offline systems? The answer is simple — key management is hard. What’s important to understand is that while it is hard it is doable; we just need the will to do something about it as an industry.

NOTE: Though in my examples above I use certificates as the canonical example, they are just that, examples; the exact same issues exist with all uses of cryptographic keys (encryption keys, Bitcoin wallets, authentication keys, etc.).

My thoughts on Let’s Encrypt

Today about 80% of all SSL certificates in use on the Internet are what are commonly referred to as Domain Validated (DV) certificates. The name is a bit of a misnomer in that not all DV certificates authenticate control of a domain; in fact most actually authenticate control of a specific server in the domain.

The large majority of these certificates can be issued with little to no human interaction. In a typical manual enrollment a server administrator generates and submits a certificate request and in return is provided a random value that they are instructed to place into an HTML meta-tag in /index.html, which the CA will check periodically to see if the administrator was able to place it there. The idea is that the ability to modify that meta-tag is sufficient to prove control over the website. Once the CA notices the administrator was able to complete this task, it performs a handful of other checks and the certificate is issued.
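A rough sketch of that style of check from the CA’s side follows; the meta-tag name used here is invented for illustration, and real CAs each have their own variant of this process.

```typescript
// Sketch of a CA-side domain-control check: fetch the site's index page and
// look for the random value the administrator was asked to place in a meta tag.
// The meta tag name "ca-validation-token" is invented for illustration.
async function checkDomainControl(domain: string, expectedToken: string): Promise<boolean> {
  const response = await fetch(`http://${domain}/index.html`);
  if (!response.ok) return false;

  const html = await response.text();

  // Look for <meta name="ca-validation-token" content="...">
  const match = html.match(
    /<meta\s+name=["']ca-validation-token["']\s+content=["']([^"']+)["']/i
  );
  return match !== null && match[1] === expectedToken;
}
```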

Most certificates used for SSL end up coming from hosting providers, service providers and certificate resellers that sell these certificates for as little as a few dollars and in many cases they simply give them away for free.

These folks will also commonly automate the issuance, installation and maintenance of these certificates. Hosting providers typically do this using a plugin that comes from the issuing CA that hooks into their management console (WHM, etc) and the larger more advanced ones write their own based on the web services exposed by the certificate authorities.

So today, contrary to common perception, certificates are in fact cheap to free and in many cases fully automated. With that said there are a number of pretty important cases where that automation is missing, such as cloud service providers (AWS, Azure, Google Cloud, Rackspace, etc.), corporate servers and Internet-connected devices.

At some point all of the cloud service providers will provide SSL for free; after all, Mozilla has recently stated that they are working to deprecate HTTP altogether, and I am sure all other browsers will follow when there is sufficient SSL ubiquity.

The Let’s Encrypt project aims to make this transition happen faster by being yet another place to get free certificates and making the acquisition of these certificates even easier by closely integrating the certificate lifecycle management into the most commonly used servers.

It is this last part that I think is the most important contribution Let’s Encrypt will make to the Internet. There are a few reasons for this. For various reasons I could go on about for hours, each of the Certificate Authorities has gone and created its own protocol for certificate enrollment instead of working together to define a common one. These protocols (like their cousins from device and operating system vendors) are designed around their specific back-ends and are not generic enough to be used by anyone other than the entity behind them.

To address this the Let’s Encrypt people have proposed a modern REST-based protocol that does not have this baggage. In fairness it doesn’t solve all of the CAs’ needs either, but I can easily envision how one would extend it to do so (in fact it looks a lot like a protocol I designed for GlobalSign’s use).

The other problem not many actually understand is how many issues exist inside the various SSL implementations that prevent a third party from properly automating the lifecycle of a certificate without downtime. The simplest example: for an external program to change certificates on a running web server, it often has to rely on HUPing the server to force it to pick up the new certificate.

Unfortunately Certificate Authorities are not exactly the most loved people on the Internet, and I know from my experience trying to get the maintainers of web servers and SSL stacks to support things like OCSP Stapling that the scale of changes necessary to make automated certificate lifecycle management totally seamless (and low risk) for everyone was unlikely to happen when driven by CAs.

NOTE: In my opinion a big reason for the resistance is that CAs have basically treated these projects as core infrastructure without supporting them financially or by hiring developers to contribute to them. That said this has been slowly changing and despite that the “love” still continues.

The Let’s Encrypt project is a project for developers by developers with the skill, credibility and motivation to fix these issues.

When they are successful (and I am confident they will be) those solutions that use clients based on their code and protocol will rarely if ever experience an outage due to an expired certificate. Notice I didn’t say the clients that use Let’s Encrypt? That’s because what they are doing is solving the plumbing problem that CAs have failed to solve, and the CAs will be able to benefit from this work also.

It will also enable a class of products and services whose makers would not otherwise have the technical experience, financial means or motivation to integrate SSL into their products.

Imagine your next refrigerator having an SSL-protected web portal you could log into at https://myhome.refrigerators.com to check if you needed to bring home milk. These and other projects are unlikely to happen without something like Let’s Encrypt.

So when people tell me “Certificates are already practically free why do we need Let’s Encrypt?” I tell them they need to look at the long game.

How did I get involved in PKI?

In the mid-90s I was a security consultant; I principally worked on authentication systems (smart cards, one-time passwords, Kerberos, PKI, etc.).

Back then the only people who cared about these things were organizations concerned with protecting lives or money. This meant most of our contracts were with governments, banks, and Fortune 50 companies. This was an amazing experience that I would not trade for the world — it gave me the chance to work with some amazing people in some of the most paranoid and security-conscious environments in the world.

While not my first exposure to PKI, the first time “it was all I did” was when I worked for a company called ValiCert. The founders saw a problem:

Who was watching the certificate authorities, and who would make sure that the revocation infrastructure would scale to meaningfully work in the event mis-issuances or key compromises happened?

We had developed technologies that were intended to address these problems. This technology looked very similar to Certificate Transparency, OCSP stapling and certificate pinning which are again all-the-rage these days.

Unfortunately the Certificate Authorities did not like the idea of being “watched” by a third party; the largest CA went so far as to threaten lawsuits and modified their Relying Party Agreements to state that third parties could not redistribute any information about what certificates they had revoked or issued.

Another entity had patents they claimed covered some of our optimizations and given the browsers were minimally investing in this area we did not get adequate traction so we pivoted into other areas.

For personal reasons I ultimately ended up at Microsoft where I was responsible for a number of security technologies and one of the “little things” I ran was the Microsoft Root Program.

When this was assigned to me I was told it was the least important thing on my plate and that I could measure my success through the number of escalations we got relating to it — basically I was told to invest as little as possible to keep things quiet. The root program was a necessity but shipping software was what we were all about.

The first thing I did for the root program was review its requirements and try to understand who its participants were and what agreements we had with them. I was surprised to see there were in essence no requirements, no authoritative list of contacts at each of the organizations and no contracts with any of its members. I felt marginally better when I found that Netscape had only one requirement: that your check for $250,000 USD cleared. The upside of this was that they probably had contracts with each CA, but there were no technical or audit requirements in their program either.

To remedy this I began to work with my awesome paralegal and lawyer on defining the first “root program” with both technical and audit requirements. We did not want to approach this as a profit center like Netscape did, but instead to establish a set of technically sound requirements that encouraged CAs to spend on improving their infrastructure and having it reviewed by others.

To this end I picked up a project begun by my predecessor to work with the American Institute of Certified Public Accountants (AICPA) to help define and adopt what is WebTrust for CAs today.

We were the first root program to adopt this new audit. I remember being interviewed by the AICPA for a video on their website on how excellent it was to work with them – they must have taken 50 cuts during that session because of my bumbling.

With these new requirements in hand we set out to get contractual agreements with each of the CAs in which they would commit to meet these new requirements, with clear conditions under which we could kick them out for not complying. Given this required them to make operational changes to their practices as well as budget for and manage a third-party audit, it took a complete product release cycle to get all of this in place.

At the end of the operating system release we had an audited set of CAs and contractual agreements with each one of them. Now our goal was to get these CAs into one room so we could encourage them to adopt common issuance practices.

This was important for a number of reasons; one of the most obvious was that each of the CAs used a different taxonomy to describe what they did. The simplest example of this was that one CA’s in-person verified certificate would be called a Class 1 while another’s was a Class 3.

To top things off, almost all of the CAs wanted to see the browser “chrome” differentiate between their weakly authenticated certificates and those that were strongly authenticated. This of course was not possible without common practices and a means of marking certificates to make clear what practices were used in the vetting of the subscriber.

The internal consensus was that there would be value in users being able to tell the difference, so we decided to try to make this happen. To do that we needed to get these CAs in one room so we could talk about standardizing practices and certificate formats. To make this happen I reached out to my contact at the AICPA and asked him to work with me to arrange what was the very first gathering of publicly trusted CAs and trust store providers. We met in Washington DC because I felt we could leverage the work done by the US Government to accelerate the standardization of these things.

Unfortunately one of the newest CAs, which only issued low assurance certificates, saw adopting common standards for vetting and labeling as a risk to their business, and as a result they threw a wrench in my plan. They filed a claim with the FTC alleging the event was an attempt to create an anti-competitive marketplace, and as a result I was deposed by the DOJ. Ultimately the issue was closed and I understand the disposition was that the claim was baseless.

At this point I was instructed by management and our legal counsel to stop pushing for this standardization as it represented too much legal risk for the company.

As an aside a few months later the largest CA acquired the troublemaker.

About a year and a half later the CAs self-organized and attempted to agree on a smaller set of standards: the definition of what is called Extended Validation today. This was effectively a new label for what most CAs were offering in their “high assurance” certificates. The CABFORUM was now born.

At this point I had moved on to another team at Microsoft. During my time at Microsoft I worked on a number of very cool projects with some great people. Several of the projects I worked on used PKI, but my involvement was much more peripheral to the industry at that point.

Years later I decided to leave Microsoft; the DigiNotar incident was a big contributor to this decision. I felt that the industry was a mess: CAs were under-investing in their infrastructure, not supporting the open source community they were dependent on and not actively working to improve adoption of SSL. I wanted to change this, so I decided I would start my own Certificate Authority and set an example for the industry of how a CA should approach these things.

This is when GlobalSign approached me and asked me to join as their CTO. I really liked the team; they were principled, hard working and looking to change the way things were done. I spent nearly three years in this role and we accomplished a great deal. I still work with them on technical research and direction, but I have since moved on to a startup doing work on Bitcoin-related technologies.

I did not accomplish all of the things I wanted to, but I still have hope that these systemic issues will be resolved, as I do believe trusted third parties are needed on the internet.

Anyway this is how I got into PKI.

Ryan

The first web-server and the meaning of the SSL lock

The first web-server was developed by CERN in 1990 — that is twenty-four years ago!

Back then the web was a much simpler place. For the most part web pages were static files hosted by a single server owned and operated by the same entity that managed the core network infrastructure and DNS. In many cases they even owned the building where the systems were located.

As the web became more popular the architecture of these systems needed to evolve. At first that was done by bolting on basic search capabilities. The database backing search was simply another process running in the background indexing the documents.

Around 1994 SSL came about. It was used almost exclusively in e-commerce scenarios.

These e-commerce systems were really the beginning of the complex n-tier deployments we have today. With that said, they were still very simple by today’s standards; they were essentially made up of a cluster of web servers sitting in front of a shared database one network port away.

The process of getting an SSL certificate back then was also quite onerous, even compared to what we do for Extended Validation today. To get a certificate, in many cases you had to actually visit a public notary with documentation that proved your affiliation with the business you were getting a certificate for; the notary would then attest that they saw the originals of your identification as well as those documents. I even know of cases where a company’s executive staff were required to visit the CA in person.

This complexity existed because business identity was inherently part of what that certificate was about. As a consumer, if you were dealing with an online business you knew they had a clue about technology (relatively speaking), and because this online presence was an extension of a brick-and-mortar business you already knew, they were a known quantity; knowing it was them gave you confidence they would be applying the same diligence and practices from their existing business to their online transactions.

At this point the certificates used in SSL cost as much as $1,500 each, and while this slowed the adoption of SSL, it also signaled to visitors that the sites that had them were not some fly-by-night operation, as they were willing to spend “real” money to ensure people knew who they were.

Above and beyond that, when users saw the “SSL” lock they knew their sessions were encrypted end-to-end and, as a result, their data was not going to be stolen in transit.

Given the kinds of organizations that would operate these sites (at the time this was banks and large e-commerce businesses) there was also an element of “these guys get security” – after all, they knew how to do all of the above and were building on their existing brick-and-mortar reputations.

Over the next decade those small server clusters that ran these websites became more and more complicated. For these sites to scale, what used to run on one or two boxes got spread across many. At the edge, dedicated systems were used to terminate SSL and forward clear text to back-end systems that were sometimes owned and operated by different entities and often spanned multiple networks.

Mega-sites like those run by Google and Microsoft are still designed this way because it is the only way to cost-effectively scale and be agile enough to meet market needs for systems of this size.

For the rest of the Internet this model just isn’t used that much anymore – it’s just not cost effective for small sites, and most organizations don’t have access to the skills or resources to deploy the kind of networks and systems that these larger sites do. For this reason most sites have moved from deploying onto hardware and networks they own to those owned and operated by other people.

It is now the norm and not the exception to have numerous service providers embedded in a single website; the physical hardware used by the site and its service providers is almost always multi-tenant, and even the databases backing them are likely shared.

Users’ expectations about how the web performs have changed as well; for this reason an entire industry developed to provide yet another shared service: high-end networking services that logically sit in front of these machines to ensure timely delivery wherever the user is (AKA CDNs).

To keep pace with the demand for SSL, the way certificates are priced and validated changed as well. Today around 70% of all SSL certificates are Domain Validated (DV), and in many cases they can be had for free.

For entrepreneurs this means they can build an online business more quickly and cost-effectively than ever before. For users it means that these online businesses are faster and more professional looking, but it also makes it harder to understand the security assumptions behind the operational practices of the site.

The “site” as the user sees it may literally be operated by a half dozen entities such as the network provider, CDN, DNS provider, hosting provider, analytics and data providers, and the site itself, amongst others.

You don’t know what agreements the site has with these providers, how any of the entities store your data, whom they share it with, or whether they attempt to use good security practices in the development and operation of their services.

These things were, broadly speaking, inferable in the 1990s given how few sites were on the network and the kind of investment necessary to even get online. Today some college kid in a garage could be operating your favorite site; he is motivated not by protecting his current business but by getting to market quickly enough to grow his new one.

To me this means it is more important than ever to understand whom it is you’re dealing with and what their security practices are. This isn’t a change that happened overnight but something that has happened slowly over the last twenty-five years.

This is why it’s great when sites publish their security and privacy practices, even if we must take them at their word. This is also why it is important to understand whom it is you are doing business with; without this, how can you make an informed decision on the credibility of their word?

In a perfect world these things would not be items to be concerned with but as my father always told me we have to see the world the way it is and not the way we want it to be if we ever want to change it.

What are some upsides of Google’s SHA1 deprecation plan?

NOTE: Google has since adopted a more gradual plan for migration which addresses the potential false sense of urgency the prior plan represented. Personally I think the new plan is a good one. The upsides in this post are still accurate and it is my hope people switch to SHA256 based certificates as quickly as possible.

The Internet is about to embark on another Heartbleed-esque certificate migration. This time there is no immediate danger (which was certainly not the case with Heartbleed) and there is a proposed twelve weeks to plan and respond.

During this time (unless that plan changes) a large majority of the SSL-secured Internet will need to swap out their SSL certificates, or the users of these sites will see a slightly scarier user experience. To be fair, some of these certificates will be expiring regardless and need to be replaced anyway, but this still represents a large number of additional sites that will need to replace certificates sooner than they had planned.

That said there are upsides. For example, given how many of the top sites now use SSL, the users of these sites will need to move to modern browsers not dependent on platform crypto, or update to a newer version of Windows, in the process gaining access to modern web technologies and security fixes.

Another benefit is that CAs that are not active participants in the CABFORUM and do not follow the root program requirements closely will be sure to stop their use of SHA1 based signatures as soon as they see the user experience impacted.

The same will be true of device companies and enterprises, which do not as of today have the option to participate in the CABFORUM and, even if they did, are frankly unlikely to. That is, when they see their support calls go up they will change their products and/or processes so that such certificates are not used.

The net of this is that by the end of 2017 we will most likely see the complete end-of-life of SHA1 in signature suites, and we may see an above-average increase in modern browser adoption.

Ryan

Why might you have a certificate with a SHA1 based signature in its chain that is valid beyond 2016/1/1?

NOTE: Google has updated the plan they will be using to deprecate SHA1 based certificates. The content in this post is still mostly accurate, but for dates please see the thread. Personally I think the new plan is a good one. The upsides in this post are still accurate and it is my hope people switch to SHA256 based certificates as quickly as possible.

So there is a plan under discussion to “degrade” the user experience for SSL sessions protected with certificates (or chains) that contain a SHA1 based signature that are valid beyond 2016/1/1.

This 2016/1/1 date was apparently discussed at a CAB Forum meeting six months ago; prior to that, the “sunset date” for SHA1 was considered to be 2017/1/1.

Given Chrome represents such a large percentage of the browser ecosystem, and they appear to be unwaveringly marching towards this new date, I think it’s fair to refer to it as the “new sunset date”.

There have been lots of conversations about this topic from the perspective of a CA and that of a browser, but not so much from the perspective of a certificate holder.

There are a few reasons why you might have such a certificate:

  1. Your certificate was issued before the new sunset date was specified.
  2. When the new sunset date was specified your certificate authority did not update their system to restrict use of that algorithm to expire by that new date.
  3. Your certificate authority gave you the option of choosing which signature suite (and hash algorithm) and expiration dates to use and you chose SHA1.

Some might ask why CAs did not simply stop issuing certificates that utilize SHA1 based signatures altogether when Microsoft announced their goal to deprecate it by 2017. The answer is simple: there are a large number of XP machines out there (15% of the Internet and over 35% of browsers in China) and it’s unclear how many of them have Service Pack 3, which is necessary to support certificates signed with SHA2. There are also concerns about the number of mobile and embedded devices that do not support SHA2.

So how big a risk is the interoperability impact? It’s hard to say; some numbers I have seen suggest it is less than 1% of traffic, but honestly it doesn’t appear possible to measure the number of XP machines without SP3, and even if it were, that still wouldn’t take into consideration the devices that do not support SHA2, and we know such devices were shipping as recently as two years ago.

So that takes me to the main reason for this post; it’s my guess that the primary reason you have a certificate that will be affected by this change is that the CAs honestly did not realize Google was moving the sunset date forward and were adopting migration plans that they felt balanced interoperability, usability and security.

With that said, I believe Google sincerely feels this change is in the best interest of the Internet, and that the user interface changes they are proposing are subtle enough that they won’t be noticed by most (see: A Large-Scale Field Study of Browser Security Warning Effectiveness [pdf]).

Unfortunately this leaves you, the server administrator, stuck somewhat in the middle. You will have to choose between giving up views and revenue from those clients that do not support SHA2, or having all of your users who use Chrome see a degraded user experience.

What will Chrome’s SHA1 early warning look like?

NOTE: Google has since revised its plan to enable a more gradual migration to SHA256; this post is no longer accurate.

For the last few weeks there has been an ongoing discussion on the Chromium security-dev mailing list on how Google intends to implement a user interface change to warn users that a SHA1 certificate is in use.

I won’t talk about the reasoning behind this change or the current and future security properties of SHA1 in this post, but I thought some folks might be interested in what this might ultimately look like. I say might because right now there is only a mail thread, and who knows how things will evolve and what the copy in such user interfaces would be.

With that said, the thread does describe what affordances they intend to use: when a site has a certificate where it or the corresponding certificate chain has a SHA1 based signature in it (excluding the root) and the certificate expires after 2016/1/1, the user interface may be “degraded” for these sessions.
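Reduced to a predicate, the rule described in the thread looks roughly like the following sketch (an illustration of the stated condition, not Chrome’s actual implementation):

```typescript
// Sketch of the described condition: a SHA1-based signature anywhere in the
// chain (excluding the root) on a certificate that is valid beyond 2016/1/1.
interface CertInfo {
  signatureAlgorithm: string; // e.g. "sha1WithRSAEncryption" or "sha256WithRSAEncryption"
  notAfter: Date;             // end of the validity period
  isRoot: boolean;
}

const SUNSET = new Date("2016-01-01T00:00:00Z");

function shouldDegradeUi(chain: CertInfo[]): boolean {
  const leaf = chain[0]; // chain ordered leaf first
  const usesSha1 = chain.some(
    (c) => !c.isRoot && c.signatureAlgorithm.toLowerCase().includes("sha1")
  );
  return usesSha1 && leaf.notAfter.getTime() > SUNSET.getTime();
}
```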

At this time it seems the “red x” that is used for mixed content will be used; if so this will look something like this:

[Screenshot: the mixed-content style “red x” shown over the lock icon]

For SHA1 certificates that expire after 2017/1/1, if the page contains active content such as JavaScript and CSS served over an SSL session with such a certificate, that content will not be loaded unless the user explicitly chooses to approve its execution. This would look something like this:

[Screenshot: prompt to allow blocked active content]

Again, for SHA1 certificates that expire after 2017/1/1, if the page contains passive content (such as images) served over an SSL session with such a certificate, it will not be loaded unless the user chooses to load it, and the lock will get a yellow arrow, which will look something like this:

[Screenshot: the lock with a yellow warning arrow for blocked passive content]

Which of these things one sees will depend on the specific combination of conditions, but this should give you some idea of what these changes may look like.

Ryan