A lot of products today claim to offer End-To-End Encryption, but not all of them offer the same level of protection. Some of the differences between these solutions are rooted in the protocols and cryptography they use, some in the way they are implemented, and others in the way they handle the discovery of the cryptographic keys of the peers involved in a session.
The topic of key discovery itself is a complicated one. On its surface, for a messaging application, all you need to do is go to a directory and request the public keys associated with the user, or their devices, that you will communicate with. Where things get tricky is how, as a relying party, you can tell whether the key discovery mechanism is lying to you or not.
This is important because a lying key discovery server can facilitate impersonation of a user, add a hidden third party to the encrypted session, or potentially trigger a re-encryption to a device not under your control, all without your knowledge.
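To make the trust problem concrete, here is a minimal sketch of a naive key-discovery flow. All of the names and the "encryption" are hypothetical stand-ins, not any real product's API; the point is that the client blindly trusts whatever key the directory returns, so a lying directory can substitute an attacker's key without detection.

```python
class KeyDirectory:
    """An honest-looking directory that maps users to public keys."""
    def __init__(self):
        self._keys = {}

    def publish(self, user: str, public_key: bytes) -> None:
        self._keys[user] = public_key

    def lookup(self, user: str) -> bytes:
        return self._keys[user]


class MaliciousDirectory(KeyDirectory):
    """Same interface, but it can silently swap in another key."""
    def lookup(self, user: str) -> bytes:
        return b"attacker-key"  # the relying party has no way to notice


def encrypt_to(directory: KeyDirectory, user: str, message: bytes) -> bytes:
    """The relying party's code is identical against either directory."""
    key = directory.lookup(user)  # blind trust in the directory's answer
    return b"enc(" + key + b"," + message + b")"  # stand-in for real crypto
```

Because `encrypt_to` has no independent way to check the returned key, the honest and malicious directories are indistinguishable from the client's point of view.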
To understand the implications here, you just need to look at iMessage. Although many do not know it, iMessage is actually End-to-End Encrypted! Matthew Green has done several great write-ups on its protocol and on how the lack of verifiability in its key discovery mechanism weakens the overall solution.
The most used End-to-End Encrypted messaging application is probably Facebook’s WhatsApp. Several years ago a security researcher reached out to The Guardian to discuss what they described as a “backdoor” in WhatsApp; this “backdoor” was related to how it handled key discovery in device recovery use cases.
As a product person, you often need to make trade-offs to achieve your goals, and that is what happened in this case. This “backdoor” was a design decision made to ensure billions of users could get some of the End-to-End Encryption protections without compromising usability.
A number of security researchers, including myself, spoke up, which resulted in the article being updated to correctly reflect this reality.
Later, WhatsApp and how its key discovery happens came up in the news again, this time in an article from Wired. Alex Stamos, the former Chief Security Officer of Facebook, responded to this article, affirming some of its points and explaining that a conscious decision was made to enable the associated use case:
“Read the Wired article today about WhatsApp – scary headline! But there is no secret way into WhatsApp groups chats. The article makes a few key points.”
While his response may be true, it is not verifiably true, as it relies on the behavior of the client and not on cryptographic verifiability.
This is where systems like CONIKS, Keybase, and Google’s Key Transparency come into play.
These solutions aim to enable automated trust establishment over untrusted communication channels through the use of an auditable directory of all of their users’ keys, both past and present.
Because these solutions provide an auditable history of keys, both the relying party and the subscriber involved in the communication can reliably be made aware when new keys have been associated with a user’s account and, importantly, which entity added each key.
With this information, the applications the users are using can prevent messages from being sent (via policy) or notify the user when keys have changed unexpectedly.
This allows messaging clients to verify the identity of users automatically and prevents malicious/compromised servers from hijacking secure communications without getting caught.
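The ideas above can be sketched in a toy transparency log. This is a deliberately simplified assumption-laden model: real systems like CONIKS and Key Transparency use Merkle trees with signed epochs and efficient proofs, whereas this sketch keeps the whole history in memory and hash-chains it. It shows the two properties the text describes: tampering with the history changes the log's root, and clients can spot key bindings added by entities they do not trust.

```python
import hashlib

def leaf_hash(entry: tuple) -> bytes:
    """Hash one key binding: (user, key, entity that added it)."""
    user, key, added_by = entry
    return hashlib.sha256(f"{user}|{key}|{added_by}".encode()).digest()

def log_root(entries: list) -> bytes:
    """Hash-chain the append-only log so any tampering changes the root."""
    root = b""
    for entry in entries:
        root = hashlib.sha256(root + leaf_hash(entry)).digest()
    return root

def unexpected_changes(entries: list, user: str, trusted_adders: set) -> list:
    """Return key bindings for `user` added by entities we do not trust.

    A client (or a policy engine) can use this to block sending or to
    warn the user, as described in the text.
    """
    return [e for e in entries
            if e[0] == user and e[2] not in trusted_adders]
```

For example, if the log shows a key for "alice" added by the server rather than by one of her own devices, `unexpected_changes` surfaces it, and because auditors compare roots, the server cannot quietly rewrite the history after the fact.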
On the surface, this sounds much easier than it is to accomplish, at least at scale. WhatsApp serves over a billion users; any solution needs to handle key updates and reads at the rates necessary to support such a large user base.
It needs to do this without leaking metadata associated with who the users are communicating with.
And do this without significantly increasing the amount of data a user must download or the time it takes to change keys.
While these are all tractable problems, they are not problems that are solved today in this context.
For this reason, applications that implement End-To-End Encryption typically either provide a mechanism that users who care about these risks can use to verify cryptographic keys out of band, in person, or simply trust the key discovery service implicitly as an honest actor.
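Out-of-band verification is usually done by having both parties derive a short code from the session's public keys and compare it in person, in the spirit of WhatsApp's and Signal's "safety numbers". The derivation below is a hypothetical illustration, not the actual safety-number algorithm: both sides hash the same key material and read the resulting chunks aloud to each other.

```python
import hashlib

def safety_code(key_a: bytes, key_b: bytes) -> str:
    """Derive a short comparison code from two public keys.

    Sorting the keys first means both parties compute the same code
    regardless of which key they pass first.
    """
    material = b"".join(sorted([key_a, key_b]))
    digest = hashlib.sha256(material).hexdigest()
    # Group the first 20 hex characters into chunks that are easy
    # to read aloud or compare on screen.
    short = digest[:20]
    return " ".join(short[i:i + 5] for i in range(0, 20, 5))
```

If a malicious directory substituted a different key for one of the parties, the codes the two users compute would not match, which is exactly what the in-person comparison is designed to catch.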
At Google, I have the pleasure of working on Google’s answer to this problem. It is our hope that, when complete, applications that need to securely discover keys in a verifiable way can simply adopt our solution and focus on their application, rather than spending years of energy solving this problem themselves.
I firmly believe the best way to ensure the right thing happens is to make sure that the right way is the easy way and fundamentally that is the goal for the Google Key Transparency effort.
- Attack of the Week: Apple iMessage
- Let’s talk about iMessage (again)
- The Guardian is backtracking on a controversial story about WhatsApp
- Security researchers call for Guardian to retract false WhatsApp “backdoor” story
- Flawed reporting about WhatsApp
- Read the Wired article today about WhatsApp – scary headline!
- CONIKS Project
- OkCupid’s founders want to bring encrypted email to the masses
- Google’s Key Transparency project aims to ease a tough task in cryptography
- Safety number updates