In an earlier post, I discussed how cryptographic controls can help those building privacy governance programs meet their privacy obligations in an enforceable and misuse-resistant manner.
As I was discussing this post with someone, they brought up the feature trade-offs one must accept when utilizing cryptographic controls. The canonical example I hear is bots and assistants in end-to-end encrypted messaging apps. The thesis goes that these features are not possible in end-to-end encrypted chats; the reality is that they are possible, they are just harder to build.
This is why I said in that earlier post, “More work is needed to make it so the smaller organizations can adopt these patterns”. The boilerplate needed to enable these scenarios is largely missing.
To be clear, my reference to cryptographic controls in that post is not limited to end-to-end encryption. There are many ways such controls can be applied, depending on objectives and constraints. For example, a useful tool for mitigating the insider threat of data abuse is to limit access to real user data and audit all access via append-only ledgers.
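To make this concrete, below is a minimal sketch of what such an append-only ledger might look like: a hash chain in which each entry commits to its predecessor, so any after-the-fact tampering breaks verification. The class and field names are illustrative assumptions, and a production ledger would also sign entries and anchor the chain head externally.

```python
import hashlib
import json
import time

class AccessLedger:
    """Hypothetical append-only access log backed by a hash chain."""

    def __init__(self):
        self.entries = []
        self.head = "0" * 64  # genesis value for the chain

    def record(self, actor: str, resource: str, action: str) -> dict:
        # Each entry commits to the current head, extending the chain.
        entry = {
            "ts": time.time(),
            "actor": actor,
            "resource": resource,
            "action": action,
            "prev": self.head,
        }
        self.head = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute the chain; any edited or dropped entry changes the head.
        head = "0" * 64
        for entry in self.entries:
            if entry["prev"] != head:
                return False
            head = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
        return head == self.head

ledger = AccessLedger()
ledger.record("analyst@example.com", "user:1234", "read")
assert ledger.verify()
```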
In many respects, the point is about moving beyond procedural and manual controls to technical controls that are both strong and demonstrable.
Unfortunately, most organizations do not have a formal strategy for privacy, and those that do usually design and manage it much like their existing compliance programs.
This is not ideal for several reasons, which I will explore in this post. More importantly, I want to explore how the adoption of end-to-end encryption and client-side encryption is changing this compliance-focused model of privacy into one backed by proper technical controls built on modern cryptographic patterns.
If we look at compliance programs, they usually evolve from something ad hoc and largely aspirational (often greatly overselling what they deliver) into a more structured, process-oriented approach backed by some minimum level of technical controls to ensure each process is followed.
What is usually missing in this compliance journey is an assessment of sufficiency beyond “I completed the audit gauntlet without a finding”. This is because compliance is largely focused on one objective: proving conformance to third-party expectations rather than achieving a set of contextually relevant security or privacy objectives. This is the root of why you hear security professionals say “Compliance is not Security”.
If you watch tech news as I do, you have surely seen articles calling the cybersecurity market “the new lemon market”, and while I agree this is largely true, I also believe there is a larger underlying issue: an overreliance on compliance programs to achieve security and privacy objectives.
To be clear, the point here is not that compliance programs for privacy are unnecessary. Rather, they need to be structured differently than general-purpose compliance programs because the objectives are different. These differences lend themselves to misuse-resistant design and the use of cryptographic controls as an excellent way to achieve those objectives.
Misuse resistance is a concept from cryptography where we design algorithms so that implementation mistakes are harder to make and less catastrophic when they happen. This is in recognition of the fact that almost all practical cryptographic attacks stem from implementation flaws, not fundamental breaks in the cryptographic algorithms themselves.
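To illustrate the idea, here is a minimal sketch using the pyca/cryptography library (AESSIV requires a reasonably recent version). With AES-GCM, reusing a nonce under the same key is catastrophic; with the misuse-resistant AES-SIV construction, the same mistake only reveals whether two plaintexts are equal.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, AESSIV

# AES-GCM: the caller must guarantee the nonce is never reused with a key.
gcm_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)  # reusing this value with gcm_key would be fatal
gcm_ct = AESGCM(gcm_key).encrypt(nonce, b"hello", None)

# AES-SIV: the IV is derived from the message itself, so nonce misuse
# degrades gracefully rather than breaking confidentiality outright.
siv_key = AESSIV.generate_key(bit_length=512)  # SIV takes a double-length key
siv_ct = AESSIV(siv_key).encrypt(b"hello", None)
assert AESSIV(siv_key).decrypt(siv_ct, None) == b"hello"
```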
Similarly, in the case of privacy, most companies will not say “we intend to share your data with anyone who asks”. Instead, they talk of their intent to keep your information confidential. The problem is that in the long run everyone experiences some sort of failure, and those failures can make it impossible to live up to that intent.
So, back to this compliance-focused approach to privacy: it is problematic in several cases, including:
Where insiders [1,2,3] are abusing their position within an organization,
When configuration mistakes result in leakage of sensitive data [4,5,6],
When service providers fail to live up to customers’ expectations on data handling [7,8],
When technical controls around data segmentation fail [9],
and of course when service providers fail to live up to their marketing promises when governments request access to your sensitive information [10,11].
If we approach these problems using engineering principles rather than process and manual controls, we can end up in a world where data, and access to it, is inherently misuse-resistant and, hopefully, verifiable.
This is exactly what we see in the design patterns adopted by Signal: they gather only the minimum data needed to deliver the service, and the data they do gather they encrypt in a way that limits what they themselves can do with it.
Another great example is how payment providers like Stripe leverage client-side encryption to reduce the exposure of payment details to intermediaries, or how Square uses client-side encryption in the Cash App to limit adversaries’ access to the data.
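As an illustration of the general pattern (not how Stripe or Square actually implement it), here is a minimal sketch of sealing a sensitive field on the client, so the service and any intermediaries only ever handle ciphertext. The helper names are hypothetical; it uses AES-GCM from the pyca/cryptography library.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def seal_field(key: bytes, plaintext: bytes, context: bytes) -> bytes:
    # Bind the ciphertext to its context (e.g., the field name) as AAD so
    # it cannot be transplanted into another record or field.
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, context)

def open_field(key: bytes, sealed: bytes, context: bytes) -> bytes:
    nonce, ciphertext = sealed[:12], sealed[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, context)

key = AESGCM.generate_key(bit_length=256)  # held client-side, never uploaded
token = seal_field(key, b"4242 4242 4242 4242", b"card_number")
# The server stores and routes `token`; only the keyholder can recover it.
assert open_field(key, token, b"card_number") == b"4242 4242 4242 4242"
```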
While today these patterns are applied only by the largest and most technically advanced organizations, the reality is, as Alexander Pope wrote, “to err is human”. If we are to truly solve for privacy and security, we must move to models that rely less on humans doing the right thing; for privacy, this means extensive use of cryptographic patterns like those outlined above.
It could be argued that one of the reasons we do not see these patterns applied more widely is the security nihilist’s argument that client-side encryption and cryptographic transparency are exercises in rearranging deck chairs on a sinking ship.
While there is some truth to that argument, you can both limit the amount of trust you have to place in these providers (limiting the amount of trust delegated is the essence of security, after all!) and make elements of what the provider does verifiable, which again furthers the misuse-resistance objective.
If that is the case, then why do only the largest providers do this today? I would argue it is simply too hard right now. Applying client-side encryption and cryptographic controls to systems is still a substantial lift, especially when compared to the blind-trust paradigm most systems operate within.
There are a few reasons, but one of the largest, in the case of client-side encryption, is that you end up having to build the associated key management framework yourself, and that takes time and expertise that many projects simply do not have.
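To give a sense of that lift, here is a minimal sketch of the envelope-encryption pattern at the core of most such key management frameworks: a fresh data key encrypts each record, and a key-encryption key (in practice held in a KMS or HSM) wraps the data key. The names and structure are illustrative assumptions, not any particular provider’s API.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEK = AESGCM.generate_key(bit_length=256)  # stand-in for a KMS/HSM-held key

def encrypt_record(plaintext: bytes) -> dict:
    dek = AESGCM.generate_key(bit_length=256)  # fresh data key per record
    n1, n2 = os.urandom(12), os.urandom(12)
    return {
        # Only the wrapped form of the data key is ever stored.
        "wrapped_dek": n1 + AESGCM(KEK).encrypt(n1, dek, None),
        "ciphertext": n2 + AESGCM(dek).encrypt(n2, plaintext, None),
    }

def decrypt_record(record: dict) -> bytes:
    w, c = record["wrapped_dek"], record["ciphertext"]
    dek = AESGCM(KEK).decrypt(w[:12], w[12:], None)  # unwrap via the KEK
    return AESGCM(dek).decrypt(c[:12], c[12:], None)

record = encrypt_record(b"sensitive user data")
assert decrypt_record(record) == b"sensitive user data"
```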
With that said, just as modern systems have moved from self-hosted monoliths to globally scaled microservices thanks to solutions like Kubernetes, which in turn provides node-to-node compartmentalization and other security controls by default, I believe we will see a similar move for sensitive data handling, where these cryptographic controls and usage policies become a key tool in the developer’s toolbox.
More work is needed to make it possible for smaller organizations to adopt these patterns but, unquestionably, this is where we end up long term. When we do get there, to borrow an overused marketing term, we end up with what today’s market might call Zero Trust Data Protection.