In an earlier post, I discussed how cryptographic controls can help those building privacy governance programs meet their privacy obligations in an enforceable and misuse-resistant way.
While I was discussing that post with someone, they brought up the feature trade-offs one must accept when utilizing cryptographic controls. The canonical example I hear is bots and assistants in end-to-end encrypted messaging apps. The thesis goes that these features are not possible in E2E chats; the reality is that they are possible, just harder to build.
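To make that concrete, here is a minimal sketch of why a bot can work in an E2E chat: the bot is simply another endpoint with its own key pair, and the sender encrypts to it like any other participant. This uses the PyNaCl library; the participant names, the `send` helper, and the message are illustrative, and a real system would also need key distribution, sender authentication, and group-state management, which are omitted here.

```python
# Sketch: a bot in an E2E chat is just another keyed endpoint.
# The sender encrypts a copy of the message to each participant's
# public key; the server relays ciphertext it cannot read.
from nacl.public import PrivateKey, SealedBox

# Each participant, including the bot, generates its own key pair.
alice_key = PrivateKey.generate()
bot_key = PrivateKey.generate()

participants = {
    "alice": alice_key.public_key,
    "bot": bot_key.public_key,
}

def send(plaintext: bytes) -> dict:
    # One ciphertext per participant; the relay never sees plaintext.
    return {name: SealedBox(pk).encrypt(plaintext)
            for name, pk in participants.items()}

ciphertexts = send(b"remind me at 5pm")

# The bot decrypts its copy with its own private key and can act on
# the message without the service operator ever seeing the content.
print(SealedBox(bot_key).decrypt(ciphertexts["bot"]))
```

The hard part is not the cryptography shown above; it is everything around it, which is exactly the missing boilerplate discussed below.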
This is why I said in that earlier post, “More work is needed to make it so the smaller organizations can adopt these patterns.” The boilerplate needed to enable these scenarios is largely missing.
Alec Muffett touches on this specific scenario in his recent talk about his IETF draft.
To be clear, my reference to cryptographic controls in that post is not limited to end-to-end encryption. There are many ways such controls can be applied, depending on objectives and constraints. For example, a useful tool for mitigating insider abuse of data is to limit access to real user data and to audit that access via append-only ledgers.
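As a rough illustration of that audit pattern, here is a minimal hash-chained ledger sketch. The record fields (`actor`, `record_id`, `purpose`) are my own illustrative choices, not from any particular product, and a production system would also sign entries and anchor the head hash in an external system.

```python
# Sketch: an append-only audit ledger where each access record commits
# to the hash of the previous one, so editing or deleting an earlier
# entry breaks verification of the chain.
import hashlib
import json
import time

class AuditLedger:
    def __init__(self):
        self.entries = []
        self.head = "0" * 64  # genesis hash

    def append(self, actor: str, record_id: str, purpose: str) -> None:
        entry = {
            "actor": actor,
            "record_id": record_id,
            "purpose": purpose,
            "ts": time.time(),
            "prev": self.head,
        }
        # The new head commits to this entry and, transitively, to all
        # entries before it.
        self.head = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

    def verify(self) -> bool:
        # Recompute the chain from genesis; any tampering changes the head.
        h = "0" * 64
        for e in self.entries:
            if e["prev"] != h:
                return False
            h = hashlib.sha256(
                json.dumps(e, sort_keys=True).encode()).hexdigest()
        return h == self.head

ledger = AuditLedger()
ledger.append("analyst-7", "user:12345", "fraud-investigation")
assert ledger.verify()
```

Because each entry commits to everything before it, an insider cannot quietly rewrite history after the fact, and publishing the head hash externally makes that tamper evidence demonstrable to an auditor rather than a matter of trust.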
In many respects, the broader point is about moving beyond procedural and manual controls to technical controls that are both strong and demonstrable.