Graphene CLI

A few weeks ago we released Graphene, a PKCS #11 binding for NodeJS. Today we are releasing a CLI based on the same library.

Our goal was to make it easy for you to work with PKCS#11 devices in a vendor-neutral way without the complexity of installing and configuring a bunch of miscellaneous packages.

Using the tool itself is pretty straightforward: load the PKCS#11 module, open a session to a slot, and you are ready to go:

rmh@peculiar-02:/home/graphene# graphene 

> module load -l /usr/safenet/lunaclient/lib/ -n test

Module info

  Library: /usr/safenet/lunaclient/lib/
  Name: test
  Description: Chrystoki
  Cryptoki version: 2.20

> slot open --slot 0 -p {YourPin}

Session is started


Once logged in you can always find help with the ‘?’ command:

> ?


    ?         output usage information
    exit      exit from the application
    module    load and retrieve information from the PKCS#11 module
    slot      open a session to a slot and work with its contents
    object    manage objects on the device
    hash      compute a hash for a given file
    test      benchmark device performance for common algorithms


    all commands require you to first load the PKCS #11 module

      > module load -l /path/to/pkcs11/lib/ -n LibName

Some things you can do with the tool:

  • Enumerate the supported algorithms of your device
  • Enumerate and manage the objects on your device
  • Benchmark the performance of your device
  • Hash a file utilizing your device

We recently benchmarked the capabilities of a SafeNet G5; you can see the results here.
We hope you find it useful.

Certificate based Encryption in PDFs

The PDF format is the most used file format on the internet, but unfortunately the specification that documents it leaves a lot to be desired when it comes to producing signed and encrypted documents.

PDF is still the only truly cross-platform “paper like” experience available to users. It also has a number of great features that many are not aware of, one of which is the ability to encrypt PDFs so they are not readable without access to the appropriate secrets.

It supports two approaches to this encryption: one based on passwords and one based on digital certificates. In a later post I will discuss the issues in password-based encryption, but here I want to talk about the second approach, as it offers the potential for the most security.

When looking at our findings it’s important to keep in mind the history and timeline of the PDF format. It was released as a free to implement specification in 1993 and then was standardized by ISO in 2008. It has been left largely unchanged since then. This is important because many of the practices and approaches in the standard were considered state-of-the-art in the 90s but are no longer considered strong today.

NOTE: The below is based on our findings while reading a pre-release of ISO 32000-2 that was approximately one year old. It is notably more readable than its predecessor, 32000-1, in many areas, but it seems little change has been made to how signing and encryption are handled. Unfortunately, the ISO standardization process does not produce public “intermediate” documents, and this was the freshest document we could find.

Message Format

The granddaddy of signature formats is something called PKCS #7. This was defined by RSA in the mid-90s and later handed off to the IETF, which republished it as Cryptographic Message Syntax (CMS) in 1999. CMS is now a superset of PKCS #7 in which additional attributes and cryptographic algorithms are supported. The PDF specification, however, still describes the signature format as PKCS #7, yet its references point to at least one CMS RFC and not to the PKCS #7 one. Most implementations, such as Adobe Acrobat, have added support for algorithms and options that are available in the latest CMS specifications, so we can assume this is what they mean. This is not a security issue, but it does create a mess when it comes to interoperability.

This inconsistency in the specification makes it harder for an implementer to know which bits and pieces to implicitly pull in from different unreferenced specifications. This not only makes interoperability more challenging, but it also results in a lack of common capabilities across implementations.  


The most popular asymmetric cryptographic algorithm of all time is clearly RSA; it serves as the foundation of most key management and distribution solutions in use today. When encrypting data with RSA you need to “pad” the data you are encrypting. In the 90s you would pad using a scheme called RSA PKCS #1 v1.5. The problem is that in the late 90s it became clear this scheme was attackable.

These attacks are most relevant to systems that are online, for example, a website or API verifying electronic signatures or decrypting documents. As a result of these weaknesses, in 2001 the world started moving to something called OAEP, which addressed the identified risks in PKCS #1 v1.5. It takes a decade or more to deprecate a cryptographic technique that is broadly in use, and thankfully we now see cryptographic libraries (such as the new WebCrypto) deprecating these weaker constructs to prevent future scenarios from accidentally supporting them.

Unfortunately, it seems the PDF specification was never updated to use the more secure OAEP padding scheme. While there is nothing stopping a client that implements the standard from also using the more modern padding algorithm (just as many have adopted features in the latest CMS specification even though they are not part of the PDF specification), it seems the most popular client, Adobe Acrobat, has not done so. This means that if people want to encrypt documents using the more secure approach they won’t be able to work with Adobe Acrobat.

It is about time the standard incorporated OAEP and thankfully doing so is almost as easy as “search and replace”.
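
For implementers who control both ends of an exchange, adopting OAEP is already easy in most modern libraries. As a rough illustration (this is just the padding primitive via WebCrypto, not the PDF or CMS enveloping format, and the keys below are generated placeholders rather than certificate-backed keys), wrapping a content-encryption key with RSA-OAEP looks like this:

  // Illustration only: wrap/unwrap a content-encryption key with RSA-OAEP in WebCrypto.
  async function oaepDemo() {
    // Generate a recipient RSA-OAEP key pair (in practice this would come from a certificate).
    const { publicKey, privateKey } = await crypto.subtle.generateKey(
      { name: "RSA-OAEP", modulusLength: 2048, publicExponent: new Uint8Array([1, 0, 1]), hash: "SHA-256" },
      true,
      ["wrapKey", "unwrapKey"]
    );

    // The AES content-encryption key that protects the actual document bytes.
    const cek = await crypto.subtle.generateKey({ name: "AES-GCM", length: 256 }, true, ["encrypt", "decrypt"]);

    // Sender: wrap the CEK for the recipient using OAEP padding.
    const wrapped = await crypto.subtle.wrapKey("raw", cek, publicKey, { name: "RSA-OAEP" });

    // Recipient: unwrap it with their private key.
    return crypto.subtle.unwrapKey(
      "raw", wrapped, privateKey, { name: "RSA-OAEP" },
      { name: "AES-GCM", length: 256 }, false, ["decrypt"]
    );
  }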

Content Encryption

The PDF specification states that AES should be used in Cipher Block Chaining (CBC) mode. There is nothing wrong with CBC per se, but it is easy to get wrong, and for this reason most practitioners will tell you to use a mode called Galois/Counter Mode (GCM). This mode of encryption is an “authenticated” mode where you can easily tell if a message has been modified when decrypting.

It is true that the PDF format includes an MD5 hash as an integrity mechanism, but unless special care is taken one could still easily expose the associated attack vectors.

The ability to verify the integrity of an encrypted message before decryption is materially important to systems that handle lots of documents. Doing so as part of the encryption mechanism, as is done with GCM, ensures this happens. GCM has been around since 2005, three years before the ISO standard for PDF was published — It is about time the standard incorporated it.
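
To make the difference concrete, here is a small WebCrypto sketch (independent of the PDF file format itself) showing how GCM’s built-in authentication makes tampering detectable: flipping a single ciphertext bit causes decryption to throw rather than return modified plaintext.

  // Illustration only: AES-GCM refuses to decrypt ciphertext that has been modified.
  async function gcmTamperDemo() {
    const key = await crypto.subtle.generateKey({ name: "AES-GCM", length: 256 }, false, ["encrypt", "decrypt"]);
    const iv = crypto.getRandomValues(new Uint8Array(12));   // 96-bit nonce; never reuse with the same key
    const plaintext = new TextEncoder().encode("some document content");

    const ciphertext = new Uint8Array(await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, plaintext));
    ciphertext[0] ^= 0xff;                                    // simulate tampering in transit or at rest

    try {
      await crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, ciphertext);
    } catch (e) {
      // The authentication tag check fails, so the caller never sees altered plaintext.
      console.log("tampering detected:", e.name);
    }
  }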

Key Strength

In the certificate-based PDF encryption scheme there are two “secrets” that need to be protected; the first is called a “seed” by the specification. This seed is used to derive the content encryption key (CEK) that is used to encrypt the actual PDF content.

This “seed” is a 20-byte random value. Twenty bytes was more than sufficient when working with shorter key lengths like those used in 3DES, but with AES-256 it is not. When deriving the key in accordance with the specification, this “seed” is mixed with a hash of portions of the document. It is possible the authors thought the use of the hash was sufficient to provide the additional entropy, but it is not. These inputs are not random; they may be unique to a given document, but all instances of that document would have the same values. As a result, there is insufficient entropy to get all the security benefits of AES-256.

The specification should be updated to indicate that for a 128-bit AES key you use at least 16 bytes of entropy, and for a 256-bit AES key at least 32 bytes.
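
For implementers this is essentially a one-line fix: draw at least as much fresh entropy as the key size you are targeting. For example, in Node:

  // Generate a seed with at least as much entropy as the key it will derive.
  const crypto = require("crypto");

  const seedForAes128 = crypto.randomBytes(16);   // >= 16 bytes for a 128-bit key
  const seedForAes256 = crypto.randomBytes(32);   // >= 32 bytes for a 256-bit key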

ECC Support

ECC represents an important cryptographic tool with lots of great security properties. Several PDF clients, including Acrobat Reader, support signing and verifying signatures based on ECC.  There would probably be even more, but again, the PDF specification we have states the format is PKCS #7 even though it links to an older copy of the CMS standard.

Those clients that do support ECC do so via ECDSA (this was specified for CMS in 2002 and later updated in 2010).

It is also possible to encrypt with ECC. This would be done using ECDH and is also documented in the same RFCs. Adding support to the specification would make sense, since many cryptographic guidelines across the world mandate the use of ECC-based algorithms. Additionally, signing and encryption go hand-in-hand; if one signs with ECC they surely would also like to encrypt the same document, and today that’s not possible, at least in Adobe Reader, without also being enrolled for an RSA certificate.
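
For reference, the key agreement those RFCs build on is the same ECDH primitive that modern libraries already expose. A rough WebCrypto sketch follows; it shows the raw primitive only, not the CMS key-agreement structures or the KDF the RFCs specify, and it assumes the recipient key is a P-256 ECDH CryptoKey.

  // Illustration only: derive a shared AES key from an ephemeral key pair and the
  // recipient's static P-256 public key. Real CMS/PDF enveloping additionally runs
  // the shared secret through the KDF defined in the RFCs.
  async function deriveSharedAesKey(recipientPublicKey) {
    const ephemeral = await crypto.subtle.generateKey(
      { name: "ECDH", namedCurve: "P-256" }, false, ["deriveKey"]
    );
    const aesKey = await crypto.subtle.deriveKey(
      { name: "ECDH", public: recipientPublicKey },
      ephemeral.privateKey,
      { name: "AES-GCM", length: 256 },
      false,
      ["encrypt", "decrypt"]
    );
    // The ephemeral public key travels with the ciphertext so the recipient can
    // repeat the derivation with their private key.
    return { aesKey, ephemeralPublicKey: ephemeral.publicKey };
  }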

Long story short, it is about time both ECC signing and encryption are supported by the PDF standard.

Implementation Guidance

The specification as written offers essentially no implementation guidance. There are numerous cases of this, but one of the more glaring comes up when we think about the key hierarchy used in encryption.

While the specification’s use of two keys in the key hierarchy is a convoluted approach, if done correctly it can work fine. One of the larger issues an implementer needs to be mindful of here is the “effective key strength” they are offering the users of their products. For example, if you encrypt the “seed” using 3DES and the “content” with AES, then the content is only as secure as the 3DES key. We have encountered at least one client that does exactly this. Another variant we have seen is a client that encrypts the “seed” with AES-128 and the content with AES-256; of course, the client tells the users the file was protected with the larger key length.

The specification should be updated to make it clear that both keys need to be protected with algorithms that offer the same effective security, and in fact, should simply be protected using the same algorithm.

Backward Compatibility

Another example of missing implementation guidance is that of backwards compatibility. While I am sure there are examples outside of how documents are signed and encrypted, what is directly obvious to us is that the specification includes support for many algorithms that are weak or broken (for example MD2, MD5, DES, and 3DES).

The text really should be updated to make it clear that no new documents should be created using these algorithms, and to recommend that clients warn when viewing documents that have been produced with them, since the guarantees of privacy, integrity and authentication the user expects are likely not being met.


The PDF format uses cryptographic approaches from the 90s, and implementers have pulled in more modern cryptographic approaches on an as-needed basis. Today the algorithms that would be used to build such a standard would be AES-GCM, ECDSA, ECDH, and RSA-OAEP, almost all of which are not supported by PDF as specified.

Thankfully the ISO 32000-2 specification is not yet complete. It is my hope that it will be soon, that the editors of this specification read this, and that they update the draft to incorporate this feedback. If they do, then we will all be better off.

Ryan & Yury

PKIjs and trust lists

As you probably know Yury and I think Same Origin Certificates (or Browser Bound Certificates) are the way PKI enabled applications will be built in the future. This is why we have been working on PKIjs for so long.

One of the issues you have when building applications that use this concept is deciding what Certificate Authorities you should trust. The answer to that question is pretty nuanced, but the short version is: only as many as you absolutely need to.

There are four trust stores that I personally think are interesting when thinking about digital signatures: Mozilla’s, Microsoft’s, Adobe’s, and the EUTL.

If you want to work with these lists you need to be able to parse them and get the CA certificates that are meaningful to you.

This is why we have created tl-create; at the time of this post it can parse the Mozilla list and the EUTL list.*

* At this time the EUTL list is parsed without signature verification, so it should only be used for experimentation.

The output is either a PEM bag of certificates or a JavaScript array that you can import into your PKIjs-based applications as trust anchors.
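
As a trivial example of consuming the PEM output, the bag can be split into individual certificates before importing them into PKIjs as trust anchors (the file name below is a placeholder for whatever you told tl-create to write):

  // Split a PEM "bag" of CA certificates into individual PEM blocks.
  // "roots.pem" is a placeholder for the file tl-create produced.
  const fs = require("fs");

  const bag = fs.readFileSync("roots.pem", "utf8");
  const pemBlocks = bag.match(
    /-----BEGIN CERTIFICATE-----[\s\S]+?-----END CERTIFICATE-----/g
  ) || [];

  console.log("loaded " + pemBlocks.length + " trust anchors");
  // Each block's base64 body can now be decoded to DER and parsed into a PKIjs
  // Certificate object for use as a trust anchor during chain validation.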

Hopefully you will find this interesting and useful; pull requests are welcome.

ECC, NSA and Crypto Agility

Matthew Green, someone I admire, recently did a wonderful post on the NSA announcement deprecating secp256r1 and letting people know they are no longer encouraging further adoption of Suite B.

As always, Mr. Green has put together a well-researched article that is a joy to read. I won’t rehash it more than necessary, but I think he missed an angle that deserves some discussion.

Over the last decade (Suite B is over 10 years old) we have seen more improvements in cryptanalysis than you can shake a stick at. This, as his post points out, is important since ECC doesn’t offer much of a margin for error.

“But while the ability to use (relatively) tiny elliptic curve points is wonderful for implementers, it leaves no room for error. If NSA’s mathematicians began to make even modest, but sustained advances in the state of the art for solving the ECDLP, it would put the entire field at risk. Beginning with the smallest of the standard curves, P-256, which would now provide less than the required 128-bit security.”

With hindsight, we can probably say those who advocated its adoption did not fully appreciate this, or how easy and cheap it is today to get access to massive amounts of computing power.

“Did I mention that as part of the recent announcement, NSA also deprecated P-256?”

If I were a betting man, I would say this is why they have deprecated P-256, not due to some conspiracy; instead, consider that maybe they are simply playing it safe.

But why then stop encouraging the adoption of Suite B altogether? I think the answer to this lies not in some secret knowledge about advancements in quantum computing, but instead is rooted in the reality that after a decade of pushing ECC it’s still seldom used (when compared to RSA).

If the NSA were to spend the next decade (or more, at current adoption rates) pushing Suite B, they would have spent tons of money (the government’s and others’) along with their credibility. This would also be a more difficult task given the IETF’s push for Curve25519. All of which would just be thrown out once they pick their “winner” for a quantum-resistant algorithm.

The reality is getting the world to upgrade its crypto is hard and takes time. Operating systems, applications, and protocols are simply not designed for it. Additionally, with the way things are designed today it works out to be mostly an all-or-nothing process. Just look at how difficult the relatively simple deprecation of SHA1 has been.

I am often the one who says “You’re not paranoid if they really are out to get you,” but in this case I think we’re likely looking at the NSA’s version of pragmatism and not a nefarious plan.

On the other hand, as a friend pointed out this could be a way for them to derail Curve25519, either maliciously or benevolently.

PKCS #11, Javascript and Nodejs

Javascript has become the most popular language on the Internet. Until now there has not been a way to directly use cryptographic devices that provide PKCS#11 interfaces natively within NodeJS based applications.

The best you could do was use Node’s ability to call OpenSSL and OpenSSL’s ability to use the OpenSC PKCS#11 engine, which would then wrap the vendor-provided PKCS#11 library. That is clearly a convoluted mess.

We wanted to let Node developers use these devices directly. With that in mind we created Graphene which uses the node-ffi module to call into these libraries directly.

Our goal was to expose all of PKCS#11 while adopting the NodeJS “style” as appropriate. There is still work to do but we think it is now to the point where others may find value in it so we have made it public as of today.
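
To give a feel for it, here is a sketch of opening a session against a PKCS#11 module with Graphene. The package name and method names (Module.load, getSlots, open, login) follow the project’s examples at the time of writing, so check the current README before relying on them; the library path and PIN are placeholders.

  // Sketch only: names follow Graphene's examples at the time of writing.
  var graphene = require("graphene-pk11");
  var Module = graphene.Module;

  // Placeholder path and module name, as in the CLI example above.
  var mod = Module.load("/path/to/pkcs11/lib.so", "LibName");
  mod.initialize();

  var slot = mod.getSlots(0);     // first slot reported by the module
  var session = slot.open();      // open a session on that slot
  session.login("YourPin");       // authenticate to the token

  // ... hash, sign, or manage objects here ...

  session.logout();
  mod.finalize();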


Paper in a Digital World

Paper processes are a normal part of person to person exchanges, and like the written signature, we can be sure their use will not disappear overnight. This means it is even more important that we evolve the relationship between our physical and digital experiences that involve paper so they can work more fluidly.

Sometimes these exchanges begin as a physical interaction and transition to the digital but almost always, it is the digital embodiment of that transaction that is relied upon once the exchange ends. This is because these digital representations make it possible to instantly access the data contained in them and correlate it to other data enabling quicker and better decisions.

This is particularly important to keep in mind when we consider that paper based workflows are, broadly speaking, privacy preserving workflows. Only those people who have physical access to the associated documents have knowledge of their contents. Their physical nature also makes it possible for those who have possession to freely review these documents with others. This is not true of most digital workflows where the records are commonly stored in clear text in some database or cloud storage service.

There is also a long history of effective independent forensic analysis of paper documents and written signatures. While there are certainly many things that can be determined from forensic analysis of a digital document, attributing it to an individual, or detecting that it has been tampered with is often next to impossible.

It is possible to provide these same properties with digital documents, and to do so with even greater assurances, with the intelligent application of cryptographic signatures and encryption. Despite this, these approaches are seldom used; the primary reason given by vendors is that providing them requires investment in complex key management solutions and often results in suboptimal user experiences.

Those that do offer cryptographic signatures seldom use them to represent the signer’s intent and instead rely on digital facsimiles of the signer’s physical signature. They then notarize that they saw a given IP address, at a given time, attach that facsimile of a signature. This technically exceeds the legal minimum requirements in the United States but fails to meet the minimum expectations most other countries mandate for electronic signatures.

Even once you design a solution that achieves all these properties you are not done providing an equivalent digital alternative. These person-to-person exchanges often require both paper and digital artifacts, and as a result you will need to be able to link the two together. This is not too dissimilar to how an “original” contract with its ink signature is often treated as the authentic “source of truth”. In these hybrid digital and physical interactions one party may have processes or compliance requirements that require a paper representation (and something that approximates a physical signature) of the interaction, while the others involved may prefer the convenience of the digital representation.

So what are the things you minimally need to look for in a digital signature solution, beyond usability, if it is to deliver the same or better properties as existing paper-based solutions?

  • Each signer:
    • cryptographically signs the document;
    • attaches a facsimile of their physical signature to the document.
  • The final document:
    • is cryptographically notarized with metadata about the signing;
    • includes a timestamp and the cryptographic metadata needed to verify the signature long into the future;
    • can be encrypted end-to-end ensuring only the parties associated with the document can read it;
    • is assigned a unique identifier that is placed plainly in the document so that when it is printed its digital embodiment can be easily found;
    • includes a log of activities that took place during the signing process;
    • is archived so it can easily be retrieved later in case of a dispute.
  • The document and signature formats used are based on broadly accepted standards so:
    • it will be readable and verifiable far into the future;
    • it can be read and verified in third-party applications;
    • enforcing the agreement does not require participation of the solution provider in case of dispute.
  • A free web based reader is available that:
    • does not require registering to read the document;
    • enables participants to share the documents with others;
    • can validate the signatures without the need for plug-ins or desktop applications;
    • works as well on mobile and tablet as it does on the desktop;
    • can be easily and freely integrated into your own applications.
  • An API that makes it possible to integrate into your own applications the signing of:
    • documents;
    • web forms.

With these bases covered you have something that should be able to withstand the test of time just as paper processes have.

WebCrypto and the modern web app

There is a famous Mark Zuckerberg quote from 2012 where he states Facebook’s biggest mistake, period (in mobile), was their focus on HTML5. It’s been over three years since that statement and a lot has changed. Despite that, I still think if I were making a daily-use application like Facebook I would do so as a traditional mobile application, but let’s face it, mobile apps are horrible for infrequent use.

What has changed? Today we have browsers that can run Javascript at near-native speeds, and the majority of browsers support enough HTML5 to build beautiful and touch-friendly “app” experiences without the overhead of an actual application. But there is more to an “app”-like experience than a fast, touch-friendly user interface; apps also work when there is no network connection, and historically web pages have not.

This might seem like a trivial point but consider how many times you have been in a building or location where there was no WiFi and your cellular coverage just didn’t cut it (sorry T-Mobile customers).

Service Workers go a long way toward addressing this problem, and until they are better supported, polyfills based on App Cache serve as a reasonable substitute, but the problem doesn’t stop there. You also have the problem of how you authenticate entities and protect data while offline; this is where WebCrypto comes in.
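
As a minimal illustration of the offline piece (the file name and cached URLs are placeholders), a service worker can pre-cache the application shell during install and answer requests from that cache when there is no network:

  // sw.js: minimal offline-first service worker (illustrative; URLs are placeholders).
  const CACHE_NAME = "app-shell-v1";
  const SHELL = ["/", "/app.js", "/app.css"];

  self.addEventListener("install", (event) => {
    // Pre-cache the application shell so it is available without a network connection.
    event.waitUntil(caches.open(CACHE_NAME).then((cache) => cache.addAll(SHELL)));
  });

  self.addEventListener("fetch", (event) => {
    // Serve from the cache first; fall back to the network when online.
    event.respondWith(
      caches.match(event.request).then((cached) => cached || fetch(event.request))
    );
  });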

The security issues associated with using crypto in Javascript have been covered in great detail, but I think Tony Arcieri does the best job covering the topic. Long story short, you have to be careful to mitigate all the traditional web risks along with following cryptographic best practices, but even when you do that you don’t have cryptographic keys that are in the sole control of the user.

This is because the code that has access to the client-side keys is downloaded by the browser frequently, and each time that happens the server has a chance to modify the code so it does what it sees fit. In fact, this is exactly what happened in 2007 with a mail provider called Hushmail.

With all that said, it is important to keep in mind that the same argument can be made of the browser and operating system you are using to read this. Like most things it ends up being a trade-off; the most obvious example is that of updates. With a desktop application it can take ages to get a security patch deployed en masse, but with a web application it happens automatically and seamlessly (…and doesn’t require a reboot or restart).

The other end of that argument is that the attacker gets to use every one of those seamless updates as an opportunity to attack the core logic of the application. In operating system terms this would be a library injection. In a web application you would use Content Security Policy (CSP) to mitigate the risk, just as you might customize library load paths on an operating system.

OK, so even though you can do crypto in the browser, and the security risks can be managed to the same basic profile as performing the cryptography on the server, why would you? The answer to that question really depends on the application, but if we think about a modern web application the same way we do a mobile app you start to see lots of ways you might apply cryptography locally. Some examples include:

  • Protecting data at rest (see the sketch after this list);
  • Authenticating the user to the offline application;
  • Protecting from a passive attacker with read access;
  • Mixed content sites where a trusted site gates access to a protected resource (via iframes and PostMessage).
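
As a sketch of the first item, one way to protect data at rest in an offline web app is to derive an AES key from a passphrase with PBKDF2 and encrypt whatever you persist locally with AES-GCM. The parameters below are illustrative, not a recommendation tuned to your threat model:

  // Sketch: passphrase -> PBKDF2 -> AES-GCM key used to protect locally stored data.
  async function deriveStorageKey(passphrase, salt) {
    const material = await crypto.subtle.importKey(
      "raw", new TextEncoder().encode(passphrase), "PBKDF2", false, ["deriveKey"]
    );
    return crypto.subtle.deriveKey(
      { name: "PBKDF2", salt, iterations: 310000, hash: "SHA-256" },  // illustrative iteration count
      material,
      { name: "AES-GCM", length: 256 },
      false,
      ["encrypt", "decrypt"]
    );
  }

  async function encryptForStorage(key, data) {
    const iv = crypto.getRandomValues(new Uint8Array(12));            // fresh IV per record
    const ciphertext = await crypto.subtle.encrypt(
      { name: "AES-GCM", iv }, key, new TextEncoder().encode(JSON.stringify(data))
    );
    // Persist both values (for example in IndexedDB); the IV and salt are not secret.
    return { iv, ciphertext };
  }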

It is also important to remember that cryptography is not a panacea. Even if you encrypt data (in your mobile app or web app) or sign with client-side keys, you have no guarantees about how those keys are protected by your application’s host (browser or OS), and even if you did, there is very little anyone can do against physical attacks.

“If you think cryptography can solve your problem, then you don’t understand your problem and you don’t understand cryptography.” – Bruce Schneier

So what should your takeaway be? I want you to see that even though WebCrypto is not perfect, mobile and desktop applications have many of the same risks. I also believe that despite its imperfections WebCrypto has a place in the modern web stack, and this is particularly true for infrequent tasks where the friction of app acquisition and other associated tasks proves to be a barrier to adoption.

Additionally, I want to encourage you to carefully consider the mistakes mobile application developers make so you don’t repeat them when you build these modern web-based applications.

P.S. If you have not read Adam Shostack’s “Threat Modeling – Designing for Security” you should give it a look.

Uniform Electronic Legal Material Act and Digital Signatures

Apparently one of the reasons states have been reluctant to publish legal material online is that there is a concern over how relying parties can tell if the material is authentic and has not been tampered with.

In an attempt to address this concern a law has been proposed called the Uniform Electronic Legal Material Act (UELMA), the text of which, at a high level, states this must be addressed.

“An official publisher of legal material in an electronic record that is designated as official under Section 4 shall authenticate the record. To authenticate an electronic record, the publisher shall provide a method for a user to determine that the record received by the user from the publisher is unaltered from the official record published by the publisher.”

UELMA, which was proposed in 2011, has been enacted into law in 12 states (including California, Colorado, Connecticut, Delaware, Hawaii, Idaho, Illinois, Minnesota, Nevada, North Dakota, Oregon, and Pennsylvania). With that said, it looks like it may be yet another unfunded mandate, in that there doesn’t appear to be much activity in the way of publishing signed data.

As with most US laws, UELMA doesn’t specify how one would meet this requirement, but the most obvious way would be to publish these documents as PDF files and sign them using PAdES. In many cases (especially legal text) this would be the ideal solution given how easy it is to both apply and verify signatures thanks to the broad support of the standard.

But why is there such broad support for this standard? It’s simple: the EU takes a totally different approach to the problem of specifying what makes a “legal” electronic signature than we do. The US basically doesn’t specify any format or requirements for signatures, while the EU specifies four allowable formats (each with different use cases), of which PAdES is one.

But why did they choose four formats and not just one? That is easy. While a signed PDF may be a great way to make content accessible and verifiable to people, it is not a good solution for structured data that would be parsed by machines. In these machine-readable cases the Europeans rely on CAdES, XAdES and ASiC, which are better signature formats for machine-readable data.

Since the US doesn’t specify how one should address this problem, a non-profit called US Open Data is advocating a solution they helped develop called Data Seal, a web application that sits on top of PGP to verify files, to be used for all of the above cases.

In my opinion this is a bad approach, here are just a few reasons:

  • PGP is 24 years old and has a wonderful mix of usability and interoperability issues that have not been solved in a meaningful way (though there are many who are trying [like Data Seal], but even many of these supporters now see it as a lost cause).
  • Dependency on what is today, in essence, a single-vendor commercial solution, even if it is based on an open standard and is open sourced, means that if these tiny vendors go out of business there is no practical way for “real users” to verify the authenticity of the documents/data.
  • Legal documents need to be verifiable long into the future, and this approach does not consider the concept of long-term signature verification (time-stamping, crypto-periods, etc).
  • Pushing for the adoption of a single machine-readable signature format (PGP) across the board, at the expense of providing an easy-to-use and easy-to-verify human-readable solution, is a short-sighted and bad tradeoff.
  • The world is getting smaller, and interoperability is more important today than ever. If we’re going to adopt a different way of solving the same problem than a large majority of the globe, it should provide sufficient material benefits to offset the interoperability and accessibility impacts such a decision carries with it.

I could even argue that, as architected, Data Seal doesn’t even meet the UELMA requirements, in that UELMA requires the solution to preserve the data and make it permanently available, but the solution does not provide a way for the signatures themselves to be verified long-term.

Anyway, all of this is an interesting side effect of the US approach to legislation. We try to allow innovation by not overly specifying how the market solves a problem, while the EU tends to be overly specific and restrictive, which tends to hurt innovation. I am almost always a fan of the US approach, as governments move much, much slower than the market and tend to create structural barriers to innovation. With that said, I think interoperability is a case where standards are needed, and when it comes to how governments publish and authenticate documents there should be a standard.

Blockchain, Digital Signatures and Identity

It seems anytime I talk to people about the last few years of my professional life, they ask me about how I see traditional X.509 based Public Key Infrastructure and Blockchain technologies intersecting in the future. I think the most obvious intersection between these two technologies is related to contracts.

When cryptography is used for electronic signatures, X.509 certificates are at the core of how signatures are applied. Today there are numerous startups looking at how to squeeze bitcoin into future solutions in this area:

  • RFC 3161 timestamping: Proof Of Existence and Bit Proof
  • PAdES PDF signatures: BlockSign
  • X.509 certificates: OneName, World Citizenship, NameCoin, NetKi, etc.

In the United States these alternate Blockchain approaches do not have any regulatory barriers to acceptance, but outside the U.S. they don’t really have much of a chance since most countries specify which specific technologies and processes must be used to qualify as a legal signature.

As such I generally look at these products (at least in the frame of contracts) as solutions looking for problems. The core issue being that they offer limited, if any, material benefit over the existing technological approaches which have both a history and legal framework to support them.

This is particularly a large issue when you consider how global commerce has become, and that each jurisdiction has very different ideas of what constitutes a valid digital signature and contract.

With that said, I am a big believer in the idea of Smart Contracts and do see value in Proof of Existence, but they are features in broader solutions and not products unto themselves.

But what about the blockchain and Identity Management? When looking at this we first have to remember that at its core Bitcoin is a public ledger — a public repository. The only identity related problem that requires a public repository is discovery of information, more specifically discovery of information that can not be easily discovered in context.

A great example of this is a Bitcoin wallet address. It is both impractical and unreasonable to expect users to pass these values around without error, which is why most of the identity solutions built on Blockchain technology focus on this problem. This is not so different from the problem of discovery of S/MIME or PGP certificates for encrypted mail.

The reality is that one does not need the blockchain to solve this problem; in fact, Facebook recently announced that they are now letting you publish your PGP key on your profile. There is nothing stopping them or any of the other public directory services users already use from publishing other similar values.

I would even go so far as to argue that the use of Bitcoin, given the size of the Blockchain, is a liability in these scenarios. Today the Blockchain is over 30GB in size, and with over 60% of internet usage being mobile, this means (at least for peer-to-peer cases) one would need to rely on something like Simplified Payment Verification (SPV) for mobile devices, which inherently places some trust in a node anyway.

The Bitcoin purist would argue that any use of a trusted third-party is an apples-to-oranges comparison. Here is the kicker though — when it comes to bootstrapping trust you have to trust something/someone and this is especially true when it comes to verifying a legal identity. The net of which is since you have to trust a centralized repository, you do not strictly need a Blockchain based approach.

Long term I see us moving to a model where the federated concept of identity we use with consumer services today is extended to government and business services. We already see this happening with service offerings, and given the recent work in the EU around eIDAS and in the US with NSTIC, it seems this trend won’t be slowing anytime soon.

If that is true then, these Blockchain based identity solutions will either pivot into new solutions or their future will be inextricably tied to the Bitcoin wallet address discovery problem.

What makes an enforceable electronic signature?

While this post should not be thought of as legal advice, in the United States there are five key elements that should be considered when answering the question “Is an electronic signature enforceable?” These include:

  1. Can you prove who signed the document?
  2. Can you prove when and where they signed the document?
  3. Can you prove that they meant to sign the document?
  4. Can you prove they consented to the use of electronic signatures?
  5. Can you prove the document has not been altered since it was signed?

As they say, “On the Internet, nobody knows you’re a dog” — this makes the first question the hardest to answer.


Does control of the email address “[email protected]” prove who you are? Not really. This is important because today most electronic signature solutions provide virtually no concept of identity verification beyond proof of control of an email address. This means that in the event of a dispute it will be up to you, and you alone, to answer the question of who it is that signed that document.

The only evidence these solutions provide to support a dispute is a log that says something to the effect of “I saw someone with control of [email protected], at a given IP address, type B-i-l-l G-a-t-e-s”. The idea being that, in the event of a dispute, you will be able to use this log to prove it was Bill Gates that signed the document. Of course, the ability to type the name “Bill Gates” doesn’t prove it was him, and honestly the IP address doesn’t help all that much either.

To make matters worse, in most cases these logs are not cryptographically signed. The solution provider just appends an additional page to the document that contains this log. If you ever had to defend the signature, the idea is that you would hash the document and the log and use those values to ask the solution provider to make a statement that the document and the log have not been modified.
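
Computing those fingerprints is simple enough. For example, in Node (the file name is a placeholder):

  // Compute the SHA-256 fingerprint of the signed document (with its appended log page)
  // so the solution provider can attest that it has not been modified.
  const crypto = require("crypto");
  const fs = require("fs");

  function sha256Hex(path) {
    return crypto.createHash("sha256").update(fs.readFileSync(path)).digest("hex");
  }

  console.log("document fingerprint:", sha256Hex("agreement.pdf"));   // placeholder file name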

This is particularly troublesome when you consider:

  1. As many as 92% of startups fail;
  2. Industry has accepted the question is not “if you will be compromised” but “when”;
  3. Determining what happened decades later can be problematic.

On the surface this does not sound like a big deal; after all, I was raised to honor my word and I wouldn’t do business with someone I thought did not live by that same principle. Unfortunately, many are not above cheating their way out of a contract.

The higher-end solution providers do apply cryptographic signatures, but with a few exceptions they only do so as a notarization of this log, which helps but is far from holistically answering these key questions. For example, even when a cryptographic notarization has been performed, an expert would simply need to argue the solution provider could have been compromised when the log or signature was produced.

To address this risk some solution providers go so far as to sign using dedicated keys for each user in addition to notarizing the document. This is by far superior, as long as the service provider itself cannot “sign” without the user’s consent, and it becomes quite strong if identity verification has also taken place. In this scenario you end up with a set of evidence that actually states, with some reasonable level of assurance, what happened and who was involved.

In the end it is important to remember that the enforceability of a contract signed with a handshake, ink, or cryptography will always boil down to case law and the evidence you maintained to support a potential suit. For this reason it is important to ask yourself how important it is that you can enforce the terms of a given contract, and to keep adequate evidence so that if you ever have to, you can do so effectively.