Category Archives: Thoughts

WebCrypto and the modern web app

There is a famous Mark Zuckerberg quote from 2012 where he states that Facebook’s biggest mistake in mobile was their focus on HTML5. It’s been over three years since that statement and a lot has changed. Despite that, I still think that if I were making a daily-use application like Facebook I would do so as a traditional mobile application, but let’s face it: mobile apps are horrible for infrequent use.

What has changed? Today we have browsers that can run JavaScript at near-native speeds, and the majority of browsers support enough HTML5 to build beautiful, touch-friendly “app” experiences without the overhead of an actual application. But there is more to an “app”-like experience than a fast, touch-friendly user interface; apps also work when there is no network connection, and historically web pages have not.

This might seem like a trivial point but consider how many times you have been in a building or location where there was no WiFi and your cellular coverage just didn’t cut it (sorry T-Mobile customers).

Service Workers go a long way to address this problem, and until they are better supported, Polyfills based on App Cache serve as a reasonable substitute, but the problem doesn’t stop there. You also have the problem of how you authenticate entities and protect data while offline; this is where WebCrypto comes in.
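
For those unfamiliar with the approach, here is a minimal sketch of the kind of offline caching a Service Worker enables; the cache name and the list of files to pre-cache are placeholders.

    // sw.js – a minimal offline cache, registered from the page with:
    //   navigator.serviceWorker.register('/sw.js');
    // The cache name and file list below are placeholders.
    const CACHE = 'offline-v1';

    self.addEventListener('install', (event) => {
      // Pre-cache the application shell so it is available without a network
      event.waitUntil(
        caches.open(CACHE).then((cache) => cache.addAll(['/', '/app.js', '/app.css']))
      );
    });

    self.addEventListener('fetch', (event) => {
      // Serve from the cache first, falling back to the network when online
      event.respondWith(
        caches.match(event.request).then((hit) => hit || fetch(event.request))
      );
    });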

The security issues associated with doing cryptography in JavaScript have been covered in great detail, but I think Tony Arcieri does the best job of covering the topic. Long story short, you have to mitigate all the traditional web risks and follow cryptographic best practices, but even when you do, the cryptographic keys are still not in the sole control of the user.

This is because the code that has access to the client-side keys is downloaded by the browser frequently, and each time that happens the server has a chance to modify the code to do whatever it sees fit. In fact, this is exactly what happened in 2007 with a mail provider called Hushmail.

With all that said, it is important to keep in mind that the same argument can be made of the browser and operating system you are using to read this. Like most things, it ends up being a trade-off; the most obvious example is that of updates. With a desktop application it can take ages to get a security patch deployed en masse, but with a web application it happens automatically and seamlessly (…and doesn’t require a reboot or restart).

The other end of that argument is that the attacker gets to use every one of those seamless updates as an opportunity to attack the core logic of the application. In operating-system terms this would be a library injection. In a web application you would use Content Security Policy (CSP) to mitigate the risk, just as you might lock down library load paths on an operating system.
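
For example, a policy along these lines (the host name is a placeholder) limits where scripts can load from and blocks plugins outright:

    Content-Security-Policy: default-src 'self'; script-src 'self' https://static.example.com; object-src 'none'; frame-ancestors 'none'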

OK, so even though you can do crypto in the browser, and the security risks can be managed down to the same basic profile as performing the cryptography on the server, why would you? The answer to that question really depends on the application, but if we think about a modern web application the same way we do a mobile app, you start to see lots of ways you might apply cryptography locally (a sketch of the first example follows the list below). Some examples include:

  • Protecting data at rest;
  • Authenticating the user to the offline application;
  • Protecting from a passive attacker with read access;
  • Mixed content sites where a trusted site gates access to a protected resource (via iframes and PostMessage).
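
As a minimal sketch of the first example, assuming a passphrase prompt and IndexedDB (or similar) storage that are not shown here, one might derive a key from a passphrase and use AES-GCM to protect locally cached data:

    // Illustrative only: derive an AES-GCM key from a passphrase and encrypt data for local storage.
    async function encryptForLocalStorage(passphrase, plaintextBytes) {
      const enc = new TextEncoder();
      const salt = crypto.getRandomValues(new Uint8Array(16));
      const iv = crypto.getRandomValues(new Uint8Array(12));

      // Derive an encryption key from the user's passphrase with PBKDF2
      const baseKey = await crypto.subtle.importKey(
        'raw', enc.encode(passphrase), 'PBKDF2', false, ['deriveKey']);
      const key = await crypto.subtle.deriveKey(
        { name: 'PBKDF2', salt, iterations: 100000, hash: 'SHA-256' },
        baseKey,
        { name: 'AES-GCM', length: 256 },
        false,                        // non-exportable
        ['encrypt', 'decrypt']);

      const ciphertext = await crypto.subtle.encrypt({ name: 'AES-GCM', iv }, key, plaintextBytes);
      return { salt, iv, ciphertext };  // persist these (e.g. in IndexedDB)
    }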

It is also important to remember that cryptography is not a panacea. Even if you encrypt data (in your mobile app or web app) or sign with client-side keys, you have no guarantees about how those keys are protected by your application’s host (browser or OS), and even if you did, there is very little anyone can do against physical attacks.

“If you think cryptography can solve your problem, then you don’t understand your problem and you don’t understand cryptography.” – Bruce Schneier

So what should your takeaway be? I want you to see that even though WebCrypto is not perfect, mobile and desktop applications carry many of the same risks. I also believe that, despite its imperfections, WebCrypto has a place in the modern web stack, and this is particularly true for infrequent tasks where the friction of app acquisition and other associated tasks proves to be a barrier to adoption.

Additionally, I want to encourage you to carefully consider the mistakes mobile application developers have made so you don’t repeat them when you build these modern web-based applications.

P.S. If you have not read Adam Shostack’s “Threat Modeling – Designing for Security” you should give it a look.

Blockchain, Digital Signatures and Identity

It seems anytime I talk to people about the last few years of my professional life, they ask me about how I see traditional X.509 based Public Key Infrastructure and Blockchain technologies intersecting in the future. I think the most obvious intersection between these two technologies is related to contracts.

When cryptography is used for electronic signatures, X.509 certificates are at the core of how signatures are applied. Today there are numerous startups looking at how to squeeze bitcoin into future solutions in this area:

  • RFC3161 Timestamping → Proof Of Existence and Bit Proof;
  • PAdES PDF Signatures → BlockSign;
  • X.509 Certificates → OneName, World Citizenship, NameCoin, NetKi, etc.

In the United States these alternate Blockchain approaches do not have any regulatory barriers to acceptance, but outside the U.S. they don’t really have much of a chance since most countries specify which specific technologies and processes must be used to qualify as a legal signature.

As such I generally look at these products (at least in the frame of contracts) as solutions looking for problems. The core issue being that they offer limited, if any, material benefit over the existing technological approaches which have both a history and legal framework to support them.

This is a particularly large issue when you consider how global commerce has become, and that each jurisdiction has very different ideas of what constitutes a valid digital signature and contract.

With that said, I am a big believer in the idea of Smart Contracts, and I do see value in Proof of Existence, but they are features of broader solutions and not products unto themselves.

But what about the blockchain and Identity Management? When looking at this we first have to remember that at its core Bitcoin is a public ledger, a public repository. The only identity-related problem that requires a public repository is the discovery of information, more specifically the discovery of information that cannot easily be found in context.

A great example of this is a Bitcoin wallet address. It is both impractical and unreasonable to expect users to pass these values around without error, which is why most of the identity solutions built on Blockchain technology focus on this problem. This is not so different from the problem of discovery of S/MIME or PGP certificates for encrypted mail.

The reality is that one does not need the blockchain to solve this problem; in fact, Facebook recently announced that they now let you publish your PGP key on your profile. There is nothing stopping them, or any of the other public directory services users already rely on, from publishing other similar values.

I would even go so far as to argue that, given the size of the Blockchain, the use of Bitcoin is a liability in these scenarios. Today the Blockchain is over 30 GB in size, and with over 60% of internet usage being mobile this means (at least for peer-to-peer cases) one would need to rely on something like Simplified Payment Verification (SPV) on mobile devices, which inherently places some trust in a node anyway.

The Bitcoin purist would argue that any use of a trusted third party is an apples-to-oranges comparison. Here is the kicker though: when it comes to bootstrapping trust you have to trust something or someone, and this is especially true when it comes to verifying a legal identity. The net of this is that since you have to trust a centralized repository anyway, you do not strictly need a Blockchain-based approach.

Long term I see us moving to a model where the federated concept of identity we use with consumer services today is extended to government and business services. We already see this happening with consumer service offerings, and given the recent work in the EU around eIDAS and in the US with NSTIC, it seems this trend won’t be slowing anytime soon.

If that is true, then these Blockchain-based identity solutions will either pivot into new solutions or their future will be inextricably tied to the Bitcoin wallet address discovery problem.

What makes an enforceable electronic signature?

While this post should not be thought of as legal advice, in the United States there are five key elements that should be considered when answering the question “Is an electronic signature enforceable?” These include:

  1. Can you prove who signed the document?
  2. Can you prove when and where they signed the document?
  3. Can you prove that they meant to sign the document?
  4. Can you prove they consented to the use of electronic signatures?
  5. Can you prove the document has not been altered since it was signed?

As they say, “On the Internet, nobody knows you’re a dog.” This makes the first question the hardest to answer.

Does control of the email address “[email protected]” prove who you are? Not really. This is important because today most electronic signature solutions provide virtually no concept of identity verification beyond proof of control of an email address. This means that in the event of a dispute it will be up to you, and you alone, to answer the question of who signed that document.

The only evidence these solutions provide to support a dispute is a log that says something to the effect of “we saw someone with control of [email protected] at 192.168.0.1 type B-i-l-l G-a-t-e-s”. The idea is that, in the event of a dispute, you will be able to use this log to prove it was Bill Gates who signed the document. Of course, the ability to type the name “Bill Gates” doesn’t prove it was him, and honestly the IP address doesn’t help all that much either.

To make matters worse, in most cases these logs are not cryptographically signed. The solution provider just appends an additional page to the document that contains this log. If you ever had to defend the signature, the idea is that you would hash the document and the log and use those values to ask the solution provider to make a statement that the document and the log have not been modified.
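
To make that concrete, here is a small sketch of the kind of fingerprint you would compute and hand to the provider (how the document and log bytes are obtained is left out):

    // Illustrative only: compute a SHA-256 fingerprint of the signed document and its log
    async function fingerprint(bytes) {
      const digest = await crypto.subtle.digest('SHA-256', bytes);
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, '0'))
        .join('');
    }

    // const docHash = await fingerprint(documentBytes);
    // const logHash = await fingerprint(auditLogBytes);
    // These values are only as trustworthy as the provider's statement about them.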

This is particularly troublesome when you consider:

  1. As many as 92% of startups fail;
  2. Industry has accepted the question is not “if you will be compromised” but “when”;
  3. Determining what happened decades later can be problematic.

On the surface this does not sound like a big deal; after all, I was raised to honor my word and I wouldn’t do business with someone I thought did not live by that same principle. Unfortunately, many are not above cheating their way out of a contract.

The higher-end solution providers do apply cryptographic signatures, but, with a few exceptions, they only do so as a notarization of this log, which helps but falls far short of holistically answering these key questions. For example, even when a cryptographic notarization has been performed, an expert would simply need to argue that the solution provider could have been compromised when the log or signature was produced.

To address this risk some solution providers go so far as to sign using dedicated keys for each user in addition to notarizing the document. This is far superior, as long as the service provider themselves cannot “sign” without the user’s consent, and it becomes quite strong if identity verification has also taken place. In this scenario you end up with a set of evidence that actually states, with some reasonable level of assurance, what happened and who was involved.

In the end it is important to remember that the enforceability of a contract signed with a handshake, ink, or cryptography will always boil down to case law and the evidence you maintained to support a potential suit. For this reason it is important to ask yourself how important it is that you can enforce the terms of a given contract, and to keep adequate evidence so that, if you ever have to, you can do so effectively.

The bright side of the dark side

The computer network is arguably one of the most important innovations in my lifetime. When we got our first modem over thirty years ago, it opened a whole new world to me. No longer was my view of the world limited to where I lived. I now could travel across the world (albeit at 150 bits per second) and talk to people from all over the world. Some of these people were honest good folks and others… well, they were criminals.

What all of these people had in common was a passion for learning – a thirst for knowledge and for the most part they saw everyone in their digital realm as kindred spirits. Don’t get me wrong, these people also could be ignorant, hostile, mean and rude, but they also understood: not everyone knew this world even existed.

Every morning before school, and at night well beyond my bedtime, I would be online stumbling across this endless online world, trying to see everything I possibly could. IRC and Usenet were the primary modes of discovery. You see, there wasn’t really a search engine like there is today where you could just look up the information you wanted; someone had to share it.

The best places to go and learn things were warez chat-rooms. In my mind these were filled with kids like me who were motivated to learn by the desire to get access to the latest games. In reality, while there were kids, for the most part it was adults. Whoever they were, they knew what they were doing wasn’t legal, so they were secretive and it took a long time to earn their trust.

I started earning their trust by creating ANSI intros for their cracks, but to work up the food chain in these organizations you really needed to be a cracker. To be a cracker you needed to be good at assembly, so off to the library I went to get a book on 68000 assembly (I had a C64 at the time). The library system only had a few of these books, so I had to be put on a waiting list. A month or so later the book came in and I started on the path of learning to crack games.

I remember starting with a game that I had and diff’ing it against a cracked copy, working back to what was changed and then figuring out why. It took me months before I could figure out how to find flaws in the copy protection logic games implemented, or to simply NOP those checks out altogether. Once I was able to do this, I started to create my own patches that would effectively remove the copy protection.

Once I was able to demonstrate these skills, I was allowed into the inner circle, where people shared information more freely. In these forums (even 30 years ago) exploits, credit card numbers and identities were traded openly. There were even well-written how-to documents on how to use the exploits, along with electronic copies of the manuals showing how to use the compromised systems.

This was exciting for me. You see, I did not fit in at school and I never felt “special” like the kids who were in sports or in the “cool crowd”, but now I was special – I belonged somewhere.

While I was exposed to morally questionable things in these forums, I learned a ton at the same time. It also exposed me to lots of new things. For example, my first exposure to building electronics was due to phone phreaking. I also learned networking, system administration, how to “hack”, and probably more importantly, I learned how to navigate complex social structures.

Along the way I got into trouble and sometimes did things that put me in danger, or that would probably have landed me in jail had I been an adult. That said, these experiences also helped me develop the fundamental skills I still use today as a professional.

My father and I were recently discussing this topic and he reminded me of an argument we had where my parents were trying to get me to stop “hacking”; in that argument I apparently said:

How am I going to learn about computers without this hacking stuff?

Looking back I have to say that at least in my case, that is true. In an earlier post I mentioned the BBS I wrote; a big part of my motivation was to be able to learn more from this group of people and running a BBS was a status symbol of sorts to impress them.

This journey proved to be a motivator for me, one where, in addition to the support of my family in learning about computers, from this community I was also given:

  1. Access;
  2. Direction;
  3. Challenges;
  4. Support.

Long story short, for me the dark side of the internet was really a path to the bright side and I am sure I am not alone in this. This is one reason why I worry about poorly written legislation attempting to control security research.

Today there is an unimaginable set of resources available to help people get involved in computing, and you do not need to “go to the dark side” to get access to this information. It is up to us as parents, friends, neighbors, and business people to help provide the other needed elements that encourage kids to learn practical skills that will give them choices in life.

I think apprenticeships are a great way to do this, but each situation is different, and there are many options out there where you can help. Take the time to do so.

Help Wanted: Apprentice to learn trade

I have taken the “non-traditional path” in both my education and career. At age eight my parents discovered my aptitude and (more importantly) interest in programming. My mother was always learning new things, and as a result, when she got our first computer and started to learn to program, I had access to everything I needed to teach myself.

I vividly remember when she purchased our first modem; it was a 150 bits-per-second acoustic coupler. To put this in perspective, COMCAST’s lower tier is roughly 10^6 times faster than my first network connection. Even then it was painfully slow, but it opened an entire new world to me, one I never knew existed.

At some point that year I decided I wanted to host a Bulletin Board System of my own (a BBS is very similar to a forum website today), so I asked my parents to buy me the software and telephone line to do this. They, of course, laughed and said no; after all, it would cost close to $1,000 just for the software.

I had read enough of my mom’s programming books to realize that I didn’t need to buy the software; I could just make it myself. As a child my mother would always tell me, “No does not mean no. It means find another way.” So that’s what I did. I completed every exercise in every programming book she had, along with a few others from the local library, and set off to make my own BBS.

I made very quick progress. I implemented forums, chat, multiline support, a download library, ZModem, XModem and more. I remember printing out the source on reams of continuous-feed paper using our dot-matrix printer. My father heard the printer going for quite a while, so he came in to stop me because he thought I was wasting ink and paper. An aeronautical engineer by training and a former Air Force officer, he was not a “computer guy”, but after a few minutes of looking at what I was printing he recognized what I had accomplished, and he and my mother immediately began the process of getting me into programming classes at the local colleges.

This moment was probably the most significant contributor to where I am today. It was possible because I was lucky enough to find myself in a situation where I was given:

  1. Access;
  2. Direction;
  3. Challenges;
  4. Support.

This set me up for what I now think of as a series of unpaid internships and apprenticeships. I helped my professors and teachers teach their classes, grade homework, help students, and create courseware. I also helped a few small businesses create automation to help with inventory management and invoicing, all for free.

The system of apprenticeships has been around since the Middle Ages; a cobbler might teach their trade to their own children or to someone else’s (in exchange for pay). In essence, these experiences allowed me to learn my trade.

My parents wanted nothing more than for me to go to university and get a degree. The problem was that the independence of the path I was on made it hard for me to give up control and go that route. I also wanted to learn everything I could about computers, programming, applied cryptography and security, and realistically not even the most prestigious schools had much to offer in these areas at the time.

This resulted in me dropping out of high school, and out of the college classes I was taking that interested me. My parents didn’t exactly approve, and I was a bit rebellious at this point in my life, so I got a job in technology and moved out.

This choice came with a set of unique challenges; for example, some who looked at my resume would ask “Where did you get your graduate degree?” and when they heard I didn’t even have a diploma many would essentially look the other way. Fortunately computers were still relatively new, and I was able to demonstrate my raw abilities, which meant I still had plenty of opportunities; I just had to look a little harder.

Two years after I moved out, my first son came along. At this point I understood the benefits and challenges of the path I had chosen for myself, but like all parents I wanted more for my children. I remember watching a television show called Gilmore Girls, which was about a single mom who had her own realization along the same lines. She was also a drop-out but decided her daughter would go to university so she could have the benefits that path represented, while still wanting her daughter to embrace the benefits of her personal approach to life.

I had decided this is what I wanted for my own children. But, as they say, “the best-laid plans of mice and men often go awry,” and my oldest is on a path much closer to my own. He finished high school and moved on to being a software developer in Silicon Valley.

As a parent, if my goal was to “get him into university”, I made a fundamental mistake: by exposing him to an extensive computer science education at home, by the time he was ready for college the only schools whose computer science programs looked challenging were out of reach due to admission requirements. It wasn’t that he wasn’t capable of the scores and grades necessary to get into these schools; instead, we got him unpaid internships where he could hone his skills, and his grades suffered as a result.

Is this a failure in parenting? A failure in the school system? A little of both? Probably, but a parent’s goal should not be to “get their children into university”. There are lots of ways to find success; what is important is that we help them have choices in life and find happiness. The path he is on gives him that, and while I still hold out hope that he goes to university, the reality is he has the job most computer science graduates dream of after four years of university, and he doesn’t have the associated debt.

Don’t get me wrong — there are many merits to University (which is why I think he should still go) but the reality is it is not the only path to success.

I bring all of this up because the other day Bill Gates, someone I really admire, blogged about the abysmal college completion rates.  In this post there is a quote that stands out:

By 2025, two thirds of all jobs in the US will require education beyond high school.

As a hiring manager in technology I know how hard it is today to find people with the right skills and experience to build the products and services the market demands (don’t get me started on our visa system!). As a parent I also know the school system is still failing our kids, so this talent drain is surely going to get worse.

With that said, I think we are not looking at the problem holistically. There are lots of ways to get the skills necessary to have options in life; universities do not have a monopoly on success. That’s not to say university isn’t a good option, or that there are not careers where a degree is useful and/or necessary. It is just that there are lots of ways to give our children choices, and we should be embracing them as well.

In my mind the apprenticeship is still one of the best ways to get a practical education. It works exceedingly well in technology. I also know a number of lawyers who have passed the bar without having gone to law school as well as a number of small business owners who essentially got their start as apprentices.

Unfortunately the unpaid apprenticeship is under attack, and when combined with recent living wage initiatives this makes it hard for those with the interest and skills to offer these apprenticeships. The most damning element of this attack is that a court has ruled an employer can derive no immediate advantage as a result of the relationship.

Now, to be clear, I am not arguing that the path I took is right for everyone, and I am a believer in formal education (my great-grandmother and wife were teachers), but we have to look at this problem more holistically than we have been if we want to help our children and grandchildren have choices.

Farm boy sensibilities and the importance of contracts

I like to say that I was raised to have “farm boy sensibilities”. For me this is a positive statement that speaks to how my father and grandfather stressed axioms like “a man is only as good as his word”, “treat others the way you want to be treated”, and, no matter what, “when you say you will do something, come hell or high water you better do it.”

As a security practitioner this is a bit of a dichotomy, in that the above exposes you to risk when you assume others live by the same rules as you do. That’s why I like the phrase “trust but verify”, as I think it more accurately captures what “the modern farm boy’s” mantra should be.

I bring this up because I was just reminded, through a personal experience, that not everyone approaches their lives in the same way. This is why (amongst other reasons) having contracts, or at a minimum memorandums of understanding, that accurately represent not only the mutual understanding but also how issues will be handled in the event of a dispute is so important in business.

It is easy to find yourself in a situation where you feel like both parties will respect each other’s position and “do what is right”, and to think it’s not necessary to spend the time to do these documents justice, or to create them at all, but in practice this only works if both parties play by the same rules, which unfortunately is not always the case.

Though there is often no substitute for proper legal counsel, thankfully there are a few resources available online that can make creating agreements a little easier.

These can provide good templates for you to work from. When drafting any document you will use yourself, though, you want to make sure you think about all of the things that could go wrong. This is a lot like what a security practitioner does when asking themselves where the weak links are in the design of a system they are reviewing.

In any event, it’s important to keep in mind that not everyone plays by the same rules, and contracts play an important part in ensuring you don’t end up on the wrong end of a good deal.

Removing Friction From Online Signatures

Today there are broadly two different types of signatures done online: electronic signatures and digital signatures. Electronic signatures are a synthetic version of the wet signatures we use in the physical world, while digital signatures are a re-envisioning of the idea of signatures that leverages strong cryptography to make an even stronger signature.

But if electronic signatures are the lesser form of the two why do they exist at all? The answer to that question is friction.

In many respects that friction is a self-inflicted wound, the result of the industry not looking at the problem it is solving holistically. For example, today in Adobe Reader it is possible to do both electronic signatures and digital signatures. They have gone out of their way to make these electronic signatures as easy to apply as possible, while taking what they likely argued was a principled position and reserving the use of digital signatures for what they considered the “ideal” case, where the signer’s private key is on a FIPS 140-2 Level 3 certified key management device.

As a result of this the large majority of “digital signatures” do not actually contain the identity of the signer and instead are simply notarizations of a synthetic wet signature. This is because the user experience available to users for the creation of these synthetic wet signatures is better than what they made available to those doing digital signatures.

I am sure they would argue this is an artifact of the limitations of the technologies but I would argue that is not the case. It is totally possible to apply digital signatures in such a way that it is no more burdensome to a user than a synthetic wet signature.

In prior posts I have discussed the example of key protection: by mandating that key compromise can only be mitigated by using FIPS 140-2 Level 3 certified devices, they created a structural barrier that keeps vendors from creating solutions that use alternative approaches, such as limiting the validity of keys to just a few minutes.

The same holds true of identity: by saying that only legal identity can be used in the credentials behind digital signatures, they prevented alternative approaches such as issuing an email-only credential that is later validated to a higher level, or even a pseudonymous credential that is later authenticated to a higher level.

Digital signatures can be as usable as the synthetic wet signatures in use today. With the recent changes in the EU around eIDAS we are seeing some of these structural limitations being removed, and we can only hope that Adobe follows suit and revises their policies to remove the barriers that hold back these alternative approaches.

Wet, Dry, Electronic, Digital and Hybrid Signatures

When talking about signatures there are several different styles of signatures people refer to. The first is the one we are all familiar with: wet signatures.

A wet signature is created when a person physically puts their mark on a document. In some cultures this is done by writing a name in a stylized cursive format or by using a seal. The name “wet” implies that the signature was made with ink or wax; it might also indicate that the signature is “fresh” and the ink has not yet dried. Probably the most recognizable wet signature is that of John Hancock.

These sorts of signatures have been in use for as long as we have had a written language (and maybe even before). We do know that since the sixth century forensic document analysis has been used to verify the authenticity of these signatures.

Dry signature is a term used both to describe a wet signature where the “ink has dried” and as a higher-level description that captures many other forms of non-ink-based signatures (such as electronic and digital signatures).

Electronic signatures, for the most part, can be thought of as “synthetic wet signatures”. These signatures are produced, as their name implies, electronically, and most commonly try to look as much like a wet signature as possible. Services such as HelloSign and Pandadoc are examples of services that leverage these synthetic wet signatures. With these services you upload a document, they convert it to a PDF, and then you insert what is ultimately a picture of something that resembles your wet signature. These pictures are typically produced by digitizing your signature, uploading a copy of your signature, or using varied cursive typography.

With electronic signatures this “picture” is intended to make both the signer and the recipient of a signed document “feel” like the ritual they are undergoing is equivalent to the traditional paper process.

That said, one’s synthetic wet signature very rarely reflects a person’s real wet signature, so this is really more about symbolism than anything else. One’s ability to prove that it was really “you” who signed with an electronic signature is limited to a statement from the facilitator of the signing that essentially says:

“I saw someone on this IP address who was able to access this email address and they asked us to insert this picture in this document – trust us.”

There is no concept of legal identity involved. For most “electronic signatures” there is also no verifiable proof of the claims from the facilitator about the signature. Anyone could trivially re-create a document or log that says something entirely different and it would be very difficult to prove which one represented the truth.

In this log the question of what was signed is captured by embedding a hash of the document that is being “signed”. It is important to understand that this hash alone does not capture what was seen by the user; it simply captures a fingerprint of a binary file. To understand this point, just consider how the same website renders differently in Chrome vs. Internet Explorer.

If the document were to be modified by someone after the fact one would need to rely on the database of the facilitator to determine what really happened.

In the event such a signature were to be questioned in a court of law it is for the most part left to a case of he-said-she-said. At best you could ask the facilitator to be a witness in the court case to attest to their operational practices and why their logs associated with the activity are most likely true.

Digital signatures are also technically “electronic signatures” but they are notably different in that they leverage strong cryptographic techniques to make it so that any changes to the document are detectable. If only the signer holds the private key that is used to sign the document it is mathematically provable that only the signer could have placed that signature on the document.
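
As a rough sketch of that property (key management and signature encoding are omitted), signing and verifying a document with WebCrypto might look like this:

    // Illustrative only: detached ECDSA signature over a document, verified with the public key
    async function signDocument(privateKey, documentBytes) {
      return crypto.subtle.sign(
        { name: 'ECDSA', hash: 'SHA-256' }, privateKey, documentBytes);
    }

    async function verifyDocument(publicKey, signature, documentBytes) {
      // Returns false if even a single bit of the document has changed
      return crypto.subtle.verify(
        { name: 'ECDSA', hash: 'SHA-256' }, publicKey, signature, documentBytes);
    }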

For the same symbolic reasons described above, these signatures will often also contain a synthetic wet signature.

The question of identity in digital signatures is most commonly handled via X.509 certificates, where a certificate authority goes through a process to verify the identity of the signer and issues them a digital certificate that states “I verified the following information about the holder of this private key”. The information in the certificate may be as little as their email address or as much as their legal identity and physical address.

The nice thing about this approach is that neither the document signing facilitator nor the certificate issuer can pretend to have signed a document — they do not have the private key.

It is still important to ensure adequate logs are maintained to prove what was presented to the user when they placed their digital signature on the document, but the defense of this signature is much easier given there is less trust being placed in the facilitator to act responsibly.

Hybrid signatures, or notarized electronic signatures, represent a mix of “electronic signatures” and “digital signatures”. This is what DocuSign and EchoSign do. They apply the synthetic wet signature for the user and append a log saying “trust us, this is what we saw happen”, but they then sign the document and that log with their own digital signature, acting as a notary of sorts.

This is far superior to what the pure electronic signature providers offer, because in the event there is a question about the validity of the signature, there is less question about the integrity of the logs.

For example consider the case where a pure electronic signature was put into question; one could simply argue the service provider’s database was compromised and any data within it was suspect.

With that said it is far better to use a pure digital signature approach as it removes even more arguments about the validity of the signature.

Browser Bound Certificates

The addition of WebCrypto to the browser enables a number of interesting client-server scenarios that did not exist before. One that I think is particularly interesting is what I have been calling browser bound certificates.

In fact, at least two such scenarios were included in the charter of the W3C WebCrypto working group: Document Signing and Encrypted Mail.

Now, neither of these scenarios necessarily prescribes the use of X.509 certificates, but considering that signed PDFs are the de facto standard for signed documents and S/MIME is supported by Android, iOS, Windows Phone and Outlook, it seems not totally silly to say this approach has at least some merit.

To implement both of these, one needs support for X.509 and its concepts within the browser; this is where browser bound certificates and PKIjs come in. Imagine a client authenticating a user, and over that authenticated session the client submits a certificate request bound to that session, which is passed to an API on the server side that issues the client an X.509 certificate.

With that the client now has all the material that is necessary to sign and/or encrypt messages on the client side using the formats already in use. The web can interoperate with the desktop.
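
A minimal sketch of that flow might look like the following; the enrollment endpoint and the CSR helper are assumptions made for illustration (in practice PKIjs provides the pieces for building and signing the PKCS#10 request):

    // Illustrative only: generate a non-exportable key pair in the browser and
    // enroll it over the already-authenticated session.
    async function enrollBrowserBoundCertificate() {
      // Keys are generated in the browser and marked non-exportable
      const keys = await crypto.subtle.generateKey(
        { name: 'ECDSA', namedCurve: 'P-256' },
        false,                       // extractable: false
        ['sign', 'verify']);

      // buildPkcs10(keys) is a hypothetical helper; PKIjs provides the pieces to
      // build and sign a PKCS#10 certification request with the WebCrypto key.
      const csr = await buildPkcs10(keys);

      // POST the request over the authenticated session; the endpoint name is made up.
      const response = await fetch('/api/certificates', {
        method: 'POST',
        credentials: 'include',
        body: csr });

      const certificate = await response.arrayBuffer();  // DER-encoded X.509 certificate
      return { keys, certificate };
    }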

In our theoretical application we need to take all the traditional precautions for both web and crypto-aware applications, some of which include:

  1. Not mixing content from other domains,
  2. Loading the site and all of its resources over SSL,
  3. Segmenting the signing and verification code with postMessage (a sketch of this follows the list),
  4. Using crypto primitives in safe ways,
  5. Using non-exportable keys,
  6. Keeping the keys short-lived.
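
As a rough sketch of the third precaution (the origins and message shape are placeholders), the signing logic can live in an iframe on its own origin and only ever receive bytes to sign via postMessage:

    // Illustrative only: inside the signing iframe, which holds the private key.
    // Only the application origin may request signatures, and only signatures leave the frame.
    const APP_ORIGIN = 'https://app.example.com';   // placeholder origin

    window.addEventListener('message', async (event) => {
      if (event.origin !== APP_ORIGIN) return;      // ignore untrusted senders

      const signature = await crypto.subtle.sign(
        { name: 'ECDSA', hash: 'SHA-256' },
        privateKey,                                 // assumed to be held in this frame
        event.data.bytesToSign);

      event.source.postMessage({ signature }, event.origin);
    });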

But with these browser bound certificates we can build modern PKI-aware applications that have a great user experience and that, once provisioned, can even work without the server being present.

A look at short lived certificates, keys and the relevance of FIPS 140-2

Today the de facto standard purchasing criterion for a cryptographic component is a US federal standard called FIPS 140-2. This is a set of assurance levels the US Federal Government uses to ensure that government agencies purchase cryptographic products that are interoperable and address threat-specific risks; Europe has a similar set of guidelines called Common Criteria.

These standards were adopted by the security industry because in the beginning the only purchasers of their products were government agencies and if you did not design your products to meet these requirements your product wouldn’t even be considered by your only customer segment.

As the security industry began selling outside of government agencies, they started with the Fortune 50 because those companies were the only ones who understood the risks their businesses were exposed to. This was a time when information security was in essence a new discipline, and the only tried-and-true examples these organizations had to learn from came from the government and military. For this reason the solutions that were sold and deployed were watered-down versions of what was sold to governments.

As the awareness of security risks spread to the rest of the corporate world, these same foundational standards continued to be used, in many respects without question. In fact, I am always surprised by how many customers I encounter who have mandated that a product support a specific FIPS assurance level yet have no understanding of what protection each level provides.

With the Snowden revelations people are now starting to question these standing assumptions. Should we be using cryptography that is specified by governments at all? Is our adoption of government-approved cryptography making us more secure, or is it exposing us to new risks?

The real questions we must be asking ourselves are:

  1. What is the actual (vs perceived) threat model?
  2. Where are the assets that are valuable to the attacker in my system?
  3. Are we applying security technology and approaches in a balanced way relative to the risks?
  4. What are the consequences of each of the design decisions we are making?

Our reliance on blanket adoption of standards like FIPS 140-2 is in many respects a way to make ourselves feel better about not spending the time to answer the first two questions, and the last two questions represent areas where most organizations fall down.

First, let me temper what I am about to say: I still believe FIPS 140-2 and Common Criteria have value, and they are good solutions for what they were designed for, but in many cases they are a round peg in a square hole.

Let’s start this by first understanding the claims and the values of each:

Third-party evaluated – An organization deemed knowledgeable and capable by the government has reviewed the design relative to the stated requirements and found no unresolved issues.

Approved Algorithms – Supports a set of algorithms that the government has decided are necessary for interoperability. The selection of these algorithms by the government is plausibly a result of a rigorous process that determined they are sufficiently secure for their needs. Ex: RSA, ECC /w secp256r1, SHA2, etc.

Uses Approved Algorithms and Methods to Protect Keys – Uses a set of algorithms and approaches the government has decided are sufficient to keep keys of the types specified in approved algorithms secure. Ex: Use crypto and methods at least as strong as the keys being protected.

Production-Grade Components – An attempt to specify a qualitative set of requirements that are intended to ensure there is adequate quality in the solution to be used in production.

Tamper Evidence – Implements mechanisms such as seals and manufacturing techniques that make it visibly obvious that the device has been physically compromised.

Protects Once Compromised – Implements mechanisms that make it difficult to extract the keys from the device once it is physically compromised.

Tamper resistant – Implements mechanisms to destroy the protected keys when a compromise is attempted.

The following shows how these traits map across the various FIPS 140-2 assurance levels:

  • Level 1 – Third-party evaluated, Approved Algorithms, Uses Approved Algorithms and Methods to Protect Keys, and Production-Grade Components;
  • Level 2 – everything in Level 1, plus Tamper Evidence;
  • Level 3 – everything in Level 2, plus Protects Once Compromised;
  • Level 4 – everything in Level 3, plus Tamper Resistance.

Now, each of these traits is desirable, but they may also have consequences. For example:

Third-party evaluated – These audits take up to a year to prepare for and complete. Due to the specialized nature of the work and the near-monopoly the approved testers have, the tests are incredibly expensive. Additionally, these testing agencies perform their tasks based on guidelines published by governments that are very slow to adapt and change and are focused on their own immediate needs, which restricts innovation.

This all becomes very complicated when you need to respond to security issues in short periods of time and many have come to the conclusion the bureaucracy associated with completing these audits reduces security.

Approved Algorithms – While I am pleased that NIST runs crypto competitions, in some cases the results are not used, and in others their choices may not be right for you. Additionally, there are questions about some of their decisions and what they mean for the security of the algorithms.

In other cases the requirements may actually hamper adoption of your solution; while the product may be “more secure”, it will not be usable in many cases. A great example is that a software-only solution can only be evaluated to FIPS 140-2 Level 1, so if you specify anything higher you may significantly reduce the usability and applicability of your solution.

The important thing to remember is that there are many ways to mitigate a risk, and if we are not careful to take a step back and look at the problem and goals as a whole, we might, as they say, miss the forest for the trees.

For example, if we come to the conclusion that we require the use of a FIPS 140-2 Level 4 device, we preclude the un-augmented use of every Windows or ChromeOS computer that has a TPM, when arguably supporting them would expose the product to hundreds of millions more users. Is the increased security of that choice worth it?

Also, if we look at the Tamper Evidence, Protects Once Compromised and Tamper Resistant goals, we can mitigate these risks significantly if we simply generate new keys every 15 minutes. By doing this we reduce the risk of compromise to a very small window and we reduce the value of the key to the attacker.
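
As a toy sketch of that idea (the rotation interval and the publication of the new public key are placeholders), a service could simply regenerate its signing key on a schedule:

    // Illustrative only: regenerate a short-lived signing key every 15 minutes.
    let currentKeys = null;

    async function rotateKeys() {
      currentKeys = await crypto.subtle.generateKey(
        { name: 'ECDSA', namedCurve: 'P-256' },
        false,                                    // non-exportable
        ['sign', 'verify']);
      // In a real system the new public key would be published/certified here.
    }

    rotateKeys().then(() => {
      setInterval(rotateKeys, 15 * 60 * 1000);    // a 15-minute window limits the value of a stolen key
    });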

It’s this last approach I think we as an industry should apply more now; we no longer live in a world of disconnected systems. We are dynamically deploying services using technologies like Docker, Chef, and Puppet, and there is no reason we cannot deploy keys to systems and users just as dynamically.