Category Archives: Thoughts

Understanding risks and avoiding FUD

Disclaimer: This post represents personal opinions and thoughts, and does not represent the views or positions of my employer Google, or Let’s Encrypt where I am a member of their advisory board.

The first step in building a complex system or protocol that has any security needs at all is to model the threats that the system is exposed to. Without a deep understanding of the system and the threats it is exposed to it is not possible to effectively secure it from a motivated attacker.

A key part of this process is understanding the abilities, motivations and the perspective of the attacker. This not only helps you understand what it is they are looking for, but what they will do with the surface area they have access to.

I bring this up because today TrendMicro published a blog post titled “Let’s Encrypt Now Being Abused By Malvertisers” and after reading it, I am convinced the authors are not looking at this topic with that frame of mind. Additionally, it seems they either intentionally or unintentionally omit and/or misrepresent details that are essential to understanding the incident they discuss.

In this post, I try to look more critically at what happened and identify what the core issues at play actually were.

Types of certificates

Before I get to the details I want to provide some background. First it is important to understand that there are, broadly speaking, three types of SSL certificates that Certificate Authorities issue, each with increasing levels of “identity assurance”.

The first is what is referred to as a “Domain Validated” or DV certificate. To obtain a DV certificate, one must only prove that they control the host or domain to be certified.

The next is referred to as an “Organization Validated” or OV certificate. To obtain an OV certificate, one must prove control of the domain using the same mechanisms allowed for DV, but must also prove two additional things. First is who the legal entity that controls the domain is. Second is that the requester is acting on behalf of that entity. Browsers do not treat these certificates any differently than DV certificates, so they offer the relying party no additional value.

Finally, there are “Extended Validation” or EV certificates. These are logically the same as OV except the requirements that must be met when verifying the legal entity and requestor affiliation are higher. Since knowing the entity that controls a given host or domain can provide useful information to a relying party, browsers display a “Green Bar” with the legal entity name visible within it for these certificates.

Let’s Encrypt issues only Domain Validated certificates. To put this in context, today around 5% of certificates on the Internet are EV. This is important to understand because the certificates issued by Let’s Encrypt are effectively no less “secure” or “validated” than the DV certificates issued by other Certificate Authorities. This is because all CAs are held to the same standards for each of the certificate types they issue.

Request validation

So what exactly are the processes a Certificate Authority follows when validating a request for a Domain Validated certificate? The associated requirements document is nearly fifty pages long, so to save you muddling through it I will summarize the requirements:

  • the requestor controls the key being bound to the account or key being certified,
  • the requestor controls the host or domain being certified,
  • that issuing the certificate would not give the key holder the ability to assert an identity other than what has been verified,
  • the hosts or domains being certified are not currently suspected of phishing or other fraudulent usages.

It also suggests (but does not require) that CAs implement something called Certification Authority Authorization (CAA) Resource Records. These are special DNS records that let a domain owner state which Certificate Authority it legitimately uses for certificates.
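For example, a domain owner who only uses Let’s Encrypt could publish a record like the following (example.com is a placeholder, of course):

```
example.com.  IN  CAA 0 issue "letsencrypt.org"
```

Any CA honoring CAA that sees this record and is not letsencrypt.org should refuse to issue for the domain.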

Let’s Encrypt

If you want to see for yourself what Let’s Encrypt does for each of these requirements you can review their code; that said, it can be summarized as:

Proof of possession: Ensure the requestor can sign a message with the key that will be associated with their account and another for the key associated with their certificate.

Domain control: They support several mechanisms but they all require someone in control of the domain or host to perform an action only a privileged user could.

They also check the Public Suffix List (PSL) on each request. They use this to rate limit certificate issuance for subdomains not on the PSL.

They also do not issue wildcard certificates.

Domain intent: They implement strict support for CAA records: if such a record is found and Let’s Encrypt is not listed, the request will be denied.
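The strict behavior described above boils down to a small decision function. This is my own illustration of the logic, not Let’s Encrypt’s actual implementation:

```javascript
// Decide whether a CA may issue for a domain, given the "issue" tag
// values from that domain's CAA records. An empty array means no CAA
// records were found, in which case any CA may issue.
function caaAllowsIssuance(caIdentifier, issueTags) {
  if (issueTags.length === 0) return true;   // no CAA records: no restriction
  return issueTags.includes(caIdentifier);   // otherwise the CA must be listed
}
```

So a domain with a CAA record listing only another CA would cause Let’s Encrypt to refuse the request.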

Fraudulent usage: Every host or domain requested is checked against the Google Safe Browsing service.

This is, at a high level, the same thing every CA is supposed to do for every DV, OV or EV SSL certificate they issue. If they do not, they would not pass their WebTrust audit. The only place there is any room for interpretation is:

  1. Which domain control approaches they support,
  2. How broadly in their orders they use the PSL,
  3. If they support CAA,
  4. Which data source they use for fraudulent use detection.

On fraudulent use detection I know of CAs that utilize various pre-issuance approaches. Some use private databases; others may use Google Safe Browsing, APWG, PhishTank or some combination of all of these. I should add that the Google Safe Browsing API that Let’s Encrypt uses has a reputation of being one of the best available sources for doing these kinds of checks. I know of only a few who do any post-issuance checking of these data sources.

But what about the infamous Let’s Encrypt certificate issuance automation? Contrary to popular belief, Let’s Encrypt is not the first, or even the only, CA that automates 100% of the issuance process. They are also not the only ones who offer client automation to request the certificate and configure SSL.

Just ask yourself: how do providers like CloudFlare or hosting companies get and configure SSL certificates in seconds if it is done manually?

If that is the case how is Let’s Encrypt different? I would say there are two very visible differences:

  1. They are 100% free with no limitations on who can get certificates,
  2. Their practices are the most transparent of the publicly trusted CAs.

The FUD

There is a lot of FUD in the TrendMicro post so please bear with me.

The first that jumps out is that the authors essentially say they saw this coming. It’s framed this way in an attempt to convince the reader this was only possible because of Let’s Encrypt.

Unfortunately, the potential for Let’s Encrypt being abused has always been present. Because of this, we have kept an eye out for malicious sites that would use a Let’s Encrypt certificate.

The problem with this argument is that there really is nothing special about what Let’s Encrypt is doing. In fact, Netcraft recently did a post titled Certificate authorities issue SSL certificates to fraudsters where they find numerous other well-respected CAs have issued certificates to people who later used them for bad deeds.

The issue here is that fraudsters understand CAs check orders for fraud pre-issuance, precisely to avoid posts like the one from TrendMicro we are discussing. As a result, attackers wait until after a certificate is issued to perpetrate fraud with the associated hosts and domains.

Any technology that is meant for good can be abused by cybercriminals, and Let’s Encrypt is no exception. As a certificate authority ourselves we are aware of how the SSL system of trust can be abused.

This statement is supposed to suggest that if Let’s Encrypt did not issue the associated certificate the attack in question would not be possible. It again goes on to suggest that TrendMicro is somehow different. Recall however that all CAs are subject to the same requirements for verifying domain control, the core difference between Let’s Encrypt and TrendMicro as CAs is that TrendMicro only issues Organization Validation and Extended Validation certificates.

Maybe that is what they are driving at? There is a valid argument to be made that the increased cost and complexity of getting an OV certificate makes it harder for an attacker to get an SSL certificate, because they must also prove organization details. With that said, as long as DV certificates are allowed the user’s risk is the same.

It is worth noting, though, that it is quite possible to spoof the material necessary to get an OV certificate. In fact, there are numerous ways one can (very inexpensively) produce “false” but “verifiable” documentation that would meet the OV verification bar. But the argument they are making here is a red herring; I will explain why in a bit.

Maybe then the issue is that Let’s Encrypt is somehow at fault because they issue certificates via a fully automated mechanism?

A certificate authority that automatically issues certificates specific to these subdomains may inadvertently help cybercriminals, all with the domain owner being unaware of the problem and unable to prevent it.

If so, the core issue is one that touches TrendMicro also. This is because they also issue certificates “in minutes”, something only possible with automation. This too is a red herring though.

Maybe the issue is a result of Let’s Encrypt not meeting the same requirements of other CAs?

Let’s Encrypt only checks domains that it issues against the Google safe browsing API;

As I have called out above, that is simply not true and all you need to do is check their public source repository to confirm yourself.

So maybe the problem is that Let’s Encrypt just is not playing an active role in keeping you safe on the internet?

Security on the infrastructure is only possible when all critical players – browsers, CAs, and anti-virus companies – play an active role in weeding out bad actors.

There is a word for this sort of argument: “specious”. The reality is when you look at the backers of Let’s Encrypt (which in full disclosure includes my employer) you see the who’s who of privacy and security.

Well then, if the issue is not Let’s Encrypt, maybe it is Domain Validated certificates? Their post does not directly make this claim, but by suggesting TrendMicro would not have issued the attacker a certificate they may be trying to subtly suggest as much, since they don’t issue DV certificates.

The problem is even if DV did not exist, I am confident a motivated attacker could easily produce the necessary documentation to get an OV certificate. It is also worth noting that today 70% of all certificates in use offer the user only the assurance of a Domain Validated certificate.

OK, so maybe the core issue is that SSL was available to the attacker at all? The rationale being that if the site hosting the ad was served over SSL and the attacker did not serve their content over SSL, the users would have seen the mixed content indicator, which might have clued them into the fact that something was afoot. The problem with this argument is that study after study has shown that the subtle changes in the lock icon are simply not noticeable by users, even those who have been taught to look for them.

So if the above points were not the issue, what was?

How was this attack carried out? The malvertisers used a technique called “domain shadowing”. Attackers who have gained the ability to create subdomains under a legitimate domain do so, but the created subdomain leads to a server under the control of the attackers. In this particular case, the attackers created ad.{legitimate domain}.com under the legitimate site. Note that we are disguising the name of this site until its webmasters are able to fix this problem appropriately.

There you go. The attacker was able to register a subdomain under a parent domain and as such was able to prove control of any hosts associated with it. It had nothing to do with SSL; the attacker had full control of a subdomain, and the attack would have still worked without SSL.

Motivations

With this post TrendMicro is obviously trying to answer a question their customers must be asking: Why should I buy your product when I can get theirs for free?

DigiCert, GoDaddy, Namecheap as well as others have done their own variation of FUD pieces like this trying to discourage the use of “Free Certificates” and discredit their issuers. They do this, I assume, in response to the fear that the availability of free certificates will somehow hurt their business. The reality is there are a lot of things Let’s Encrypt is not, and will probably never be.

Most of these things are the things commercial Certificate Authorities already do and do well. I could also go on for hours on different ways they could further improve their offerings and better differentiate their products to grow their businesses.

They should, instead of trying to sell fear and uncertainty, focus on how they can make their products better, stickier, and more valuable to their customers.

That said, so far it looks like they do not have much to worry about. Even though Let’s Encrypt has issued over 250,000 certificates in just a few months, almost 75% of the hosts associated with them did not have certificates before. The 25% who may have moved to Let’s Encrypt from another CA were very likely customers of the other free certificate providers and resellers, where certificates are commonly sold for only a few dollars.

In short, it looks like Let’s Encrypt is principally serving the long tail of users that existing CAs have historically failed to engage.

Threat Model

When we look critically at what happened in the incident TrendMicro discusses, we can clearly identify two failures:

  1. The attacker was able to register a legitimate subdomain due to a misconfiguration or vulnerability,
  2. A certificate was issued for a domain and the domain owner was not aware of it.

The first problem would have potentially been prevented if a formal security program was in place and the system managing the domain registration process had been threat modeled and reviewed for vulnerabilities.

The second issue could have also been caught through the same process. I say “could” because not everyone is aware of Certificate Transparency (CT) and CAA, which are the tools one would use to mitigate this risk.

At this time CT is not mandated by WebTrust, but Chrome does require it for EV certificates as a condition of showing the “Green Bar”. That said, even though it is not a requirement, Let’s Encrypt chooses to publish all of the certificates they issue into the logs. In fact, they are the only Certificate Authority I am aware of who does this for DV certificates.

This enables site owners, like the one in question, to register with log monitoring services, such as the free and professional ones offered by DigiCert, and get notified when such a certificate is issued.
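The core of such a monitor is simple: compare every certificate the logs show for your domains against the list of issuers you actually use. Here is a simplified sketch of that logic (my own; real monitors also handle subdomains and track log entries incrementally):

```javascript
// Flag certificates observed in CT logs whose issuer is not one of the
// CAs the domain owner knowingly uses. `observed` entries are plain
// objects of the form { domain, issuer }.
function unexpectedIssuances(observed, myDomains, myIssuers) {
  return observed.filter(cert =>
    myDomains.includes(cert.domain) && !myIssuers.includes(cert.issuer));
}
```

Any certificate this returns is worth investigating: either someone in your organization ordered it without telling you, or something worse is going on.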

As mentioned earlier, CAA is a pre-issuance check designed for this specific case, namely: how do I tell a CA I do not want it issuing certificates for my domain?

Until all CAs are required to log all of the SSL certificates they issue into CT logs and are required to use CAA, these are not reliable tools to catch mis-issuances. That said, since Let’s Encrypt does both, they would have caught this particular case, and that is a step in the right direction.

 

[Edited January 9th 2016 to fix typos and make link to trend article point to the Internet Archive Wayback Machine]

Things we think are true but are not

The other day my son and I were talking about common mistakes people make when handling different “common” data-types in application design. Two of the more interesting examples we discussed were time and names.

Here are a few great posts discussing these data-types:

Then there is the question of sex: ISO 5218 defines Not known, Male, Female, and Not applicable. Facebook, on the other hand, has a total of 58 options for gender.
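For reference, ISO/IEC 5218’s four codes map like this (a simple lookup, not an endorsement that four values are enough):

```javascript
// The numeric codes defined by ISO/IEC 5218 for representing human sexes.
const ISO_5218 = {
  0: "Not known",
  1: "Male",
  2: "Female",
  9: "Not applicable",
};
```

Note the codes are not contiguous; naively treating this field as a boolean or a 0/1 enum is exactly the kind of mistake we were talking about.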

Ryan

PKIjs and trust lists

As you probably know Yury and I think Same Origin Certificates (or Browser Bound Certificates) are the way PKI enabled applications will be built in the future. This is why we have been working on PKIjs for so long.

One of the issues you have when building applications that use this concept is deciding what Certificate Authorities you should trust. The answer to that question is pretty nuanced but the short version is only as many as you absolutely need to.

There are four trust stores that I personally think are interesting when thinking about digital signatures these include Mozilla’s, Microsoft’s, Adobe’s and the EUTL.

If you want to work with these lists you need to be able to parse them and get the CA certificates that are meaningful to you.

This is why we have created tl-create, it can (at the time of this post) parse the Mozilla list and the EUTL list.*

* At this time the EUTL parsing does no signature verification and should only be used for experimentation.

The output is either a PEM bag of certificates or a JavaScript array that you can import into your PKIjs-based applications as trust anchors.
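If you start from the PEM bag output, splitting it into individual certificates for use as trust anchors is straightforward. This helper is my own illustration, not part of tl-create:

```javascript
// Split a PEM "bag" (concatenated certificates) into an array of
// individual PEM-encoded certificates.
function splitPemBag(pemBag) {
  const matches = pemBag.match(
    /-----BEGIN CERTIFICATE-----[\s\S]*?-----END CERTIFICATE-----/g);
  return matches || [];
}
```

Each element can then be base64-decoded and parsed into a PKIjs Certificate object.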

Hopefully you will find this interesting and useful, pull requests are welcomed.

ECC, NSA and Crypto Agility

Matthew Green, someone I admire, recently did a wonderful post on the NSA announcement deprecating secp256r1 and letting people know they are no longer encouraging further adoption of the Suite B.

As always Mr. Green has put together a well researched article that is a joy to read. I won’t rehash it more than necessary, but I think he missed an angle that deserves some discussion.

Over the last decade (Suite B is over 10 years old) we have seen more improvements in cryptanalysis than you can shake a stick at. This, as his post points out, is important since ECC doesn’t offer much of a margin for error.

“But while the ability to use (relatively) tiny elliptic curve points is wonderful for implementers, it leaves no room for error. If NSA’s mathematicians began to make even modest, but sustained advances in the state of the art for solving the ECDLP, it would put the entire field at risk. Beginning with the smallest of the standard curves, P-256, which would now provide less than the required 128-bit security.”

With hindsight, we can probably say those who advocated its adoption did not fully appreciate this, or how easy and cheap it is today to get access to massive amounts of computing power.

“Did I mention that as part of the recent announcement, NSA also deprecated P-256?”

If I were a betting man, I would say this is why they have deprecated P-256: not due to some conspiracy theory; instead, consider that maybe they are simply playing it safe.

But why then stop encouraging the adoption of Suite B altogether? I think the answer to this lies not in some secret knowledge about advancements in quantum computing, but instead is rooted in the reality that after a decade of pushing ECC it’s still seldom used (when compared to RSA).

If the NSA were to spend the next decade pushing Suite B (or more, at current adoption rates), they would have spent tons of money (the government’s and others’) along with their credibility. This would also be a more difficult task given the IETF’s push for Curve25519. All of which would just be thrown out once they pick their “winner” for a quantum-computing-resistant algorithm.

The reality is getting the world to upgrade its crypto is hard and takes time. Operating systems, applications and protocols are simply not designed for it. Additionally, with the way things are designed today it works out to be mostly an all-or-nothing process. Just look at how difficult the relatively simple deprecation of SHA1 has been.

I am often the one who says “You’re not paranoid if they really are out to get you,” but in this case I think we’re likely looking at the NSA’s version of pragmatism and not a nefarious plan.

On the other hand, as a friend pointed out this could be a way for them to derail Curve25519, either maliciously or benevolently.

Paper in a Digital World

Paper processes are a normal part of person to person exchanges, and like the written signature, we can be sure their use will not disappear overnight. This means it is even more important that we evolve the relationship between our physical and digital experiences that involve paper so they can work more fluidly.

Sometimes these exchanges begin as a physical interaction and transition to the digital but almost always, it is the digital embodiment of that transaction that is relied upon once the exchange ends. This is because these digital representations make it possible to instantly access the data contained in them and correlate it to other data enabling quicker and better decisions.

This is particularly important to keep in mind when we consider that paper based workflows are, broadly speaking, privacy preserving workflows. Only those people who have physical access to the associated documents have knowledge of their contents. Their physical nature also makes it possible for those who have possession to freely review these documents with others. This is not true of most digital workflows where the records are commonly stored in clear text in some database or cloud storage service.

There is also a long history of effective independent forensic analysis of paper documents and written signatures. While there are certainly many things that can be determined from forensic analysis of a digital document, attributing it to an individual, or detecting that it has been tampered with is often next to impossible.

It is possible to provide these same properties with digital documents, and do so with even greater assurances, through the intelligent application of cryptographic signatures and encryption. Despite this, these approaches are seldom used; the primary reason given by vendors is that providing them requires investment in complex key management solutions and often results in sub-optimal user experiences.

Those that do offer cryptographic signatures seldom use them to represent the signer’s intent and instead rely on digital facsimiles of the signer’s physical signature. They then notarize that they saw a given IP address, at a given time, attach that facsimile of a signature. This technically exceeds the legal minimum requirements in the United States but fails to meet the minimum expectations most other countries mandate for electronic signatures.

Even once you design a solution that achieves all these properties you are not done providing an equivalent digital alternative. These person-to-person exchanges often require both paper and digital artifacts, and as a result you will need to be able to link the two together. This is not too dissimilar to how an “original” contract with its ink signature is often treated as the authentic “source of truth”. In these hybrid digital and physical interactions one party may have processes or compliance requirements that require a paper representation (and something that approximates a physical signature) of the interaction, while the others involved may prefer the convenience of the digital representation.

So what are the things you minimally need to look for in a digital signature solution, beyond usability, if it is to deliver the same or better properties as existing paper-based solutions?

  • Each signer:
    • cryptographically signs the document;
    • attaches a facsimile of their physical signature to the document.
  • The final document:
    • is cryptographically notarized with metadata about the signing;
    • includes a timestamp and the cryptographic metadata needed to verify the signature long into the future;
    • can be encrypted end-to-end, ensuring only the parties associated with the document can read it;
    • is assigned a unique identifier that is placed plainly in the document so when it is printed its digital embodiment can be easily found;
    • includes a log of activities that took place during the signing process;
    • is archived so it can easily be retrieved later in case of a dispute.
  • The document and signature formats used are based on broadly accepted standards so:
    • it will be readable and verifiable far into the future;
    • it can be read and verified in third-party applications;
    • enforcing the agreement does not require participation of the solution provider in case of dispute.
  • A free web based reader is available that:
    • does not require registering to read the document;
    • enables participants to share the documents with others;
    • can validate the signatures without the need for plug-ins or desktop applications;
    • works as well on mobile and tablet as it does on the desktop;
    • can be easily and freely integrated into your own applications.
  • An API that makes it possible to integrate into your own applications the signing of:
    • documents;
    • web forms.

With these bases covered you have something that should be able to withstand the test of time just as paper processes have.

WebCrypto and the modern web app

There is a famous Mark Zuckerberg quote from 2012 where he states that Facebook’s biggest mistake in mobile was their focus on HTML5. It’s been over three years since that statement and a lot has changed. Despite that, I still think if I were making a daily-use application like Facebook I would do so as a traditional mobile application, but let’s face it: mobile apps are horrible for infrequent use.

What has changed? Today we have browsers that can run JavaScript at near-native speeds, and the majority of browsers support enough HTML5 to build beautiful, touch-friendly “app” experiences without the overhead of an actual application. But there is more to an “app”-like experience than a fast, touch-friendly user interface; apps also work when there is no network connection, and historically web pages have not.

This might seem like a trivial point but consider how many times you have been in a building or location where there was no WiFi and your cellular coverage just didn’t cut it (sorry T-Mobile customers).

Service Workers go a long way to address this problem, and until they are better supported, polyfills based on App Cache serve as a reasonable substitute. But the problem doesn’t stop there: you also have the problem of how you authenticate entities and protect data while offline; this is where WebCrypto comes in.

The security issues associated with using crypto in JavaScript have been covered in great detail, but I think Tony Arcieri does the best job covering the topic. Long story short, you have to be careful to mitigate all the traditional web risks along with following cryptographic best practices, but even when you do that, you do not have cryptographic keys that are in the sole control of the user.

This is because the code that has access to the client-side keys is downloaded by the browser frequently, and each time that happens the server has a chance to modify the code so it does what it sees fit. In fact, this is exactly what happened in 2007 with a mail provider called Hushmail.

With all that said, it is important to keep in mind that the same argument can be made of the browser and operating system you are using to read this. Like most things it ends up being a trade-off; the most obvious example is that of updates. With a desktop application it can take ages to get a security patch deployed en masse, but with a web application it happens automatically and seamlessly (…and doesn’t require a reboot or restart).

The other end of that argument is that the attacker gets to use every one of those seamless updates as an opportunity to attack the core logic of the application. In operating system terms, this attack would be a library injection. In a web application you would use Content Security Policy (CSP) to mitigate the risk, just as you might restrict library load paths on an operating system.
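For example, a response header like the following restricts where scripts can be loaded from, so an injected reference to an attacker-controlled script would be blocked (the CDN host is a placeholder; adjust the sources to your own application):

```
Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com
```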

OK, so even though you can do crypto in the browser and the security risks can be managed to the same basic profile as performing the cryptography on the server why would you? The answer to that question really depends on the application but if we think about a modern web application the same way we do a mobile app you start to see lots of ways you might apply cryptography locally. Some examples include:

  • Protecting data at rest;
  • Authenticating the user to the offline application;
  • Protecting from a passive attacker with read access;
  • Mixed content sites where a trusted site gates access to a protected resource (via iframes and PostMessage).

It is also important to remember that cryptography is not a panacea. Even if you encrypt data (in your mobile app or web app) or sign with client-side keys, you have no guarantees how those keys are protected by your application’s host (browser or OS), and even if you did, there is very little anyone can do against physical attacks.

“If you think cryptography can solve your problem, then you don’t understand your problem and you don’t understand cryptography.” – Bruce Schneier

So what should your takeaway be? I want you to see that even though WebCrypto is not perfect, mobile and desktop applications have many of the same risks. I also believe that despite its imperfection WebCrypto does have a place in the modern web stack, and this is particularly true for infrequent tasks where the friction of app acquisition and other associated tasks proves to be a barrier to adoption.

Additionally, I want to encourage you to carefully consider the mistakes mobile application developers make so you don’t repeat them when you build these modern web-based applications.

P.S. If you have not read Adam Shostack’s “Threat Modeling – Designing for Security” you should give it a look.

Blockchain, Digital Signatures and Identity

It seems anytime I talk to people about the last few years of my professional life, they ask me about how I see traditional X.509 based Public Key Infrastructure and Blockchain technologies intersecting in the future. I think the most obvious intersection between these two technologies is related to contracts.

When cryptography is used for electronic signatures, X.509 certificates are at the core of how signatures are applied. Today there are numerous startups looking at how to squeeze bitcoin into future solutions in this area:

  • RFC 3161 timestamping: Proof Of Existence and Bit Proof
  • PAdES PDF signatures: BlockSign
  • X.509 certificates: OneName, World Citizenship, NameCoin, NetKi, etc.

In the United States these alternate Blockchain approaches do not have any regulatory barriers to acceptance, but outside the U.S. they don’t really have much of a chance since most countries specify which specific technologies and processes must be used to qualify as a legal signature.

As such I generally look at these products (at least in the frame of contracts) as solutions looking for problems. The core issue being that they offer limited, if any, material benefit over the existing technological approaches which have both a history and legal framework to support them.

This is particularly a large issue when you consider how global commerce has become, and that each jurisdiction has very different ideas of what constitutes a valid digital signature and contract.

With that said, I am a big believer in the idea of Smart Contracts and do see value in Proof of Existence, but they are features in broader solutions and not products unto themselves.

But what about the blockchain and Identity Management? When looking at this we first have to remember that at its core Bitcoin is a public ledger — a public repository. The only identity-related problem that requires a public repository is discovery of information, more specifically discovery of information that cannot be easily discovered in context.

A great example of this is a Bitcoin wallet address. It is both impractical and unreasonable to expect users to pass these values around without error, which is why most of the identity solutions built on Blockchain technology focus on this problem. This is not so different from the problem of discovery of S/MIME or PGP certificates for encrypted mail.
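To see why these strings are impractical to pass around by hand, it helps to look at what a classic Bitcoin address actually is: a Base58Check string, meaning a version byte and a key hash followed by a 4-byte checksum that lets software catch transcription errors. The following is a minimal validity check for illustration only, not production wallet code (it assumes the common 25-byte P2PKH address layout):

```python
import hashlib

# Bitcoin's Base58 alphabet (no 0, O, I, or l, to reduce transcription errors).
B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def is_valid_p2pkh_address(addr: str) -> bool:
    """Check a classic (P2PKH) Bitcoin address's Base58Check checksum."""
    n = 0
    for ch in addr:
        digit = B58_ALPHABET.find(ch)
        if digit < 0:
            return False
        n = n * 58 + digit
    try:
        # 1 version byte + 20-byte key hash + 4 checksum bytes = 25 bytes.
        raw = n.to_bytes(25, "big")
    except OverflowError:
        return False
    payload, checksum = raw[:-4], raw[-4:]
    # The checksum is the first 4 bytes of double SHA-256 over the payload.
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] == checksum

# The well-known genesis block address passes; a one-character typo fails.
print(is_valid_p2pkh_address("1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"))  # True
print(is_valid_p2pkh_address("1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNb"))  # False
```

This built-in error detection is exactly the property that makes addresses machine-friendly but human-hostile, and why discovery services aim to map something memorable onto them.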

The reality is that one does not need the Blockchain to solve this problem; in fact, Facebook recently announced that they now let you publish your PGP key on your profile. There is nothing stopping them, or any of the other public directory services users already rely on, from publishing other similar values.

I would even go so far as to argue that the size of the Blockchain makes its use a liability in these scenarios. Today the Blockchain is over 30GB in size, and with over 60% of internet usage being mobile this means (at least for peer-to-peer cases) one would need to rely on something like Simplified Payment Verification (SPV) on mobile devices, which inherently places some trust in a node anyway.

The Bitcoin purist would argue that any use of a trusted third party makes this an apples-to-oranges comparison. Here is the kicker though: when it comes to bootstrapping trust you have to trust something or someone, and this is especially true when it comes to verifying a legal identity. The net of this is that since you have to trust a centralized repository anyway, you do not strictly need a Blockchain-based approach.

Long term, I see us moving to a model where the federated concept of identity we use with consumer services today is extended to government and business services. We already see this happening with current service offerings, and given the recent work in the EU around eIDAS and in the US around NSTIC, this trend doesn’t seem likely to slow anytime soon.

If that is true, then these Blockchain-based identity solutions will either pivot into new solutions or their future will be inextricably tied to the Bitcoin wallet-address discovery problem.

What makes an enforceable electronic signature?

While this post should not be thought of as legal advice, in the United States there are five key elements to consider when answering the question “Is an electronic signature enforceable?”:

  1. Can you prove who signed the document?
  2. Can you prove when and where they signed the document?
  3. Can you prove that they meant to sign the document?
  4. Can you prove they consented to the use of electronic signatures?
  5. Can you prove the document has not been altered since it was signed?

As they say, “On the Internet, nobody knows you’re a dog” — this makes the first question the hardest to answer.


Does control of the email address “[email protected]” prove who you are? Not really. This is important because today most electronic signature solutions provide virtually no identity verification beyond proof of control of an email address. This means that in the event of a dispute it will be up to you, and you alone, to answer the question of who signed that document.

The only evidence these solutions provide to support a dispute is a log that says something to the effect of “I saw someone with control of [email protected] at 192.168.0.1 type B-i-l-l G-a-t-e-s”. The idea is that, in the event of a dispute, you will be able to use this log to prove it was Bill Gates who signed the document. Of course, the ability to type the name “Bill Gates” doesn’t prove it was him, and honestly the IP address doesn’t help all that much either.

To make matters worse, in most cases these logs are not cryptographically signed. The solution provider just appends an additional page to the document that contains this log. If you ever had to defend the signature, the idea is that you would hash the document and the log and use those values to ask the solution provider to attest that the document and the log have not been modified.
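The defense described above boils down to computing digests of the two artifacts and hoping the provider will vouch for them later. A minimal sketch of that step (the document and log contents below are hypothetical placeholders):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex SHA-256 digest of an evidence blob."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical artifacts: the signed document and the appended audit log.
document = b"...signed contract bytes..."
audit_log = b"Someone at 192.168.0.1 typed the name B-i-l-l G-a-t-e-s."

doc_digest = sha256_hex(document)
log_digest = sha256_hex(audit_log)

# These two digests are all the evidence you can present in a dispute.
# Nothing cryptographic binds the log to the document; you still need the
# provider to attest that both match what it produced at signing time.
print(doc_digest, log_digest)
```

Note that the digests only prove the artifacts haven’t changed since they were hashed; they say nothing about who produced them, which is precisely the gap the next points highlight.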

This is particularly troublesome when you consider:

  1. As many as 92% of startups fail;
  2. Industry has accepted the question is not “if you will be compromised” but “when”;
  3. Determining what happened decades later can be problematic.

On the surface this does not sound like a big deal; after all, I was raised to honor my word, and I wouldn’t do business with someone I thought did not live by that same principle. Unfortunately, many are not above cheating their way out of a contract.

The higher-end solution providers do apply cryptographic signatures, but with few exceptions they do so only as a notarization of this log, which helps but falls far short of holistically answering these key questions. For example, even when a cryptographic notarization has been performed, an expert would simply need to argue that the solution provider could have been compromised when the log or signature was produced.

To address this risk, some solution providers go so far as to sign using dedicated keys for each user, in addition to notarizing the document. This is far superior, as long as the service provider themselves cannot “sign” without the user’s consent, and it becomes quite strong if identity verification has also taken place. In this scenario you end up with a set of evidence that actually states, with some reasonable level of assurance, what happened and who was involved.
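The stronger model just described amounts to two independently keyed signatures over the evidence: the provider notarizes, and the user signs with a key only they control. The sketch below uses stdlib HMAC purely as a stand-in for real asymmetric signatures; all keys and data are hypothetical, and in practice the user’s private key would live somewhere the provider cannot use it unilaterally:

```python
import hashlib
import hmac

# Stand-ins for real asymmetric keys (e.g. RSA or ECDSA in practice).
PROVIDER_KEY = b"hypothetical-provider-notary-key"
USER_KEY = b"hypothetical-per-user-signing-key"

document = b"...contract bytes..."
audit_log = b"...event log bytes..."

# The provider notarizes a digest binding the document and log together.
evidence = hashlib.sha256(document + audit_log).digest()
notarization = hmac.new(PROVIDER_KEY, evidence, hashlib.sha256).hexdigest()

# The user signs the document directly with a key only they control.
user_signature = hmac.new(USER_KEY, document, hashlib.sha256).hexdigest()

# A challenger now has to explain away two independently keyed signatures,
# not just a log page the provider appended to the PDF.
```

The design point is the independence of the two keys: compromising the provider no longer lets an attacker forge the complete evidence set.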

In the end, it is important to remember that the enforceability of a contract signed with a handshake, ink, or cryptography will always boil down to case law and the evidence you maintained to support a potential suit. For this reason, ask yourself how important it is that you can enforce the terms of a contract, and keep adequate evidence so that, if you ever need to, you can do so effectively.

The bright side of the dark side

The computer network is arguably one of the most important innovations in my lifetime. When we got our first modem over thirty years ago, it opened a whole new world to me. No longer was my view of the world limited to where I lived. I now could travel across the world (albeit at 150 bits per second) and talk to people from all over the world. Some of these people were honest good folks and others… well, they were criminals.

What all of these people had in common was a passion for learning – a thirst for knowledge – and for the most part they saw everyone in their digital realm as kindred spirits. Don’t get me wrong, these people could also be ignorant, hostile, mean and rude, but they also understood that not everyone even knew this world existed.

Every morning before school, and every night well beyond my bedtime, I would be online stumbling through this endless online world, trying to see everything I possibly could. IRC and Usenet were my primary modes of discovery. You see, there wasn’t really a search engine like there is today where you could just look up the information you wanted; someone had to share it.

The best places to go and learn things were warez chat-rooms. In my mind these were filled with kids like me who were motivated to learn by the desire to get access to the latest games. In reality, while there were kids, for the most part it was adults. Whoever they were, they knew what they were doing wasn’t legal, so they were secretive and it took a long time to earn their trust.

I started earning their trust by creating ANSI intros for their cracks, but to work up the food chain in these organizations you really needed to be a cracker. To be a cracker you needed to be good at assembly, so off to the library I went to get a book on 6502 assembly (I had a C64 at the time). The library system only had a few of these books, so I had to be put on a waiting list. A month or so later the book came in, and I started down the path of learning to crack games.

I remember starting with a game that I had and diffing it against a cracked copy, working back to what was changed and then figuring out why. It took me months before I could figure out how to find flaws in the copy-protection logic games implemented, or to simply NOP those checks out altogether. Once I was able to do this, I started to create my own patches that would effectively remove the copy protection.

Once I could demonstrate these skills, I was allowed into the inner circle, where people shared information more freely. In these forums (even 30 years ago) exploits, credit card numbers and identities were traded openly. There were even well-written how-to documents on using the exploits, along with electronic copies of the manuals showing how to use the compromised systems.

This was exciting for me. You see, I did not fit in at school and I never felt “special” like the kids who were in sports or in the “cool crowd”, but now I was special – I belonged somewhere.

While I was exposed to morally questionable things in these forums, I learned a ton at the same time. It also exposed me to lots of new things. For example, my first exposure to building electronics was due to phone phreaking. I also learned networking, system administration, how to “hack”, and probably more importantly, I learned how to navigate complex social structures.

Along the way I got into trouble, and sometimes did things that put me in danger or that would have landed me in jail had I been an adult. That said, these experiences also helped me develop the fundamental skills I still use today as a professional.

My father and I were recently discussing this topic, and he reminded me of an argument we had when my parents were trying to get me to stop “hacking”. In that argument I apparently said:

How am I going to learn about computers without this hacking stuff?

Looking back I have to say that at least in my case, that is true. In an earlier post I mentioned the BBS I wrote; a big part of my motivation was to be able to learn more from this group of people and running a BBS was a status symbol of sorts to impress them.

This journey proved to be a motivator for me. In addition to my family’s support in learning about computers, from this community I was also given:

  1. Access;
  2. Direction;
  3. Challenges;
  4. Support.

Long story short, for me the dark side of the internet was really a path to the bright side and I am sure I am not alone in this. This is one reason why I worry about poorly written legislation attempting to control security research.

Today there is an unimaginable set of resources available to help people get involved in computing, and you do not need to “go to the dark side” to get access to this information. It is up to us as parents, friends, neighbors, and business people to provide the other needed elements that encourage kids to learn practical skills that will give them choices in life.

I think apprenticeships are a great way to do this, but each situation is different, and there are many options out there where you can help. Take the time to do so.

Help Wanted: Apprentice to learn trade

I have taken the “non-traditional path” in both my education and career. At age eight my parents discovered my aptitude for and (more importantly) interest in programming. My mother was always learning new things, and as a result, when she got our first computer and started to learn to program, it gave me access to everything I needed to teach myself.

I remember vividly when she purchased our first modem, a 150-bit-per-second acoustic coupler. To put this in perspective, COMCAST’s lower tier is 10⁶ times faster than my first network connection. Even then it was painfully slow, but it opened an entirely new world to me – one I never knew existed.

At some point that year I decided I wanted to host a Bulletin Board System of my own (a BBS is very similar to a forum website today), so I asked my parents to buy me the software and telephone line to do this — they of course laughed and said no; after all, it would cost close to $1,000 for the software alone.

I had read enough of my mom’s programming books to realize that I didn’t need to buy the software; I could just make it myself. As a child my mother would always tell me, “No does not mean no. It means find another way.” So that’s what I did. I completed every exercise in every programming book she had, along with a few others from the local library, and set off to make my own BBS.

I made very quick progress. I implemented forums, chat, multiline support, a download library, ZModem, XModem and more. I remember printing out the source on reams of continuous-feed paper using our dot-matrix printer. My father heard the printer going for quite a while, so he came in to stop me because he thought I was wasting ink and paper. An aeronautical engineer by training and a former Air Force officer, he was not a “computer guy”, but after a few minutes of looking at what I was printing he recognized what I had accomplished, and he and my mother immediately began the process of getting me into programming classes at the local colleges.

This moment was probably the most significant contributor to where I am today. It was possible because I was lucky enough to find myself in a situation where I was given:

  1. Access;
  2. Direction;
  3. Challenges;
  4. Support.

This set me up for what I now think of as a series of unpaid internships and apprenticeships. I helped my professors and teachers teach their classes, grade homework, support students, and create courseware. I also helped a few small businesses create automation for inventory management and invoicing — all for free.

The system of apprenticeship has been around since the Middle Ages. A cobbler might teach their children, or someone else’s (in exchange for pay), their trade. In essence, these experiences allowed me to learn my trade.

My parents wanted nothing more than for me to go to university and get a degree. The problem was that the independence of the path I was on made it hard for me to give up control and go that route. I also wanted to learn everything I could about computers, programming, applied cryptography, and security, and realistically not even the most prestigious schools had much to offer in these areas at the time.

This resulted in me dropping out of both high school and the college where I was taking classes that interested me. My parents didn’t exactly approve, and I was a bit rebellious at this point in my life, so I got a job in technology and moved out.

This choice came with a set of unique challenges; for example, some who looked at my resume would ask “Where did you get your graduate degree?”, and when they heard I didn’t even have a diploma, many would essentially look the other way. Fortunately computers were still relatively new, and I was able to demonstrate my raw abilities, which meant I still had plenty of opportunities; I just had to look a little harder.

Two years after I moved out, my first son came along. At this point I understood the benefits and challenges of the path I had chosen for myself, but like all parents I wanted more for my children. I remember watching a television show called Gilmore Girls, about a single mom who had her own realization along the same lines. She was also a dropout, but decided her daughter would go to university so she could have the benefits that path represented, while still wanting her daughter to embrace the benefits of her own approach to life.

I had decided this is what I wanted for my own children. But as they say, “the best-laid plans of mice and men often go awry,” and my oldest is on a path much closer to my own. He finished high school and moved on to being a software developer in Silicon Valley.

As a parent, if my goal was to “get him into University”, I made a fundamental mistake: by exposing him to an extensive computer science education at home, by the time he was ready for college the only schools whose computer science programs looked challenging were out of reach due to admission requirements. It wasn’t that he was incapable of the scores and grades necessary to get into those schools; instead, we got him unpaid internships where he could hone his skills, and his grades suffered as a result.

Is this a failure in parenting? A failure in the school system? Probably a little of both, but a parent’s goal should not be to “get their children into university”. There are lots of ways to find success; what is important is that we help our children have choices in life and find happiness. The path he is on gives him that, and while I still hold out hope that he goes to university, the reality is he has the job most Computer Science graduates dream of after four years of university, without the associated debt.

Don’t get me wrong — there are many merits to University (which is why I think he should still go) but the reality is it is not the only path to success.

I bring all of this up because the other day Bill Gates, someone I really admire, blogged about abysmal college completion rates. In this post there is a quote that stands out:

By 2025, two thirds of all jobs in the US will require education beyond high school.

As a hiring manager in technology, I know how hard it is today to find people with the right skills and experience to build the products and services the market demands (don’t get me started on our visa system!). As a parent, I also know the school system is still failing our kids, so this talent drain is surely going to get worse.

With that said, I think we are not looking at the problem holistically. There are lots of ways to get the skills necessary to have options in life — universities do not have a monopoly on success. That’s not to say university isn’t a good option, or that there are not careers where a degree is useful and/or necessary. It is just that there are lots of ways to give our children choices, and we should be embracing them as well.

In my mind, the apprenticeship is still one of the best ways to get a practical education, and it works exceedingly well in technology. I also know a number of lawyers who have passed the bar without having gone to law school, as well as a number of small business owners who essentially got their start as apprentices.

Unfortunately, the unpaid apprenticeship is under attack, and when combined with recent living-wage initiatives, this makes it hard for those with the interest and skills to offer these apprenticeships. The most damning element of this attack is that a court has ruled an employer can derive no immediate advantage from such a relationship.

Now, to be clear, I am not arguing that the path I took is right for everyone, and I am a believer in formal education (my great-grandmother and my wife were teachers), but we have to look at this problem more holistically than we have been if we want to help our children and grandchildren have choices.