Keeping long-term passwords secured

We all know that passwords should be changed regularly to reduce their value to an attacker, and that they should be stored in ways that they cannot be easily compromised, which is why people are generally encouraged not to write passwords down.

The reality is that the human brain can only retain so much information, and the less often you use something the more likely it is that you will forget it.

This is true regardless of how memorable your password happens to be.

This is especially true for passwords used in key management ceremonies. Imagine being there when the first keys were generated for the first root CA on the Internet; that is a key that will exist for decades, and the implications of losing access to it are huge. Moreover, the passwords involved in these ceremonies do not belong to an individual; they belong to an organization.

For these reasons key management ceremonies use password record forms; I have attached an example form to this post for your reference.

Once filled out, these forms are stored securely; how securely depends on the security needs of the scenario. For example, if the password was associated with a share in a Shamir Secret Sharing scheme (an M-of-N set of keys), one would transport and store the forms in geographically distributed facilities, under lock and a dual-control scheme.
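
To make the M-of-N idea concrete, here is a minimal sketch of Shamir Secret Sharing in Python. It is illustrative only (a toy prime field, no authenticated encoding, nothing a real ceremony should rely on), but it shows why any M shares recover the secret while fewer reveal nothing:

# Toy Shamir Secret Sharing: split a secret into N shares, any M recover it.
# Illustrative only -- a real ceremony would use a vetted implementation.
import random

PRIME = 2**127 - 1              # Mersenne prime; the secret must be smaller
rng = random.SystemRandom()     # OS entropy, not the default PRNG

def split(secret, m, n):
    # Random polynomial of degree m-1 whose constant term is the secret.
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(m - 1)]
    f = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

secret = int.from_bytes(b"ceremony-pw", "big")
shares = split(secret, m=3, n=5)       # a 3-of-5 scheme
assert recover(shares[:3]) == secret   # any three shares recover the secret

The trick is that a polynomial of degree M-1 is only determined by M points, so holding M-1 shares leaves the constant term (the secret) completely undetermined.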

Periodically these stored values are retrieved and changed, as part of a process to ensure continued access to systems and keys is possible.

While this is not something the average person needs to deal with, it is relevant to those doing paper key management for large amounts of Bitcoin, for important DNSSEC keys, or for keys embedded into some mass-produced device.

Protecting Bitcoin keys with hardware

One of the most important things you can do to keep your Bitcoin keys safe is to get them off of your general-purpose computer and onto a single use device that is designed to perform cryptography or Bitcoin operations.

This protects you from a number of different attacks that could result in the compromise of your keys but it does so at an expense — it makes it more difficult for you to spend your Bitcoin.

This is of course not unique to Bitcoin; in the Certificate Authority world we think of utility keys (e.g., OCSP and time-stamping) differently than the keys associated with issuing certificate authorities (the ones used to sign subscriber certificates), which in turn we think of differently than keys associated with root certificate authorities. As such we apply different key management techniques and policies to each of them.

The same is true for your bank accounts; you keep less cash in your checking account than in your savings account. This is in part because you have a bankcard and checks tied to the checking account, which makes it easier for an attacker to access your funds.

If you manage your Bitcoin holdings in a similar way by having wallets for your “spending money” and wallets for your “savings” then you make it possible to apply security measures that balance convenience and security while managing your risk. These are commonly referred to as “hot” and “cold” wallets.

Additionally, those with large cash assets limit how much they keep in each account so they stay within the liability limits their financial institutions offer (for example, $250,000 USD in the case of FDIC-insured institutions).

Traditional banks do the same sort of thing; for example, a bank with $80,000,000 USD might be required to keep $8,000,000 (10%) liquid, and it then uses the remainder in fractional-reserve banking as a working asset to fund the bank. This also has the side effect of distributing the risk the bank is exposed to, by spreading that capital across many different investments, each with its own risk profile.

So how does this all translate to Bitcoin and hardware key management? Online wallets such as Coinbase are a fine way to manage the funds you spend regularly, but for your savings it's advantageous to manage the keys yourself instead of being part of a much larger target like an online wallet.

That takes us to Bitcoin key management solutions. Since Bitcoin's introduction there have been many proposed solutions, most of them based on specially hardened, dedicated computers booted from LiveCDs (like this one built on Ubuntu and this one on Tails); these images use wallets like Armory and Electrum in clean-room environments to perform Bitcoin operations.

The processes used here are logically equivalent to what Certificate Authorities do with “ceremony computers”, where they use specially prepared Tempest-rated computers in Faraday cages, with no visibility from the outside and with isolated power (protecting against Differential Power Analysis), to generate and perform operations with sensitive keys.

During these ceremonies, rigid processes and controls are used: the machines are configured using known software, every binary is verified to be exactly what it is expected to be, and every action is performed under camera with multiple people auditing the activities taking place. When keys are generated they are protected using secret sharing schemes such as Shamir Secret Sharing, and the shares are distributed to different parties who then travel separately and move those shares to secure storage facilities that are geographically distributed.
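
As an illustration of the "verify every binary" step, here is a minimal sketch of how such a check might work; the manifest format and file names are hypothetical, and a real ceremony would use a signed manifest prepared ahead of time:

# Sketch: verify binaries on a ceremony machine against a known-good manifest.
# Manifest lines look like "<sha256-hex>  <path>" (hypothetical format).
import hashlib
import sys

def sha256_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest_path):
    ok = True
    with open(manifest_path) as manifest:
        for line in manifest:
            digest, path = line.split(None, 1)
            if sha256_file(path.strip()) != digest:
                print("MISMATCH:", path.strip())
                ok = False
    return ok

if __name__ == "__main__":
    sys.exit(0 if verify("ceremony-manifest.txt") else 1)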

Obviously there are lots of dials you can “tweak” to control the time/complexity tradeoff involved in the above process, but for those with moderate Bitcoin holdings the above would broadly be considered too onerous to even consider.

This is where turnkey products come into play. While there have been a number of promising proposals, producing something that is secure, usable and affordable is no small task, and most of these projects have failed to achieve sufficient market penetration to succeed.

At this time, one of the most promising solutions that is (to varying degrees) available is the PiperWallet, an open-source printer with an embedded Raspberry Pi that can be used to create paper wallets; based on its claims it has thought about all the right problems (quality of random numbers, etc.).

Another solution is the open-source PiWallet. This isn't terribly different from the PiperWallet conceptually (though it does not make any claims about the quality of its random numbers), but it has no input or display of its own unless you add a keyboard and screen.

One of the most promising offerings in this space is the Trezor, a custom-designed device intended not only for cold wallet storage but for actual personal hot wallet use as well. I am looking forward to getting a chance to use one once it becomes generally available.

On the high end of the spectrum, one could also use a Thales nCipher or a SafeNet HSM; while these devices are not Bitcoin-specific, they can be used along with a ceremony computer and a modified Bitcoin wallet to secure the keys used in your wallets.

Above and beyond these solutions there are a half-dozen half-done, unmaintained smart card solutions (1, 2, 3) that have potential, but unless you're a JavaCard developer and/or smart card professional these are frankly not viable options yet.

Keeping an eye on your Bitcoin keys

In the government, banking and certificate authority worlds important keys are generated and used within specialized cryptographic devices called Hardware Security Modules (HSMs) or their less powerful cousins smart cards.

In the Bitcoin world keys are most commonly generated and stored in software running on the same machines people use for surfing the web. This means those keys are exposed to all of the same risks as the rest of your computing experience.

This is why there are now multiple variants (1, 2, 3) of wallet stealing malware and many cases of Bitcoins being stolen through key compromise (1).

The reality is we have not even touched the tip of the iceberg when it comes to the sorts of attacks that expose these software keys, and as Bitcoin becomes more mainstream the techniques used by attackers will improve so that they can overcome the mitigations more advanced software will surely use.

For example, a lot of effort was spent mitigating in-memory key access threats in Windows by moving keys out of process and working to minimize the amount of time a key was left in its unencrypted form. In fact, the inadequate in-memory protection of keys in OpenSSL was a contributor to the recent #Heartbleed vulnerability.
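
To illustrate what "minimizing the amount of time a key is unencrypted" looks like in code, here is a sketch; note that Python (like most garbage-collected languages) cannot guarantee no stray copies remain, so this shows the pattern rather than a watertight implementation:

# Sketch: unwrap a key only for the moment it is needed, then scrub the buffer.
# A garbage-collected runtime may still hold copies; this is best-effort only.
import ctypes

def with_unwrapped_key(wrapped_key, unwrap, use):
    key = bytearray(unwrap(wrapped_key))  # plaintext key lives only in this buffer
    try:
        return use(key)                   # hand out the mutable buffer, not a copy
    finally:
        # Best-effort zeroization before the buffer is released.
        ctypes.memset((ctypes.c_char * len(key)).from_buffer(key), 0, len(key))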

It is not possible to totally mitigate the risk of in-memory keys, but you can reduce the exposure. Even when you do, there will still be moments when the keys, or the components that were used to make them, may be exposed; for example, they may simply get dumped into your page file.

So what can you do to protect yourself? The most important mitigation available to you is to do what governments, banks and certificate authorities have been doing for decades – generate and use your keys within specialized devices. The bad news is that even though there are many projects that aspire to help you do just that for Bitcoin, your choices are still quite limited – especially if you want ease of use and accessibility.

Absent reasonable hardware solutions for key management, people often resort to storing their Bitcoins on paper, using keys generated on hardened, dedicated operating system installs not connected to any network. While this is a useful technique to have in your arsenal, even paper keys can be compromised through carelessness.

In military and aerospace systems things are often designed for triple redundancy, and if you're storing a large amount of Bitcoin you should keep this principle in mind when designing your key management strategy.

Not doing so sets you up for failure; just look at Mt. Gox. If we believe Mr. Karpelès's story, he did not know how much Bitcoin he had at any one point, and according to reports he also set himself up as a single point of failure.

Though the advent of multi-signature wallets in Bitcoin, along with third-party services like BitRated, will help people manage this sort of risk in the future, it does not remove the need for solid accounting and monitoring of your balances.

As such, it also makes sense to use wallet-watcher services such as Blockchain.info's “Watch Only Addresses” or BTCBalance.net so that you know what transactions are happening with your keys at any point in time.
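
For example, here is a small watch-only monitor built on Blockchain.info's plain-text address balance query; the endpoint, address and polling interval are assumptions on my part, so adapt them to whatever service you actually use:

# Sketch: poll a watch-only address and alert if the balance changes.
# Assumes blockchain.info's "addressbalance" query, which returns satoshis.
import time
import urllib.request

ADDRESS = "1YourColdWalletAddressHere"   # hypothetical watch-only address
last = None

while True:
    url = "https://blockchain.info/q/addressbalance/" + ADDRESS
    with urllib.request.urlopen(url) as resp:
        balance = int(resp.read())
    if last is not None and balance != last:
        print("ALERT: balance changed from %d to %d satoshis" % (last, balance))
    last = balance
    time.sleep(600)   # poll every 10 minutes; respect the service's rate limits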

If you have a large amount of Bitcoin it also makes sense to use a honey pot: wallet(s) placed where they are easily accessible and monitored via services like the above, so that you know when an attacker has gotten in and can respond accordingly.

In short when thinking about your key management strategy it’s a good idea to keep in mind what Benjamin Franklin said: By failing to prepare, you are preparing to fail.

Insurance and Bitcoin

There are lots of ways we risk losing our money; one of the biggest is when the institutions we trust to keep those funds safe fail to do so.

This happened in the United States during the Great Depression when widespread bank failures destroyed lifetimes of savings overnight.

To address concerns about this happening again, the FDIC was created in the 1930s (with the NCUA following for credit unions) to provide assurance that if such an event were to happen again, people's savings would still be safe – though even this has its limits.

This, along with regulations that were put into place around minimum liquidity and other practices, is what helped people begin to regain trust in the US financial system.

The fall of Mt. Gox and other smaller exchanges has once again introduced similar concerns, but thankfully the influx of venture capital into the Bitcoin ecosystem has brought in a new class of organizations who have both the skill and the capital to run more professional institutions.

As a result we are now seeing vault services starting to offer similar insurance via self-insurance programs coordinated with underwriters such as Lloyd's of London and Meridian.

This is a huge step forward, but there is a long way to go before these offerings are viable, because as they stand today they are far from cost-effective. As an example, let's look at Elliptic: if we were to store $250,000 USD of Bitcoin (510.58 BTC at today's price), you would be paying $5,064 USD every year – roughly 2% of the stored value – for that peace of mind.

That's not to suggest everyone needs $250,000 of insurance, or that this is not a “reasonable” fee for the service, but the reality is that the price for this assurance has already been established for bank deposits, and it is FREE.

Vaulting services such as Elliptic and Xapo are also a little different from traditional banks because they do not (for the most part) offer other services. Additionally, building and operating services that provide the necessary assurances to qualify for such insurance comes at a cost above and beyond the cost of the insurance itself, so a premium of some sort must be charged. The costs here are sure to go down as they get amortized across more customers, but regardless, the current cost structures these services are based on are out of reach of mainstream users.

Making this insurance more accessible will be necessary for the broader adoption of Bitcoin, along with many other changes, including regulations (either self-enforced, like the NCUA, or government-mandated). This will include things like reporting requirements, minimum liquidity levels, operational practices, key management requirements and insurance.

When it comes to insurance, I think we will also see it become a “freebie”, just as in classical financial institutions, where its costs are built into the profit model of the institution. Today classical financial institutions cover these costs via account maintenance fees, transaction fees, the profit they make on the money they hold for others (via fractional-reserve banking) and the overall revenue diversity in their business models.

In the meantime, the insurance these vaulting services offer fills a market need for those with large amounts of Bitcoin but not the time or skill to protect the keys themselves.

The irony is of course it’s the little guy who likely needs the insurance more.

What’s in a certificate chain and why?

Have you ever wondered why your web server certificate has a “chain” of other certificates associated with it?

The main reason is so that browsers can tell if your certificate was issued by a CA that has been verified to meet the security, policy and operational practices that all CAs are mandated to meet. That certificate at the top of the chain is commonly called the “root”. Its signature on a certificate below it indicates that the CA operating the root believes that practices of the CA below it meets that same high bar.
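
To make "chain" concrete: at its core, a chain is an ordered list of certificates in which each one is issued (signed) by the one above it. Here is a sketch using the Python cryptography package that checks just the issuer/subject linkage of a leaf-to-root list; real path validation does much more (signature verification, validity periods, extensions, revocation):

# Sketch: check the issuer/subject linkage in a certificate chain.
# Real chain validation also verifies signatures, dates, extensions
# and revocation status; this only illustrates how the chain is ordered.
from cryptography import x509

def check_linkage(pem_chain):
    certs = [x509.load_pem_x509_certificate(pem) for pem in pem_chain]
    for child, parent in zip(certs, certs[1:]):   # leaf first, root last
        if child.issuer != parent.subject:
            raise ValueError(
                "%s was not issued by %s" % (
                    child.subject.rfc4514_string(),
                    parent.subject.rfc4514_string()))
    print("issuer/subject linkage OK")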

But why not issue directly off of the “root” certificates? There are a few reasons; the main one is to prevent key compromise. To get a better understanding, it’s useful to know that the private keys associated with the “root” are kept in an offline cryptographic appliance located in a safe, which is located in a vault in a physically secured facility.
These keys are only periodically brought out to ensure the associated cryptographic appliance is still functioning, to issue any associated operational certificates (for example an OCSP responder certificate) that may be needed, and to sign fresh Certificate Revocation Lists (CRLs). This means that for an attacker to gain access to these keys, they would need to gain physical access to this cryptographic appliance as well as the cryptographic tokens and corresponding secrets that are used to authenticate the device.

CAs do this because keeping keys offline is a great way to reduce the risk of a compromised key, but it’s a poor way to offer a highly available and performant service, so the concept of an Issuing CA (ICA) was introduced. This concept also enabled the “root” to respond to CA key compromise events by revoking a CA certificate that should no longer be trusted. This also enables delegation of control, limiting those who can influence a given ICA to sign something.

Another way CAs solve the “online CA” problem is to use what is commonly referred to as a Policy Certificate Authority (PCA). This model allows a CA to segment operational practices more granularly. For example, maybe the CA is audited to be in compliance with a specific set of government standards so the ICAs associated with those practices would be signed by the corresponding PCA. This not only allows segmentation of policy and procedures, but it also enables separation of usage scenarios. For example, one PCA may only allow issuance of certificates for secure mail while the other PCA may allow issuance of SSL certificates. These PCAs are also very commonly operated as offline entities and have ICAs right underneath them.

While the above two models represent the most common ways a PKI might be segmented, they are not the only two. For example, the operational practices required to be a publicly trusted CA are far stricter than what a typical data center might employ. For this reason, it’s very common for CAs to manage PKIs for other organizations within their facilities.

CAs may also “roll” ICAs as a means to manage CRL size. For example, if a given CA has had to revoke many certificates during its lifespan, it may be appropriate to create a new ICA and take the previous one out of service so that future CRLs stay small and can be downloaded quickly by clients. When this happens both CA certificates may be valid for an overlapping time, but only the more recent one is actively in use.

Long story short, some counts on the number of Certificate Authorities that exist on the internet can be deceiving. One of the easiest ways to see this is to look at a CA called DFN-Verein. They are an educational PKI that manages all of the CAs in their PKI in the same facilities, using the same practices, but for security reasons they create separate ICAs for each organization in their network.

Simply put, the count of CAs in a PKI is not a good way to assess the number of entities issuing certificates in the PKI ecosystem. What you really want to count is how many facilities manage publicly trusted certificates. The problem is that it is too difficult to count – what you can do, however, is count the number of organizations associated with ownership of each “root”. Thankfully Microsoft makes this fairly easy. In March, I did a post on my blog showing a breakdown of the ownership. Unfortunately, this approach does not give you a count of operational facilities that are used for the subordinate CAs, but it’s quite likely that given the operational requirements and costs associated with maintaining them that these two numbers are relatively close.

So what would I like for you to take away from this post? I suppose there are two key points:

  • A public CA using several Certificate Authorities under their direct control is actually a good thing as it indicates they are managing the risk of operating their services and planning for migrations to new algorithms and keys as appropriate.
  • Counting the number of “roots” and “subordinate CAs” found by crawling the web does not actually represent the number of organizations that can act as publicly trusted certificate authorities.

That is not to say that efforts to crawl the web to understand how PKI is deployed and used are not valuable – they are quite valuable. These projects are an important way to keep an eye on the practices that are actually used in the management of public PKI.

Additionally, efforts to support Least Privilege designs in PKI and adopt means to actively monitor certificate issuance, such as Certificate Transparency, all represent positive moves to help us better understand what is actually out there.

How to generate and send a timestamp request for Authenticode

When code is signed with a digital certificate, the signature cannot be relied upon past the validity period of the certificate that was used, because you don't know when the signature was applied. If you trusted it beyond that period, who knows – the certificate could have expired and the key been posted on a wall somewhere, which would mean anyone could have produced that signature.

To address this, the concept of time-stamping was added to Authenticode; you can time-stamp something using the timestamp command in signtool. Here are some steps on how to go about doing that.

1. Create the file you will want to timestamp

To do this you will need to find a file on your computer that has an embedded Authenticode signature; on my system that's skype.exe. You don't want to be messing with its digital signature, so make a copy of it:

copy "c:\Program Files (x86)\Skype\Phone\Skype.exe" test.exe

2. Generate a timestamp request

signtool timestamp /t "http://timestamp.globalsign.com/scripts/timestamp.dll" test.exe

3. Verify the timestamp response

signtool verify /v /pa test.exe

This is going to give you some output that looks something like this:

Verifying: test.exe
SHA1 hash of file: 47BC1CD615A52EC31144E8F7A09924ED279C0953
Signing Certificate Chain:
Issued to: VeriSign Class 3 Public Primary Certification Authority - G5
Issued by: VeriSign Class 3 Public Primary Certification Authority - G5
Expires: 7/16/2036 4:59:59 PM
SHA1 hash: 4EB6D578499B1CCF5F581EAD56BE3D9B6744A5E5

Issued to: VeriSign Class 3 Code Signing 2010 CA
Issued by: VeriSign Class 3 Public Primary Certification Authority - G5
Expires: 2/7/2020 4:59:59 PM
SHA1 hash: 495847A93187CFB8C71F840CB7B41497AD95C64F

Issued to: Skype Technologies SA
Issued by: VeriSign Class 3 Code Signing 2010 CA
Expires: 8/19/2014 4:59:59 PM
SHA1 hash: FA16CD8165DEDF5A79984E7B301323C07C550801

The signature is timestamped: 9/16/2013 10:20:53 PM
Timestamp Verified by:
Issued to: GlobalSign Root CA
Issued by: GlobalSign Root CA
Expires: 1/28/2028 5:00:00 AM
SHA1 hash: B1BC968BD4F49D622AA89A81F2150152A41D829C

Issued to: GlobalSign Timestamping CA - G2
Issued by: GlobalSign Root CA
Expires: 1/28/2028 5:00:00 AM
SHA1 hash: C0E49D2D7D90A5CD427F02D9125694D5D6EC5B71

Issued to: GlobalSign TSA for MS Authenticode - G1
Issued by: GlobalSign Timestamping CA - G2
Expires: 9/22/2024 5:00:00 PM
SHA1 hash: 8CE69F5012E1D1A8FB395E2E31E2B42BDE3B343B

Successfully verified: test.exe

Number of files successfully Verified: 1
Number of warnings: 0
Number of errors: 0

Hope this helps.

How to generate and send an RFC3161 timestamp with OpenSSL and curl

For me, the blog is this odd mix of diary and technical notes; this time I guess the post is more of a technical note – here is a quick guide on how to generate RFC 3161 time-stamps using OpenSSL and curl.

1. Create the file you will want to timestamp

copy con inputfile.txt
This is my test file.
^Z

2. Generate a timestamp request

openssl ts -query -data inputfile.txt -cert -sha256 -no_nonce -out request.tsq

3. Send the request to your timestamp server

curl -k -H "Content-Type: application/timestamp-query" -H Host:timestamp.globalsign.com --data-binary @request.tsq "http://timestamp.globalsign.com" > inputfile.txt.tsr

4. Verify the timestamp response

openssl ts -reply -in inputfile.txt.tsr -text

This is going to give you some output that looks something like this:

Using configuration from C:\OpenSSL-Win64\bin\openssl.cfg
Status info:
Status: Granted.
Status description: unspecified
Failure info: unspecified
TST info:
Version: 1
Policy OID: 1.3.6.1.4.1.4146.2.2
Hash Algorithm: sha256
Message data:
    0000 - 58 df 63 8c 5b bf ff ca-ad 13 c9 6e 93 96 cd 25   X.c.[......n...%
    0010 - 66 5e f1 eb ba 8e 7f 74-6d 65 04 3c 5d ea e4 35   f^.....tme.<]..5
Serial number: 0x2487F5EA8A5A085844ED68A8A7426E07E692E1BD
Time stamp: Sep 17 05:08:38 2013 GMT
Accuracy: unspecified
Ordering: no
Nonce: unspecified
TSA: DirName:/C=SG/O=GMO GlobalSign Pte Ltd/CN=GlobalSign TSA for Standard - G1
Extensions:

It’s that easy.

The weaknesses of fingerprint based authentication

I love security people; we're paranoid, we're fun, we think outside of the box, but we are also human and oftentimes get distracted by dogma and shiny objects. I think this happens a lot when talking about things like fingerprint-based authentication.

The technology has a very Big Brother feel to it; after all, most people only ever get fingerprinted when registering with the government – hopefully because you want to go on a trip, not because you're being sent up the river.

Unfortunately, attackers don't usually have this problem; their goals are more specific – they want in, and if they are any good they quickly identify when a path they are pursuing isn't going to be effective, then move on to the next attack vector until they are in or have exhausted all their options.

When we look objectively at a fingerprint-based authentication solution like the one being provided by Apple, what are the weaknesses an attacker is going to go after?

First, it's useful to understand how these systems usually work; for those of you who have not read my last post, here is a quick recap:

A picture is taken of the finger, the picture is converted to the set of features that are extractable from it, and those features are stored in something called a template. On use, this process is repeated and the features are compared; if enough match, it must be you. Since it's you, your password or PIN is released to a process to use on your behalf.

The attacker here starts at the tail end of this process: he sees that in the end we are just talking about a password or PIN – if the biometric mechanism is just an option and we can still use the password or PIN, we have not raised the security bar.

But what if the biometric is releasing access to a key or a strong password instead of your regular old password or PIN? Then the attacker would probably ask: how strong is a fingerprint compared to a password? Here is a good post summarizing the effective strength of a fingerprint-based biometric – long story short, it's about the same as a six-character password (for scale, a random six-character alphanumeric password is one of roughly 62^6, or about 5.7×10^10, possibilities – around 35 bits).

But wait – why go through the front door if the back door is open? For the attacker to execute this attack they have to have physical access to the device, right? And the first rule of computer security is that if the attacker has physical access to your device, it's not your device any longer.

With the device in hand some other things the attacker would want to know are:

  • How does the matching work – is it done in software or hardware?
  • What about where the template is stored? How is access to read and write the template accomplished?
  • How is the template secured at rest? Is it readable? Can it be modified?
  • What about where the password and PIN are stored? Are they readable? Can they be modified?
  • Does all of this happen in a single piece of hardware or is it shared across multiple devices?
  • If it's multiple devices, how do they authenticate each other, if at all?
  • How is access to the hardware doing the work authenticated? Is the software authenticated in any way?
  • How about the integrity of the operating system – is there some mechanism that provides guarantees about its running state?

This isn't an exhaustive list, but it gives you an idea of the problems here; they are core operating system problems, which means that if you use this capability on a jailbroken device you are probably reducing your security below where it would have been otherwise.

As you can probably see, it's likely that the security of this solution isn't going to be based on how good the sensor is at detecting fakes but on how the overall system itself was architected.

So what do I think about the use of biometrics given the issues discussed here? I actually believe it's possible to design, build and deploy a system based on this technology that is reasonably secure, and that if done right you can measurably improve security for the user.

It doesn't need to be perfectly secure to do that, either; you simply have to get the user to the point where they can use long, strong passwords rather than the sort of short, reused passwords they normally pick.


For example in Windows one might rely on the following:

  1. UEFI Secure Boot ensures the boot loader is authenticated and non-tampered.
  2. Windows secure boot ensures only authenticated, non-tampered OS code is loaded.
  3. Driver signing ensures all drivers are authenticated and non-tampered.
  4. BitLocker ensures the disk is encrypted and cannot be modified outside the scope of the machine.
  5. The TPM ensures the BitLocker keys are secured from the average attacker.
  6. Password policies ensure passwords are sufficiently long and complicated.
  7. Reversibly encrypted passwords are stored using a strong key hierarchy whose root is protected by the TPM (see the sketch after this list).
  8. Fingerprints make it possible for the user to successfully log on with those long, complicated passwords efficiently.
  9. Sensors with reasonable fake-finger and liveness tests are used.
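
As a rough illustration of item 7 above, here is a sketch of a two-level key hierarchy using the Python cryptography package's Fernet primitive; in the real design the root key would be protected by the TPM rather than held in memory, so treat all of the names here as hypothetical scaffolding:

# Sketch: a two-level key hierarchy -- a root KEK wraps a data key that
# encrypts the password. In Windows the KEK would be protected by the TPM.
from cryptography.fernet import Fernet

kek = Fernet(Fernet.generate_key())        # stand-in for the TPM-protected root
data_key = Fernet.generate_key()           # key that actually encrypts passwords
wrapped_data_key = kek.encrypt(data_key)   # only the wrapped form is stored

def protect(password):
    return Fernet(kek.decrypt(wrapped_data_key)).encrypt(password)

def recover(blob):                         # reversible, as the design requires
    return Fernet(kek.decrypt(wrapped_data_key)).decrypt(blob)

blob = protect(b"correct horse battery staple")
assert recover(blob) == b"correct horse battery staple"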

There are still attacks possible in the above configuration, but the low-hanging fruit has been addressed, and deploying a solution like this likely raises the bar an attacker has to cross to successfully exploit your environment.

Is it as good as other mechanisms (for example, smart cards)? Certainly not. But is it better than what people actually do in production environments today? Yes, very likely.

Is this for everyone? Certainly not; but for many it would be a step up.

I am not familiar enough with the security architecture of the iPhone and OS X to say whether the same sorts of protections exist – they may – but without them, one thing is for sure: the “gummy bear attack” is probably the least of your worries.

Thoughts on Apple and fingerprint based biometrics

Yesterday Apple finally confirmed what their acquisition of AuthenTec was all about: integrating fingerprint-based biometrics into the iPhone. This was not exactly a surprise, but it's one thing to know they were going to do it and another to see how they went about it.

Details are still a little light on the implementation and there appears to be a lot of speculation about how they did things, but I have not seen anyone provide a reasonable write-up of how this technology works, what its limitations are and the value it has.

Let's start with what these things actually do. Plain and simple, a fingerprint sensor is a camera: it takes a picture of the structure of your finger. Some sensors look just at the surface, some look a little below the surface. Different sensors use optical capture while others use capacitance and other mechanisms, but regardless of all of this, they all take a picture of your finger.

This picture is then processed looking for “minutiae”, the little details that make that image unique (ridges, valleys, swirls, etc.). These are then mapped into something commonly called a template; it is this template that is stored.


The important thing to take away here is that when you are enrolling, the biometric sample itself isn't actually stored; it's simply not needed.

Once you are enrolled, the same process happens (capture image, identify minutiae, create template), except instead of being stored, this time the template is compared to the stored one. Each time you log in you present your finger slightly differently, which means that not all of the same minutiae will be seen in every captured image.

As a result, the matcher has to guess whether the person is you or not based on how many minutiae it sees in common with the stored template.
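
To make that concrete, here is a toy matcher in Python: the template is just a set of minutia points, and the decision is a count of captured points that land near stored ones, compared against a threshold. Real matchers use far richer features, alignment and scoring; every number below is made up for illustration:

# Toy matcher: count captured minutia points that land near stored template
# points and accept if enough of them match. Real matchers are far richer.
import math

def matches(stored, captured, tolerance=5.0, threshold=12):
    hits = 0
    for (x, y) in captured:
        if any(math.hypot(x - sx, y - sy) <= tolerance for (sx, sy) in stored):
            hits += 1
    return hits >= threshold   # the dial that trades false accepts vs rejects

# Example: a capture sharing most, but not all, points with the template.
template = [(10, 12), (40, 8), (22, 30), (55, 41), (17, 60)]
capture  = [(11, 12), (39, 9), (23, 29), (90, 90)]
print(matches(template, capture, threshold=3))   # True

That threshold is why a small cut on your finger (fewer matching minutiae) can push an otherwise valid capture under the acceptance line.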

This works fairly well when doing what is referred to as verification, where the sample is compared to just one template, as is probably the case with a device like the iPhone. When doing identification (the one-to-many variant) there are a number of other problems to consider; I won't discuss those here.

Now, each of the image capture approaches used by sensors has different security properties; for example, with optical sensors I have seen people lift the fingerprint from the sensor glass itself and re-use it.

According to the press conference, the sensor used by the iPhone looks sub-dermally. The primary thing this helps with is resilience to small cuts and scrapes, which could otherwise push the threshold of biometric authentication over the edge and make it impossible to match you; it also provides some security value in that the characteristics captured are not exactly the same ones you leave everywhere.

Now, the good sensors also have logic in them to detect fake fingers; some of these are simple liveness tests while others look at the characteristics of the flesh itself. For example, a swipe sensor may look at the elasticity of the finger as it is dragged across the sensor.

One of the real problems here is that when you are buying a device with one of these sensors on it, you have zero clue how good the mechanisms (if any) are. In my case, I went and bought several fingerprint handgun boxes, cracked them open to see who manufactured the sensors, and contacted the heads of engineering at the sensor companies (whom I happened to know through my work) to help me figure out which device had the best fake-finger detection so I knew which one to use.

In the case of the Apple sensor, again we have no clue what kind of fake-finger / liveness tests they have implemented. I am sure that, thanks to security researchers, once the devices ship we will find out how effectively they deal with this in short order.

But what happens when the matcher decides there's a good chance it's you? It releases a “secret”. And what is that secret, you ask? Well, in most systems it's actually the password the user would have entered had the sensor not been there.

That’s right, all of the above magic to make entry of the password easier. This isn’t actually a bad thing, but again it depends on how it was implemented.

As a practical matter, people can remember 7-9 character passwords, and these passwords get re-used or trivially modified, which greatly reduces their effectiveness. By using biometrics to gate access to them, we can replace that short, junky password with a longer key, and in the process greatly increase the security of a system.

Here is the thing – it doesn't sound like that's what they have done here; it seems they have applied the biometric to the four-digit numeric PIN and made the application store password protected by that PIN. I say this because, according to the New York Times, you will still have to set the PIN for recovery purposes. If this is the case (and until we see the devices we will not know), the biometric is no more secure than the four-digit PIN it's gating access to.

It could still have some value though; for example, according to the press conference only around 50% of iPhone users set a PIN at all – if this can meaningfully increase that number, in aggregate we're in a better world.

One more troubling thing for me as I think about the Apple integration is that they are one of the most secretive tech companies out there, and we're not likely to hear the answers to how they have handled the above issues, or any others.

The origin of the Windows Biometric Framework

One of the projects I was responsible for when I was at Microsoft was the design and delivery of the Windows Biometric Framework. This was actually one of my favorite projects during my entire tenure at the company.

What most people don't know is that the feature was a very late addition to Windows 7; an analysis of the crash data sent back by Windows clients showed that, right after video cards, these fingerprint readers were one of the largest sources of the “Blue Screen of Death”.

Something else most people don't know is that this was not the first attempt to build a platform for biometrics into Windows. The prior attempts failed for various reasons, but they did not have the same data we had – we knew these barely-used devices were creating a bad experience for anyone whose OEM decided to include one of these sensors as a differentiator.

To fix this, a stellar team of senior engineers was re-assigned from other projects and began work almost halfway through the Windows 7 release cycle. Our goal was to make it possible for these devices (and, in the future, other forms of biometrics) to work well with a minimal amount of vendor code.

You see, these devices need to respond really quickly – especially swipe sensors, because they take a picture of your finger as you drag it across the sensor, and that data needs to get to a number of components very quickly to make a workable user experience. For this reason the sensor manufacturers all thought they needed to be in the kernel, and since most of their driver developers had only been doing driver development for a few years, they did a really bad job of it, which resulted in the blue screens.

The value proposition of this project for the vendors was that ultimately they wouldn't need to write so much software to get their sensors to work. For some this was seen as a positive – especially the new entrants, since they didn't have as much software. Those vendors with complete software suites were less thrilled, in that a driver framework of any kind places architectural restrictions on the way you build things, and since they didn't get the support calls they didn't really care that much about the crashes their software caused.

Despite this, nearly every biometric vendor began work on new drivers and client software based on this platform, which was being developed at almost the same time they were writing their components. Every vendor who was in the first round of development made significant time investments to make the platform a reality.

The software team at Microsoft also executed amazingly; I am confident that I will always think of this group of individuals as one of the best I worked with at the company.

In the end the platform made it out the door with more features than we had originally hoped, with all the major vendors on board, and by general availability to OEMs people were ready to support real customers.

The late start did mean OEMs had to work very hard to get the new device drivers into their initial images; they had very little runway to do so, but several did, and those that did not at first got their revised images out shortly thereafter.

This new model delivered kernel-mode and user-mode driver frameworks that significantly reduced the amount of software needed to get one of these devices working in Windows, and we delivered a test suite that helped ensure the devices and their drivers worked reliably. We were even able to deliver a framework for use, management, group policy, and a hook for enrollment.

The combination of the above meant that we essentially eliminated the historic problems these devices caused, improved the security of how they integrated with Windows, made them more supportable, and made them simply work well.

The team went on to work on some very cool additions; many of the team changed but some remained, and the work that we began after the “freeze” in the Windows 7 release cycle ultimately found its way out in Windows 8/8.1 – I hope the rest of the things this team was doing find their way out someday soon as well.