
How to keep a secret in Windows

Protecting cryptographic keys is always a balancing act. For the keys to be useful they need to be readily accessible, recoverable, and performant enough that their use does not slow your application down. On the other hand, the more accessible and recoverable they are, the less secure they are.

In Windows, we tried to build a number of systems to help with these problems. The most basic was the Windows Data Protection API (DPAPI). It was our answer to the question: “What secret do I use to encrypt a secret?” It can be thought of as a policy or DRM system since, as a practical matter, it is largely a speed bump for privileged users who want access to the data it protects.
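To make this concrete, here is a minimal sketch of a DPAPI round trip using the documented CryptProtectData/CryptUnprotectData calls; the secret and the description string are placeholders:

```c
#include <windows.h>
#include <dpapi.h>   // CryptProtectData / CryptUnprotectData
#include <stdio.h>
#pragma comment(lib, "crypt32.lib")

int main(void)
{
    BYTE secret[] = "placeholder key material";
    DATA_BLOB in  = { sizeof(secret), secret };
    DATA_BLOB out = { 0 };

    // Encrypt under a key derived from the current user's logon secret.
    // Any process running as this user can reverse this, which is why
    // DPAPI is best thought of as a policy/speed-bump mechanism.
    if (!CryptProtectData(&in, L"demo secret", NULL, NULL, NULL, 0, &out)) {
        fprintf(stderr, "CryptProtectData failed: 0x%08lx\n", GetLastError());
        return 1;
    }

    DATA_BLOB back = { 0 };
    if (CryptUnprotectData(&out, NULL, NULL, NULL, NULL, 0, &back)) {
        printf("recovered %lu bytes\n", back.cbData);
        LocalFree(back.pbData);
    }
    LocalFree(out.pbData);
    return 0;
}
```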

Over the years there have been many tools that leverage the user’s permissions to decrypt DPAPI-protected data; one of the most recent was DPAPIPick. Even though I have framed this problem in the context of Windows, it is a general one; here is a neat paper on the broader problem called “Playing hide and seek with stored keys”.

The next level of key protection offered by Windows is a policy mechanism called “non-exportable keys”, which is primarily a consumer of DPAPI. Basically, when you generate the key you ask Windows to deny its export; as a result the key gets a flag set on it that cannot, via the API, be changed. The key and this flag are then protected with DPAPI. Even though this is just a policy enforced with a DRM-like system, it does serve its purpose, reducing the casual copying of keys.
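Sketched against CNG’s NCrypt API, denying export looks something like this (the key name is a placeholder, and error handling is minimal):

```c
#include <windows.h>
#include <ncrypt.h>
#pragma comment(lib, "ncrypt.lib")

int main(void)
{
    NCRYPT_PROV_HANDLE hProv = 0;
    NCRYPT_KEY_HANDLE  hKey  = 0;

    if (NCryptOpenStorageProvider(&hProv, MS_KEY_STORAGE_PROVIDER, 0) != ERROR_SUCCESS)
        return 1;

    if (NCryptCreatePersistedKey(hProv, &hKey, NCRYPT_RSA_ALGORITHM,
                                 L"DemoNonExportableKey", 0, 0) != ERROR_SUCCESS)
        return 1;

    // Deny export: the provider persists this policy alongside the
    // DPAPI-protected key, and the API refuses to change it afterwards.
    DWORD policy = 0;  // i.e., no NCRYPT_ALLOW_EXPORT_FLAG
    NCryptSetProperty(hKey, NCRYPT_EXPORT_POLICY_PROPERTY,
                      (PBYTE)&policy, sizeof(policy), NCRYPT_PERSIST_FLAG);

    if (NCryptFinalizeKey(hKey, 0) != ERROR_SUCCESS)
        return 1;

    NCryptFreeObject(hKey);
    NCryptFreeObject(hProv);
    return 0;
}
```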

Again, over the years there have been numerous tools that have leveraged the user’s permissions to access these keys; one of the more recent I remember was called Jailbreak (https://github.com/iSECPartners/jailbreak-Windows). There have also been a lot of wonderful walkthroughs of how these systems work, for example, this nice NCC Group presentation.

The problem with all of the above mechanisms is that they are largely designed to protect keys from their rightful user. In other words, even when these systems work, the key usually ends up loaded into memory in the clear, where it is accessible to the user and their applications. This is important to understand since the large majority of applications that use cryptography in Windows do so in the context of the user.

A better solution for protecting keys from the user is putting them behind protocol-specific APIs that “remote” the operation to a process in another user space. We would call this process isolation, and the best example of this in Windows is SCHANNEL.

SCHANNEL is the TLS implementation in Windows. Prior to Windows Server 2003, the keys used by SCHANNEL were loaded into the memory of the application calling it. In 2003 we moved the cryptographic operations into the Local Security Authority Subsystem Service (LSASS), which is essentially Ring 0 in Windows.

By moving the keys into this process we help protect them from user-mode processes, though we don’t prevent access outright, while still enabling applications to do TLS sessions. This comes at an expense: you now need to marshal data to and from user mode and LSASS, which hurts performance.
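From the application’s perspective the keys are only ever referenced by handle. Here is a sketch of acquiring an SCHANNEL credential via SSPI; the certificate lookup is omitted and the helper function name is mine:

```c
#include <windows.h>
#define SECURITY_WIN32
#include <security.h>
#include <schannel.h>
#pragma comment(lib, "secur32.lib")

// pClientCert would come from a certificate store lookup
// (e.g., CertFindCertificateInStore); it is a placeholder here.
SECURITY_STATUS acquire_tls_credential(PCCERT_CONTEXT pClientCert,
                                       CredHandle *phCred)
{
    SCHANNEL_CRED cred = { 0 };
    TimeStamp expiry;

    cred.dwVersion = SCHANNEL_CRED_VERSION;
    cred.cCreds    = 1;
    cred.paCred    = &pClientCert;

    // The returned handle refers to state held on the other side of the
    // process boundary; the private key is never mapped into this process.
    // Each handshake step (InitializeSecurityContext, etc.) marshals data
    // across that boundary, which is the performance cost discussed above.
    return AcquireCredentialsHandleW(NULL, (LPWSTR)UNISP_NAME_W,
                                     SECPKG_CRED_OUTBOUND, NULL, &cred,
                                     NULL, NULL, phCred, &expiry);
}
```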

[Nasko, a former SCHANNEL developer, tells me he believes it was the synchronous nature of SSPI that hurt the performance the most. This is likely, but the net result is the same.]

In fact, this change was cited as one of the major reasons IIS was so much slower than Apache for “real workloads” in Windows Server 2003. It is worth noting that those of us involved in the decision to make this change surely felt vindicated when Heartbleed occurred.

This solution is not perfect either; again, if you are in Ring 0 you can still access the cryptographic keys.

When you want to address this risk, you remote the cryptographic operation to a dedicated system managed by a set of users that does not include the user. This could be a TCP/IP-remoted cryptographic service (like Microsoft KeyVault or Google Cloud Key Manager), or maybe a Hardware Security Module (HSM) or smart card. This has all of the performance problems of basic process isolation, but worse, because the transition from user mode to the protected service is even “further” away or more bandwidth constrained (smart cards often run at 115k bps or slower).

In Windows, for TLS, this is accomplished through providers to CNG, CryptoAPI CSPs, and smart card minidrivers. These solutions are usually closed source, and some of the security issues I have seen in them over the years are appalling, but despite their failings there is a lot of value in getting keys out of the user space, and this is the most effective way of doing that.
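As an illustration, here is a sketch of signing a precomputed SHA-256 hash with a key held on a smart card through CNG’s smart card KSP; the key name is a placeholder, and the card, not this process, performs the private-key operation:

```c
#include <windows.h>
#include <ncrypt.h>
#pragma comment(lib, "ncrypt.lib")

SECURITY_STATUS sign_on_card(PBYTE hash, DWORD cbHash,
                             PBYTE sig, DWORD cbSig, DWORD *pcbResult)
{
    NCRYPT_PROV_HANDLE hProv = 0;
    NCRYPT_KEY_HANDLE  hKey  = 0;
    SECURITY_STATUS    st;
    BCRYPT_PKCS1_PADDING_INFO pad = { BCRYPT_SHA256_ALGORITHM };

    st = NCryptOpenStorageProvider(&hProv,
                                   MS_SMART_CARD_KEY_STORAGE_PROVIDER, 0);
    if (st != ERROR_SUCCESS) return st;

    // "DemoCardKey" is a placeholder; opening it may prompt for a PIN.
    st = NCryptOpenKey(hProv, &hKey, L"DemoCardKey", 0, 0);
    if (st == ERROR_SUCCESS) {
        // The RSA operation happens on the card, so throughput is bounded
        // by the card's link speed rather than the host CPU.
        st = NCryptSignHash(hKey, &pad, hash, cbHash,
                            sig, cbSig, pcbResult, BCRYPT_PAD_PKCS1);
        NCryptFreeObject(hKey);
    }
    NCryptFreeObject(hProv);
    return st;
}
```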

These devices also provide, to varying degrees, protection from physical, timing and other more advanced attacks.

Well, that is my summary of the core key protection schemes available in Windows. Most operating systems have similar mechanisms; Windows just has superior documentation and answers each of these problems in “one logical place” rather than across many different libraries from different authors.

Ryan

The weaknesses of fingerprint based authentication

I love security people. We’re paranoid, we’re fun, we think outside of the box, but we are also human and oftentimes get distracted by dogma and shiny objects. I think this happens a lot when talking about things like fingerprint-based authentication.

The technology has a very Big Brother feel to it; after all, most people only ever get fingerprinted when registering with the government – hopefully because you want to go on a trip, not because you’re being sent up the river.

Unfortunately, attackers don’t usually have this problem. Their goals are more specific: they want in, and if they are any good they quickly identify when a path they are pursuing isn’t going to be effective and move on to the next attack vector until they are in or have exhausted all their options.

When we look objectively at a fingerprint-based authentication solution like the one being provided by Apple, what are the weaknesses an attacker is going to go after?

First, it’s useful to understand how these systems usually work. For those of you who have not read my last post, here is a quick recap:

A picture is taken of the finger, the picture is converted into the set of features that are extractable from it, and those features are stored in something called a template. On use, this process is repeated and the features are compared; if enough match, it must be you. Since it’s you, your password or PIN is released to a process to use it on your behalf.
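In deliberately simplified pseudocode (real matchers use much richer feature representations, such as minutiae, and carefully tuned thresholds; the numbers here are made up):

```c
#include <stdbool.h>
#include <stddef.h>

#define FEATURES  32   // assumed size of the feature vector
#define THRESHOLD 24   // assumed: how many features must agree

typedef struct { unsigned char feature[FEATURES]; } Template;

// Fuzzy comparison: no two readings of the same finger are identical,
// so "enough" agreeing features counts as a match.
bool template_match(const Template *enrolled, const Template *candidate)
{
    size_t agree = 0;
    for (size_t i = 0; i < FEATURES; i++)
        if (enrolled->feature[i] == candidate->feature[i])
            agree++;
    return agree >= THRESHOLD;
}

// On a successful match the system releases the stored password or PIN
// to the requesting process; protecting that stored secret, and this
// code path, is where the real security work lies.
```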

The attacker here starts at the tail end of this process. He sees that in the end we are just talking about a password or PIN – if this biometric mechanism is just an option and we can still use the password or PIN, we have not raised the security bar.

But what if the biometric is releasing access to a key or a strong password instead of your regular old password or PIN? Then the attacker would probably start to ask the question: how strong is a fingerprint compared to a password? Here is a good post summarizing the effective strength of a fingerprint-based biometric – long story short, it’s about the same as a six-character password.

But wait, why go through the front door if the backdoor is open? For the attacker to execute this attack they have to have physical access to the device, right? And the first rule of computer security is that if the attacker has physical access to your device, it’s not your device any longer.

With the device in hand, some other things the attacker would want to know are:

  • How does the matching work? Is it done in software or hardware?
  • What about where the template is stored? How is access to read and write the template accomplished?
  • How is the template secured at rest? Is it readable? Can it be modified?
  • What about where the password and PIN are stored? Are they readable? Can they be modified?
  • Does all of this happen in a single piece of hardware, or is it shared across multiple devices?
  • If it’s multiple devices, how do they authenticate each other, if at all?
  • How is access to the hardware doing the work authenticated? Is the software authenticated in any way?
  • How about the integrity of the operating system? Is there some mechanism that provides guarantees about its running state?

This isn’t an exhaustive list, but it gives you an idea of the problems here. They are core operating system problems, which means that if you use this capability on a jailbroken device you are probably reducing your security below where it would have been otherwise.

As you can probably see, it’s likely that the security of this solution isn’t going to be based on how good the sensor is at detecting fakes but on how the overall system itself was architected.

So what do I think about the use of biometrics given the issues discussed here? I actually believe it’s possible to design, build and deploy a system based on this technology that is reasonably secure, and that if done right you can measurably improve security for the user.

It doesn’t need to be perfectly secure to do that either; you simply have to get the user to the point where they can use long and strong passwords and not the sort of thing they normally use:

[Image: examples of the kinds of weak passwords users typically pick]

For example, in Windows one might rely on the following:

  1. UEFI Secure Boot ensures the boot loader is authenticated and non-tampered (see the sketch after this list).
  2. Windows Secure Boot ensures only authenticated, non-tampered code is loaded.
  3. Driver signing ensures all drivers are authenticated and non-tampered.
  4. BitLocker ensures the disk is encrypted and cannot be modified outside the scope of the machine.
  5. The TPM ensures the BitLocker keys are secured from the average attacker.
  6. Password policies ensure passwords are sufficiently long and complicated.
  7. Reversibly encrypted passwords are stored using a strong key hierarchy whose root is protected by the TPM.
  8. Fingerprints make it possible for the user to log on efficiently with those long, complicated passwords.
  9. Sensors with reasonable fake-finger and liveness tests are used.
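As an illustration of verifying the first item in that list, here is a sketch that reads the UEFI “SecureBoot” variable via the documented GetFirmwareEnvironmentVariable API; note that the caller needs the SE_SYSTEM_ENVIRONMENT_NAME privilege (an elevated process, possibly after enabling it via AdjustTokenPrivileges):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    BYTE enabled = 0;

    // The GUID is the documented EFI global-variable namespace.
    DWORD got = GetFirmwareEnvironmentVariableW(
        L"SecureBoot",
        L"{8be4df61-93ca-11d2-aa0d-00e098032b8c}",
        &enabled, sizeof(enabled));

    if (got == 0) {
        // ERROR_INVALID_FUNCTION here indicates a legacy BIOS system.
        printf("could not read SecureBoot state: %lu\n", GetLastError());
        return 1;
    }
    printf("Secure Boot is %s\n", enabled ? "enabled" : "disabled");
    return 0;
}
```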

There are still attacks in the above configuration, but the low-hanging fruit has been addressed, and deploying a solution like this likely raises the bar an attacker has to cross to successfully exploit your environment.

Is it as good as other mechanisms (for example, smart cards)? Certainly not. But is it better than what people actually do in production environments today? Yes, very likely.

Is this for everyone? Certainly not, but for many it would be a step up.

I am not familiar enough with the security architecture of the iPhone and OS X to say if the same sorts of protections exist. They may – but without them, one thing is for sure: the “gummy bear attack” is probably the least of your worries.