Generic Identity Device Specification Published

In the PC ecosystem, when a new device technology (say, mass storage) is introduced, there is commonly little standardization: vendors produce proprietary software stacks for interacting with the device, custom hardware interfaces, custom software for managing those devices, and so on.

As a device category picks up in popularity, common platform programming interfaces typically emerge. Sometimes these are commercial in nature, other times they are standards based; in either case the goal of these interfaces is simple: abstract away the variety in the hardware ecosystem so that application developers can write software that runs on any machine, regardless of which vendor manufactured a given device. These abstractions also commonly allow devices to be shared so that multiple applications can use them at the same time.

The next phase in a device's maturity is normally the definition of a class interface for interacting with the hardware; it's this last phase that allows the "no driver needed" story that users like so much. We all reap the rewards of this with flash drives today: plug in the device and it just works (the same is true for display technologies like VGA).

These class drivers commonly cater to the lowest common denominator when it comes to functionality, but vendors are always able to add additional capabilities that are exposed when their drivers and custom software are present (again think about display technologies here as a good example).

There is one device in particular that has not entirely followed this flow that I wanted to talk about, and that is the smart card. The concept emerged in the 1970s, and the first cards went into production in the late 70s. Here we are, roughly 40 years later, and there is still no clear "class driver" for these devices. That is not to say there have not been attempts, some even with success, but those that have had success have been closed-system solutions, for example the PIV interfaces used within the US Federal Government.

In the commercial space, however, none of the class specifications that have been attempted proved truly viable. There are lots of reasons for this, but I am cautiously optimistic that there is now a candidate.

One of the projects I worked on over the last few years was the Generic Identity Device Specification (GIDS), which attempts to build on the success of the government card specifications and extend them to commercial applications as well.

I had the opportunity to work with some great folks on this effort; we all had the same goal: make smart cards as reliable, cost-effective, and accessible as possible. I believe this work does just that.

This specification has now been released by Microsoft under the Microsoft Community Promise, which means it is available royalty-free for anyone to adopt. This is a big win for our partners and, above all, for the customers who will benefit the most from it.

So what does this mean for you? Well, if you're a customer looking to deploy smart cards, you should seriously look for vendors who produce cards that are compliant with this specification: it lowers the cost of deployment, makes it easier for you to multi-source cards, and in the end will likely reduce the overall cost of cards as volumes go up with scale.

For a card manufacturer there are a number of benefits as well. It is possible to develop a GIDS card that is compatible with the PIV card-edge, which means you can develop a single card stock, get it evaluated for FIPS (or whatever other standard applies), and sell it into both commercial and government applications (reducing cost), and these cards will have a great experience in Windows.

If you are a platform or operating system developer, you now have a specification you can use as a baseline for testing card scenarios and a way to (hopefully) support a large number of "real" cards that will exist on the market soon. If this happens, we can reach driver-coverage numbers similar to other device classes.

For those of you not in this segment, this last point is super important: there is so much fragmentation in the market that no solution covers more than a couple percent of cards in-box. If this specification gets adopted, that number can start to look more like other device classes, where coverage is in the 90-percent range.

In any event, I am pleased to see this out there; here's hoping it gets adopted broadly…

How to tell if a volume is BitLocker Protected with TPM and PIN

Today I was presented with a question: how can I tell if the OS volume is protected with BitLocker using a TPM and a PIN?

Since I could not sleep (it's 2:30 AM right now), I figured I would throw together a quick and dirty script that checks for that; it was pretty easy to do.

I started with the documentation for Win32_EncryptableVolume, which I recalled seeing in an unrelated mail at some point. From there I discovered the GetKeyProtectors method, and then did a search on Live for GetKeyProtectors and VBScript scoped to Microsoft.com domains.

This got me a handful of samples; I took one, hacked it up, and came up with this:

' --------------------------------------------------------------------------------
' Get configuration we will need
' --------------------------------------------------------------------------------
' Get the OS System Drive
Set shell = WScript.CreateObject("WScript.Shell")
strDriveLetter = shell.ExpandEnvironmentStrings("%SystemDrive%")

' Target computer name
' Use "." to connect to the local computer
strComputerName = "."

' --------------------------------------------------------------------------------
' Connect to the BitLocker WMI provider class
' --------------------------------------------------------------------------------

strConnectionStr = "winmgmts:" _
    & "{impersonationLevel=impersonate,authenticationLevel=pktPrivacy}!\\" _
    & strComputerName _
    & "\root\cimv2\Security\MicrosoftVolumeEncryption"

On Error Resume Next ' handle permission errors

Set objWMIService = GetObject(strConnectionStr)

If Err.Number <> 0 Then
    WScript.Echo "Failed to connect to the BitLocker interface (Error 0x" & Hex(Err.Number) & ")."
    WScript.Echo "Ensure that you are running with administrative privileges."
    WScript.Quit -1
End If

On Error GoTo 0

' --------------------------------------------------------------------------------
' Get a list of volumes that could be BitLocker protected.
' --------------------------------------------------------------------------------

strQuery = "Select * from Win32_EncryptableVolume where DriveLetter='" & strDriveLetter & "'"
Set colTargetVolumes = objWMIService.ExecQuery(strQuery)

If colTargetVolumes.Count = 0 Then
    WScript.Echo "FAILURE: Unable to find BitLocker-capable drive " & strDriveLetter & " on computer " & strComputerName & "."
    WScript.Quit -1
End If

' There should only be one volume found
For Each objFoundVolume in colTargetVolumes
    Set objVolume = objFoundVolume
Next

' --------------------------------------------------------------------------------
' Now check if it was protected with a TPM and a PIN
' --------------------------------------------------------------------------------

nKeyProtectorTypeIn = 4 ' protector type associated with "TPM And PIN"

nRC = objVolume.GetKeyProtectors(nKeyProtectorTypeIn, aKeyProtectorIDs)

If nRC <> 0 Then
    WScript.Echo "FAILURE: GetKeyProtectors failed with return code 0x" & Hex(nRC)
    WScript.Quit -1
End If

' --------------------------------------------------------------------------------
' Now return what we found.
' --------------------------------------------------------------------------------
On Error Resume Next ' handle the case where no protectors were returned

If IsNull(aKeyProtectorIDs(0)) Then
    WScript.Echo "This volume is NOT TPM and PIN protected."
Else
    WScript.Echo "This volume IS TPM and PIN protected."
End If
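
If you want to try it, save the script under any name you like (checktpmpin.vbs is just a placeholder) and run it from an elevated command prompt with the console script host, for example: cscript //nologo checktpmpin.vbs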


From the time I decided to write the script to the time I had written and tested it was about 15 to 20 minutes; the samples were great and the MSDN documentation was pretty decent too. All this without ever having done anything with BitLocker before; WMI is great stuff.

I may never use this, but if nothing else it was quick and fun to throw together; maybe it will help you.

How to clear the IE SSL Cache from JS

Just ran across a good blog post on how to clear IE's SSL session cache; this is useful if you want to have a "logoff" button on a website that uses mutually authenticated SSL.

Specifically, a new execCommand was added to enable this:

// Clear current credentials
// Requires IE6 SP1 or later
document.execCommand("ClearAuthenticationCache", false);
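
For example, you could wire this up to a logoff control along these lines (just a sketch; the element id and the post-logoff page are made-up names):

// Sketch: clear the cached SSL/TLS client credentials when the user clicks
// a "logoff" button, then send them to a logged-off landing page.
// "logoffButton" and "/loggedoff.htm" are placeholders.
document.getElementById("logoffButton").onclick = function () {
    document.execCommand("ClearAuthenticationCache", false);
    window.location.href = "/loggedoff.htm";
};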

Check it out: http://www.adopenstatic.com/cs/blogs/ken/archive/2005/04/12/14.aspx

What is a wildcard certificate and why are they a bad idea?

Wildcard certificates are SSL/TLS server certificates that, unlike their traditional counterparts, bind an entire domain (or sub-domain) to a single private key. For information on the kinds of wildcards one can specify, see: http://support.microsoft.com/kb/258858.
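
To make the matching behavior concrete, here is a rough sketch in JavaScript (illustrative only, not any particular client's implementation) of the single-label rule most clients apply to a name like *.example.com:

// Illustrative only: "*.example.com" matches "www.example.com" but not
// "example.com" or "a.b.example.com" under the common single-label rule.
function matchesWildcard(pattern, host) {
    if (pattern.indexOf("*.") !== 0) {
        return pattern.toLowerCase() === host.toLowerCase();
    }
    var suffix = pattern.slice(1).toLowerCase();      // ".example.com"
    var h = host.toLowerCase();
    if (h.length <= suffix.length || h.slice(-suffix.length) !== suffix) {
        return false;
    }
    var label = h.slice(0, h.length - suffix.length); // the part "*" must cover
    return label.indexOf(".") === -1;                 // exactly one label
}

// matchesWildcard("*.example.com", "www.example.com") // true
// matchesWildcard("*.example.com", "example.com")     // false
// matchesWildcard("*.example.com", "a.b.example.com") // false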

Why would someone want a certificate like this? Well, in my experience the decision is most often made on cost factors (acquisition, management, and politics); in other words, I would rather buy one certificate for my entire server farm instead of one for each server.

What makes the use of these certificates a bad idea? Well there are several reasons:

First, SSL/TLS typically provides two key properties: authentication of the server and confidentiality of the session. The core value of that pair is the authentication of the server, as you should not even consider submitting data if you don't know who it's going to; and if you never submit data, you don't need to worry about whether the session is encrypted (most of the time).

Let's explore why we care about "who" we're talking to and why that needs to be specific. Certification authorities only issue certificates to entities that agree to something commonly referred to as a subscriber agreement; this agreement obliges the subscriber to follow certain practices, the most basic of which might be not to publish the associated private key on the web or not to host malicious content.

With a wildcard certificate you may or may not know whether the operator of the site content you're experiencing agreed to such an agreement. Moreover, there is a pretty high probability that the private key associated with the certificate exists in multiple locations, which also increases the likelihood that the private key is stored in software rather than hardware, making it a practical possibility that a remote exploit could expose the private key to an attacker.

Then there is the question of who you're talking to. Many sites use sub-domains (for example WordPress), and this is becoming more and more common with the exhaustion of easy-to-remember domains and new community sites that want memorable URLs for their members. Each of these "sub-sites" typically has control over its own content, and knowing who their hosting provider is doesn't tell you anything about them in particular.

Now, I am not suggesting that having a certificate tells you that the content being served from that host is somehow more trustworthy, but knowing that the party you are talking to went to the trouble of proving who they are to you is useful when making a trust decision.

Secondly, there is the practical issue of key management. As I mentioned earlier, you know that the key has (very likely) been shared among multiple hosts, and keys that have been spread around like the dog's breakfast are much less trustworthy than ones that have never been shared.

If these things are so bad, why did they get developed? Well, there are a few reasons; certainly cost was one of them, but I actually believe it was done in the hope of enabling intermediary SSL/TLS accelerators to service multiple sites. The thing is that these devices have always been able to handle different certificates for each of the hosts they secure, so this would have just been an excuse.

In my opinion all browsers should have a setting that allows users to disable the wildcard behavior; in fact, I would go so far as to say that this should be the default.

There is another case similar to the wildcard certificate: it is possible to bind multiple identities to a single key pair by putting multiple Subject Alternative Names in a certificate. In other words, I can create a certificate that binds 100 host names to a single key. This still has many of the problems of a wildcard certificate, and I would argue that this case should also be covered by any setting that disables ambiguously bound certificates.
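
If you are curious how many names a given server's certificate binds, you can look for yourself; here is a quick sketch using Node.js (the host name is just an example):

// Sketch: connect to a server and print the names its certificate binds.
// The subjectaltname field is a comma-separated list such as
// "DNS:example.com, DNS:www.example.com, ...".
var tls = require("tls");
var host = "www.example.com"; // substitute the host you care about

var socket = tls.connect(443, host, { servername: host }, function () {
    var cert = socket.getPeerCertificate();
    console.log(cert.subject);        // who the certificate was issued to
    console.log(cert.subjectaltname); // every DNS name bound to this one key pair
    socket.end();
});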

Alun Jones has a good blog post from a while ago on this topic that is worth a read too.

Asking the Drunk if They Are Drunk

For over a decade there has been a slow move towards building networks where only authenticated users can get on (wireless networks are a great example of this). This later moved towards building networks where only authenticated users on managed machines can get on, and the most recent change to this trend has been to add checks that also require these machines to meet a baseline configuration, giving network administrators some idea of the risks they are exposed to by letting these machines onto their networks.

This all makes sense; after all, machines that do not have up-to-date antivirus or the latest patches certainly represent more risk to the network than those that do. A great example of this is SQL Slammer: even today, five years after the patch for this vulnerability, it is still negatively affecting customer environments.

Analysts have given solutions in this space the label Network Admission Control (NAC). These systems rely on two basic models: one where the behavior of the host is externally monitored and its access is limited if it behaves out of line with the network's policy, and another where the host makes claims about its configuration that are then evaluated to decide whether the host conforms to the network's policy and whether its access should be limited.

Both of these models are examples of what I call "asking the drunk if he is drunk." I say this because in both cases the client controls whether it displays a behavior or produces a claim that will result in access being granted or restricted (I used to work with a security consultant who would always say "give me your rules and I will comply with them"; this is an example of just that - warning, PDF link). The core difference between the two approaches is how rich the information available for policy decisions is.

The problem with this is that if we are to deploy solutions like this, we must also accept that it is possible for malware (like rootkits or other malicious host software) to change the behavior of the host so that it isn't "caught" by these checks.

If that's really the case, do these systems still have value? I would argue they do: they give systems administrators something they have never really had before, insight into their network's risk profile (how many people on my network do not have this patch, are not running antivirus, or have out-of-date signatures, etc.).

There is other value too. In general users actually "want to do the right thing™" but they don't necessarily know what that means, and these systems help them do just that. In addition, these systems often implement concepts of "automatic remediation," so you can remove the user's obligation to do anything and instead just fix things for them without their interaction.

These reasons underscore why, even in computing, it's important to "pull the drunk over and ask him to walk the line." But does this mean we should accept the status quo? I would say no; although it will never be possible for a running system to evaluate its own state without potentially being fooled, there are lots of things we can do to make the host itself more trustworthy.

As an example, many of the technologies in Vista do just that (and I think Symantec would agree); some of the technologies in Vista that help here today include:

  • BitLocker – Protects the host from offline attacks on trusted software components and cryptographic keys.
  • Secure Startup – Increases the trustworthiness of the OS by checking that core components are authentic as part of the startup sequence.
  • Code Integrity – Increases the trustworthiness of the running state of the OS by checking that components are authentic before they are loaded.
  • UAC – Reduces the negative impact that can be caused by careless users and/or malware by giving these processes a restricted token.
  • ASLR – Makes it more difficult for malware to attack running code by randomizing where code is located in memory.
  • DEP – Makes it more difficult for malicious code to get and keep running.

This represents just the beginning, though; future versions of these conditional network access solutions will also leverage hardware that runs before the operating system (the TPM) to verify that the components implementing the above have not been tampered with before the operating system is even invoked.

Later systems will even be able to use virtual machines to add even more protection from these sorts of attacks. Again, that's not to say that when we get there we will have made these systems foolproof; it's just not possible (my friend Cem always has great analogies; one of my favorites is "security is often an exercise in re-arranging deck chairs on a sinking ship," and this problem is an example of just that). But that doesn't mean there is no value in raising the bar, and I think the industry is well on its way to doing just that.

As a side note, we have deployed the Microsoft solution in this space (Network Access Protection) to over 75,000 hosts over the last few months; pretty cool if you ask me.

Understanding Server Gated Cryptography

Chris Baily and I wrote this paper back in 2004, and for various reasons we never got around to publishing it. While looking through my unpublished articles I saw this and thought it might still have some value, so I am publishing it here.

Introduction
If you have ever looked into purchasing a certificate to secure your Web site, you’ve probably heard of Server Gated Cryptography (SGC). If you have ever wondered what it is and how it affects your Web site, this is the article for you.

Let’s begin with why we even bother with SSL and TLS. Now, most people know that SSL and TLS are the technologies that enable the “lock” icon in browsers. This icon signifies that the session is “secure” to users, but what does this really mean?

Name Authenticity – The client can rely on the fact that it is communicating with the server represented in the host name in the Web address.

Confidentiality – The information being sent between the client and your SSL- and TLS-enabled server is encrypted.

These are all great characteristics, but there's more to them than may be apparent on the surface.

Let's explore each of these properties and what makes them factual.

Name Authenticity
The SSL and TLS protocols are based on the premise that each server or cluster of servers that represent a particular host name has a secret called a private key. The operative word here is private. If the secrecy of this key is compromised, an attacker could spoof the server’s identity or even decrypt the data being sent to the server.

Having a private key isn’t enough—you also need the corresponding public key. That key, along with the name of the server, is wrapped up or bound together into a certificate. A certificate asserts the name of the server in a way that is provable. Why is having a certificate important?

Consider the HTTP protocol and the use of the Domain Name System (DNS) resolution protocol to resolve the IP address of the server being communicated with. Because DNS is a protocol that was designed without a security subsystem, DNS can suffer from a number of network-based attacks. For example, in DNS there is a simple man-in-the-middle attack where an attacker intercepts a request for the IP address that is associated with a server. The attacker then returns the IP address for their server instead of the one originally requested. When that server hosts a web site, the attacker can then duplicate the look and feel of the Web site in question and leverage the user's trust of that Web site to gather confidential information from that user.

Without the SSL and TLS protocols, the user wouldn't have known that this attack had taken place. By using SSL and TLS in the HTTP protocol, however, we can detect these types of attacks and prevent connections to fraudulent servers. SSL and TLS use certificates to enable this. The client and the server use a negotiation process to prove that the server has the private key associated with the certificate presented by the server, that the server certificate was issued by a certificate authority trusted by the client, and that the server certificate is associated with the same server that they are communicating with.

What is a certificate authority? A certificate authority is a third party that verifies the identities of individuals and their affiliations with the organizations that they claim to represent. They do this by having strong authentication and operational practices for themselves and their subscribers.

Confidentiality
The confidentiality offered by SSL and TLS is primarily the result of two assumptions in the protocol—the secrecy of the private key possessed by the Web server, and the strength of the cryptographic algorithms and keys that are used to protect data as it passes between the client and the server.

Although the identity of the server is authenticated using its private key and associated certificate, the data that is passed between the client and the server is encrypted using secret keys (symmetric keys) that are exchanged as part of the SSL and TLS protocol.

A number of cryptographic algorithms are based on secret keys. The strength of these types of algorithms is measured in part by the length of the secret keys that they can use to encrypt data. These algorithms provide varying levels of security. The accepted standard is that algorithms that encrypt data with keys that are 128 bits or longer are strong, while anything protected with shorter keys is considered weak. This is where SGC has an important role.

What is Server Gated Cryptography?

SGC is a (software) technology from Microsoft that allows U.S. vendors to ship software outside of the U.S. to companies that need strong cryptography for session protection—not for the encryption of the data to be persisted. This technology requires digital certificates.

The U.S. government at one time restricted U.S. vendors from offering strong cryptography to countries outside North America. In fact, strong cryptography was considered to be munitions by what is now known as the Bureau of Industry and Security (Department of Commerce, U.S. Government). Despite this, we still operated in a global economy. The need for secure communications with parties outside the U.S. still existed. This is why SGC was developed.

To satisfy the requirement for security, the software vendors who supported cryptography had to ship two different versions of their software—export and nonexport.

To get permission to export this software outside of the U.S., these vendors had to implement constraints to prevent entities considered to be hostile from getting these more secure sessions. In the case of SSL/TLS, this was done by requiring that the authorities that issued certificates sign contracts agreeing to issue SGC-capable certificates only to authorized users such as financial institutions. Without a certificate from one of these SGC-enabled certificate authorities, clients without direct support for strong cryptography were only protected by 40- or 56-bit security, which was considered weak even during the era of export restrictions.

Microsoft and other software vendors stopped shipping nonexport or weak encryption versions of their software shortly after the U.S. lifted its export restrictions in early 2000.

The Waning Need for Server Gated Cryptography

Today, almost all Web software by default supports strong (128-bit or greater) SSL without the use of SGC. The primary value of SGC (ensuring a strongly encrypted SSL session for clients that support only weak cryptography) has waned. The importance of having a strongly encrypted SSL session has not. Advances in computing power, along with more sophisticated attacks, have made the adoption of 128-bit security more important than ever. RSA Laboratories estimates that an arbitrary 56-bit symmetric key can be broken by an attacker in less than 5 minutes. Additionally, recent improvements in specialized factoring hardware have changed the landscape when it comes to the security offered by small RSA keys as well.

As of January 2004, an estimated 85% of browsers in use were shipped with strong encryption by default [IE 5.5, IE 6.0, Mozilla 1, Opera 7.0, and Safari]. Additionally, conservative figures would suggest that 10% of the platforms that didn’t support strong cryptography have been upgraded, and the remaining 5% are rapidly being upgraded as well.

This data shows that only the browsers released before the export restrictions were lifted (2000) don't support strong cryptography out of the box. These browsers also have known security issues. For example, Internet Explorer 5.01, released in 1998, was the last browser released by Microsoft that didn't support strong cryptography by default. This version of the browser had sixteen critical updates, and each one of these updates may have fixed several issues. Security issues are not limited to Internet Explorer. For example, Netscape Navigator has also had numerous critical issues fixed since it was originally released. These older browsers don't support the newer revisions of the SSL and TLS protocols that were developed to address security vulnerabilities in these protocols. This alone may be adequate justification for requiring that older clients update their browsers.

Although clients play a significant role in the establishment of secure communications with servers, administrators can take small steps to help enable secure communications with their customers. Specifically, to significantly reduce the risks of downgrade attacks, they can configure their servers to use only strong cryptographic suites and the latest versions of SSL (TLS).
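
As a rough illustration of what that looks like in practice, here is a sketch of an HTTPS server configured to refuse older protocol versions and weak suites (Node.js is used purely as an example; the certificate and key paths are placeholders):

// Sketch: an HTTPS server that only accepts newer TLS versions and a short
// list of strong cipher suites. Paths and port are placeholders.
var https = require("https");
var fs = require("fs");

var options = {
    key: fs.readFileSync("server-key.pem"),   // placeholder path
    cert: fs.readFileSync("server-cert.pem"), // placeholder path
    minVersion: "TLSv1.2",                    // refuse clients that only speak older protocol versions
    ciphers: "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384"
};

https.createServer(options, function (req, res) {
    res.writeHead(200);
    res.end("hello over a strongly protected session\n");
}).listen(443);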

SGC can help strengthen the security of the SSL session. SGC does this by increasing the strength of the encryption used for clients running on old browsers. This objective is defeated if a potential attacker can exploit a security weakness in the browser or the protocol that is not related to encryption strength. Users who want stronger security must keep their software up to date and, if possible, upgrade to the latest browser versions (see sidebar) so as to benefit from the security improvements offered in these more recent versions.

Summary
Encouraging clients to migrate to more recent browsers that are capable of supporting the more modern security protocols and strong cryptography will also address known security vulnerabilities in these clients. As a result, users will have a more secure and reliable Web experience. Continuing to depend solely on SGC provides only a feeling of security rather than a more secure environment if these other issues are not also addressed.