Category Archives: Security

Using OpenSSL to create a test Qualified Subordination PKI hierarchy

The other day I posted about “Least Privilege and Subordinate Certificate Authorities”, which talked about how you can delegate only a limited set of rights to a subordinate CA. I thought you might find the set of configuration files and batch files I put together to test these scenarios useful, so here you go.

I threw this together on a Windows machine that had the Shining Light OpenSSL distribution on it. It includes two batch files:

  • CleanPKI.bat – Remove all generated content
  • MakePKI.bat – Make a new PKI

Then there are the OpenSSL configuration files; most of this is straightforward, but for the parts that are not, check out the OpenSSL documentation.
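To give you a sense of what is in them, below is a minimal sketch of the kind of extensions section the configuration files use to constrain a subordinate CA; the section name, domain names and key usages here are illustrative rather than an exact excerpt from the download.

# Hypothetical extensions section for a name-constrained subordinate CA,
# referenced with "-extensions v3_constrained_subca" when signing the sub-CA request
[ v3_constrained_subca ]
basicConstraints       = critical, CA:TRUE, pathlen:0
keyUsage               = critical, keyCertSign, cRLSign
extendedKeyUsage       = serverAuth, clientAuth
nameConstraints        = critical, permitted;DNS:example.com, permitted;email:example.com
subjectKeyIdentifier   = hash
authorityKeyIdentifier = keyid:always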

Least Privilege and Subordinate Certificate Authorities

One of the most fundamental design principles when designing a secure system is that of least privilege; in the case of CAs, one scenario where this can be applied is the subordination of another CA.

The application of this concept in this scenario is referred to as qualified subordination. It was first formalized in the IETF standards for X.509 in 1999 in RFC 2459 through the introduction of the Basic Constraints (see section 4.2.1.10), Name Constraints (see section 4.2.1.11) and Policy Constraints (see section 4.2.1.12) extensions.

Unfortunately, broad product support did not begin to emerge until RFC 3280 was released in 2002.

The development and deployment of these concepts was primarily driven by the US Federal Government’s deployment of PKI as a foundational technology for their security infrastructure. One of the many benefits of the government adopting these concepts was that NIST published a robust Test Suite to validate conformance with their interpretations of RFC 3280 which included extensive coverage of Qualified Subordination.

When these concepts are used together a Root CA is able to delegate the right to issue certificates to another CA while restricting them from creating other CAs or issuing certificates for names they are not authoritative for.

The Federal Bridge made extensive use of these concepts; it was able to do so through the mandate to use software that met the published guidelines. Adoption on the Internet, however, took much longer given the historically slow adoption rates for browsers; thankfully that has changed, and there is now sufficient browser support to deploy these restrictions.

In addition, Microsoft introduced another mechanism to restrict the scope for which a CA is trusted; they did this by treating the Extended Key Usage extension (see section 4.2.1.13) as a means to delegate only certain issuance capabilities to a Certificate Authority.

It accomplishes this by using the same logic specified in RFC 3280 for Certificate Policies (see section 4.2.1.5); more specifically, it assumes that when an issuer lists an Extended Key Usage (such as the one for S/MIME encryption) in a CA certificate, the issuer intended to restrict the usage of that CA to the EKUs present in the certificate. A simplified version of this logic was also adopted by OpenSSL for SSL certificates.
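If you want to see the OpenSSL behavior for yourself, one rough way is to build a chain where the issuing CA’s certificate carries only the serverAuth EKU and then ask OpenSSL to verify a leaf for that purpose; the file names below are placeholders.

# Verify a server certificate for the SSL server purpose; the chain should fail
# this check if an issuing CA's certificate carries an EKU that excludes serverAuth
openssl verify -purpose sslserver -CAfile root.pem -untrusted subca.pem server.pem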

Given that the Microsoft behavior is more restrictive than the behavior specified in RFC 3280, it does not break applications that do not support it, and it allows a CA to restrict behavior even further for clients that use the Windows certificate validation logic (nearly 70% of the deployed browsers today).

 

Client Compatibility

Most browsers and email clients support these concepts; unfortunately, however, not all of them support Name Constraints.

Despite that, they all do support honoring the RFC 3280 behavior for critical extensions (see section 4.2), which states:

A certificate-using system MUST reject the certificate if it encounters a critical extension it does not recognize

This means that by marking the Name Constraints extension critical, implementations that do not support the concept will “fail closed”. As a result it can be used as an effective way to technically enforce that CAs are not trusted for names they are not authoritative for; it also means there will be cases where a CA may be authoritative but clients can’t trust the certificates it issues.

This issue can be addressed by not marking the extension critical; when this is done, the clients that understand Name Constraints will continue to honor the policies expressed in it and those that do not will simply ignore the extension.
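In OpenSSL configuration terms the difference is just the presence of the critical keyword; a quick illustrative example (domain name made up):

# Fail-closed: clients that do not understand Name Constraints reject the chain
nameConstraints = critical, permitted;DNS:example.com

# Fail-open: clients that do not understand Name Constraints ignore the extension
nameConstraints = permitted;DNS:example.com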

This is of course a trade-off of security in exchange for compatibility; with that said, it is one with far more positive trade-offs than negative ones.

Specifically, this approach means users of clients that do not support the extension are no worse off than they are without its use, and those with support get additional protection in cases where a subordinate CA has been compromised or is willfully issuing certificates that it is not authoritative for.

With that said, support for Name Constraints is actually quite good as the following table illustrates.

 

Client | Honors Criticality | Supports Basic Constraints | Supports DNS Name Constraints | Supports RFC 822 Name Constraints | Supports Policy Constraints | Supports Constrained EKU | Successfully Enforces
IE [1] | Yes | Yes | Yes | N/A | Yes | Yes | Yes (Open)
Outlook [1] | Yes | Yes | Yes | Yes | Yes | Yes | Yes (Open)
Firefox [1] | Yes | Yes | Yes | Yes | Yes | No | Yes (Open)
Thunderbird [1] | Yes | Yes | Yes | Yes | Yes | Yes | Yes (Open)
Opera [1] | Yes | Yes | No [2] | No [2] | No [2] | Yes (SSL only) [3] | Yes (Closed)
Windows / Safari [1] | Yes | Yes | Yes | Yes | Yes | Yes | Yes (Open)
OSX / Safari [4] | Yes | Yes | No [5] | No [5] | No [5] | No | Yes (Closed)

 

What this table shows is:

  1. It is possible to rely on the Name Constraints extension as an effective enforcement technique if the extension is marked as critical.
  2. It is possible to rely on the Basic Constraints extension as an effective enforcement technique.
  3. In the case of Safari and Opera, this success is due to these browsers honoring the semantics for critical extensions rather than understanding the Name Constraints extension.

For customers this means that if you must interoperate with Opera or Safari (yes, even on iPad and iPhone), the use of a certificate with a critical Name Constraints extension in its chain will result in the certificate chain being treated as invalid.

Thankfully, according to StatCounter these represent less than 6% of all browsers on the Internet, and anecdotal evidence shows almost no use in the enterprise.

With that said, most environments’ business requirements will not allow them to fail even for such a small number; in these environments, deploying Name Constraints as a non-critical extension will be required. Not 100% of the security benefits are realized with this approach, but it does significantly reduce the risk.

In such cases it is recommended that, once the remaining legacy clients that do not support Name Constraints have been replaced with more recent versions that do, the CAs be re-issued with the extension marked as critical.

 


[1] Tests on Windows were completed with Windows 7, IE 9.0, Outlook 2007, Safari 5.05, Opera 11.61, Firefox/Thunderbird 10.0.2.

[2] OpenSSL supports name constraints for both name forms as well as policy constraints; Opera has chosen not to enable these capabilities until demand is present. This work was done in OpenSSL in 2008 as part of a contract to Google.

[3] Opera uses OpenSSL, which supports restricting a CA from issuing valid SSL server certificates if its parent did not place the SSL EKU in its certificate.

[4] Tests on OSX were completed with Lion and Safari 5.05

[5] Safari on the Mac uses the PKITS tests, so they are aware of the deficiency in their validation logic; they have not publicly stated they will support these capabilities, but we expect support in the future.

 

Server Compatibility

If you have a server that accepts or validates client certificates, you will also care about its support for validating certificates that contain these constraints.

Each environment is a little different and the number of server choices one sees in these cases feels limitless at times, as such we are only able to provide more abstract guidance here.

In the case of Windows servers such as IIS, the important factor is which version of Windows you are running, as the support for PKI is built into the Windows platform. Applications designed for Windows are most commonly built on this platform, and this is always the case for Microsoft applications.

The concepts discussed here have all been supported since Windows Server 2003, though there were significant improvements in the 2008 release.

The net of the above is that if your server platform is built on this API, you gain support for these concepts; on other platforms it of course depends on which libraries they chose to use for certificate validation.

 

SSL/TLS Deployment Best Practices

SSL/TLS seems simple: you go to a CA to prove who you are, they give you a credential, you install it on your server, you turn on SSL, and you are done.

Unfortunately there is more to it than that. I recently had an opportunity to contribute to a Best Practices Guide (PDF) that aims to provide clear and concise instructions to help administrators understand how to deploy it securely.
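One quick way to sanity-check a deployment against guidance like this is with the OpenSSL command-line client; a couple of illustrative checks (host name made up) are below.

# Show the certificate chain the server actually sends to clients
openssl s_client -connect www.example.com:443 -showcerts

# Confirm that a legacy protocol such as SSLv2 is refused; the handshake should fail
# (only works if your OpenSSL build includes SSLv2 support)
openssl s_client -connect www.example.com:443 -ssl2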

The intention is to work on an advanced version of this document in the future that covers more details and advanced topics as well (think OCSP Stapling, SPDY, etc).

I hope you find it useful.

How to mitigate the risk of the DigiNotar *.google.com SSL certificate

Given the recent news relating to DigiNotar issuing a certificate to an entity claiming to represent Google that has turned out to be malicious, it’s probably most appropriate to cease trusting the DigiNotar root until the specifics of the issue have been identified.

As a practical matter they do little work outside the EU and are a very small player, so your experience on the Internet is not likely to be diminished as a result of not trusting them anyway.

That raises the question: how do you do that? On the surface you might think that what you need to do is remove the DigiNotar root from your root store, and in the case of Firefox and Opera that would do it (at least until they next patch and it gets added back in, that is unless they nix it too).

In the case of IE and Chrome (which uses the Windows trust anchors) this is insufficient; there is a feature called “Automatic Root Update” (http://netsekure.org/2011/04/automatic-ca-root-certificate-updates-on-windows) that maintains the roots for you based on a policy that Microsoft maintains. When it’s enabled, Windows will check with Windows Update as part of certificate validation to see if it should add a root to enable the path to build. You do not have to use this capability, but I would not recommend disabling it unless you are PKI savvy.

If that’s the path you follow, be sure to delete the root certificate from your Computer Account’s Third-Party Certification Authorities store as well (if it happens not to be there, don’t fret; that just means you have never encountered a certificate from them).

You also might want to check out a couple posts that Nasko has done relating to managing your own certificate store like this one (http://netsekure.org/2010/05/results-after-30-days-of-almost-no-trusted-cas/) and this one (http://netsekure.org/page/2/).

For everyone else all I would recommend is:

  1. Download the DigiNotar Root Certificate (http://www.diginotar.nl/files/Rootcertificaten/DigiNotar%20root%20CA2007.crt)
  2. Run mmc.exe
  3. Add the Certificates snap-in
  4. Target it at the Computer Account certificate stores
  5. Add the DigiNotar Root Certificate to the “Untrusted Certificates” store
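If you prefer the command line, the same thing can be done with certutil from an elevated prompt; this assumes you saved the root certificate as diginotar-root.crt.

REM Add the DigiNotar root to the machine's Untrusted Certificates (Disallowed) store
certutil -addstore Disallowed diginotar-root.crt

REM Confirm it is now in the store
certutil -store Disallowed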

Now, this is a bit more draconian than you may strictly need, but until it’s clear whether it was the root that was compromised, a subordinate, or their vetting practices, the right thing to do is not to trust any certificates from them.

As I said before this is not likely to have any negative effects on your experience on the web and it will protect you from the attacks this issue represents.

I should note that I am assuming (and you know what they say about assumptions) that this is the only DigiNotar root trusted by the browsers; I checked a few sources and it seems like that is the case, but all of the CA trust programs do a poor job of publishing this stuff these days. When I ran the Windows program we maintained a KB article with the trusted CAs and their certificates in it; that doesn’t seem to be the case any longer, sigh.

Good luck,

Ryan

P.S. Nasko has also done a good post on how to manage the root stores on Windows you can find it here (http://netsekure.org/2010/04/how-to-disable-trusted-root-certificates/)

P.P.S. I have verified that Chrome will flag these certificates for sites that have been pinned (only Chrome, and only for pinned sites). IMHO this is good, but you still need to remove the root to be safe in the other cases. (see: http://www.breitbart.com/article.php?id=CNG.f17dd620575edb02954a7f8f0971f63b.4c1&show_article=1)

P.P.P.S. Looks like all 3 major browsers have now untrusted DigiNotar:

  • http://www.microsoft.com/technet/security/advisory/2607712.mspx?pubDate=2011-08-29
  • http://blog.mozilla.com/security/2011/08/29/fraudulent-google-com-certificate/
  • http://code.google.com/p/chromium/issues/detail?id=94673

The Contrition of a Security Practitioner

The Encarta World Dictionary says that Contrition is “the deep and genuine feelings of guilt and remorse”. Having been involved in information security for 20 years now, I think I can sincerely say that many security practitioners would say this is how they feel about the early days of their careers.

Why, you ask? Well, in my case, I started my career doing work for large financial institutions and governments. Back then these sorts of customers often had a “security at any price” mantra. While one would need to assess the risk of a system to secure it, these sorts of customers would also plan to mitigate as many of the identified risks as possible.

For these customers this was not necessarily a bad approach, but that had more to do with what was at risk than it did with the approach being a sound one.

Today the world is a different place; security is something that even the smallest businesses need to consider. This change did not occur overnight. It was gradual and I guess this is where the contrition comes in.

You see, many applied the same approaches that worked with those financial and government customers to Fortune 500 and later Fortune 1000 companies. While in some cases this was appropriate, in most cases it was not.

The modern security practitioner needs to take a more holistic look at the business and platform they are servicing to understand its schedule and technological needs along with what the immediate business risks are.

Beyond that, the breadth of the role has changed and expanded. Security practitioners are now commonly responsible for Compliance, Reliability and Privacy, as well.

This puts the security practitioner in an interesting position; with this more complete view they can now help improve:

  • time to market, by recommending solutions that are risk-appropriate for the business;
  • engineering efficiencies, by identifying areas where work is being done inefficiently;
  • systems and processes, by identifying gaps and potential failure points that can negatively impact the business;
  • how teams allocate their scarce resources, by identifying opportunities where they’ll do the most good, based on risk vs. return.

This represents a significant shift from a decade or two ago, and requires the security practitioner to no longer simply be an outside expert but become part of the development team they support.

This is one of the reasons the Security Champion model is used in many teams here at Microsoft. While it has its challenges, as a member of the feature team a champion has the opportunity to have and share these more holistic insights as I called out above.

A good example of this is the application of cryptography to solve business problems. Cryptography is a powerful tool, but it’s often misapplied, introducing fragility and operational overhead that can be avoided; I think this is best summed up by this quotation:

If you think cryptography will solve your problem, then you don’t understand cryptography… and you don’t understand your problem. — Bruce Schneier

So, my ask of you: as an engineering manager, have a formal Security Assurance program for your team; as a software engineer, incorporate your security specialists early and often. They either have direct experience in the areas I discussed here or are in a position to bring those resources to your aid … not only to help you secure your offerings, but to do so in record time as well.

Generic Identity Device Specification Published

In the PC ecosystem, when a new device technology (say mass storage) is introduced, there is commonly little standardization: vendors produce proprietary software stacks for interacting with that device, they have custom hardware interfaces for interacting with the device, custom software for managing those devices, and so on.

As a device picks up in popularity, common platform programming interfaces typically emerge; sometimes these are commercial in nature, other times they are standards based. In either case the goals of these interfaces are simple: abstract away the variety in the hardware ecosystem for the application developer, allowing them to write software that can run on any machine regardless of which vendor manufactured a given device. These abstractions also commonly allow the sharing of devices so that multiple applications can use them at the same time.

The next phase in a device’s maturity is normally the definition of a class interface for interacting with the hardware; it is this last phase that enables the “no driver needed” story that users like so much. We all reap the rewards of this with flash drives today: plug in the device and it just works (the same is true for display technologies like VGA).

These class drivers commonly cater to the lowest common denominator when it comes to functionality, but vendors are always able to add capabilities that are exposed when their drivers and custom software are present (again, think about display technologies here as a good example).

There is one device in particular that has not entirely followed this flow that I wanted to talk about, and that is smart cards. As a concept they emerged in the 1970s, and the first cards went into production in the late 70s. Here we are 40 years later and there is no clear “class driver” for these devices. That is not to say there have not been attempts, some even with success, but those that have had success have been closed-system solutions, for example the PIV interfaces used within the US Federal Government.

In the commercial space, however, none of the class specifications that have been attempted were really viable; there are lots of reasons for this, but I am cautiously optimistic that there is now a candidate.

One of the projects I have been working on over the last few years was the Generic Identity Device Specification; it attempts to build on the success of the government card specifications and extend them to commercial applications as well.

I had the opportunity to work with some great folks on this effort; we all had the same goal: make smart cards as reliable, cost-effective and accessible as possible. I believe this work does just that.

This specification has now been released by Microsoft under the Microsoft Community Promise, which means it is available royalty-free for anyone to adopt; this is a big win for our partners and, above all, for the customers who will benefit the most from it.

So what does this mean for you? Well, if you’re a customer looking to deploy smart cards, you should seriously look for vendors who produce cards that are compliant with this specification; it means a lower cost of deployment, makes it easier for you to multi-source cards, and in the end it will likely reduce the overall cost of cards as volumes go up as a function of scale.

For a card manufacturer there are a number of benefits as well. It is possible to develop a GIDS card that is compatible with the PIV card-edge, which means you can develop a single card stock, get it evaluated for FIPS (or whatever other standard), and sell it into commercial or government applications (reducing cost), and these cards will have a great experience in Windows.

If you are a platform or operating system developer, you now have a specification you can use as a baseline for testing card scenarios and a way to (hopefully) support a large number of “real” cards that will exist on the market (soon, I hope); if this happens we can see driver coverage numbers similar to other device classes.

For those of you not in this segment, this last point is super important: there is so much fragmentation in the market that no solution has more than a couple percent of card coverage in-box; if this specification gets adopted, that number can start to look more like other device classes, where coverage is in the 90-percent range.

In any event, I am pleased to see this out there, here’s hoping it gets adopted broadly…

How to tell if a volume is BitLocker protected with TPM and PIN

Today I was presented with a question: how can I tell if the OS volume is protected with BitLocker using a TPM and a PIN?

Since I could not sleep (it’s 2:30 AM right now) I figured I would throw together a quick and dirty script that checks for that; it was pretty easy to do.

I started with the documentation for Win32_EncryptableVolume, which I recall seeing previously in an unrelated mail at some point; from there I discovered the GetKeyProtectors method. I then did a search on Live for GetKeyProtectors and VBSCRIPT scoped to Microsoft.com domains.

This got me a handful of samples; I took one, hacked it up, and came up with this:

' --------------------------------------------------------------------------------
' Get configuration we will need
' --------------------------------------------------------------------------------
' Get the OS system drive (e.g. "C:")
Set shell = WScript.CreateObject("WScript.Shell")
strDriveLetter = shell.ExpandEnvironmentStrings("%SystemDrive%")

' Target computer name
' Use "." to connect to the local computer
strComputerName = "."

' --------------------------------------------------------------------------------
' Connect to the BitLocker WMI provider class
' --------------------------------------------------------------------------------

strConnectionStr = "winmgmts:" _
    & "{impersonationLevel=impersonate,authenticationLevel=pktPrivacy}!\\" _
    & strComputerName _
    & "\root\cimv2\Security\MicrosoftVolumeEncryption"

On Error Resume Next ' handle permission errors

Set objWMIService = GetObject(strConnectionStr)

If Err.Number <> 0 Then
    WScript.Echo "Failed to connect to the BitLocker interface (Error 0x" & Hex(Err.Number) & ")."
    WScript.Echo "Ensure that you are running with administrative privileges."
    WScript.Quit -1
End If

On Error GoTo 0

' --------------------------------------------------------------------------------
' Get the volume that could be BitLocker protected.
' --------------------------------------------------------------------------------

strQuery = "Select * from Win32_EncryptableVolume where DriveLetter='" & strDriveLetter & "'"
Set colTargetVolumes = objWMIService.ExecQuery(strQuery)

If colTargetVolumes.Count = 0 Then
    WScript.Echo "FAILURE: Unable to find BitLocker-capable drive " & strDriveLetter & " on computer " & strComputerName & "."
    WScript.Quit -1
End If

' There should only be one volume found
For Each objFoundVolume in colTargetVolumes
    Set objVolume = objFoundVolume
Next

' --------------------------------------------------------------------------------
' Now check if it was protected with a TPM and a PIN
' --------------------------------------------------------------------------------

nKeyProtectorTypeIn = 4 ' key protector type 4 = "TPM And PIN"

nRC = objVolume.GetKeyProtectors(nKeyProtectorTypeIn, aKeyProtectorIDs)

If nRC <> 0 Then
    WScript.Echo "FAILURE: GetKeyProtectors failed with return code 0x" & Hex(nRC)
    WScript.Quit -1
End If

' --------------------------------------------------------------------------------
' Now return what we found.
' --------------------------------------------------------------------------------
' If no TPM+PIN protector exists the returned array is empty; indexing it errors
' and, with Resume Next, we fall into the NOT-protected branch below.
On Error Resume Next

If IsNull(aKeyProtectorIDs(0)) Then
    WScript.Echo "This volume is NOT TPM and PIN protected."
Else
    WScript.Echo "This volume IS TPM and PIN protected."
End If
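If you want to try it yourself, save the script as something like CheckTpmPin.vbs (the file name is just an example) and run it from an elevated command prompt with the console script host:

cscript //nologo CheckTpmPin.vbs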

 

From the time I decided to write the script to the time I had written and tested it was about 15 to 20 minutes; the samples were great, and the MSDN documentation was pretty decent too. All this without ever having done anything with BitLocker before; WMI is great stuff.

I may never use this, but if nothing else it was quick and fun to throw together; maybe it will help you.

How to clear the IE SSL Cache from JS

Just ran across a good blog post on how to clear IE’s SSL session cache; this is useful if you want to have a “logoff” button on a website that uses mutually authenticated SSL.

Specifically, a new execCommand was added to enable this:

// Clear current credentials
// Requires IE6 SP1 or later
document.execCommand("ClearAuthenticationCache", false)
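If you want to hang that off a logoff link, a small sketch along these lines works; the element id and redirect URL are made up for the example.

// Hypothetical logoff handler; only IE 6 SP1 and later understand this command,
// other browsers will throw or ignore it, so the call is wrapped in try/catch
document.getElementById("logoff").onclick = function () {
    try {
        document.execCommand("ClearAuthenticationCache", false);
    } catch (e) {
        // Not supported; nothing to clear
    }
    window.location = "/loggedout.html";
};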

Check it out: http://www.adopenstatic.com/cs/blogs/ken/archive/2005/04/12/14.aspx

What is a wildcard certificate and why are they a bad idea?

Wildcard certificates are SSL/TLS server certificates that, unlike their traditional counterparts, bind an entire domain (or sub-domain) to a single private key; for information on the kinds of wildcards one can specify see: http://support.microsoft.com/kb/258858.

Why would someone want to have a certificate like this? Well, in my experience the decision is most often made on cost factors (acquisition, management, and politics); in other words, I would rather buy one certificate for my entire server farm instead of one for each server.

What makes the use of these certificates a bad idea? Well there are several reasons:

First, SSL/TLS typically provides two key properties: authentication of the server and confidentiality of the session. The core value of that pair is the authentication of the server, as you should not even consider submitting data if you don’t know who it is going to, and if you never submit data you don’t need to worry about whether the session is encrypted (most of the time).

Let’s explore why we care about “who” we’re talking to and why that needs to be specific. Certification authorities only issue certificates to entities that agree to something commonly referred to as a subscriber agreement; this agreement obliges the subscriber to follow certain practices, the most basic of which might be not to publish the associated private key on the web or not to host malicious content.

With a wildcard certificate you may or may not know whether the site whose content you are experiencing agreed to such an agreement; moreover, there is a pretty high probability that the private key associated with the certificate exists in multiple locations. This also increases the likelihood that the private key is in software rather than hardware, making it a practical possibility that a remote exploit could expose the private key to an attacker.

Then there is the question of who you’re talking to. Many sites use sub-domains (for example WordPress); this is becoming more and more common with the exhaustion of easy-to-remember domains and with new community sites that want memorable URLs for their members. Each of these “sub-sites” typically has control over its own content, and knowing who their hosting provider is doesn’t tell you anything about them in particular.

Now, I am not suggesting that having a certificate tells you that the content being served from that host is somehow more trustworthy, but knowing that the party you are talking to went to the trouble to prove who they are to you is useful when making a trust decision.

Secondly, there is the practical issue of key management. As I mentioned earlier, you know that the key has (very likely) been shared amongst multiple hosts, and keys that have been spread around like the dog’s breakfast are much less trustworthy than ones that have never been shared.

If these things are so bad, why did they get developed? Well, there are a few reasons; certainly cost was one of them, but I actually believe it was done in the hope of enabling intermediary SSL/TLS accelerators to service multiple sites. The thing is, these devices have always been able to handle different certificates for each of the hosts they secure, so this would have just been an excuse.

In my opinion all browsers should have a setting that allows users to disable the wildcard behavior; in fact, I would go so far as to say this should be the default.

There is another case similar to the wildcard certificate: it is possible to bind multiple identities to a single key pair by putting multiple Subject Alternative Names in a certificate; in other words, I can create a certificate that binds 100 host names to a single key. This still has many of the problems of a wildcard certificate, and I would argue that this case should also be covered by any setting that disables ambiguously bound certificates.
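For reference, binding many names to one key is as simple as listing them in the Subject Alternative Name extension; an illustrative OpenSSL configuration fragment (host names made up) looks like this:

# One certificate, one key, many identities
subjectAltName = DNS:www.example.com, DNS:mail.example.com, DNS:blog.example.com, DNS:*.users.example.com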

Alun Jones has a good blog post from a while ago on this topic that is worth a read too.

Asking the Drunk if They Are Drunk

For over a decade there has been a slow move towards building networks where only authenticated users can get on (wireless networks are a great example of this). This later moved towards building networks where only authenticated users on managed machines get on, and the most recent change to this trend has been to add checks that also require these machines to meet a baseline configuration, which gives network administrators some idea of what kind of risks they are exposed to by letting these machines on their networks.

This all makes sense; after all, machines that do not have up-to-date antivirus or the latest patches certainly represent more risk to the network than those that do. A great example of this is actually SQL Slammer: even today, five years after the patch for this vulnerability, it is still negatively affecting customer environments.

Analysts have given solutions in this space the label of Network Admissions Control (NAC). These systems rely on two basic models: one where the behavior of the host is externally monitored and its access is limited if it behaves out of norm with the network’s policy, and another where the host makes claims about its configuration that are then evaluated to decide whether the host conforms to the network’s policy and whether access should be limited.

Both of these models are examples of what I call “asking the drunk if he is drunk”. I say this because in both cases the client can control whether it displays a behavior (I used to work with a security consultant who would always say “give me your rules and I will comply with them”; this is an example of just that – warning, PDF link) or produces a claim that will result in access being given or restricted. The core difference between the two approaches is how rich the information available to make policy decisions on is.

The problem with this is that if we are to deploy solutions like this, we must also accept that it is possible for malware (like rootkits or other malicious host software) to change the behavior of the host so that it isn’t “caught” by these checks.

If that’s really the case, do these systems still have value? I would argue they do; you see, they give systems administrators something they have never really had before: insight into their network’s risk profile (how many people on my network do not have this patch, are not running antivirus, have out-of-date signatures, etc.).

There are other values too. In general users actually “want to do the right thing™”, but they don’t necessarily know what that means, and these systems help them do just that. In addition, these systems often implement concepts of “automatic remediation” so that you can remove the obligation on the user to do anything and instead just fix it for them without their interaction.

These reasons underscore why, even in computing, it’s important to “pull the drunk over and ask him to walk the line”. But does this mean we should accept the status quo? I would say no; although it will never be possible for a running system to evaluate its own state without potentially being fooled, there are lots of things we can do to make the host itself more trustworthy.

As an example, many of the technologies in Vista do just that (and I think Symantec would agree); some of the technologies in Vista that help here today include:

  • BitLocker – Protects the host from offline attacks on trusted software components and cryptographic keys.
  • Secure Startup – Increases the trustworthiness of the OS by checking to see that core components are authentic as part of the startup sequence.
  • Code Integrity – Increases the trustworthiness of the running state of the OS by checking that components are authentic before they are loaded.
  • UAC – Reduces the negative impact that can be caused by careless users and/or malware by giving these processes a restricted token.
  • ASLR – Makes it more difficult for malware to attack running code by randomizing where code is running in memory.
  • DEP – Makes it more difficult for malicious code to get and keep running.

This represents just the beginning, though; future versions of these conditional network access solutions will also leverage hardware that runs before the operating system (the TPM) to verify that the components that implement the above have not been tampered with before the operating system is even invoked.

Later systems will even be able to use virtual machines to add even more protection from these sorts of attacks. Again, that’s not to say that when we get there we will have made these systems foolproof; it’s just not possible (my friend Cem always has great analogies, and one of my favorites is “Security is often an exercise of re-arranging deck chairs on a sinking ship”; this problem is an example of just that), but that doesn’t mean there is not value in raising the bar, and I think the industry is well on its way to doing just that.

As a side note, we have deployed the Microsoft solution in this space (Network Access Protection) to over 75,000 hosts over the last few months; pretty cool if you ask me.