The weaknesses of fingerprint-based authentication

I love security people; we're paranoid, we're fun, we think outside of the box, but we are also human and oftentimes get distracted by dogma and shiny objects. I think this happens a lot when talking about things like fingerprint-based authentication.

The technology has a very Big Brother feel to it; after all, most people only ever get fingerprinted when registering with the government – hopefully because you want to go on a trip, not because you're being sent up the river.

Unfortunately, attackers don't usually have this problem; their goals are more specific – they want in, and if they are any good they quickly recognize when a path they are pursuing isn't going to be effective and move on to the next attack vector until they are in or have exhausted their options.

When we look objectively at a fingerprint-based authentication solution like the one being provided by Apple, what are the weaknesses an attacker is going to go after?

First, it's useful to understand how these systems usually work; for those of you who have not read my last post, here is a quick recap:

A picture is taken of the finger, the picture is converted into the set of features that can be extracted from it, and those features are stored in something called a template. On use, this process is repeated and the features are compared; if enough match, it must be you. Since it's you, your password or PIN is released to a process to use it on your behalf.

The attacker starts at the tail end of this process; he sees that in the end we are just talking about a password or PIN – if this biometric mechanism is just an option and we can still use the password or PIN, we have not raised the security bar.

But what if the biometric releases access to a key or a strong password instead of your regular old password or PIN? Then the attacker would probably ask: how strong is a fingerprint compared to a password? Here is a good post summarizing the effective strength of a fingerprint-based biometric – long story short, it's about the same as a six-character password.
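To get a feel for what a comparison like that means, here is a rough back-of-the-envelope sketch in Python. The false accept rate (FAR) below is purely illustrative – real values come from the sensor vendor and are rarely published – so treat this as an example of the method, not as the linked post's math; whether it works out to "about a six-character password" depends entirely on the FAR you assume and how the password was chosen.

import math

# Guessing-entropy comparison between a random password and a biometric.
def password_bits(length, alphabet_size):
    return length * math.log2(alphabet_size)

def biometric_bits(false_accept_rate):
    # An attacker presenting random fingers succeeds with probability ~FAR,
    # so the work factor is comparable to guessing a -log2(FAR)-bit secret.
    return -math.log2(false_accept_rate)

print(password_bits(6, 94))   # ~39 bits for a random 6-char printable password
print(password_bits(6, 26))   # ~28 bits for a random 6-char lowercase password
print(biometric_bits(1e-5))   # ~17 bits for a hypothetical 1-in-100,000 FAR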

But wait, why go through the front door if the backdoor is open? For the attacker to execute this attack they have to have physical access to the device, right? And the first rule of computer security is that if the attacker has physical access to your device, it's not your device any longer.

With the device in hand some other things the attacker would want to know are:

  • How does the matching work – is it done in software or hardware?
  • Where is the template stored? How is access to read and write the template accomplished?
  • How is the template secured at rest? Is it readable? Can it be modified?
  • Where are the password and PIN stored? Are they readable? Can they be modified?
  • Does all of this happen in a single piece of hardware, or is it shared across multiple devices?
  • If it's multiple devices, how do they authenticate each other, if at all?
  • How is access to the hardware doing the work authenticated? Is the software authenticated in any way?
  • What about the integrity of the operating system – is there some mechanism that provides guarantees about its running state?

This isn’t an exhaustive list but gives you an idea of the problems here, they are core operating system problems which means that if you use this capability on a jail-broken device you are probably reducing your security above and beyond where you would have been otherwise.

As you can probably see, it's likely that the security of this solution isn't going to be determined by how good the sensor is at detecting fakes but by how the overall system was architected.

So what do I think about the use of biometrics given the issues discussed here? I actually believe it's possible to design, build, and deploy a reasonably secure system based on this technology, and that if done right you can measurably improve security for the user.

It doesn’t need to be perfectly secure to do that either, you simply have to get the user to the point where he you can use long and strong passwords and not the sort of thing that they normally use:


For example, in Windows one might rely on the following:

  1. UEFI Secure Boot ensures the boot loader is authenticated and non-tampered.
  2. Windows Secure Boot ensures only authenticated, non-tampered code is loaded during boot.
  3. Driver signing ensures all drivers are authenticated and non-tampered.
  4. BitLocker ensures the disk is encrypted and cannot be modified outside the scope of the machine.
  5. The TPM ensures the BitLocker keys are secured from the average attacker.
  6. Password policies ensure passwords are sufficiently long and complicated.
  7. Reversibly encrypted passwords are stored using a strong key hierarchy where the root of the hierarchy is protected by the TPM.
  8. Fingerprints make it possible for the user to log on efficiently with those long, complicated passwords.
  9. Sensors with reasonable fake-finger and liveness tests are used.

There are still attacks possible against the above configuration, but the low-hanging fruit has been addressed, and deploying a solution like this raises the bar an attacker has to cross to successfully exploit your environment.

Is it as good as other mechanisms (for example, smart cards)? Certainly not. But is it better than what people actually do in production environments today? Yes, very likely.

Is this for everyone? Certainly not; but for many it would be a step up.

I am not familiar enough with the security architecture of the iPhone and OS X to say whether the same sort of protections exist – they may – but without them, one thing is for sure: the "gummy bear attack" is probably the least of your worries.

Thoughts on Apple and fingerprint-based biometrics

Yesterday Apple finally confirmed what their acquisition of AuthenTec was all about: integrating fingerprint-based biometrics into the iPhone. This was not exactly a surprise, but it's one thing to know they were going to do it and another to see how they went about it.

Details on the implementation are still a little light and there is a lot of speculation about how they did things, but I have not seen anyone provide a reasonable write-up of how this technology works, what its limitations are, and what value it has.

Let's start with what these things actually do: plain and simple, a fingerprint sensor is a camera. It takes a picture of the structure of your finger; some sensors look just at the surface, some look a little below the surface. Different sensors use optical capture, while others use capacitance and other mechanisms, but regardless, they all take a picture of your finger.

This picture is then processed looking for "minutiae", the little details that make that image unique (ridges, valleys, swirls, etc.). These are then mapped into something commonly called a template; it is this template that is stored.


The important thing to take away here is that when you enroll, the biometric sample itself isn't actually stored; it's simply not needed.

Once you are enrolled, the same process happens (capture image, identify minutiae, create template), except this time instead of being stored the result is compared to the stored template. Each time you log in you present your finger slightly differently, which means that not all of the same minutiae will be seen in every captured image.

As a result, the matcher has to guess whether the person is you based on how many minutiae it sees in common with the stored template.
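To make that concrete, here is a heavily simplified sketch of what a verification-style matcher does. Real matchers work on minutia type, position, and ridge orientation with rotation and translation tolerance, so the data structures and threshold here are purely illustrative.

# A toy verification matcher: count how many minutiae from the live capture
# land close to a minutia in the enrolled template, then compare the count
# against a threshold. This only illustrates the principle described above.
from math import dist

def match_score(template, capture, tolerance=5.0):
    matched = 0
    remaining = list(template)            # list of (x, y) minutia positions
    for point in capture:
        for candidate in remaining:
            if dist(point, candidate) <= tolerance:
                matched += 1
                remaining.remove(candidate)   # each template minutia matches at most once
                break
    return matched

def is_probably_you(template, capture, threshold=12):
    # The threshold trades false accepts against false rejects.
    return match_score(template, capture) >= threshold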

This works fairly well when doing what is referred to as verification, where the sample is compared against just one template, as is probably the case with a device like the iPhone. When doing identification (the one-to-many variant) there are a number of other problems to consider; I won't discuss those here.

Each of the image capture approaches used by sensors has different security properties; for example, with optical sensors I have seen people lift the fingerprint from the sensor glass itself and re-use it.

According to the press conference, the sensor used by the iPhone looks sub-dermally. The primary thing this helps with is resilience to small cuts and scrapes that could push the matching threshold over the edge and make it impossible to match you – it also provides some security value, in that the characteristics captured are not exactly the same ones you leave everywhere.

The good sensors also have logic in them to detect fake fingers; some of these are simple liveness tests, while others look at the characteristics of the flesh itself. For example, a swipe sensor may look at the elasticity of the finger as it is dragged across the sensor.

One of the real problems here is that when you buy a device with one of these sensors on it, you have zero clue how good the mechanisms (if any) are. In my case, I bought several fingerprint-protected handgun boxes, cracked them open to see who manufactured the sensors, contacted the heads of the engineering departments at those sensor companies (whom I happened to know through my work), and had them help me figure out which device had the best fake-finger detection so I knew which one to use.

In the case of the Apple sensor, again we have no clue what kind of fake-finger / liveness tests they have implemented. I am sure that, thanks to security researchers, we will find out how effectively they deal with this in short order once the devices ship.

But what happens when the matcher decides there's a good chance it's you? It releases a "secret". And what is that secret, you ask? Well, in most systems it's actually the password the user would have entered had the sensor not been there.

That’s right, all of the above magic to make entry of the password easier. This isn’t actually a bad thing, but again it depends on how it was implemented.

As a practical matter, people can remember 7-9 character passwords; these passwords get re-used or trivially modified, which greatly reduces their effectiveness. By using biometrics to gate access, we can replace that short, junky password with a longer key and in the process greatly increase the security of the system.
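As a minimal sketch of that idea (all the names here are hypothetical, and a real implementation would keep the secret in hardware-backed storage, not a Python dict), the point is simply that a successful match releases a long random credential the user never has to remember:

import secrets

# Hypothetical credential store: on enrollment we generate a long random
# secret and keep it protected; the fingerprint only gates its release.
_store = {}

def enroll(user):
    # 32 random bytes ~ 256 bits, far beyond any memorable password.
    _store[user] = secrets.token_urlsafe(32)

def release_secret(user, biometric_match):
    # The biometric decision gates access to the strong credential; the
    # credential itself, not the fingerprint, is what the service sees.
    return _store.get(user) if biometric_match else None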

Here is the thing: it doesn't sound like that's what they have done here. It seems they have applied the biometric to the four-digit numeric PIN and made the App Store password protected by that PIN. I say this because, according to the New York Times, you will still have to set the PIN for recovery purposes. If this is the case (and until we see the devices we will not know), the biometric is no more secure than the four-digit PIN it gates access to.

It could still have some value though; for example, according to the press conference, only around 50% of iPhone users set a PIN. If this can meaningfully increase that number, in aggregate we're in a better world.

One more thing that troubles me as I think about the Apple integration is that they are one of the most secretive tech companies out there, and we're not likely to hear the answers to how they have handled the above issues, or any others.

The origin of the Windows Biometric Framework

One of the projects I was responsible for when I was at Microsoft was the design and delivery of the Windows Biometric Framework. This was actually one of my favorite projects during my entire tenure at the company.

What most people don't know is that the feature was a very late addition to Windows 7; an analysis of the crash data sent by Windows clients showed that, right after video cards, these fingerprint readers were one of the largest sources of the "Blue Screen of Death".

Something else most people don't know is that this was not the first attempt to provide a platform for biometrics in Windows; the prior attempts failed for various reasons, but they did not have the same data we had – we knew these barely-used devices were creating a bad experience for everyone simply because the OEM had decided to include the sensors as a differentiator.

To fix this, a stellar team of senior engineers was re-assigned from other projects and began work almost halfway through the Windows 7 release cycle. Our goal was to make it possible for these devices (and, in the future, other forms of biometrics) to work well with a minimal amount of vendor code.

You see, these devices need to respond really quickly – especially swipe sensors, because they take a picture of your finger as you drag it across the sensor and that data needs to get to a number of components very quickly to deliver a workable user experience. For this reason the sensor manufacturers all thought they needed to be in the kernel, and since most of their driver developers had only been doing driver development for a few years they did a really bad job of it, which resulted in the blue screens.

The value proposition of this project for the vendors was that ultimately they wouldn't need to write as much software to get their sensors to work. Some saw this as a positive – especially the new entrants, since they didn't have as much software. Vendors with complete software suites were less thrilled, in that a driver framework of any kind places architectural restrictions on the way you build things, and since they didn't get the support calls they didn't really care that much about the crashes their software caused.

Despite this, nearly every biometric vendor began work on new drivers and client software based on a platform that was being developed at almost the same time as they were writing their own components. Every vendor in the first round of development made significant time investments to make the platform a reality.

The software team at Microsoft also executed amazingly; I am confident that I will always think of this group of individuals as one of the best I worked with at the company.

In the end, the platform made it out the door with more features than we had originally hoped and with all the major vendors on board, and by the time of general availability OEMs were ready to support real customers.

The late start did mean OEMs had to work very hard to get the new device drivers into their initial images; they had very little runway to do so, but several did, and those that did not at first got revised images out shortly after.

The new model delivered kernel-mode and user-mode driver frameworks that significantly reduced the amount of software needed to get one of these devices working in Windows, and we delivered a test suite that helped ensure the devices and their drivers worked reliably. We were even able to deliver a framework for use, management, and group policy, and a hook for enrollment.

The combination of the above meant that we essentially eliminated the historic problems these devices caused, improved the security of how they integrated with Windows, made them more supportable, and made them simply work well.

The team went on to work on some very cool additions; many of the team members changed but some remained, and the work we began after the "freeze" in the Windows 7 release cycle ultimately found its way out in Windows 8/8.1 – I hope the rest of the things this team was doing find their way out someday soon as well.

The (soon to be) not-so Common Name

If you are reading this post you are probably familiar with the use of digital certificates in SSL, but even so, you may not be familiar with their history. Before we go there, though, we should start with what a digital certificate actually is at its core.

Fundamentally, a digital certificate is a binding of entitlements and constraints to a key; in other words, it says things like "The holder of the private key associated with this certificate can rightfully use the name Ryan Hurst when signing emails."

When originally conceived, they were to be used to help bind subjects (people and resources) to their representations in directories. This is why the Subject Name in a certificate is structured as a Distinguished Name (DN), as this is how a directory uniquely identifies a subject.

This works well when looking up an encryption key for a user in an enterprise directory, but not so well on the Internet, where there is no global directory of users.

This brings us to SSL. It was introduced in the mid-1990s, and at the time nearly every large enterprise was already deploying directories and certificate authorities as part of their identity management frameworks. X.509 was tested, broadly accepted, and fit the bill for the problem the designers of the protocol had, so they included it as-is.

At the time there was only one way to represent the concept of the subject of a certificate, and that was the Common Name (CN), so they chose to put the DNS name of the SSL server there. This was technically acceptable, but it was a re-purposing of a field that was really intended for a user's actual name.

After SSL was finalized, the IETF released their profile of X.509 for use on the Internet; this standard introduced the concept of a Subject Alternative Name (SAN), where you can put names that are not associated with a directory. The problem is that the ship had sailed – by the late '90s, when this was standardized, everyone had already settled on using the Common Name.

This led us down a bad path. First of all, many servers (especially today) have multiple DNS names, and applications that supported only the Common Name field couldn't work with a single certificate containing more than one DNS name. This was addressed in the short term by using a separate certificate for each DNS name, but that came at a high cost: we also needed to use a separate IP address for each domain name.

Another problem with this approach is that applications never really knew what to expect in the Common Name field. Is the value in that field a person's name or a DNS name? This matters because there are often rules you need to apply to validate a piece of data before using it, and this is especially true for DNS names.

For these reasons (and more), since at least 1999 (when RFC 2459 was standardized) we have been on a slow path away from the use of Common Names for domain names and toward Subject Alternative Names.

Fast forward to 2012: some Stanford researchers published a paper titled "The most dangerous code in the world: validating SSL certificates in non-browser software", which identified a bunch of applications that fail to do the most basic certificate validation tasks correctly and as a result are the source of a bunch of security vulnerabilities.

These applications gave their users a false sense of security not out of malice, but as a result of a lack of understanding of the technology they used to deliver on that promise. A big part of that is the complexity that 18 years of technological evolution carries with it.

To address this, a number of things need to change, but one of the most immediate is that the definition of what constitutes a "valid" SSL certificate is changing, to make the rule-set a little simpler for application developers and to rule out options that are no longer considered good practice.

We see this happening in a few ways. First, the CA/Browser Forum has worked with browsers to define a set of Baseline Requirements that all certificates must meet; we are also seeing browsers doing sanity checks to ensure these practices are in fact followed.

These Baseline Requirements mandate that certificate authorities always include at least one Subject Alternative Name in the SSL certificates they issue; this means that today an application doesn't need to look in both the Common Name and the Subject Alternative Name – it only needs to check the latter.
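You can see this directly from, say, Python, where the ssl module exposes the certificate's subjectAltName entries; checking those DNS entries is all a modern client needs to do. The hostname below is just an example.

import socket
import ssl

# Connect to a TLS server and pull the DNS names out of the certificate's
# Subject Alternative Name extension; per the Baseline Requirements every
# publicly trusted SSL certificate will carry at least one such entry.
hostname = "www.example.com"
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()

san_dns_names = [value for kind, value in cert.get("subjectAltName", ()) if kind == "DNS"]
print(san_dns_names)   # the only names a modern client should be matching against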

Currently most certificate authorities also include the first DNS name from the Subject Alternative Name in the Common Name field, but this is done primarily for legacy reasons and will stop at some point in the not-so-distant future.

When it does, certificates will be a little smaller and developers' lives will be a little easier.

Ryan

Resources

  • Baseline Requirements
  • Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile
  • Microsoft Security Advisory: Update for minimum certificate key length

Strategies for Deploying 2FA

 

Two-Factor Authentication (2FA) enables applications and services to tie the usage of two things (typically something you know and something you have) to user accounts, so that both factors must be proven prior to the use of the service. Normally the second factor is a device (either physical or virtual); a great example of this is how Google and Microsoft have added support for One-Time Passwords (OTP) in their online services.
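To make the OTP idea concrete, here is a minimal sketch of the time-based variant (TOTP, per RFC 6238). In production you would use a vetted library, but the mechanics are just an HMAC over a time counter; the base32 secret below is a throwaway demo value.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32, interval=30, digits=6):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The same secret is shared with the user's device at enrollment; the server
# recomputes the code and compares, typically allowing +/- one interval of skew.
print(totp("JBSWY3DPEHPK3PXP"))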

With that said, there are many forms of two-factor authentication, and OTP is simply one of them. When looking at deploying 2FA, before you choose what kind of technology to adopt you first need to understand "how" you will use it and where. The most common answers to the "how" question are:

  • Optional usage.
  • Conditional usage.
  • Mandatory usage.
  • Account / password recovery.

The first option (optional usage) is one you can cross off the list right away; optional usage provides no security benefit, because the attacker can simply choose to attack the weaker mechanism.

Next is conditional usage; an example of this model would be how some solutions prompt you to authenticate with a stronger mechanism when performing a privileged operation, such as modifying another user's account.

Then there is mandatory usage; this is of course what we all envision when we think of using two-factor authentication. Unfortunately, it is seldom used, as it has significant barriers to adoption. The three most common issues I see preventing this model from being used are:

  1. The usability of these solutions is normally considered too poor for the serviced user community to be expected to use all the time.
  2. Accessing the same account from multiple devices and locations where the second factor is not available or a viable option.
  3. Not all services can be enabled for multi-factor authentication. The canonical example is the POP/IMAP services offered by Google, which by default allow the user to log in with just a password.

This leaves many sites coming to the conclusion that leveraging 2FA for account recovery (a special case of conditional usage) is the most deployable solution for their user bases.

But what if you want to actually achieve a world where the usage of 2FA is mandatory? The answer is buried in risk assessment, planning and having a technology strategy that includes acceptable authentication technologies.

Organizations should pick what technologies they will utilize for authentication and incorporate these as requirements into their procurement and technology adoption processes. While it's natural (and even beneficial) to standardize on a single technology, business requirements and technological realities mean that you will end up with a suite of standards you need to support.

Usually the process of establishing those standards begins with an inventory of what technologies you are already using. If you are an enterprise running Windows, that list will include Kerberos with passwords, as well as NTLM.

Next, assess how extensive the use of each of these protocols is in your environment, what platforms/applications support them, how business-critical or sensitive those systems are, which constituencies of users use them, and finally get a solid understanding of the risks those protocols represent in your environment.

Armed with this, you now have sufficient information to build a plan; most of the time the conclusions will be something along these lines:

  1. You are leveraging some legacy authentication mechanism (possibly NTLM) that you need to aggressively deprecate.
  2. There is a (relatively speaking) small set of users who perform business-critical / sensitive operations, and if those users stepped up to a stronger authentication mechanism the business would benefit.
  3. It will take numerous budgetary cycles to standardize the organization on the desired core authentication technologies.

With this in hand, you are now prepared to evaluate the various approaches to two-factor authentication and build a practical plan for getting it ubiquitously deployed. That deployment will likely involve all of the approaches called out above:

  1. Optional usage – Pilot the use of the chosen technologies with users across your targeted user segments, asking them to use the new authentication solution as their primary authentication technique. You also ask them to track their experiences and report their problems and recommendations to increase the chances of successful adoption. During this time it is very valuable to measure their actual usage of the technology via audit logs, if available.
  2. Conditional usage – Usually it is not possible to jump directly to mandatory usage, so the most sensitive users and scenarios are tackled first; for reasons called out above this also typically means starting by requiring the usage only when possible or appropriate. This allows you to quickly begin to realize the benefits of deploying the technology while not negatively affecting productivity.
  3. Mandatory usage – Once you feel prepared to support the technology as the primary authentication approach for your identified applications and scenarios, you move to making its usage mandatory for those users / scenarios. This is typically done one user segment at a time; each business has different operational requirements, and as you deploy to each community of users you learn something new that can be used to ensure the project is a business success.

Hopefully you found this post useful; let me know if you have any questions or would like to discuss your particular situation and how one can approach eliminating, or at least reducing, your organization's dependency on passwords.

Windows Certificate Enrollment Protocols

Windows has two sets of certificate enrollment protocols; these protocols are used to enable a client to acquire and maintain certificates with little to no manual interaction.

The first implementation of certificate enrollment in Windows was introduced with Windows XP; it was based on DCOM, and this DCOM dependency limited where you could deploy it. The solution could never practically or securely be used over the internet, and in segmented corporate networks it essentially required each island of trust to deploy its own certificate authority, which was both costly and impractical.

The second implementation was introduced in Windows 7; it is based on SOAP web services and has two core APIs: one for determining what certificates a client should enroll for and another for acquiring those certificates. This newer approach addresses the security and deployability concerns of its predecessor and enables the deployment of a CA as a pure Software-as-a-Service (SaaS) solution.

Normally I don't write about GlobalSign products here, but when we were deciding how we would build our own solution in this space we concluded that the security and deployment limitations of the DCOM approach were too restrictive for our goals of reducing TCO and enabling broader use of certificates, so we went with the more flexible and secure approach.

DCOM and your Firewall

DCOM (Distributed Component Object Model) is a framework used by Windows to allow COM components to work over the network.

Unlike traditional TCP/IP and UDP/IP services, where a given protocol has a fixed port, DCOM dynamically assigns ports for the COM objects it remotes.

Any client wishing to communicate with objects owned by a server always begins by connecting to the same TCP or UDP port: clients discover the port associated with a particular object by connecting to, and using the services provided by, DCOM's Service Control Manager (SCM).

The SCM always operates at a fixed network port on every computer; this is always port 135 for both TCP and UDP.
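If you want to verify that from a client, a trivial reachability check of the SCM endpoint looks like the sketch below (the hostname is hypothetical). Reaching the SCM is only step one; whether the dynamically assigned object ports are also reachable is a separate question, and exactly where firewalls get in the way.

import socket

# Quick reachability check of DCOM's Service Control Manager endpoint (TCP 135).
def scm_reachable(host, timeout=3.0):
    try:
        with socket.create_connection((host, 135), timeout=timeout):
            return True
    except OSError:
        return False

print(scm_reachable("server.corp.example.com"))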

DCOM is a great technology for what it was designed for but unfortunately that design did not include the internet or firewalls.

If a firewall exists between the client and the DCOM endpoint, you must open up a large range of ports that can be used for virtually any traffic between those two endpoints. By default, that port range runs from 1024 to 5000.

The nature of this model means that one port is allocated per process, so the number of available ports equates to the number of simultaneous DCOM processes allowed through the firewall.

This design basically negates the value proposition of the firewall; as such, deploying services based on DCOM across firewall boundaries is not something that should ever be done.

Enterprise logon with challenge response (OTP) tokens

From time to time I am asked how to add support for a new authentication method to Windows logon; the answer to that question is buried in what authentication methods Windows natively supports.

For the purpose of this blog post, let's scope that question to the Windows Kerberos implementation, as this is the "modern" authentication platform in Windows (in other words Kerberos, as they do not want you using NTLM any further); additionally, it supports the most authentication methods.

So what are those methods? Essentially there are two: passwords and public/private keys. What about the Windows native implementations of biometrics and picture passwords? These, like nearly every other modality of authentication in Windows, are simply a layer on top of the native support for passwords that is built into Windows.

I should note I was the Lead Program Manager for the Windows Biometric Framework and I do think it is a good solution for what it was designed for.

Let me explain, using biometrics as an example. In this solution there is a service that works with the biometric sensors to perform the biometric match. When a match occurs, this service releases a clear-text copy of your password, which in turn is stuffed into the Windows logon path just as if the user had entered the password by hand.

This means that somewhere in the file system there is a clear-text copy of the password stored in a reversibly encrypted form. Specifically, the encryption in this case is performed using a Windows subsystem called the Data Protection API (DPAPI). DPAPI is used by applications when they need to keep a secret, well, secret. You see, this is a Catch-22 scenario: unless a password (or key) is entered, there is no secret to encrypt with, which means ultimately there is a secret that is stored in the clear. This means if you can get at the root secret, you can get at the clear-text values encrypted with DPAPI.
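A minimal sketch of what using DPAPI looks like, assuming the pywin32 package (its win32crypt bindings) is installed on a Windows machine, illustrates the point: anyone who can run code in the same user context, or who can recover the user's DPAPI master key, can simply ask for the clear text back.

import win32crypt

secret = b"correct horse battery staple"

# Protect the secret under the current user's DPAPI key hierarchy.
blob = win32crypt.CryptProtectData(secret, "demo secret", None, None, None, 0)

# Any process running as the same user can reverse it.
description, recovered = win32crypt.CryptUnprotectData(blob, None, None, None, 0)
assert recovered == secret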

There are ways to mitigate some of the associated risks – one example being the use of BitLocker with a TPM protector – but unfortunately this is not broadly deployed and doesn't address the full risk profile; this is why Microsoft positions the biometric feature in Windows as a convenience. That is not to say it cannot be used in a secure way, or to suggest it cannot be used in an enterprise, but unless thought is given to the threat model and the right mitigations are deployed, it actually weakens your security.

So what about those enterprise OTP and challenge-response solutions that we keep hearing about – surely they must be different, right? Unfortunately, no, they are not. In fact, they are probably worse, because unlike the Windows biometric solution they create a central repository of all the clear-text passwords.

You see, for these to work they introduce a client component (once called a GINA, now called a Credential Provider) that collects the challenge, and likely a service component or SSPI provider that implements a protocol to interact with their product's service, which then validates the challenge and returns a clear-text password that they stuff into the logon path on the user's behalf.

No matter how good the design and implementation of such a system is, it is reliant on that singular repository of clear-text / reversibly encrypted passwords. Things get even worse when you look at how these systems have to deal with offline and local logons – you know, the scenario where a user takes their laptop on a plane and still needs to do work? For this to work they need to store the clear-text password on the local machine, just like the biometric solution.

What about the public/private key approach? There are essentially two variants of this in Windows: PKINIT and PKU2U. The most common example is the use of a smart card; this is what many Fortune 500s and governments use to secure their logons. It is also possible to add other modalities to Windows by emulating this approach with a software virtual smart card, but these fall prey to the same attack vectors – namely, what key do you use to keep the private key protected? (The answer is DPAPI.)

So does this mean OTP is a bad solution? No; in fact, it's much better than passwords if a system was designed to use it, and it makes a great additional factor. The thing is, Windows was not designed for it.

How to redirect POSTs based on their Content-Type in Nginx

Time gives everything context, especially in Public Key Infrastructure (PKI).

There are a few ways time comes into play with PKI. The most obvious is that, internal to a certificate authority's own infrastructure, highly accurate and secure time sources are used to ensure any statements it makes about time are accurate and that all of its systems agree on the time.

Then there is the question of notarization: when a subscriber (a certificate holder) signs something, how does a relying party (the person looking at the signature) know whether it was signed just now or a year ago? That's where time stamping comes in.

These same concepts come into play in financial systems; remember the movie Entrapment? They stole a few seconds of time during a system update and netted millions. OK, it's just a movie and surely sensationalizes the concept, but these are real problems – you need trusted time.

Some of you have heard me discuss aspects of building our new datacenter; one of the first services we are deploying to it is trusted time and time-stamping. The two most common protocols used for time stamping are Authenticode and RFC 3161. The protocols are similar in nature: in both, a client POSTs a binary blob to the server, which is then time stamped. Each protocol uses a different Content-Type – for Authenticode it is "application/octet-stream" and for RFC 3161 it is "application/timestamp-query".

If you look at most time stamping services (ours included, today), they require you to use different URIs (/scripts/timestamp.dll for Authenticode and /tsa for RFC 3161, for example), but this just makes things unnecessarily difficult for the user. To address this, in our new service we will be using the posted Content-Type to ensure the right back-end service gets the timestamp request (though our old URLs will continue to work as well).

We use Nginx as our edge proxy server in this environment to do the remapping; I thought it might be useful for others to see how one can implement this sort of remapping, as I did not see any great examples on the internet. Here is what it might look like if you have a similar problem:

 

server {
    listen       80;
    server_name  timestamp.example.com;

    location / {
        if ($request_method = GET) {
            rewrite ^ http://www.example.com/timestamping;
        }

        if ($http_content_type = "application/octet-stream") {
            set $args "worker=authenticode";
        }

        if ($http_content_type = "application/timestamp-query") {
            set $args "worker=rfc3161";
        }

        proxy_set_header Host $http_host;
        proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
        proxy_connect_timeout 3s;
        proxy_read_timeout 3s;
        proxy_pass http://timestamp1/process?$args;
    }
}

With this configuration, no matter what URI the timestamp request is sent to and whichever protocol it uses, it will get routed to the appropriate timestamp server.
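If you want to exercise the routing, a small Python sketch like the following (using the requests library) does the job; the hostname is hypothetical, and request.tsq is assumed to be a timestamp request blob you generated elsewhere – the point is only to show the two Content-Types hitting the same URI.

import requests

URL = "http://timestamp.example.com/"

with open("request.tsq", "rb") as f:
    blob = f.read()

# RFC 3161 request: should be routed to the rfc3161 worker.
r = requests.post(URL, data=blob, headers={"Content-Type": "application/timestamp-query"})
print(r.status_code, r.headers.get("Content-Type"))

# Authenticode-style request: should be routed to the authenticode worker.
r = requests.post(URL, data=blob, headers={"Content-Type": "application/octet-stream"})
print(r.status_code, r.headers.get("Content-Type"))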

I should note I excluded a number of items, like rate limiting and error handling, to keep the post simple; these are also important concepts to consider in such a deployment.

Good Luck!

Deploying forward secrecy on Red Hat, CentOS or Fedora based systems

If you want to deploy perfect forward secrecy and you use a Red Hat, CentOS or Fedora based system, you likely won't be able to do so without building your own OpenSSL. This is because, by default, the OpenSSL packages for these systems do not include ECC or ECDH, and when web servers like Apache and Nginx are built against libraries that do not support them, they obviously omit support for those algorithms.

Hopefully these distributions and the community repositories that support them will start including support for these algorithms by default soon but in the meantime you can add support by building your own.

You can check if your OpenSSL supports these algorithms easily enough; just run this command:

> openssl ciphers

If you see ciphers like "ECDHE-RSA-AES256-GCM-SHA384", then you have a version of OpenSSL that was built with ECC and ECDHE support enabled, which is required if you want forward secrecy today. If you don't see such a suite, don't worry; just check out this blog post and it will step you through building a new OpenSSL.

Unfortunately, building a software package is slightly different on each Linux distribution; I will share what worked for me on CentOS 6.3.

1. We don't want to break your current Nginx deployment, so let's start by finding out what options your Nginx was built with; you do this with the following command:

> nginx -V

This will return something like this:

nginx version: nginx/1.5.1

built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC)

TLS SNI support enabled

configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic'

To ensure we don't omit something you're using, we will use the same options when we build our Nginx.

 

2. Next let’s make a backup of everything you have in your current Nginx installation just in case something doesn’t go right:

root> cp /usr/sbin/nginx /usr/sbin/nginx.orig

root> cp -r /etc/nginx /etc/nginx.orig

3. Now let’s make sure we get our dependencies installed, download our source and extract it so we can do our build.

root> cd /tmp

root> yum install pcre

root> yum install pcre-devel

root> yum install zlib

root> yum install zlib-devel

root> wget http://nginx.org/download/nginx-1.5.1.tar.gz

root> tar -xvzf nginx-1.5.1.tar.gz

root> cd nginx-1.5.1

4. Then we need to configure our build. Let's start by taking the options you saw when you ran "nginx -V" and adding the following to the end of them:

--with-ld-opt='-L/usr/local/ssl/lib' --with-cc-opt='-I/usr/local/ssl/include'

This is where your OpenSSL libraries and headers should be. Just append the combined set of parameters to the "./configure" command; it will look like this:

root> ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-ld-opt='-L/usr/local/ssl/lib' --with-cc-opt='-I/usr/local/ssl/include'

5. Make and install your Nginx

root> make
root> make install

6. Restart Nginx

root> service nginx restart

Now, if you are using the SSL configuration I recommend here, you have already set your server to prefer the commonly supported forward-secret algorithms, and if you visit https://sslcheck.x509labs.com you will see these ECDH-based suites at the top of the list of negotiated suites.

 

For more information see:

1. How to botch TLS forward secrecy, AGL
2. Getting the Most Out of SSL Part 1: Choose the Right Certificate, CA Security
3. SSL Configuration Checker, X509 Labs
4. High Performance Browser Networking, Ilya Grigorik
5. Bulletproof SSL/TLS and PKI, Ivan Ristic