Tag Archives: TLS

How to measure the WebPKI ecosystem

The web depends on there being a robust, secure, and scalable set of CAs able to provide TLS certificates. It is unhealthy for there to be a single provider because, if for any reason that provider has an operational or security issue and becomes unavailable, the web is left in a world of hurt.

Beyond that, in the name of TLS reliability, certificate consumers should be relying on multiple CAs for their certificates. For example, to reduce exposure to outages, your certificate lifecycle management solution should support failover from one CA to the next.

Another reason to use multiple CAs is to help ensure relying party agility to changes in CAs. For example, if a CA changes which root key material it uses you may lose (or gain) device compatibility, or if an issuing CA changes and someone is pinning that CA you might break them. Using multiple CAs helps ensure that the relying party ecosystem you support stays agile to these changes and that you maintain device compatibility over the long term.

For this to work, though, you need an ecosystem of CAs you can use interchangeably. ACME (RFC 8555) helps here substantially because it provides a normalized way to interact with CAs to get these certificates, but that is only helpful if multiple CAs implement the protocol and if those CAs are able to scale to meet the needs of those who rely on them.
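
To make the failover idea concrete, here is a minimal sketch of selecting among several ACME CAs by probing their RFC 8555 directory endpoints in order of preference. The directory URLs shown are examples you would confirm against each CA's documentation, and a real certificate lifecycle manager would of course go on to create an order with whichever CA it selects.

import json
import urllib.request

# Example ACME v2 directory URLs (assumptions -- confirm against each CA's docs).
ACME_DIRECTORIES = [
    ("Let's Encrypt", "https://acme-v02.api.letsencrypt.org/directory"),
    ("Google Trust Services", "https://dv.acme-v02.api.pki.goog/directory"),
    ("Buypass", "https://api.buypass.com/acme/directory"),
]

def first_available_ca(directories=ACME_DIRECTORIES, timeout=5):
    """Return the first CA whose ACME directory responds; list order expresses preference."""
    for name, url in directories:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                directory = json.load(resp)
            if "newOrder" in directory:  # an RFC 8555 directory object lists a newOrder URL
                return name, url
        except Exception as err:
            print(f"{name} unavailable ({err}); trying next CA")
    raise RuntimeError("no configured ACME CA is reachable")

print(first_available_ca())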

This is particularly important when you look at SaaS-like offerings; the larger ones will often demand millions of certificates that, in some cases, need to be able to be revoked and re-issued in less than 24 hours, so the scalability of the CA becomes particularly important.

Assessing the scalability of a CA is hard but one of the closest proxies you have is their overall market share.

In the US, according to the Google Transparency Report, 97% of all web traffic is protected with TLS. To put that in context there were 366.8 million registered domain names as of 2022.

Certificates can represent more than one domain name, so depending on what you are measuring, certificate count may not be the best metric to assess CA market share. With that said, in the context of scalability, it’s probably a good metric.

What are some ways to evaluate the CA impact and market share?

  • How many certificates are issued by the CA and are unexpired.
  • How many domains are contained within the unexpired certificates issued by a CA.
  • What percentage of web traffic would be covered by the certificates issued by a CA.
  • What percentage of certificates issued by the CA are unexpired and actively in use.

Each of these answers different questions, and they progressively get harder to measure as you go down the list. The easiest by far is how many certificates are issued and still unexpired. This is because all CAs log what is called a pre-certificate to the Certificate Transparency ecosystem before issuance.

NOTE: Publication of a pre-certificate is not required by the rules of the ecosystem; however, not doing so would mean that users relying on that certificate would get an error in browsers that require Certificate Transparency.

While the existence of a pre-certificate doesn’t promise the certificate is in use it does signal that someone who controlled that domain wanted to use a certificate for that domain. They wouldn’t have bothered going to the trouble of doing that if there was not an intent to use the certificate in some way.
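
Under the covers, a pre-certificate is just a certificate that carries the critical CT "poison" extension defined in RFC 6962, which prevents it from being accepted as a real certificate. Here is a minimal sketch for telling the two apart using the Python cryptography package (the file name is just a placeholder):

from cryptography import x509

def is_precertificate(pem_bytes):
    # Pre-certificates carry the critical CT poison extension (RFC 6962).
    cert = x509.load_pem_x509_certificate(pem_bytes)
    try:
        cert.extensions.get_extension_for_class(x509.PrecertPoison)
        return True
    except x509.ExtensionNotFound:
        return False

with open("some-cert.pem", "rb") as f:  # placeholder path
    print("pre-certificate" if is_precertificate(f.read()) else "final certificate")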

The easiest way to look at this data is to use the excellent https://crt.sh/cert-populations report. While it does go down from time to time, it provides a very fresh view of the unexpired pre-certificate counts.

NOTE: Since not all CAs publish what is referred to as the “final certificate” you can safely ignore the Certificate count data on this report.
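
If you want to poke at the underlying records yourself, crt.sh also exposes a JSON view for per-domain queries. The sketch below assumes the q and output=json parameters and the not_after/issuer_name field names behave as they do at the time of writing; treat the results as an illustration rather than a precise census.

import json
import urllib.request
from datetime import datetime, timezone

def unexpired_entries(domain):
    """Count crt.sh entries for a domain whose not_after is still in the future."""
    url = f"https://crt.sh/?q={domain}&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        entries = json.load(resp)
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S")
    live = [e for e in entries if e.get("not_after", "") >= now]  # ISO dates compare lexically
    by_issuer = {}
    for e in live:
        issuer = e.get("issuer_name", "?")
        by_issuer[issuer] = by_issuer.get(issuer, 0) + 1
    return len(live), by_issuer

count, issuers = unexpired_entries("example.com")
print(count, "unexpired entries")
for issuer, n in sorted(issuers.items(), key=lambda kv: -kv[1]):
    print(n, issuer)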

So what does this data look like (As of July 29th, 2022)?

CA | Certificates (All) | Certificates (Unexpired) | Pre-certificates (All) | Pre-certificates (Unexpired) | % of Unexpired Population
Internet Security Research Group | 2,834,892,521 | 264,685,335 | 2,553,476,280 | 228,023,480 | 50.18%
Sectigo | 109,399,984 | 7,245,014 | 373,669,758 | 106,119,713 | 23.35%
DigiCert | 560,740,357 | 44,640,273 | 497,448,389 | 45,475,972 | 10.01%
GoDaddy | 6,371,960 | 1,874,812 | 52,669,260 | 31,293,304 | 6.89%
Google Trust Services LLC | 17,284 | 17 | 828,112,662 | 15,443,306 | 3.40%
Amazon | 13,540,556 | 98,980 | 104,887,859 | 14,757,650 | 3.25%
GlobalSign nv-sa | 16,729,663 | 937 | 23,636,778 | 6,893,728 | 1.52%
Actalis | 552 | 3 | 3,236,493 | 1,691,744 | 0.37%
Asseco Data Systems S.A. (previously Unizeto Certum) | 6,298,472 | 620 | 9,375,742 | 1,571,852 | 0.35%
Start Commercial (StartCom) Ltd. | 1,495,580 | 98 | 2,866,004 | 883,022 | 0.19%
? | 1,241,463 | 214 | 3,924,285 | 567,480 | 0.12%
Entrust | 739,901 | 524 | 2,304,521 | 554,431 | 0.12%
SECOM Trust Systems CO., LTD. | 156,234 | -1 | 12,217,668 | 242,815 | 0.05%
WoSign CA Limited | 88,660 | 7 | 250,823 | 110,101 | 0.02%
Certainly LLC | 31,361 | 205 | 240,103 | 101,533 | 0.02%
Buypass | 186,200 | 270 | 2,127,228 | 98,835 | 0.02%
QuoVadis | 53,636 | 432 | 236,063 | 97,454 | 0.02%
SecureTrust | 311,226 | 227 | 301,197 | 78,489 | 0.02%
Microsoft Corporation Core Services Engineering & Operations ("Microsoft CSEO") | 216,448 | 73,560 | 212,905 | 74,697 | 0.02%
Deutsche Telekom Security GmbH | 57,570 | 32 | 147,949 | 49,556 | 0.01%
JPRS | 15,738 | 34 | 82,642 | 36,512 | 0.01%
SwissSign AG | 237,886 | 68,272 | 83,504 | 26,975 | 0.01%
Government of Spain, Fábrica Nacional de Moneda y Timbre (FNMT) | 86,872 | 23,866 | 57,206 | 23,772 | 0.01%

What you will see is that the top 5 CAs, out of 233, issue 98.59% of all TLS certificates. While I would like to see this distribution be more normalized so that the ecosystem is not overly dependent on any one entity, as far as health goes it does show there are several large providers out there supporting the web that have demonstrated they can scale to meet large certificate consumption needs.

One thing you will notice in this data is that the variability between the pre-certificate “ALL” and “Unexpired” counts is quite large in some cases. This is because some CAs, like Let’s Encrypt and Google Trust Services, either predominantly or exclusively issue shorter-lived certificates. This results in the count in “All” being much higher than in the “Unexpired” case.

So what can we take away from this data? I think there are three key takeaways:

  1. Support of certificate issuance via ACME has made shorter-lived certificates viable and they now represent the large majority of certificates on the web.
  2. Support of ACME has helped grow the percentage of the web that is encrypted from about half of the web to nearly 100% of the web.
  3. 2.15% of CAs issue 98.59% of all TLS certificates on the web.

Deploying SSL – Beyond the certificate and cipher suites

If you were to do a search on the internet for “configuring SSL” you would find a ton of references on configuring your favorite web server to do SSL, some of it good and some of it not so good. But what you don’t see a lot of content on is how to deploy it successfully.

What do I mean by successfully? These articles ignore the larger picture, for example:

  1. Are there changes to your content you will need to make?
  2. What about external content and script references?
  3. Are there any SEO considerations?
  4. Are there other related considerations?

To some, these things may be common sense, but even then a refresher never hurts, so let’s go over them briefly.


Are there changes to your content you will need to make?

Probably. Lots of content I encounter explicitly references the protocol serving it (e.g., href="http://…" and src="http://…"); if that’s the way your content looks, then yes, you will want to update it to use protocol-relative references, for example:

href="//{hostname}/{uri}"

src="/{uri}"

This way your content is independent of the protocol used to transport it; it will also help prevent your users from encountering “mixed content” warnings.
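
A quick way to find the references you need to change is to scan your content for hard-coded http:// links. Here is a rough sketch (it only looks at href and src attributes, so adjust the pattern for CSS, APIs, and anything else your templates emit):

import re
import sys

# Find hard-coded http:// references in href/src attributes so they can be made
# protocol-relative ("//host/path") or switched to https://.
HTTP_REF = re.compile(r'(?:href|src)\s*=\s*["\']http://[^"\']+["\']', re.IGNORECASE)

def scan(path):
    with open(path, encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            for match in HTTP_REF.findall(line):
                print(f"{path}:{lineno}: {match}")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        scan(path)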


What about external content and script references?

Another scenario that causes mixed content warnings is when sites use scripts and content hosted on other servers that are explicitly referenced over HTTP. The two most common I encounter are YouTube embeds and Google Analytics, but there are lots of different third-party content and scripts out there, and each one you embed will also need to support SSL.

Thankfully I have never encountered one that does not support SSL and in most cases you will just need to make the reference relative (“//”) and let the browser decide what protocol to use to get the reference. In the very rare cases where this does not work a quick email to support at the content/script provider will get you the URL to the SSL version of the content/script.

Though this has always been the case, one thing to keep in mind is that the perceived performance and actual security of your site depend on the performance and security of the providers you include in it. I strongly recommend you check their performance and SSL configuration and ask them to make any changes necessary to address issues this might identify.


Are there any SEO considerations?

Aren’t there always? To achieve all of the security benefits of SSL you have to deploy it across your entire site (this is commonly referred to as Always On SSL). This means that, as far as a search engine is concerned, there could be two copies of the same content. This is treated as a negative condition in most page ranking schemes; we address it in a few ways:

1. Tell the search engine which content is authoritative (aka which one we want them to index); we do this by:

    • Updating <link rel="canonical"> to point to the HTTPS version.
    • Updating the XML Sitemap to refer to the HTTPS version of the content.

Making these two changes ensures the search engine will index the SSL version of the site so the first link the user visits will be your HTTPS version.

These changes not only improve the user’s experience by getting them to the content quicker (instead of relying on a rewrite rule to send them to the HTTPS content) but also help mitigate MITM attacks that would otherwise be possible against organic traffic landing on your HTTP URLs.

2. Ensure the robots.txt is available over SSL.

3. Redirect all HTTP requests to your site to the HTTPS version using a permanent redirect (an HTTP 301); this will transfer your PageRank to the SSL URL (see the sketch after this list).

4. Update the search engine webmaster tools to refer to the HTTPS URL instead of the HTTP URL.
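
Here is a small sketch you can use to spot-check items 2 and 3 above against a live host; example.com is a placeholder for your own hostname:

import http.client

def check_http_redirect(host):
    # The plain-HTTP site should answer with a permanent (301) redirect to HTTPS.
    conn = http.client.HTTPConnection(host, timeout=10)
    conn.request("GET", "/")
    resp = conn.getresponse()
    print("HTTP /:", resp.status, "->", resp.getheader("Location"))
    conn.close()

def check_robots_over_https(host):
    # robots.txt should be reachable over HTTPS so crawlers index the HTTPS site.
    conn = http.client.HTTPSConnection(host, timeout=10)
    conn.request("GET", "/robots.txt")
    resp = conn.getresponse()
    print("HTTPS /robots.txt:", resp.status)
    conn.close()

check_http_redirect("example.com")
check_robots_over_https("example.com")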


Are there other considerations?

There are a few; for one, there is performance. There is a myth that SSL is computationally expensive; it’s simply not true (at least today), but that doesn’t mean you don’t need to be concerned with performance.

There are several settings you care about. For example, it’s common for websites to use domain sharding, which means that when you’re using SSL each of those domains requires a new SSL negotiation, and the negotiation is the most costly part of the SSL session. While we can’t eliminate this cost, we can ensure that the servers terminating our SSL sessions implement session caching and reuse to reduce the impact of the SSL overhead. We can also try to limit the number of domains we use when sharding to reduce the number of SSL sessions needed to finish rendering a site.
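
You can get a rough read on whether a server supports session reuse with Python’s ssl module: connect once, then offer the captured session back on a second connection. This is only a sketch; with TLS 1.3 the session ticket may not arrive until after the handshake, so a False result is not conclusive.

import socket
import ssl

def session_resumed(host, port=443):
    # First connection: capture the session (ID or ticket) the server hands out.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            session = tls.session

    # Second connection: offer that session back and see if the server resumes it.
    with socket.create_connection((host, port)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host, session=session) as tls:
            return tls.session_reused

print("resumed:", session_resumed("example.com"))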

You may also want to look at deploying a reverse proxy in front of your web servers where all SSL would be terminated; this can give you performance benefits beyond SSL and can simplify key and SSL management in your environment at the same time.

Then there is the question of cookies. While all sensitive cookies should already be marked “secure” so they won’t get sent over non-secure sessions, you should consider marking all cookies as “secure” since the whole site is now supposed to be served over SSL.

Depending on how you have authored your rewrite rules, there may be static references to HTTP buried in there; you will want to review your rewrites to ensure they are protocol independent (where appropriate) so that you don’t end up forcing users through an unnecessary redirect.

And finally, setting the HTTP Strict Transport Security header means browsers will visit you over HTTPS every time, even when not arriving from search results; this will improve perceived performance and help protect against MITM attacks.
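
A quick way to confirm the header is actually being served (example.com is a placeholder):

import http.client

conn = http.client.HTTPSConnection("example.com", timeout=10)
conn.request("HEAD", "/")
resp = conn.getresponse()
# A typical policy looks like: max-age=31536000; includeSubDomains
print("Strict-Transport-Security:", resp.getheader("Strict-Transport-Security"))
conn.close()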


Ryan


Resources

1. Choose the Right Certificate, CA Security

2. Deploying SSL – How to get your server configuration right, Ryan Hurst

3. SSL Configuration Checker, X509 Labs

4. SSL Pulse, Trustworthy Internet Movement

5. Bulletproof SSL/TLS and PKI, Ivan Ristic

6. High Performance Browser Networking, Ilya Grigorik

7. How to get the latest stable OpenSSL, Apache and Nginx, Ryan Hurst

8. Always On SSL, OTA

9. Revocation Report, X509 Labs

10. SSL/TLS Deployment Best Practices, Qualys Labs

11. Transport Layer Security, WikiPedia

12. How to botch TLS forward secrecy, AGL

Deploying SSL – How to get your server configuration right

They say the most complicated skill is to be simple; despite SSL and HTTPS having been around for a long time, they still are not as simple as they could be.

One of the reasons for this is that the security industry is constantly learning more about how to design and build secure systems; as a result, the protocols and software used to secure online services need to continuously evolve to keep up with the latest risks.

This situation creates a moving target for server administrators, creating a situation where this year’s “best practice” may not meet next year’s. So how is a web server administrator to keep up with the ever-changing SSL deployment best practices?

There is, of course, a ton of great resources on the web that you can use to follow industry trends and recent security research, but it’s often difficult to distill this information into actionable and interoperable SSL configuration choices.

To help manage this problem there are tools like the X509Labs SSL Configuration Checker, which looks at your server’s configuration and makes recommendations on what you should change to align with current industry best practices. This tool makes recommendations that are based on current and past security research, trends, and both client and server behavior and capability.

The tool performs over 33 different tests on your server configuration and, based on the results, recommends specific changes you should make to address its findings.

In general, the guidance the tool provides can be categorized as follows:


Support latest versions of TLS protocol

Often organizations are slow to pick up newer versions of their web server and SSL implementations. This is normally a conscious decision attributed to the old adage of “if it’s not broken, don’t fix it.”

The problem is that these older versions are plagued with security issues. In many cases, these organizations pick up security patches, but these patches do not include the more recent (and more secure) versions of the protocols.

It is important that all sites add support for TLS 1.2 as this new version of the protocol offers security improvements over its predecessors and lays the groundwork for addressing future security concerns.

Disable older known insecure versions of the SSL protocol

SSL was defined in 1995 and has evolved significantly since then; SSL 2.0 in particular has been found to have a number of vulnerabilities. Thankfully these issues have been resolved in later versions of the protocol.

Unfortunately at least 28% of sites today still support it (based on SSL Pulse data); when I speak to server administrators about why they enable this older version they commonly mention concerns over client interoperability. Thankfully browser statistics show us that TLS 1.0 support is ubiquitous and it is no longer necessary to support the older insecure version of the protocol.


Choose secure and modern cipher suites

This is one of the more confusing parts of configuring SSL; it’s also one of the most important. No matter how strong the cryptographic key material that goes into your certificate, your SSL session is only as secure as the cryptography used to encrypt it.

You don’t need to be a cryptographer or security researcher to make the right choices though, the X509Labs SSL configuration checker will help you keep on top of current recommendations. Based on current research, the following would be solid choices for you to go with:

Apache

SSLProtocol -ALL +SSLv3 +TLSv1 +TLSv1.1 +TLSv1.2
SSLCipherSuite "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS"
SSLHonorCipherOrder on

Nginx

ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
ssl_prefer_server_ciphers on;

These settings were chosen based on several factors including strength of the cryptography, interoperability and support for forward secrecy whenever it is supported by both the client and the server.

What is forward secrecy? You have forward secrecy when an attacker who has captured the encrypted traffic from your server, and even obtained its private key, still cannot decrypt that traffic.

To use the cipher suites that support forward secrecy here, you will need to be using a version of OpenSSL and of your web server that was built with ECDHE support. If you’re not, you can still use these settings; you just won’t offer forward secrecy to your users.
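
If you want to verify what a client actually negotiates against your server, here is a quick sketch using Python’s ssl module; it checks the negotiated cipher for an ephemeral (EC)DHE key exchange, which is what provides forward secrecy (example.com is a placeholder):

import socket
import ssl

def negotiated_cipher(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            return tls.cipher()  # (cipher name, protocol version, secret bits)

name, version, bits = negotiated_cipher("example.com")
# An ephemeral (EC)DHE key exchange in the name indicates forward secrecy;
# TLS 1.3 suites do not include it in the name but are always forward secret.
forward_secret = "DHE" in name or version == "TLSv1.3"
print(name, version, bits, "forward secret" if forward_secret else "not forward secret")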


Disable insecure options in SSL and HTTP

As a general rule, protocols have options; these options can have unforeseen side-effects.

A great example of this is the option of SSL compression. Compression was added to SSL to improve performance of the protocol, but it had a side effect – it gave attackers a way to recover secrets (such as session cookies) carried inside the encrypted session by observing how well data compresses. This attack was called CRIME (Compression Ratio Info-leak Made Easy) and, as such, this option is disabled today in secure SSL configurations.

Ensuring your configuration does not enable any such options is key to having a secure SSL configuration.


Enable performance optimizing options in SSL

To truly benefit from deploying SSL you need to apply it to your whole site—not doing so exposes sessions to attacks. The most common reason I hear from organizations as to why they are not deploying SSL across their whole site concerns performance.

This is a legitimate concern; according to Forrester Research, “The average online shopper expects your pages to load in two seconds or less, down from four seconds in 2006; after three seconds, up to 40% will abandon your site.”

And while it is true that an improperly configured web server will perform notably differently from a properly configured one, it’s not difficult to configure your servers so that performance is not a major concern.


Ryan


Resources

  1. Getting the Most Out of SSL Part 1: Choose the Right Certificate, CA Security
  2. SSL Configuration Checker, CA Security
  3. SSL Pulse, Trustworthy Internet Movement
  4. Bulletproof SSL/TLS and PKI, Ivan Ristic
  5. High Performance Browser Networking, Ilya Grigorik
  6. How to get the latest stable OpenSSL, Apache and Nginx, Ryan Hurst
  7. Always On SSL, OTA
  8. Revocation Report, X509 Labs
  9. Transport Layer Security, WikiPedia
  10. Perfect forward secrecy , Wikipedia
  11. SSL Labs: Deploying Forward Secrecy, Qualys
  12. Intercepted today, decrypted tomorrow, Netcraft
  13. How to Build Your Own OpenSSL, Ryan Hurst
  14. Deploying forward secrecy on RedHat, Centos or Fedora based systems, Ryan Hurst

Is SSL Broken?

[This is a re-post of an article I wrote for the GlobalSign corporate blog; you can find it here]

It seems every month a new flaw is identified in SSL, and while that’s a slight exaggeration, after a while one starts to ask the question – is SSL broken? My answer to that question would be no, but the protocol is nearly twenty years old and, even though it now carries a new name (TLS), it also carries much of the baggage of the past in its design.

Despite this fact, my faith in TLS is stronger today than it ever was. My reasoning is simple – today we understand the strengths and weaknesses of this protocol better than we ever have. It is continuously reviewed by the world’s best engineers and cryptographers, trying to find the bad assumptions their predecessors made, strengthening it in response to identified weaknesses, and modernizing it to use the strongest forms of cryptography available.

This continuous investment in this foundational technology gives me faith.

Today another attack on TLS was made public.  “Lucky Thirteen” is a derivative of the work of French cryptographer Serge Vaudenay (Padding Oracles against CBC based ciphers – 2010), though unlike Vaudenay’s attack, Lucky Thirteen uses a known Timing Attack previously believed to be impractical. A successful application of this attack enables an attacker to decrypt your SSL communications.

Unlike other recent attacks, such as BEAST,  Lucky Thirteen requires a server-side fix. This means that complete and effective protection against this attack will require all webservers to be updated or patched.

That said, it is possible to mitigate the attack by removing CBC cipher suites, since the attack is against SSL/TLS’s use of CBC. But what to use in its place? The consensus of security researchers is to adopt suites based on AES-GCM, and while I agree, this has one problem – the large population of clients that do not yet support it.

This recommendation is complicated slightly by the BEAST attack from last year, the resolution of which required a client side fix which has, in all likelihood, not yet been deployed ubiquitously. As such, I still recommend prioritizing the older and less secure RC4 based suites above AES-GCM since it addresses both issues.

But should you be worried? It depends. If you are using TLS (and not its little brother DTLS) I would say your best bet is to walk calmly to the nearest exit and use this as an excuse to ensure you are following industry best practices when deploying SSL – if you’re not, this attack is the least of your worries. Specifically, I would recommend visiting the SSL Configuration Checker and making the critical (red) and important (yellow) configuration changes it suggests.

I would also encourage you to deploy HTTP Strict Transport Security  on your site since the attack this mitigates (SSL stripping) is much easier for an attacker to execute.

The good news is that if you were already following the advice of the SSL Configuration Checker you were prioritizing RC4 over other ciphers and most sessions to your server were resistant to this attack. This doesn’t mean you should not be deploying the patch to this issue, you just don’t need to do so in a crazed rush.

So are there any lessons we can take away from this? Of course there are. As a server operator, I would say this finding underscores the importance of regularly reviewing your server configuration to ensure that it follows industry best practices, and that you are always operating the most recent and stable release of your web server.

If you want a more technical walk through of this attack, I highly recommend this post by Matthew Green on TLS timing oracles or this one by Adam Langley.

SSL 3.0 Usage in the Wild

Recently I had an opportunity to look at some logs that showed the cipher suites and protocol versions being negotiated for a large cross-section of websites.

I have always wanted to look at data like this and, as such, have instrumented my own sites to collect it, but let’s face it: some uber-geek blog or security product company website just isn’t going to have representative traffic for the internet at large.

One of the easiest and most useful things to glean from this data is that the impact of disabling SSL 3.0 is actually quite small.

In the sample, 2.48% of all SSL/TLS sessions were negotiated with SSL 3.0. If we look at (and believe) the User Agents that negotiated these sessions, we see 74.98% of these were Windows clients; the next biggest chunk was Gecko at 16.39%.

Browser | %
Internet Explorer | 74.98%
Gecko | 16.39%
Apple | 4.12%
Playstation | 2.85%
Chrome | 1.36%
Other | 0.30%
Total | 100.00%


Of these Windows clients, 45.45% were Windows 2000 or XP, but only 6.67% were running versions of Internet Explorer that did not support TLS 1.0; this basically boils down to the IE versions before version 7, as that was the first to enable TLS by default. So why did we see the remaining 68.31% of the 2.48% negotiating SSL 3.0 when they support TLS?

There are a few possible explanations:

  1. Some TLS implementations will fall back to SSL in the event of a failure; one common example of a failure would be an intermittent TCP connection problem. Basically, if this is the case, the client had a problem reaching the server, thought it might be related to TLS, and so tried again. It’s likely that if it had retried with TLS it would have succeeded as well. It also seems likely that in this case the user did not get a working experience — the assumption here being that the TCP problems they were experiencing were not a one-time thing.
  2. Some old TLS implementations had problems with TLS extensions; as a result, some clients added logic to fall back to SSL when they encountered this extension intolerance. Again, falling back to TLS (without extensions) would likely have also worked.
  3. Some enterprises may have used group policy to disable the use of TLS due to the TLS extension intolerance problems (see #2).
  4. Some clients are lying; they may be crawlers, bots and other such automated agents looking to profile these websites.

So what can we do with this data?

Well for one we can understand what interoperability implications we may encounter by disabling SSL 3.0 on our servers – on the surface the answer is up to 2.48% of clients will not be able to get to our servers.

The real answer is that the figure is likely much smaller, probably half that if not less.

OK, so we understand the interoperability impact but why should I care? Well there are a few reasons:

  1. FIPS 140-2 compliance requires disabling SSL 3 ciphers, and by disabling SSL 3 you do just that.
  2. The browsers that only support this decade-old protocol are nearly as old and have a litany of issues of their own.
  3. TLS has a number of security, performance, and deployability enhancing features, such as stronger cipher suites, Session Tickets, and SNI, that you will benefit from.

Another thing you should ask yourself is whether you designed your site for these old browsers. If not, leaving SSL 3 enabled gets you little if any benefit, since the users who require it would likely not be able to use your site effectively anyway.

When we consider this data, I believe the natural conclusion is that disabling SSL 3.0 is the right thing to do.

Ryan

Using ECC keys in X509 certificates

Recently the CAB Forum published a document called the Baseline Requirements for the Issuance and Management of Publicly Trusted Certificates.

This document was authored by both browsers and public CAs and is used by the browser vendors to mandate what minimum technical requirements need to be met for inclusion into their “Root Programs”.

One of the changes specified in this document is that subscriber certificates (aka SSL certificates) containing RSA keys must have a bit length of at least 2048. This is a change for a lot of CAs (GlobalSign had made this change some time ago) and one that has implications for server operators.

Just take a look at the Crypto++ benchmarks to see how much more expensive 2048-bit RSA is. For most users this additional computational cost won’t be an issue, but in some cases customers may need to increase the computing power they allocate for SSL establishment.

But what alternatives do you have? Well, there is one: certificates with ECC keys. Using these has the potential to significantly decrease the computational cost of SSL negotiations (even relative to your old 1024-bit RSA certificate), but they come with a significant penalty – compatibility.
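
To get a feel for the difference on your own hardware, here is a rough sketch using the Python cryptography package that compares the server-side private-key operation for RSA 2048 and ECDSA P-256 (numbers will vary by machine and library build, and a handshake has other costs beyond this one operation):

import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, ec, padding

def rate(sign, seconds=2.0):
    # Count how many signing operations complete in the given time window.
    deadline = time.perf_counter() + seconds
    count = 0
    while time.perf_counter() < deadline:
        sign()
        count += 1
    return count / seconds

rsa_2048 = rsa.generate_private_key(public_exponent=65537, key_size=2048)
p256 = ec.generate_private_key(ec.SECP256R1())

print("RSA-2048 signs/sec:", round(rate(lambda: rsa_2048.sign(b"x", padding.PKCS1v15(), hashes.SHA256()))))
print("ECDSA P-256 signs/sec:", round(rate(lambda: p256.sign(b"x", ec.ECDSA(hashes.SHA256())))))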

ECC was not supported in Windows until Vista, which was released in 2007; this basically means 100% of the XP clients out there (around 29% of the browsers on the internet as of July 2012) would be unable to establish a session with your website if you switched exclusively to ECC.

This is important for more than just Internet Explorer users since even Chrome and Safari use CryptoAPI for certificate validation when on Windows.

This would mean these users would see something like this:


That is pretty scary. So how long until we can use this more broadly? It’s hard to say; there is a good article titled “The developers guide to browser adoption rates” that sheds some light, as do the historic gs.statcounter.com results. Based on these, unless there is a sudden change (which is possible, as these machines are getting pretty old), I would assume that we have around 4-5 years of XP out there yet.

Hope this helps,

Ryan

Using SHA2 based signatures in X509 certificates

It’s been an exciting decade for cryptography; as a result we see smaller key sizes and weaker algorithms getting deprecated.

One driver of such things is the U.S. Federal Government, specifically NIST.

One example of this would be NIST Special Publication 800-131A, which disallows the use of SHA1 after December 2013. What this means is that if you are in the U.S. Federal Government, or you work with them, you may have to revise your technology strategy to use SHA2 in its place.
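
If you are not sure what you are serving today, here is a small sketch that checks the signature hash algorithm on a server’s leaf certificate (it assumes the Python cryptography package and only inspects the leaf, not the full chain; example.com is a placeholder):

import ssl
from cryptography import x509

def leaf_signature_hash(host, port=443):
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    return cert.signature_hash_algorithm.name  # e.g. 'sha1' or 'sha256'

print(leaf_signature_hash("example.com"))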

But what if you don’t have any policy mandate forcing you to do this switch? Well it’s a good idea but it has consequences too, namely compatibility.

You see SHA2 was published in 2001 so anything produced before then will not support it. The most notable example is Windows XP which as of July 2012 has about 29% presence on the Internet.

This is important for more than just Internet Explorer users since even Chrome and Safari use CryptoAPI for certificate validation when on Windows.

The good news is that XP SP3, which was released in 2008, added support for this new suite of hash algorithms; that raises the question of how many of those XP machines have XP SP3.

Unfortunately I don’t have any public references that can answer this question, but let’s say that 85% of all XP machines on the Internet have gotten this update (I have good confidence in this number). That means 15% of that 29% (roughly 4.4% of clients on the Internet) would not be able to connect to your server over SSL if you used SHA2.

This would mean these users would see something like this:


That is pretty scary. So how long until we can use this more broadly? It’s hard to say; there is a good article titled “The developers guide to browser adoption rates” that sheds some light, as do the historic gs.statcounter.com results. Based on these, unless there is a sudden change (which is possible, as these machines are getting pretty old), I would assume that we have around 4-5 years of XP out there yet.

Hope this helps,

Ryan