Has identity verification on the web become a glass ceiling?

As of 2013 there are 7.125 billion people in the world (World Bank), 39% of whom use the Internet (ITU). 318.9 million of these people live in the United States, where as many as 74% use the Internet (Census).

Increasingly these people are accessing services that require them to prove their identity over the Internet. This manifests itself in many ways; commonly in the United States this is done through the use of Knowledge Based Authentication (KBA), where knowledge of details from users’ credit reports is leveraged to authenticate them. This approach has several serious problems:

  • In the United States alone 29% of people have no credit history at all (Gallup), making this approach inaccessible for these users,
  • A number likely much larger than this have such limited credit histories that this approach to authentication is ineffective for them,
  • Numerous studies show the usability characteristics of these solutions are poor and result in user abandonment,
  • The limited data available in these credit reports and the way KBA is integrated into these services reduces both security and privacy each time the information is used.

As a result services often attempt to leverage a person’s pre-existing relationships with other services such as banks. This approach also has serious failings:

  • In the United States 7.7% of people are unbanked (FDIC) and 20% are underbanked,
  • Worldwide, the share of people who are unbanked is 35%,
  • For liability and business interest reasons almost no financial services organizations offer federated identity services for their customers,
  • When banks are used, a “penny-test” is often employed, requiring the disclosure of sufficient information to enable the service to potentially draw electronic checks from the person’s account,
  • The infrequent nature of this transaction and inherent complexity of the task again has poor usability characteristics and results in transaction abandonment,
  • This leaves services attempting to rely on binding multiple social “identities” together to authenticate the user. Unfortunately these social “identities” are often no more than pseudonyms, which do not meet the regulatory obligations that many businesses and agencies must meet. Additionally, binding these identities together reduces the user’s privacy significantly in that it becomes trivial to track the activities of that user across services.

This situation creates a socioeconomic glass ceiling where those who cannot participate in these authentication systems do not have access to the lower-cost and generally higher-value services available on the Internet.

Additionally there is still a class of transactions where the existing mechanisms do not work (such as a person establishing their first bank account) and others that require the disclosure of more information than necessary to meet the authentication requirements (for example age verification).

Outside the United States the situation is even more grim, where the numbers of the unbanked are significantly higher and privacy regulations often prevent the use of many of the above approaches. As a result many services cannot be brought online, and those that can commonly rely on the lowest common denominator – proof of control of a simple email address.

This problem is made even more complicated when services need to verify professional accreditations or roles within an organization.

What do you think? Is this a real problem?

I think it is. I also think this is a solvable problem (for some value of solvable) but as of yet I do not see anyone building solutions that address this problem of initial identity verification effectively.

How did I get involved in PKI?

In the mid 90s I was a security consultant; I principally worked on authentication systems (smart cards, one-time passwords, Kerberos, PKI, etc.).

Back then the only people who cared about these things were organizations concerned with protecting lives or money. This meant most of our contracts were with governments, banks, and Fortune 50 companies. This was an amazing experience that I would not trade for the world — it gave me the chance to work with some amazing people in some of the most paranoid and security conscious environments in the world.

While not my first exposure to PKI, the first time “it was all I did” was when I worked for a company called ValiCert. The founders saw a problem:

Who was watching the certificate authorities, and who would make sure that the revocation infrastructure would scale to meaningfully work in the event mis-issuances or key compromises happened?

We had developed technologies that were intended to address these problems. This technology looked very similar to Certificate Transparency, OCSP stapling and certificate pinning, which are again all the rage these days.

Unfortunately the Certificate Authorities did not like the idea of being “watched” by a third party; the largest CA went so far as to threaten lawsuits and modified their Relying Party Agreements to state that third parties could not re-distribute any information about what certificates they had revoked or issued.

Another entity had patents they claimed covered some of our optimizations, and given the browsers were minimally investing in this area we did not get adequate traction, so we pivoted into other areas.

For personal reasons I ultimately ended up at Microsoft where I was responsible for a number of security technologies and one of the “little things” I ran was the Microsoft Root Program.

When this was assigned to me I was told it was the least important thing on my plate and that I could measure my success through the number of escalations we got relating to it — basically I was told to invest as little as possible to keep things quiet. The root program was a necessity but shipping software was what we were all about.

The first thing I did for the root program was review its requirements and try to understand who its participants were and what agreements we had with them. I was surprised to see there were in essence no requirements, no authoritative list of contacts at each of the organizations and no contracts with any of its members. I felt marginally better when I found that Netscape had only one requirement, namely that your check for $250,000 USD cleared; the upside of which was that they probably had contracts with each CA, but there were no technical or audit requirements in their program either.

To remedy this I began to work with my awesome paralegal and lawyer on defining the first “root program” with both technical and audit requirements. We did not want to approach this as a profit center like Netscape but instead establish a set of requirements that were technically sound and that encouraged CAs to spend on improving their infrastructure and having it reviewed by others.

To this end I picked up a project that had been begun by my predecessor to work with the American Institute of Certified Public Accountants (AICPA) to help define and adopt what is WebTrust for CAs today.

We were the first root program to adopt this new audit. I remember being interviewed by the AICPA for a video on their website on how excellent it was to work with them – they must have taken 50 cuts during that session because of my bumbling.

With these new requirements in hand we set out to get contractual agreements with each of the CAs where they would commit to meet these new requirements and make clear conditions on which we could kick them out for not complying. Given this required them to make operational changes to their practices as well as budget and manage a third-party audit it took a complete product release cycle to get all of this in place.

At the end of the operating system release we had an audited set of CAs and contractual agreements with each one of them. Now our goal was to get these CAs into one room so we could encourage them to adopt common issuance practices.

This was important for a number of reasons; one of the most obvious was that each of the CAs used a different taxonomy to describe what they did. The simplest example of this was that one CA’s in-person-verified certificate would be called a Class 1 and another’s a Class 3.

To top things off, almost all of the CAs wanted to see the browser “chrome” differentiate between their weakly authenticated certificates and those that were strongly authenticated. This of course was not possible without common practices and a means of marking certificates to make it clear what practices were used in the vetting of the subscriber.

The internal consensus was that there would be value to users in being able to tell the difference, so we decided to try to make this happen. To do that we arranged to get these CAs in one room so we could talk about standardizing practices and certificate formats. To make this happen I reached out to my contact at the AICPA and asked him to work with me to arrange what was the very first gathering of publicly trusted CAs and trust store providers. We met in Washington DC because I felt we could leverage the work done by the US Government to accelerate the standardization of these things.

Unfortunately one of the newest CAs, who only issued low assurance certificates, saw adopting common standards for vetting and labeling as a risk to their business, and as a result they threw a wrench in my plan. They filed a claim with the FTC alleging the event was an attempt to create an anti-competitive marketplace, and as a result I was deposed by the DOJ. Ultimately the issue was closed and I understand the disposition was that the claim was baseless.

At this point I was instructed by management and our legal counsel to stop pushing for this standardization as it represented too much legal risk for the company.

As an aside a few months later the largest CA acquired the troublemaker.

About a year and a half later the CAs self-organized and attempted to agree on a smaller set of standards: the definition of what is called Extended Validation today. This was effectively a new label for what most CAs were offering in their “high assurance” certificates. The CABFORUM was now born.

At this point I had moved on to another team at Microsoft. During my time at Microsoft I worked on a number of very cool projects with some great people. Several of the projects I worked on used PKI, but my involvement was much more peripheral to the industry at that point.

Years later I decided to leave Microsoft — the DigiNotar incident was a big contributor to this decision. I felt that the industry was a mess: CAs were underinvesting in their infrastructure, not supporting the open source community they were dependent on and not actively working to improve adoption of SSL. I wanted to change this, so I decided I would start my own Certificate Authority and set an example for the industry on how a CA should approach these things.

This is when GlobalSign approached me and asked me to join as their CTO. I really liked the team; they were principled, hard working and looking to change the way things were done. I spent nearly three years in this role and we accomplished a great deal. I also still work with them on technical research and direction, but I have since moved on to a startup doing work on Bitcoin related technologies.

I did not accomplish all of the things I wanted to, but I still have hope that these systemic issues will be resolved, as I do believe trusted third parties are needed on the Internet.

Anyway this is how I got into PKI.

Ryan

T-Mobile: How very “carrier” of you.

“Don’t let your mouth write checks your a** can’t cash” — that captures my experience with T-Mobile thus far.

Ever since I saw John Legere announce the T-Mobile “un-carrier” campaign I have been anxiously watching T-Mobile with the hope they will instigate positive changes in the mobile telecom space.

AT&T on the other hand has proven to me over the last two decades as a customer that they have the agility and customer service of ol’ Ma Bell. The silly games they play, even with their most lucrative customers, are abhorrent, and the pricing strategies they apply are nothing less than usurious.

That’s why when I saw the latest round of the “un-carrier” campaign I decided to switch; conveniently this announcement was aligned with the release of the iPhone 6, which I wanted to get anyway.

I, like many, got up early to place my pre-order online, setting my alarm to go off right when the pre-orders began. I tried for an hour and a half to place an order but the T-Mobile site kept timing out.

Writing this experience off to poor capacity planning, I went to bed and woke up a few hours later to try again – things were no better.

Over the next several days I continued to attempt to place an order getting to various points in the order workflow before the site would time-out and I would have to start over again.

I managed twice to sign the IUP with them via DocuSign, the last time actually completing the order. This was literally the fourth day and who knows how many attempts later.

I knew I would not be in the first rounds of the iPhone deliveries but I was not in any big hurry; I was just relieved my order was placed and soon I would no longer be under the thumb of AT&T. To top things off I was going to save money each month!

Two days later my excitement was crushed as I received an email from T-Mobile instructing me to call them as there was a problem with my order. The next day I found the time to call them back (after holding for about 30 minutes) and was told the mail was sent in error and my order was fine — in fact I could expect my phones within the week!

A week passes and I get a phone call from T-Mobile; apparently there is in fact a problem with my order and I need to cancel it and place a new one. The woman I spoke to quickly cancels my existing order and begins to place a new one. After about an hour of problems with the ordering system she informs me she will not be able to place my order and will transfer me to someone else. I am told two of the several phones are in stock and that once my order is placed they will be held until the remaining ones are in stock; I am told more phones are a week out and that this new person will flag my order for overnight shipping once it is placed.

The new person also struggles with their ordering system; it takes him about 45 minutes to place the order, he does not put overnight shipping on the order, and he informs me my order will be fulfilled in 3 weeks.

At this point I am not thrilled, but as long as I get the phones before my son’s birthday at the end of October I will be satisfied. After I sign the IUP for this new order several days pass with no confirmation, so I call to try to verify the status of my order; apparently they can see my order but cannot give me any status.

The next day I get an email with an order number and a link where I can check my order, and I begin checking this page almost every day.

On the first day I see that in fact two phones are in and they are waiting for the last one; the page indicates these two phones will be held until the remaining phone is in. Several days later those two phones have apparently been re-allocated, since all are now marked as expected on or before the end of the next week.

A day or two later I see one phone is in, and the next day none.

This process continues for a few days, shipping dates moving in and out, the order occasionally going from having one or more phones in stock to having none.

Nearly a month has passed and the current date of delivery is now the day after my son’s birthday. My son’s birthday arrives and now the final date moves into November; again a single phone is showing in stock and the order is being held until the rest arrive. Again the next day they all show as being expected in November.

To be clear this is not a post from a customer complaining about not having the iPhones they ordered; this is a customer complaining about how disorganized, underprepared and apparently underinvested T-Mobile is in their internal software systems, let alone their network, which everyone knows is not as good as AT&T’s.

In their defense they are the cheaper solution; in my case I would save $20 / mo or $240 / yr by switching to them, but is it worth it? As a general rule the absolute best experience you have with a service provider is the one you have before you’re a customer, and this has been a miserable experience.

Realistically, based on the way they are handling fulfillment, it also seems the only way I will get these phones is if I go into a T-Mobile or Apple store and get them myself, as their fulfillment system as implemented will keep me at the end of the line since I have several phones on order.

As for the un-carrier campaign: I think John Legere gets where the future is at and the direction he is pushing his company is the right one, but unless this is paired with significant improvements in the technology the company is based on it’s not much more than a marketing message.

[UPDATE 11/7/14] Two months later I have the phones I ordered, but they sent the wrong colors. Very cool, T-Mobile.

[UPDATE 11/8/14] Even though they sent the wrong phones I decided to go on with the switch. Unfortunately the box containing the phones did not contain any instructions on how to get started, so I had to drive to a T-Mobile store for their help. When I arrived they looked up my order and apparently there were no lines associated with it; additionally it was not associated with my equipment order. My plan had been to take advantage of the T-Mobile offer to pay termination fees, but the agent now says that since the order was not set up with lines we could not take advantage of their promotion. He said if I want to take advantage of it I will have to call T-Mobile, get an RMA and place a new order, making sure that they do it right next time. That’s right, I need to make sure they do their job right next time.

[UPDATE 11/8/14] Called T-Mobile, got an RMA number, will just return these phones and stay with AT&T. They may be evil but at least they are semi-competent.

[UPDATE 11/10/14] I just went and purchased iPhones at the Apple Store and registered them with AT&T. I pay a little more each month than I would have with T-Mobile but at least they are semi-competent.

[UPDATE 11/25/14] Today I got an e-mail from T-Mobile confirming they received my returns; that is, they claim to have only received 2 of the 3 phones we sent back. There were 3 phones in the box when it was sealed up (in the original packing, using their shipping label); they have lost this third phone and are apparently intent on me paying for it.

[UPDATE 12/04/14] On the 25th I wrote T-Mobile asking them to refund the cost of the third phone and I have still heard nothing. They owe me $171.23, and more importantly their mess-up will surely come back as them claiming I owe them for service or, even worse, the full unsubsidized price of the phone they lost. I have just emailed them once more and will try their Twitter account.

[UPDATE 12/06/14] I finally got ahold of someone; they claim they have now refunded the amount they owe to a Visa debit card and that I will see it soon. Let’s see if they can manage to do this right.

[UPDATE 12/17/14] I just got another email from T-Mobile asking me to “verify my account” over email or Twitter, along with a comment saying the partial refund is expected to take up to 30 days, implying the assurances the prior person gave me that this had been resolved were not correct. Sigh. Now to start this silliness all over again.

The first web-server and the meaning of the SSL lock

The first web-server was developed by CERN in 1990 — that is twenty-four years ago!

Back then the web was a much simpler place. For the most part web pages were static files hosted by a single server owned and operated by the same entity that managed the core network infrastructure and DNS. In many cases they even owned the building where the systems were located.

As the web became more popular the architecture of these systems needed to evolve. At first that was done by bolting on basic search capabilities; the database backing search was simply another process running in the background indexing the documents.

Around 1994 SSL came about. It was used almost exclusively in e-commerce scenarios.

These e-commerce systems were really the beginning of the complex n-tier deployments we have today. With that said they were still very simple by today’s standards. These new systems were essentially made up of a cluster of web servers sitting in front of a shared database one network port away.

The process of getting an SSL certificate back then was also quite onerous, even when compared to what we do for Extended Validation today. To get a certificate in many cases you had to actually visit a public notary with documentation that proved your affiliation with the business you were getting a certificate for; the notary would then attest that they saw the originals of your identification as well as those documents. I even know of cases where a company’s executive staff were required to visit the CA in person.

This complexity existed because a business’s identity was inherently part of what that certificate was about. As a consumer, if you were dealing with an online business you knew they had (relatively speaking) a clue about technology, and because this online presence was an extension of their brick-and-mortar business they were a known quantity; knowing it was them gave you confidence they would apply the same diligence and practices from their existing business to their online transactions.

At this point the certificates used in SSL cost as much as $1,500 each, and while this slowed the adoption of SSL it also gave a signal to visitors that the sites that had them were not some fly-by-night operation, as they were willing to spend “real” money to ensure people knew who they were.

Above and beyond that, when users saw the “SSL” lock they knew their sessions were encrypted end-to-end and as a result their data was not going to be stolen in transit.

Given the kinds of organizations that would operate these sites (at the time this was banks and large e-commerce businesses) there was also an element of “these guys get security” – after all they knew how to do all of the above and had their existing brick-and-mortar reputations they were building on.

Over the next decade those small server clusters that ran these websites became more and more complicated. For these sites to scale, what used to run on one or two boxes got spread across many. At the edge, dedicated systems were used to terminate SSL and forward clear text to back-end systems that were sometimes owned and operated by different entities and often spanned multiple networks.

The mega-sites like those run by Google and Microsoft are still designed in this way because it is the only way to cost-effectively scale and be agile enough to meet market needs for systems of this size.

For the rest of the Internet this model just isn’t used that much anymore – it’s just not cost effective for small sites, and most organizations don’t have access to the skills or resources to deploy the kind of networks and systems that these larger sites do. For this reason most sites have moved from deploying onto hardware and networks they own to those owned and operated by other people.

It is now the norm and not the exception to have numerous service providers embedded in a single website; the physical hardware being used by the site and the service providers is almost always multi-tenant, and even the databases backing them are likely shared.

Users’ expectations about how the web performs have changed as well; for this reason an entire industry developed to provide yet another shared service — high-end networking services that logically sit in front of these machines to ensure timely delivery wherever the user is (aka CDNs).

To keep pace with the demand for SSL the way certificates are priced and validated changed as well. Today around 70% of all SSL certificates are Domain Validated (DV) and in many cases they can be had for free.

For entrepreneurs this means they can build an online business more quickly and cost-effectively than ever before. For users it means that these online businesses are faster and more professional looking, but it also makes it harder to understand the security assumptions behind the operational practices of the site.

The “site” as the user sees it may literally be operated by a half dozen entities such as the network provider, CDN, DNS, hosting provider, analytics and data providers, and the site itself, amongst others.

You don’t know what agreements the site has with these providers, how any of the entities store your data, who they share it with, or if they attempt to use good security practices in the development and operation of the services.

These things were, broadly speaking, inferable in the 1990s given how few sites were on the network and the kind of investment necessary to even get online. Today some college kid in a garage could be operating your favorite site; he is motivated not by protecting his current business but by getting to market quickly enough to grow his new one.

To me this means it is more important than ever before to understand whom it is you’re dealing with and what their security practices are. This isn’t a change that happened overnight but something that has happened slowly over the last twenty-five years.

This is why it’s great when sites publish their security and privacy practices, even if we must take them at their word. This is why it is also important to understand whom it is you are doing business with; without this, how can you make an informed decision on the credibility of their word?

In a perfect world these things would not be items to be concerned with, but as my father always told me, we have to see the world the way it is and not the way we want it to be if we ever want to change it.

What are some upsides of Google’s SHA1 deprecation plan?

NOTE: Google has since adopted a more gradual plan for migration which addresses the potential false sense of urgency the prior plan represented. Personally I think the new plan is a good one. The upsides in this post are still accurate and it is my hope people switch to SHA256 based certificates as quickly as possible.

The Internet is about to embark on another Heartbleed-esque certificate migration. This time there is no immediate danger (which was certainly not the case with Heartbleed) and there is a proposed twelve weeks to plan and respond.

During this time (unless that plan changes) a large majority of the SSL secured Internet will need to swap out their SSL certificates or the users of these sites will see a slightly scarier user experience. To be fair, some of these certificates would be expiring regardless and need to be replaced anyway, but this still represents a large number of additional sites that will need to replace certificates sooner than they had planned.

That said, there are upsides. For example, given how many of the top sites now use SSL, the users of these sites will need to move to modern browsers not dependent on platform crypto or update to a newer version of Windows, in the process gaining access to modern web technologies and security fixes.

Another benefit is that CAs that are not active participants in the CABFORUM and who do not follow the root program requirements closely will be sure to stop their use of SHA1 based signatures as soon as they see the user experience impacted.

The same thing will be true of device companies and enterprises who do not as of today have the option to participate in the CABFORUM and, even if they did, are frankly unlikely to. That is, when they see their support calls go up they will change their products and/or processes so that such certificates are not used.

The net of which is that by the end of 2017 we will most likely see the complete end-of-life of SHA1 as part of signature suites, and we may see an above-average increase in modern browser adoption.

Ryan

Why might you have a certificate with a SHA1 based signature in its chain that is valid beyond 2016/1/1?

NOTE: Google has updated the plan they will be using to deprecate SHA1 based certificates. The content in this post is still mostly accurate but for dates please see the thread. Personally I think the new plan is a good one. The upsides in this post are still accurate and it is my hope people switch to SHA256 based certificates as quickly as possible.

So there is a plan under discussion to “degrade” the user experience for SSL sessions protected with certificates (or chains) that contain a SHA1 based signature that are valid beyond 2016/1/1.
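
If you are not sure whether one of your certificates (or the intermediates in its chain) falls into this bucket, checking the signature algorithm and the expiration date is enough to find out. Here is a minimal sketch of that check in Python using the pyca/cryptography library; the file name server.pem is just a placeholder for your own certificate, and a complete check would repeat this for every certificate in the chain except the root:

    # Sketch: flag a certificate that is signed with SHA1 and expires after 2016/1/1.
    # Assumes the pyca/cryptography package is installed; "server.pem" is a placeholder.
    from datetime import datetime

    from cryptography import x509
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes

    SUNSET = datetime(2016, 1, 1)

    def needs_replacement(pem_path):
        with open(pem_path, "rb") as f:
            cert = x509.load_pem_x509_certificate(f.read(), default_backend())
        uses_sha1 = isinstance(cert.signature_hash_algorithm, hashes.SHA1)
        return uses_sha1 and cert.not_valid_after > SUNSET

    if __name__ == "__main__":
        print(needs_replacement("server.pem"))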

This 2016/1/1 date was apparently discussed at a CAB Forum meeting six months ago; prior to that the “sunset date” for SHA1 was considered to be 2017/1/1.

Given Chrome represents such a large percentage of the browser ecosystem and they appear to be unwaveringly marching towards this new date, I think it’s fair to refer to this date as the “new sunset date”.

There have been lots of conversations about this topic from the perspective of a CA and that of a browser but not so much from a perspective of a certificate holder.

There are a few cases why you might have such a certificate:

  1. Your certificate was issued before the new sunset date was specified.
  2. When the new sunset date was specified your certificate authority did not update their system to restrict use of that algorithm to expire by that new date.
  3. Your certificate authority gave you the option of choosing which signature suite (and hash algorithm) and expiration dates to use and you chose SHA1.

Some might ask why CAs did not simply stop issuing certificates that utilize SHA1 based signatures altogether when Microsoft announced their goal to deprecate SHA1 by 2017. The answer to this is simple: there are a large number of XP machines out there (15% of the Internet and over 35% of browsers in China) and it’s unclear how many of them have Service Pack 3, which is necessary to support certificates with SHA2. There are also concerns about the number of mobile and embedded devices that also do not support SHA2.

So how big of a risk is the interoperability impact? It’s hard to say; some numbers I have seen suggest it is less than 1% of traffic, but honestly it doesn’t appear possible to measure the number of XP machines without SP3, and even if it were, that still wouldn’t take into consideration the devices that do not support SHA2, and we know such devices were shipping as recently as two years ago.

So that takes me to the main reason for this post: it’s my guess that the primary reason you have a certificate that will be affected by this change is that the CAs honestly did not realize Google was moving the sunset date forward and were adopting migration plans that they felt balanced interoperability, usability and security.

With that said, I believe Google sincerely feels this change is in the best interest of the Internet and that the user interface changes they are proposing are subtle enough that they won’t be noticed by most (see: A Large-Scale Field Study of Browser Security Warning Effectiveness [pdf]).

Unfortunately this leaves you, the server administrator, stuck somewhat in the middle. You will have to choose between giving up views and revenue from the clients that do not support SHA2, or having all of your users who use Chrome see a degraded user experience.

What will Chrome’s SHA1 early warning look like?

NOTE: Google has since revised its plan to enable a more gradual migration to SHA256; this post is no longer accurate.

For the last few weeks there has been an ongoing discussion on the Chromium security-dev mailing list on how Google intends to implement a user interface change to warn users that a SHA1 certificate is in use.

I won’t talk to the reasoning behind this change or to the current and future security properties of SHA1 in this post, but I thought some folks might be interested in what this might ultimately look like. I say might because right now there is only a mail thread and who knows how things will evolve and what the copy would be in such user interfaces.

With that said, the thread does describe the affordances they intend to use: when a site has a certificate where it or the corresponding certificate chain has a SHA1 based signature in it (excluding the root) and the certificate expires after 2016/1/1, the user interface may be “degraded” for these sessions.

At this time it seems the “red x” that is used for mixed content will be used; if so this will look something like this:

[screenshot 1]

For SHA1 certificates that expire after 2017/1/1, if the page contains active content such as JavaScript and CSS that is served over an SSL session with such a certificate, it will not be loaded unless the user explicitly chooses to approve its execution; this would look something like this:

[screenshot 2]

Again, for SHA1 certificates that expire after 2017/1/1, if the page contains passive content (such as images) that is served over an SSL session with such a certificate, it will not be loaded unless the user chooses to do so, and the lock will get a yellow arrow, which will look something like this:

[screenshot 3]

Which combination of these things one will see depends on the specific combination of conditions, but this should give you some idea of what these changes may look like.

Ryan

Capital One Venture – the card NOT to travel with

Those of you who know me know that I do a fair bit of travel; in the last year and a half I flew at least 250,000 miles through Asia and Europe for work.

As part of being a regular traveler I wanted a credit card with travel benefits. After doing some research I settled on the Venture Card from Capital One primarily because of its decent interest rate and its competitive points system. With that said I still cannot recommend this as a travel card.

Why you ask? Well there is more to a travel credit card than the interest rate and points. To explain let me tell you about my trip to Russia.

First, to be honest, I did not explicitly tell any of my cards that I was going to Russia. That said, when I know I am going to be a heavy traveler, like I was at this time of my life, I will notify any cards I intend to use while traveling (which I did in this case) that I will essentially be living on the road.

Though it is a good practice to notify your card companies every time you travel internationally, calling them every 3 weeks isn’t a reasonable thing to do — and in my defense, until this event it was never a problem.

The first part of the trip was in Belarus where almost no one took cards and I ended up paying for everything with cash — even Internet access. The second part of this trip we went to Moscow and the first time I tried to use the Venture card it was denied.

This is the same card I had previously used throughout the rest of Europe and Asia with no problem. Assuming it was an attempt to “protect” me from card fraud, I calmly called support, reaching what was apparently a Philippines call center, where I was told that my card had been flagged as stolen by someone in Russia.

I explained this was the first use of the card in Russia and the suspected fraud was me. The agent informed me that despite this fact in the name of my best interests she would be canceling my credit card.

I of course protested; I was after all in another country for another month and had planned to use the points I earned to cover some of the costs of the trip, and more importantly I had left my backup travel cards in Belarus. Without this card I was in essence dependent on the limited amount of cash I had left.

I explained my situation to the agent and was told not to worry, that she would have a card to me at my home within 24 hours. I explained again that I was in Russia and that sending a card to the States wouldn’t be of any use.

The agent then offered to mail me the card in Russia but couldn’t guarantee when it would arrive. I explained that this could take weeks — when I ship items via the fastest choice to Russia they typically get to the country within two days but don’t get delivered for three or more weeks. The agent responded that this was the best they could offer, but after some pushing I managed to get escalated to someone in the US, where I hoped I might get a better answer.

It turned out that the US office was closed at that time, but a few days later I did get a call back — unfortunately, though it was clear this office at least understood the situation (the agent in the Philippines office was very poorly trained), I was informed that since the other agent had already canceled my card there was nothing else they could do other than send me a replacement to my home in Seattle.

This is the core of why I wouldn’t recommend Capital One for a travel card — at least to an international traveler. When you’re traveling your credit card is your safety net; it is how you handle currency conversions, make sure you can feed yourself, have a place to stay and can handle the surprises you may encounter. More than the points, more than the interest rate, this is what a travel card is. American Express built its reputation on being that card, and when I have had issues in the past they have been there to help – Capital One on the other hand left me stranded.

Anyway, I was so dissatisfied with Capital One’s handling of this that when I got home I paid off the balance and did not activate the new card they sent.

Fast forward over 6 months later and I get an email saying they have charged me the renewal fee for this card that in my mind was closed. I was a little disgusted by them charging a renewal fee for an account they in essence took from me when I needed it most, but I was going to open a card anyway and decided to activate the card they had sent previously and pay the fee.

When I activated the card the automated system told me the card was ready for use but when I tried to use the card the first time it was denied. Frustrated I set the card aside until I had enough time to mess with their support again.

When I called to resolve this I was treated like someone who was avoiding paying a long-standing balance and not someone who was trying to resolve their mishandling of an issue, so I just canceled the card.

Long story short — a good travel card has to have good customer service; they have to be your partner and look out for you, and Capital One just doesn’t do that.

Though in my new role I don’t do much if any international travel I do a ton of domestic and have been using the Barclay Arrival card. I have had the occasion to talk to their customer service several times, each time they were professional and helpful. While I have not had a similar situation happen while using them as my primary travel card I suspect based on these experiences they would handle things differently.

Smart cards, PC/SC and Chrome

Smart cards have been around since 1974, and as a technology, while they have expanded their capabilities, they still work in very much the same way they did back then.

These cards expose a protocol represented in Application Protocol Data Units (APDUs); the devices themselves are typically connected to computers via smart card readers (either embedded or external) that communicate via PC/SC.

Shortly after PC/SC was defined, a class interface for USB PC/SC devices called CCID was defined; with devices that conform to this specification one does not need vendor-specific drivers to interact with the PC/SC device.
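
To make the APDU plumbing a bit more concrete, here is a minimal sketch that uses the pyscard library to send an ISO 7816-4 SELECT command over PC/SC from Python; the AID bytes are placeholders for whatever applet you actually want to talk to:

    # Sketch: send a SELECT (by AID) APDU to the first PC/SC reader found.
    # Assumes the pyscard package is installed; the AID below is illustrative only.
    from smartcard.System import readers

    EXAMPLE_AID = [0xA0, 0x00, 0x00, 0x03, 0x08]  # placeholder AID

    def select_applet(aid):
        reader_list = readers()
        if not reader_list:
            raise RuntimeError("no PC/SC readers found")
        connection = reader_list[0].createConnection()
        connection.connect()
        # ISO 7816-4 SELECT: CLA=00, INS=A4, P1=04 (select by AID), P2=00, Lc, data
        apdu = [0x00, 0xA4, 0x04, 0x00, len(aid)] + aid
        data, sw1, sw2 = connection.transmit(apdu)
        return data, (sw1, sw2)

    if __name__ == "__main__":
        print(select_applet(EXAMPLE_AID))

A status word of 90 00 in the response indicates the applet was selected successfully.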

Since Chrome 26 Google has supported an interface that allows plug-ins to interact with USB devices. While I have not looked at this interface in detail, I do know that the Google Gnubby (aka FIDO U2F) uses this interface to interact with their devices. I also understand that the U2F devices are simply JCOP cards with a Gnubby applet on them.

Based on the above it seems rational to believe a third party (aka someone other than Google) should also be able to create a Chrome plug-in (which is nothing more than JavaScript) that allows a web page to interact with smart cards.

This would, when paired with a reasonable card-edge that supports secp256k1, enable multi-signature Bitcoin transactions leveraging smart cards without the need for a “fat” client.

Smart cards, PIV, Bitcoin and secp256k1

I am thrilled to see multi-sig (P2SH) getting adopted across the Bitcoin industry; it has the potential to significantly reduce the risks involved with storing Bitcoin online. With that said it is still dependent on software keys, which can be trivially stolen via malware and other attack vectors.
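
As a rough illustration of what a 2-of-3 P2SH arrangement involves, the sketch below builds the redeem script and derives the corresponding mainnet P2SH address from three compressed public keys using only the Python standard library; the keys shown are placeholder bytes rather than real keys, and the ripemd160 hash depends on your local OpenSSL build providing it:

    # Sketch: build a 2-of-3 multisig redeem script and the matching P2SH address.
    # The "public keys" below are placeholder bytes, not real keys.
    import hashlib

    OP_2, OP_3, OP_CHECKMULTISIG = 0x52, 0x53, 0xAE
    B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def hash160(data):
        return hashlib.new("ripemd160", hashlib.sha256(data).digest()).digest()

    def base58check(version, payload):
        raw = bytes([version]) + payload
        raw += hashlib.sha256(hashlib.sha256(raw).digest()).digest()[:4]
        n, out = int.from_bytes(raw, "big"), ""
        while n:
            n, r = divmod(n, 58)
            out = B58[r] + out
        return "1" * (len(raw) - len(raw.lstrip(b"\x00"))) + out

    def p2sh_2_of_3(pubkeys):
        script = bytes([OP_2])
        for pk in pubkeys:  # each compressed key is pushed with a length byte
            script += bytes([len(pk)]) + pk
        script += bytes([OP_3, OP_CHECKMULTISIG])
        return script, base58check(0x05, hash160(script))  # 0x05 = mainnet P2SH

    if __name__ == "__main__":
        placeholder_keys = [bytes([0x02]) + bytes([i]) * 32 for i in (1, 2, 3)]
        redeem_script, address = p2sh_2_of_3(placeholder_keys)
        print(redeem_script.hex(), address)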

One way to address this risk is to move the keys off of the host and into an isolated computing environment like a smart card.

Hardware devices like the Trezor do this by creating a Bitcoin-specific computing environment, which has many benefits (like being able to enforce policy on the card and get trusted implementations of the whole Bitcoin stack), but as a result of this approach they also turn into single-use devices.

An alternate, or really complementary, approach would be to have a smart card (or USB token) that supports the same cryptography used in Bitcoin as well as other more commonly used algorithms.

The thing to understand about smart cards is that for the most part every one you see is a proprietary, non-interoperable mess. This is a function of the card industry attempting to fend off the race to the bottom by differentiating at the card protocol layer, which resulted in devices that are based on “standards” yet are totally non-interoperable.

Where they do “interoperate” it is because middleware has been written to mask these cross-vendor idiosyncrasies. The largest case where this has not happened is in the PIV card-edge, which was defined by the US government as their standard for logical and physical access control.

This card-edge explicitly supports only two ECC curves, ansip256r1 and ansip384r1. That said, the mechanism the caller uses to specify which curve to use is an algorithm identifier defined in SP800-78 (see table 6-2 for those algorithms); one could squat on unused IDs and have a card that also supports secp256k1. This means it is possible to extend a standard PIV applet to support storing and protecting Bitcoin keys as well.
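
To make the idea of squatting on an unused identifier concrete, here is a rough sketch of the GENERATE ASYMMETRIC KEY PAIR command from SP800-73 at the APDU level. The P-256 and P-384 identifiers come from SP800-78; the 0x50 value for secp256k1 is a hypothetical, unassigned identifier used purely for illustration, so treat that choice as an assumption:

    # Sketch: GENERATE ASYMMETRIC KEY PAIR APDU for a PIV-like applet.
    # 0x11 (ECC P-256) and 0x14 (ECC P-384) are SP800-78 algorithm IDs;
    # 0x50 for secp256k1 is HYPOTHETICAL and shown only to illustrate squatting
    # on an unused identifier in an extended applet.
    ALG_ECC_P256 = 0x11
    ALG_ECC_P384 = 0x14
    ALG_SECP256K1 = 0x50  # hypothetical

    def generate_keypair_apdu(key_ref, alg_id):
        # CLA=00, INS=47 (GENERATE ASYMMETRIC KEY PAIR), P1=00, P2=key reference;
        # data is the 'AC' control reference template with tag 0x80 = algorithm ID.
        data = [0xAC, 0x03, 0x80, 0x01, alg_id]
        return [0x00, 0x47, 0x00, key_ref, len(data)] + data

    if __name__ == "__main__":
        # 0x9A is the PIV Authentication key reference; its reuse here is illustrative.
        print(" ".join("%02X" % b for b in generate_keypair_apdu(0x9A, ALG_SECP256K1)))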

PIV has other limitations that make it not ideal for these scenarios; specifically, the default ACL set for the cards is such that users cannot create keys themselves. GoldKey, a smart card vendor whose product uses the PIV card-edge, works around this by adjusting those ACLs and embedding an administrative key within their “driver” that enables key generation to happen when their driver is used.

The net of all of this is that one could reasonably create a smart card that supports all of the rich capabilities that are available to users of a PIV device and also supports protecting Bitcoin wallet keys.