Monthly Archives: July 2025

When Automation Becomes Bureaucracy

How well-intentioned automation traps people in frustrating loops, and what we can do to stop it.

My wife is from Belarus. On one of my first visits there, I had my first real exposure to what extreme bureaucracy looked like.

Each time I visited a new city, if I stayed more than a certain number of days, I had to register with the police. The process could take an entire day and involved going to a bank to deposit money into the police branch’s account, then returning with a receipt.

One time, we tried to withdraw the remaining cash and close a bank account. We spent the entire day waiting in line after line, at one bank location after another. In the end, we gave up because the opportunity cost was greater than the amount of money we were trying to reclaim.

Need to pay for passport photos? You could not pay in cash, I assume due to fear of fraud and graft; you had to go to the bank, transfer money to the photo shop, and bring back a receipt to prove it.

What struck me was that I seemed to be the only one who found this painful. Everyone else accepted it as normal: endless lines, paperwork, and procedural steps that seemed arbitrary and counterproductive.

So why am I writing about this? This morning I was reflecting on recent experiences changing flights and helping my parents with their Comcast subscription. Over and over, I ran into automation that was supposed to make things easier but actually made things worse.

I tried to change a flight from London to Seattle on Delta. Since it was a codeshare with Virgin, the website couldn’t handle it. I called the support line and got pushed through a phone tree that aggressively tried to send me back to the website. The site still didn’t work. I called back and asked to speak with someone and was routed to a virtual assistant that did nothing but run keyword searches on the help site. Eventually, I got connected to a lower-tier agent who told me the $350 fare difference I saw online wasn’t correct and that it would be $3,000. I pushed back until I reached someone who could actually help. They made the change. The entire process took nearly two hours.

Then there was Comcast. My aging parents have been living with me, and I’ve been helping with their bills. I noticed their TV and internet service had crept up to $350 per month. It was the result of expired deals, supposed discounts that added phone lines they never used, and a long list of tactics designed to get people to pay more for services they didn’t want. Fixing it took well over an hour, and once again I had to fight through automation before talking to someone who would do anything.

Not all automation becomes bureaucracy. My USAA mobile app lets me deposit checks instantly, transfer money in two taps, and reach a human agent with a button press when something goes wrong. It avoids the bureaucracy trap because it was designed around what I actually need to do, with seamless escalation when the automation isn’t enough.

There’s a saying that comes to mind: don’t attribute to malice what can be explained by ignorance. The people who built these systems were probably trying to help. But they were judged by what they shipped and by the time saved on support calls, not by whether their systems actually improved the user experience.

So what does this mean? When we build systems like these, we need to start by deciding how we will define and measure success. That question should come at the beginning, not the end. It needs to shape how the system is designed, not just how it is reported.

Too often, we optimize for metrics that are easy to measure, like time on call or tickets closed, rather than the experience we’re actually trying to create. Instead, consider measuring success by how empowered customers feel, not how fast they hang up. Once the system is live, we have to come back and test our assumptions. That means checking whether it actually helps users, not just whether it saves time or reduces support volume. One way to do this is to regularly review customer satisfaction and compare it to the experience we intended to create. If it isn’t working, we need to change how the system behaves and what we measure.

This is especially important as we start building with AI. These systems can develop unexpected behaviors. Take Air Canada’s chatbot, which confidently told a customer he could buy a full-price ticket to his grandmother’s funeral and apply for a bereavement discount within 90 days after travel. This was completely wrong. When the customer tried to get the promised refund, the airline refused and even argued the chatbot was a “separate legal entity responsible for its own actions.” Unlike a phone tree that just frustrates you, the AI gave authoritative-sounding but fabricated policy information. The airline probably measured success by how many conversations the AI handled without escalating to humans, not realizing that customers who got wrong answers often just give up rather than keep fighting.

What we choose to measure and how fast we respond when something goes wrong matters more than ever. Once these systems are deployed, they don’t just carry our assumptions forward. They reinforce them. They make it harder to see when the original design was flawed, because the automation itself becomes the norm.

The goal should always be to reduce friction and make life easier for real people, not just to make things more efficient for the teams who built the system. The best systems I’ve used made it easy to talk to a human when I needed to, and didn’t treat automation as a wall to hide behind.

If we lose sight of that, we risk recreating the same kind of bureaucracy I saw years ago, only now it will be faster, more rigid, and much harder to argue with.

How a $135 Billion Fraud Bootstrapped America’s Digital Identity System

I was preparing to launch mobile driver’s license (mDL) authentication in one of our products when I realized I finally had to deal with the patchwork of state support. California? Full program, TSA-approved, Apple Wallet integration. Texas? Absolute silence. Washington state, practically ground zero for tech, somehow has nothing.

At a glance, the coverage made no sense, until I dug a little deeper. It turns out we accidentally ran the largest identity verification stress test in history, and only some states bothered to learn from it.

Between 2020 and 2023, fraudsters systematically looted $100-135 billion from unemployment systems using the most basic identity theft techniques. The attack vectors were embarrassingly simple: bulk-purchased stolen SSNs from dark web markets, automated claim filing, and trivial variations on a single email address that fooled state systems into treating one inbox as many different claimants.
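One concrete defense against that last trick is to normalize addresses before using them as deduplication keys. Here is a minimal sketch in Python; the Gmail dot-and-plus handling is a common illustrative example, not a complete rule set, and a real system would need provider-specific rules and plenty of other signals besides the email address.

def normalize_email(address: str) -> str:
    # Normalize common alias variations so the same inbox maps to one key.
    # Provider rules differ; the Gmail handling below is an assumption for
    # illustration rather than a universal rule.
    local, _, domain = address.strip().lower().partition("@")
    # Drop "+tag" suffixes, which most providers ignore for delivery.
    local = local.split("+", 1)[0]
    # Gmail also ignores dots in the local part.
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")
    return f"{local}@{domain}"

claims = ["John.Doe+ui@gmail.com", "johndoe@gmail.com", "JOHNDOE@gmail.com"]
# All three collapse to the same key, so they can be flagged as one claimant.
assert len({normalize_email(a) for a in claims}) == 1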

The Washington Employment Security Department was so overwhelmed that they had computers auto-approve claims without human review. Result? They paid a claim for a 70-year-old TV station being “temporarily closed” while it was broadcasting live.

California got hit for $20-32.6 billion. Washington lost $550-650 million. The fraud was so widespread that one Nigerian official, Abidemi Rufai, stole $350,763 from Washington alone using identities from 20,000+ Americans across 18 states.

What nobody anticipated was that this massive failure would become the forcing function for digital identity infrastructure. Here’s the thing about government security: capability doesn’t drive adoption, pain does. The Real ID Act passed in 2005; twenty years later, we’re still rolling it out. But lose a few billion to Nigerian fraud rings? Suddenly digital identity becomes a legislative priority.

The correlation is stark:

State         Fraud Losses          mDL Status
California    $20-32.6B             Comprehensive program, Apple/Google integration
Washington    $550-650M             Nothing (bill stalled)
Georgia       $30M+ prosecuted      Robust program, launched 2023
Texas         Under $1B estimated   No program
New York      Around $1-2B          Launched 2025

States that got burned built defenses. States that didn’t, didn’t. This isn’t about technical sophistication. Texas has plenty of that. It’s about the political will created by public humiliation. When your state pays unemployment benefits to death row inmates, legacy approaches to remote identity verification stop being defensible.

Washington is the fascinating outlier. Despite losing more than half a billion dollars and serving as the primary target for international fraud rings, they still have no mDL program. The bill passed the Senate but stalled in the House. This tells us something important: crisis exposure alone isn’t sufficient. You need both the pain and the institutional machinery to respond.

The timeline reveals the classic crisis-response pattern: fraud peaked in 2020-2022, states scrambled to respond in 2023-2024, and adoption momentum stalled by mid-2024 as the memory of the crisis faded. Then came an uptick in early 2025: Apple and Google entering the game.

In December 2024, Google announced its intent to support web-based digital ID verification. Apple followed with Safari integration in early 2025. By June, Apple’s iOS 26 supported digital IDs in nine states with passport integration. This shifts adoption pressure from crisis-driven (security necessity) to market-driven (user expectation).

When ~30% of Americans live in states with mDL programs and Apple/Google start rolling out wallet integration this year, that creates a different kind of political pressure. Apple Pay wasn’t crisis-driven, but became ubiquitous because users expected it to work everywhere. Digital identity in wallets will create similar pressure. States could rationalize ignoring mDL when it was ‘just’ about fraud prevention. Harder to ignore when constituents start asking why they can’t verify their identity online like people in neighboring states.

We’re about to find out whether market forces can substitute for crisis pressure in driving government innovation. Two scenarios: either consumer expectations create sustainable political pressure and laggard states respond to constituent demand, or only crisis-motivated states benefit from Apple/Google integration, creating permanent digital divides.

From a risk perspective, this patchwork creates interesting attack surfaces. Identity verification systems are only as strong as their weakest links. If attackers can forum-shop between states with different verification standards, the whole federation is vulnerable. The unemployment fraud taught us that systems fail catastrophically when overwhelmed.

Digital identity systems face similar scalability challenges. They work great under normal load, but can fail spectacularly during a crisis. The states building mDL infrastructure now are essentially hardening their systems against the next attack.
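To make the weakest-link point concrete, here is a minimal sketch in Python, using hypothetical issuer names and assurance scores. If a relying party accepts whatever each state offers, its effective strength is the minimum across all accepted issuers, and forum-shopping attackers will find that minimum, so it has to enforce a floor explicitly.

# Hypothetical assurance scores for illustration, not real ratings.
ISSUER_ASSURANCE = {
    "CA-mDL": 3,         # in-person proofing plus cryptographic presentation
    "GA-mDL": 3,
    "NY-mDL": 2,
    "XX-legacy-kba": 1,  # knowledge-based questions only; the weakest link
}

MINIMUM_ASSURANCE = 2

def effective_assurance(accepted_issuers) -> int:
    # The federation is only as strong as its weakest accepted issuer.
    return min(ISSUER_ASSURANCE[i] for i in accepted_issuers)

def accept_credential(issuer: str) -> bool:
    # Reject issuers below the floor so attackers can't forum-shop.
    return ISSUER_ASSURANCE.get(issuer, 0) >= MINIMUM_ASSURANCE

print(effective_assurance(ISSUER_ASSURANCE))  # 1: dragged down by the weakest issuer
print(accept_credential("XX-legacy-kba"))     # False: below the floor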

If you’re building anything that depends on identity verification, this matters. The current patchwork won’t last; it’s either going to consolidate around comprehensive coverage or fragment into permanent digital divides. For near-term planning, assume market pressure wins. Apple and Google’s wallet integration creates too much user expectation for politicians to ignore long-term. But build for the current reality of inconsistent state coverage.
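As one concrete way to build for that current reality, here is a minimal sketch with a made-up coverage map and stand-in verifier functions: prefer an mDL presentation where a state program exists, and fall back to a document-scan flow everywhere else. None of the names here come from a real SDK; they are placeholders for whatever verification stack you happen to use.

# Sketch only: the coverage map and helper functions are placeholders.
MDL_STATES = {"CA", "AZ", "CO", "GA", "MD"}  # hypothetical coverage map

def verify_with_mdl(user_id: str) -> str:
    # Stand-in for a real mDL presentation/verification flow.
    return f"{user_id}: verified via mDL"

def verify_with_document_scan(user_id: str) -> str:
    # Stand-in for a document-plus-selfie fallback flow.
    return f"{user_id}: verified via document scan"

def verify_identity(user_id: str, user_state: str) -> str:
    # Prefer mDL where a state program exists; fall back everywhere else.
    if user_state in MDL_STATES:
        return verify_with_mdl(user_id)
    return verify_with_document_scan(user_id)

print(verify_identity("alice", "CA"))  # mDL path
print(verify_identity("bob", "TX"))    # fallback path, no program today

The point of structuring it this way is that the fallback stays a first-class path: as the patchwork shifts, you update the coverage map, not the architecture.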

For longer-term architecture, the states with robust mDL programs are effectively beta-testing the future of government digital services. Watch how they handle edge cases, privacy concerns, and technical integration challenges.

We accidentally stress-tested American federalism through the largest fraud in history. Only some states learned from the experience. Now we’re running a second experiment: can consumer expectations accomplish what security crises couldn’t?

There’s also a third possibility. These programs could just fail. Low adoption rates, technical problems, privacy backlash, or simple bureaucratic incompetence could kill the whole thing. Government tech projects have a stellar track record of ambitious launches followed by quiet abandonment.

Back to my mDL integration project: I’m designing for the consumer pressure scenario, but building for the current reality. Whether this becomes standardized infrastructure or just another failed government tech initiative, we still need identity verification that works today.

The criminals who looted unemployment systems probably never intended to bootstrap America’s digital identity infrastructure. Whether they actually succeeded remains to be seen.