Monthly Archives: December 2025

The Impossible Equation

From the Eurodollar to the Splinternet: How the Race to Regulate the World Broke It

“History does not repeat itself, but it often rhymes.”

“You cannot solve an exponential complexity problem with linear bureaucracy.”

“Power tends to corrupt, and absolute power corrupts absolutely.”

I grew up in a house where reading was not optional. Being dyslexic, dysgraphic, and dysnumeric made it painful, but my parents had a simple rule: read, explain, defend. No written reports. Just me, standing there, trying to make sense of something complicated. One of the books they handed me was Plato’s Republic. What stayed with me was not the philosophy. It was the realization that people have been struggling to govern complexity for thousands of years. The names of the problems change, but the core tension between power, understanding, and human nature does not.

That early lesson was not about Plato. It was about learning how to think. And it is why the unraveling of the global internet feels so familiar. We built something wildly complex, assumed it would stay coherent, and then stopped paying attention to whether anyone still understood how it worked.

For a long time, growth hid the cracks. It looked like the system might harmonize on its own.

How it started

There was a stretch from the early 2000s to the mid-2010s when the internet felt weightless. Borders mattered less. Companies operated everywhere at once. We acted as if we had finally built a global commons.

But the system only worked because the cracks had not widened yet. Growth covered sins. Neutrality was taken for granted. Enforcement was sparse. And most governments did not yet understand the power they were holding.

Once they did, everything changed.

Where the cracks first appeared

If you want to understand the present, imagine a marketplace in Lyon around the year 600.

A Roman trader sells a diseased cow to a Gothic warrior. A dispute erupts. Which rules apply? Roman law? Gothic law? Salic law? The merchant across the stall follows a different code entirely.

Nothing works because everything overlaps.

That world did not collapse from stupidity. It collapsed because complexity made ordinary life too brittle. People retreated into smaller circles with clearer rules.

Today, a single smartphone tap in Brazil may be governed by US law because the data touched a server in Virginia, EU law because a European might use the service, Brazilian law because the user is in Brazil, and sometimes Chinese or Indian law depending on where the packets travel.

One action. Four sovereigns. Zero clarity.

When history repeated itself

Europe solved this once already. In 1648, after decades of war, it settled on a blunt rule: your authority ends at your border.

It was not wise. It was not elegant. It was enough.

Trade flourished. Science accelerated. Industry emerged. A patchwork of boundaries replaced the chaos of overlapping claims.

The internet quietly tossed that lesson aside. When data crossed your border, you assumed your rules crossed with it. If a foreign company touched your citizens, you claimed jurisdiction over it. Everyone became a king claiming the same territory.

This worked until it did not.

Power learns to travel

For centuries, strong states found ways to project authority outward. The tactics changed, but the impulse remained. Merchants judged under their own laws abroad. Empires exporting their courts. The United States using market access to enforce its rules. The dollar turning sanctions into global tools. GDPR and the CLOUD Act pulling data into competing gravitational fields.

Eventually the boomerang returned. China, Russia, India, Brazil, Nigeria, Turkey, and others built their own outward-facing systems.

Everyone learned the trick. Everyone decided to use it.

We even see the revival of cultural jurisdiction. Putin claims authority wherever Russian speakers live. Western regulators now claim authority wherever their citizens’ data flows. Jurisdiction is no longer about where you are. It is about who you are and what language you speak. It is a formula for endless conflict.

The hidden glue that held globalization together

Globalization did not succeed because nations resolved their differences. It succeeded because they tolerated spaces where the rules did not apply cleanly.

Eurodollar markets. The early internet. Loose data practices. Informal restraint.

These buffers allowed incompatible systems to trade without resolving contradictions. When governments realized they could weaponize cloud providers, app stores, and platforms, the restraint vanished. The buffers collapsed. The contradictions rushed in.

The quiet expansion of authority

Governments rarely ask for power directly. They cite terrorism, child protection, organized crime, money laundering. The public nods. The tools are built.

And the uses expand.

A system designed to track extremists becomes a system used for tax compliance. A privacy rule becomes a lever for geopolitical influence. A regulation meant to protect users becomes a tool to pressure foreign companies.

The shift also targets citizens. Under laws like the UK Online Safety Act, platforms must scan for harmful content, while older public order laws are used to arrest individuals for what they write. The platform becomes the informant. The citizen becomes the suspect.

This ignores a simple reality. A forum is not a corporate broadcast. It is an aggregate conversation. When you treat a forum like a publication, you do not just fine a company. You criminalize the people using it.

No one announces the shift. It simply arrives.

The traveler’s trap

This expansion destroys the concept of safe passage. In the old world, if I wrote a pamphlet in Ohio, I was subject to Ohio law. If I traveled to Germany, I followed German law while in Germany.

The internet erases that distinction. Regulators now argue that if my post in Ohio is visible in Germany, it is subject to German law.

Where does that end? We see visitors to Turkey detained for content that offends local authorities. Tourists in Dubai face jail time for reviews written at home. If I criticize a monarch in an American forum, can I be arrested during a layover in the UAE years later?

If jurisdiction follows the data, every traveler walks through a minefield of laws they never consented to and cannot vote on.

Regulatory colonialism

Europe did not win the platform wars, but it mastered administration. GDPR, the DMA, the DSA, and the AI Act form a regulatory architecture that shapes global behavior by raising compliance costs.

There is an economic lie buried here. Regulators claim they are policing Big Tech, not individuals. But if you fine a company for carrying my speech, you are placing a tariff on my words. It is protectionism masquerading as safety. You are taxing the import of ideas you cannot compete with.

To be clear, not all of this is wrong. The United States needs a federal privacy law. GDPR got the big picture right: data rights are human rights. But the implementation covered the open web in the digital graffiti of cookie banners. It is a global pixel tax that wastes millions of hours while solving nothing.

The problem is not the desire to regulate. The problem is the arrogance of applying your local preferences—good, bad, or merely annoying—to the entire planet without consent.

We would never allow a foreign court to cut the phone line of a citizen in Ohio because their conversation violated a speech rule in Paris. Yet we accept that logic when the conversation happens on a server.

True governance requires consent. A mutual treaty is legitimate. A company operating on your soil is legitimate. But when a regulator bypasses a foreign government to police a foreign citizen directly, it breaks the compact between a citizen and their own state.

It comes down to standing. If my own government overreaches, I have recourse. If a foreign regulator erases my content, I have no voice and no remedy. That is not law. That is subjugation.

When politics becomes math

Up to this point the problem looks political. Now it becomes mathematical.

If only a few jurisdictions make rules, contradictions are rare. If dozens do, contradictions are certain. The number of potential conflicts rises faster than any human institution can track.
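The growth behind that claim is combinatorial: with n jurisdictions each asserting global reach, the number of pairwise rule interactions is n(n − 1)/2. A few lines make it concrete; the jurisdiction counts below are illustrative, not a tally of real regulatory regimes.

```python
# Pairwise rule interactions among n jurisdictions that each assert
# global reach. Counts are illustrative, not real regulatory data.

def pairwise_conflicts(n: int) -> int:
    return n * (n - 1) // 2

for n in (2, 10, 50, 195):
    print(n, pairwise_conflicts(n))  # 1, 45, 1225, and 18915 potential pairs
```

Two rulebooks can collide in only one way. One hundred ninety-five can collide in nearly nineteen thousand.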

You get impossible requirements where one state demands disclosure and another forbids it.

No optimization fixes a logical impossibility. Not with lawyers. Not with AI.

This also creates a global heckler’s veto. If 195 countries all enforce their local laws globally, the cost of compliance destroys the platform in its own home market. Foreign censorship does not just silence me abroad. It destroys the tools I use at home.

If the UK wants to weaken encryption for its own citizens, that is its choice. But it cannot demand that a global platform weaken encryption for everyone else.

When the cost of compliance becomes an existential threat, the only option is to leave.

Google left China. Meta and Apple withheld advanced AI models from Europe. Apple went further, threatening in 2023 to pull iMessage from the UK entirely and then, in 2025, disabling Advanced Data Protection for British users rather than break encryption.

It is no longer a negotiation tactic. It is a strategy.

This is how the Splinternet arrives. As Hemingway wrote about bankruptcy, it happens two ways: “Gradually, then suddenly.”

Rules that refuse to settle

Some laws require removal of harmful content in hours. But the definitions shift constantly. A system cannot stabilize if the rules never settle.

Platforms chase the strictest interpretation of the broadest rule from the most aggressive regulator. That is not governance. It is noise.

A world dividing into stacks

The internet is not collapsing. It is dividing into spheres. A Western stack. A Chinese stack. A European regulatory arc. An Indian sphere rising quickly.

They will touch at the edges but will not integrate. Companies will build parallel products. Users will move between digital worlds the way people in Belarus once carried two SIM cards because no single system worked everywhere.

This leads to hard realities. China will have a Chinese internet. North Korea will have a hermit intranet. Western observers may see rights violations. But in a sovereign world, the ultimate check on digital power is the physical right to leave.

The moral line is not whether a firewall exists. It is whether the citizen can walk away from behind it.

The Eurodollar paradox

I do not welcome this fracture. I spent a career building systems meant to bridge these gaps, arguing that a unified network is more resilient than a divided one. The Splinternet is fragile. It is inefficient. It is a retreat.

But we must acknowledge what held the old world together.

It was not global government. It was interoperability without permission.

The Eurodollar was the archetype. Dollars held in banks outside the United States, beyond direct regulation. Messy. Uncomfortable. Essential. It kept the global economy moving.

The early internet played the same role. A neutral zone where data could flow even when nations disagreed.

We are dismantling that neutral zone. We are replacing interoperability without permission with compliance by permission.

We may gain sovereignty. But we are destroying the mechanism that allowed a divided world to function as one.

The GRANITE shift

There is one final signal of how far the pendulum has swung. Jurisdictions like Wyoming have begun experimenting with laws such as the GRANITE Act, which create penalties for complying with certain foreign mandates. It is a poison pill strategy. If adopted widely, it would make it illegal for a company to obey another country’s extraterritorial demands.

The meaning is clear. The era of a single global ruleset is ending. Regions are not just drifting apart. They are beginning to defend the separation.

The conclusion most people avoid

We did not lose a shared internet because of malice. We lost it because the assumptions behind it stopped being true. The system became too interconnected for local rules to govern and too political for global rules to be accepted.

What comes next will not be universal or seamless or even fair.

But it will be stable.

Sometimes the only way to solve an impossible equation is to stop pretending there is a single answer.

Attestation: What It Really Proves and Why Everyone Is About to Care

Attestation has become one of the most important yet misunderstood concepts in modern security. It now shows up in hardware tokens, mobile devices, cloud HSMs, TPMs, confidential computing platforms, and operating systems. Regulations and trust frameworks are beginning to depend on it. At the same time people talk about attestation as if it has a single, universally understood meaning. It does not.

Attestation is not a guarantee. It is a signed assertion that provides evidence about something. What that evidence means depends entirely on the system that produced it, the protection boundary of the key that signed it, the verifier’s understanding of what the attestation asserts, and the verifier’s faith in the guarantees provided by the attestation mechanism itself.

To understand where security is heading, you need to understand what attestation can prove, what it cannot prove, and why it is becoming essential in a world where the machines running our code are no longer under our control.

Claims, Attestations, and the Strength of Belief

A claim is something a system says about itself. There is no protection behind it and no expectation of truth. A user agent string is a perfect example. It might say it is an iPhone, an Android device, or Windows. Anyone can forge it. It is just metadata. At best it lets you guess what security properties the device might have, but a guess is not evidence.

Here is a typical user agent string:

Mozilla/5.0 (iPhone; CPU iPhone OS 15_2 like Mac OS X) AppleWebKit/605.1.15 Mobile/15E148 Safari/605.1.15

If you break it apart, it claims to be an iPhone, running iOS, using Safari, and supporting specific web engines. None of this is verified. It is only a claim.
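Because a user agent is just a header the client chooses, nothing stops any program from sending that exact string. A minimal sketch using Python’s standard library (the URL is a placeholder, and no request is actually sent):

```python
# A user agent is a self-asserted claim: any client can set any value.
# The URL below is a placeholder; the request is constructed, not sent.
import urllib.request

FORGED_UA = ("Mozilla/5.0 (iPhone; CPU iPhone OS 15_2 like Mac OS X) "
             "AppleWebKit/605.1.15 Mobile/15E148 Safari/605.1.15")

req = urllib.request.Request("https://example.com/",
                             headers={"User-Agent": FORGED_UA})

# Nothing on the wire distinguishes this request from one a real iPhone sends.
print(req.get_header("User-agent"))
```

A server that treats this header as evidence of the device’s security properties is trusting a guess.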

Attestation is different. Attestation is a signed statement produced by a system with a defined protection boundary. That boundary might be hardware, a secure element, a trusted execution environment, a Secure Enclave, a hypervisor-isolated domain, or even an operating system component rooted in hardware measurements but not itself an isolated security boundary. Attestation does not make a statement true, but it provides a basis to believe it because the signing key is protected in a way the verifier can reason about.

Attestation is evidence. The strength of that evidence depends on the strength of the protection boundary and on the verifier’s understanding of what the attestation actually asserts.
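The structure of that evidence can be sketched in a few lines. This is a toy model, not a real protocol: a real attester signs with an asymmetric key held inside a hardware boundary, while HMAC with a shared key stands in here so the example is self-contained.

```python
# Toy model of attestation as signed evidence. BOUNDARY_KEY stands in for
# a key that, in real hardware, never leaves its protection boundary.
import hashlib
import hmac
import json

BOUNDARY_KEY = b"key-that-never-leaves-the-boundary"  # hypothetical

def attest(statement: dict) -> dict:
    # The attester signs a statement about itself or its keys.
    payload = json.dumps(statement, sort_keys=True).encode()
    sig = hmac.new(BOUNDARY_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "signature": sig}

def verify(evidence: dict) -> bool:
    # The verifier checks the signature, then decides what the claim means.
    expected = hmac.new(BOUNDARY_KEY, evidence["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, evidence["signature"])

evidence = attest({"key_origin": "secure_element", "exportable": False})
print(verify(evidence))   # True: untampered evidence verifies

evidence["payload"] = evidence["payload"].replace("false", "true")
print(verify(evidence))   # False: a modified statement fails verification
```

The signature does not make the statement true; it only ties the statement to whatever protected the signing key.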

Why Attestation Became Necessary

When I worked at Microsoft we used to repeat a simple rule about computer security. If an attacker has access to your computer it is no longer your computer. That rule made sense when software ran on machines we owned and controlled. You knew who had access. You knew who set the policies. You could walk over and inspect the hardware yourself.

That world disappeared.

A classic illustration of this problem is the evil maid attack on laptops. If a device is left unattended an attacker with physical access can modify the boot process, install malicious firmware, or capture secrets without leaving obvious traces. Once that happens the laptop may look like your computer but it is no longer your computer.

This loss of control is not limited to physical attacks. It foreshadowed what came next in computing. First workloads moved into shared data centers. Virtualization blurred the idea of a single physical machine. Cloud computing erased it entirely. Today your software runs on globally distributed infrastructure owned by vendors you do not know, in data centers you will never see, under policies you cannot dictate.

The old trust model depended on physical and administrative control. Those assumptions no longer hold. The modern corollary is clear. If your code is running on someone else’s computer you need evidence that it is behaving the way you expect.

Vendor promises are claims. Documentation is a claim. Marketing is a claim. None of these are evidence. To make correct security decisions in this environment you need verifiable information produced by the platform itself. That is the role attestation plays. The standards community recognized this need and began defining shared models for describing and evaluating attestation evidence, most notably through the IETF RATS architecture.

The IETF RATS View of Attestation

The IETF formalized the attestation landscape through the RATS architecture. It defines three roles. The attester produces signed evidence about itself or about the keys it generates. The verifier checks the evidence and interprets its meaning. The relying party makes a decision based on the verifier’s result.

This separation matters because it reinforces that attestation is not the decision itself. It is the input to the decision, and different attesters produce different types of evidence.
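The three roles can be sketched as plain functions. The evidence fields and the policy below are invented for illustration; they are not taken from the RATS documents.

```python
# The three RATS roles as functions. Evidence format and policy are
# illustrative, and signing is elided to keep the sketch short.

def attester() -> dict:
    # Produces evidence about itself (in reality, signed evidence).
    return {"boot_state": "verified", "patch_level": "2025-12"}

def verifier(evidence: dict) -> dict:
    # Appraises evidence against policy and produces an attestation result.
    ok = evidence.get("boot_state") == "verified"
    return {"trustworthy": ok, "evidence": evidence}

def relying_party(result: dict) -> str:
    # Acts on the verifier's result, not on the raw evidence.
    return "grant access" if result["trustworthy"] else "deny access"

decision = relying_party(verifier(attester()))
print(decision)  # grant access
```

Note that the relying party never touches the raw evidence; it consumes the verifier’s judgment, which is exactly the separation the architecture enforces.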

Two Families of Attestation

Attestation appears in many forms, but in practice it falls into two broad families.

One family answers where a key came from and whether it is protected by an appropriate security boundary. The other answers what code is running and whether it is running in an environment that matches expected security policies. They both produce signed evidence but they measure and assert different properties.

Key Management Attestation: Provenance and Protection

YubiKey PIV Attestation

YubiKeys provide a clear example of key management attestation. When you create a key in a PIV slot the device generates an attestation certificate describing that key.

The trust structure behind this is simple. Each YubiKey contains a root attestation certificate that serves as the trust anchor. Beneath that root is a device-specific issuing CA certificate whose private key lives inside the secure element and cannot be extracted. When a verifier asks the device to attest a slot the issuing CA signs a brand new attestation certificate for that session. The public key in the certificate is always the same if the underlying slot key has not changed, but the certificate itself is newly generated each time with a different serial number and signature.

This design allows verifiers to confirm that the key was generated on the device while keeping the blast radius small. If one token is compromised only that device is affected.
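The shape of that chain can be modeled structurally. This sketch uses invented dictionaries and HMAC stand-ins; real devices issue X.509 certificates signed with asymmetric keys in the secure element.

```python
# Structural sketch of the YubiKey PIV attestation chain. Data structures,
# key names, and the HMAC "signatures" are all stand-ins for illustration.
import hashlib
import hmac
import itertools
import json

def sign(key: bytes, body: dict) -> dict:
    payload = json.dumps(body, sort_keys=True).encode()
    return {**body, "sig": hmac.new(key, payload, hashlib.sha256).hexdigest()}

ROOT_KEY = b"vendor-root"        # trust anchor (hypothetical)
DEVICE_CA_KEY = b"device-1234"   # lives inside one token, cannot be extracted

device_ca_cert = sign(ROOT_KEY, {"subject": "device-1234-ca"})

serials = itertools.count(1)
SLOT_PUBLIC_KEY = "slot-9a-public-key"  # stable while the slot key is unchanged

def attest_slot() -> dict:
    # A brand-new certificate each session: fresh serial, same subject key.
    return sign(DEVICE_CA_KEY,
                {"serial": next(serials), "slot_key": SLOT_PUBLIC_KEY})

a, b = attest_slot(), attest_slot()
print(a["slot_key"] == b["slot_key"])  # True: same underlying slot key
print(a["serial"] != b["serial"])      # True: each attestation is new
```

Because only the per-device CA key signs attestations, compromising one token exposes one chain, not the fleet.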

Cloud HSMs and the Marvell Ecosystem

Cloud HSMs scale this idea to entire services. They produce signed statements asserting that keys were generated inside an HSM, protected under specific roots, bound to non-exportability rules, and conforming to certification regimes. Many cloud HSMs use Marvell hardware: AWS CloudHSM and Google Cloud HSM share that silicon base, but their attestation formats differ because they use different firmware and integration layers. Other commercial and open HSMs implement attestation as well, and many vendors provide their own attestation formats and trust chains. The Marvell-based examples are used here simply because the inconsistencies are illustrative, not because they are the only devices that support attestation.

This inconsistency creates a real challenge for anyone who needs to interpret attestation evidence reliably. Even when the underlying hardware is the same the attestation structures are not. To make this practical to work with we maintain an open source library that currently decodes, validates, and normalizes attestation evidence from YubiKeys and Marvell based HSMs, and is designed to support additional attestation mechanisms over time. Normalization matters because if we want attestation to be widely adopted we cannot expect every verifier or relying party to understand every attestation format. Real systems often encounter many different kinds of attestation evidence from many sources, and a common normalization layer is essential to make verification scalable.

https://github.com/PeculiarVentures/attestation
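The idea behind normalization is simple even if the formats are not. The toy example below maps two invented vendor formats into one common shape; the field names are illustrative and do not mirror the library’s actual schema.

```python
# Toy normalization layer: two invented vendor evidence formats mapped
# into one common schema a verifier can reason over uniformly.

def normalize_vendor_a(ev: dict) -> dict:
    # Hypothetical format A: integer flags.
    return {"key_in_hardware": ev["hw"] == 1,
            "extractable": ev["exp"] == 1}

def normalize_vendor_b(ev: dict) -> dict:
    # Hypothetical format B: string origin plus a boolean policy bit.
    return {"key_in_hardware": ev["origin"] == "hsm",
            "extractable": not ev["non_exportable"]}

NORMALIZERS = {"vendor_a": normalize_vendor_a, "vendor_b": normalize_vendor_b}

def normalize(kind: str, ev: dict) -> dict:
    return NORMALIZERS[kind](ev)

# Different sources, one schema for the relying party to consume.
print(normalize("vendor_a", {"hw": 1, "exp": 0}))
print(normalize("vendor_b", {"origin": "hsm", "non_exportable": True}))
```

Both calls yield the same normalized answer, which is the point: the relying party should not have to care which vendor produced the evidence.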

Hardware alone does not define the attestation model. The actual evidence produced by the device does.

Mobile Key Attestation: Android and iOS

Mobile devices are the largest deployment of secure hardware anywhere. Their attestation mechanisms reflect years of lessons about device identity, OS integrity, and tamper resistance.

Android Keymaster and StrongBox

Android attestation provides information about the secure element or TEE, OS version, patch level, verified boot state, device identity, downgrade protection, and key properties. It anchors keys to both hardware and system state. This attestation is used for payments, enterprise identity, FIDO authentication, and fraud reduction.

Apple Secure Enclave Attestation

Apple takes a similar approach using a different trust chain. Secure Enclave attestation asserts device identity, OS trust chain, enclave identity, and key provenance. It supports Apple Pay, iCloud Keychain, MDM enrollment, and per-app cryptographic isolation.

Confidential Computing Attestation: Proving Execution Integrity

Confidential computing attestation solves a different problem. Instead of proving where a key came from, it proves what code is running and whether it is running in an environment that meets expected security constraints.

Intel SGX provides enclave reports that describe enclave measurements. AMD SEV-SNP provides VM measurement reports. AWS Nitro Enclaves use signed Nitro documents. Google Confidential VMs combine SEV-SNP with Google’s verification policies.

This evidence asserts which measurements the hardware recorded, whether memory is isolated, and whether the platform is genuine.
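The core policy check behind all of these is measurement appraisal: comparing what the hardware recorded against what the verifier expects. The sketch below models only that comparison; real SGX, SEV-SNP, and Nitro reports are signed binary structures whose signatures must be verified first.

```python
# Sketch of measurement appraisal. The report format and build names are
# invented; only the allowlist comparison is being illustrated.
import hashlib

EXPECTED_MEASUREMENTS = {
    hashlib.sha256(b"enclave-build-v1.4.2").hexdigest(),  # known-good build
}

def appraise(report: dict) -> bool:
    # Accept only environments whose recorded code measurement is expected.
    return report.get("measurement") in EXPECTED_MEASUREMENTS

good = {"measurement": hashlib.sha256(b"enclave-build-v1.4.2").hexdigest()}
bad = {"measurement": hashlib.sha256(b"tampered-build").hexdigest()}
print(appraise(good), appraise(bad))  # True False
```

If the measurement does not match a known-good build, nothing else in the report matters.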

Why the Distinction Matters

Key management attestation cannot answer questions about code execution. Confidential computing attestation cannot answer questions about where keys were created. The evidence is different, the claims are different, and the trust chains are different.

If you do not understand which form of attestation you are dealing with you cannot interpret its meaning correctly.

Regulatory and Policy Pressure

Attestation is becoming important because the bar for trust has been raised. The clearest example is the CA/Browser Forum Code Signing Baseline Requirements, which mandate hardware-protected private keys and increasingly rely on attestation as the evidence of compliance.

Secure development frameworks including the EU Cyber Resilience Act push vendors toward demonstrating that firmware and update signing keys were generated and protected in secure environments. Enterprise procurement policies frequently require the same assurances. These rules do not always use the word attestation, but the outcomes they demand can only be met with attestation evidence.

The Lesson

Attestation is evidence. It is not truth. It is stronger than a claim because it is anchored in a protection boundary, but the strength of that boundary varies across systems and architectures. The meaning of the evidence depends on the attester, the verifier, and the assumptions of the relying party.

There are two major forms of attestation. Key management attestation tells you where a key came from and how it is protected. Confidential computing attestation tells you what code is running and where it is running.

As computing continues to move onto systems we do not control and becomes increasingly distributed, attestation will become the foundation of trust. Secure systems will rely on verifiable evidence instead of assumptions, and attestation will be the language used to express that evidence.