Monthly Archives: October 2025

Beyond Gutenberg: How AI Is Teaching Us to Think About Thinking

At breakfast the other day, I was thinking about those old analogy questions: “Hot is to cold as light is to ___?” My kids would roll their eyes. They feel like relics from standardized tests.

But those questions were really metacognitive exercises. You had to recognize the relationship between the first pair (opposites) and apply that pattern to find the answer (dark). You had to think about how you were thinking.

I was thinking about what changes when reasoning becomes abundant and cheap. It hit me that this skill, thinking about how you think, becomes the scarcest resource.

Learning From Nature

A few years ago, we moved near a lake. Once we moved in, we noticed deer visiting the empty lot next door, which had turned into a field of wildflowers. A doe would bring her fawn and, with patient movements, teach it where to find clover, when to freeze at a scent, and where to drink. It was wordless instruction: demonstration and imitation. Watch, try, fail, try again. The air would still, the morning light just breaking over the field. Over time, that fawn grew up and brought its own young to the same spot. The cycle continued until the lot was finally developed and they stopped coming.

That made me think about how humans externalized learning in ways no other species has. The deer’s knowledge would die with her or pass only to her offspring. Humans figured out how to make knowledge persist and spread beyond direct contact and beyond a single lifetime.

We started with opposable thumbs. That physical adaptation let us manipulate tools precisely enough to mark surfaces, to write. Writing captured thought outside of memory. For the first time, an idea could outlive the person who had it. Knowledge became persistent across time and transferable without physical proximity. But writing had limits. Each copy required a scribe and hours of work, so knowledge stayed localized.

Then came printing. Gutenberg’s press changed the economics. What took months by hand took hours on a press. The cost of reproducing knowledge collapsed, and books became locally abundant. Shipping and trade moved that knowledge farther, and the internet eventually collapsed distance altogether. Local knowledge became globally accessible.

Now we have LLMs. They do not just expose knowledge. They translate it across levels of understanding. The same information can meet a five-year-old asking about photosynthesis, a graduate student studying chlorophyll, and a biochemist examining reaction pathways. Each explanation is tuned to the learner’s mental model. They also make knowledge discoverable in new ways, so you can ask questions you did not know how to ask and build bridges from what you understand to what you want to learn.

Each step in this progression unlocked something new. Each one looked dangerous at first. The fear is familiar. It repeats with every new medium.

The Pattern of Panic

Socrates worried that writing would erode memory and encourage shallow thinking (Plato’s Phaedrus). He was partly right about the trade-offs. We lost some oral tradition, but gained ideas that traveled beyond the people who thought them.

Centuries later, monks who spent lifetimes hand-copying texts saw printing as a threat. Mass production, they feared, would cheapen reading and unleash dangerous ideas. They were right about the chaos. The press spread science and superstition alike, fueled religious conflict, and disrupted authority. It took centuries to build institutions of trust: printers’ guilds, editors, publishers, peer review, and universities.

But the press did not make people stupid. It democratized access to knowledge. It expanded who could participate in learning and debate.

We hear the same fears about AI. LLMs will kill reasoning. Students will stop writing. Professionals will outsource thinking. I understand the worry. I have felt it.

History suggests something more nuanced.

AI as Our New Gutenberg

Gutenberg collapsed the cost of copying. AI collapses the cost of reasoning.

The press did not replace reading. It changed who could read and how widely ideas spread. It forced literacy at scale because there were finally enough books to warrant it.

AI does not replace thinking. It changes the economics of cognitive work the same way printing changed knowledge reproduction. Both lower barriers, expand access, and demand new norms of verification. Both spread misinformation before society learns to regulate them. The press forced literacy. AI forces metacognitive literacy: the ability to evaluate reasoning, not just consume conclusions.

We are in the messy adjustment period. We lack stable institutions around AI and settled norms about what counts as trustworthy machine-generated information. We do not yet teach universal AI fluency. The equivalents of editors and peer review for synthetic reasoning are still forming. It will take time, and we will figure it out.

What This Expansion Means

I have three kids, ages 30, 20, and 10. Each is entering a different world.

My 30-year-old launched before AI accelerated and built a foundation in the old knowledge economy.

My 20-year-old is in university, learning to work with these tools while developing core skills. He stands at the inflection point: old enough to have formed critical thinking without AI, young enough to fully leverage it.

My 10-year-old will not remember a time before you could converse with a machine that reasons. AI will be ambient for her. It is different, and it changes the skills she needs.

This is not just about instant answers. It is about who gets to participate in knowledge work. Traditional systems reward verbal fluency, math reasoning, quick recall, and social confidence. They undervalue spatial intuition, pattern recognition across domains, emotional insight, and systems thinking. Many brilliant minds do not fit the template.

Used well, AI can correct that imbalance. It acts as a cognitive prosthesis that extends abilities that once limited participation. Someone who struggles with structure can collaborate with a system that scaffolds it while preserving original insight. Someone with dyslexia can translate thoughts to text fluidly. Visual thinkers can generate diagrams that communicate what words cannot.

Barriers to entry drop and the diversity of participants increases. This is equity of potential, not equality of outcome.

But access without reflection is noise.

We are not producing too many answers. We are producing too few people who know how to evaluate them. The danger is not that AI makes thinking obsolete. It is that we fail to teach people to think about their thinking while using powerful tools.

When plausible explanations are cheap and fast, the premium shifts to discernment. Can you tell when something sounds right but is not? Can you evaluate the trustworthiness of a source? Can you recognize when to dig deeper versus when a surface answer suffices? Can you catch yourself when you are being intellectually lazy?

This is metacognitive literacy: awareness and regulation of your own thought process. Psychologist John Flavell first defined metacognition in the 1970s as knowledge about and regulation of one’s own thinking: planning, monitoring, and evaluating how we learn. In the AI age, that skill becomes civic rather than academic.

The question is not whether to adopt AI. That is already happening. The question is how to adapt. How to pair acceleration with reflection so that access becomes understanding.

What I Am Doing About This

This brings me back to watching my 10-year-old think out loud and wondering what kind of world she will build with these tools.

I have been looking at how we teach gifted and twice-exceptional learners. These are kids who are intellectually advanced but may also face learning challenges like ADHD or dyslexia. Their teachers could not rely on memorization or single-path instruction. They built multimodal learning, taught metacognition explicitly, and developed evaluation skills because these kids question everything.

Those strategies are not just for gifted kids anymore. They are what all kids need when information is abundant and understanding is scarce. When AI can answer almost any factual question, value shifts to higher-order skills.

I wrote more detail here: Beyond Memorization: Preparing Kids to Thrive in a World of Endless Information

The short version: question sources rather than absorb them. Learn through multiple modes. Build something, draw how it works, explain it in your own words. Reflect on how you solved a problem, not only whether you got it right. See connections across subjects instead of treating knowledge as isolated silos. Build emotional resilience and comfort with uncertainty alongside technical skill.

We practice simple things at home. At dinner, when we discuss a news article: How do we know this claim is accurate? What makes this source trustworthy? What would we need to verify it? When my 10-year-old draws, writes, or builds things, I ask: What worked? What did not? What will you try differently next time, and why?

It is not about protecting her from AI. That is impossible and counterproductive. It is about preparing her to work with it, question it, and shape it. To be an active participant rather than a passive consumer.

I am optimistic. This is another expansion in how humans share and build knowledge. We have been here before with writing, printing, and the internet. Each time brought anxiety and trade-offs. Each time we adapted and expanded who could participate.

This time is similar, only faster. My 20-year-old gets to help harness it. My 10-year-old grows up native to it.

They will not need to memorize facts like living libraries. They will need to judge trustworthiness, connect disparate ideas, adapt as tools change, and recognize when they are thinking clearly versus fooling themselves. These are metacognitive skills, and they are learnable.

If we teach people to think about their thinking as carefully as we once taught them to read, and if we pair acceleration with reflection, this could become the most inclusive knowledge revolution in history.

That is the work. That is why I am optimistic.


For more on this thinking: AI as the New Gutenberg

Compliance at the Speed of Code

Compliance is a vital sign of organizational health. When it trends the wrong way, it signals deeper problems: processes that can’t be reproduced, controls that exist only on paper, drift accumulating quietly until trust evaporates all at once.

The pattern is predictable. Gradual decay, ignored signals, sudden collapse. Different industries, different frameworks, same structural outcome. (I wrote about this pattern here.)

But something changed. AI is rewriting how software gets built, and compliance hasn’t kept up.

Satya Nadella recently said that as much as 30% of Microsoft’s production code is now written by AI. Sundar Pichai put Google’s number in the same range. These aren’t marketing exaggerations; they mark a structural change in how software gets built.

Developers no longer spend their days typing every line. They spend them steering, reviewing, and debugging. AI fills in the patterns, and the humans decide what matters. The baseline of productivity has shifted.

Compliance has not. Its rhythms remain tied to quarterly reviews, annual audits, static documents, and ritualized fire drills. Software races forward at machine speed while compliance plods at audit speed. That mismatch isn’t just inefficient. It guarantees drift, brittleness, and the illusion that collapse comes without warning.

If compliance is the vital sign, how do you measure it at the speed of code?

What follows is not a description of today’s compliance tools. It’s a vision for where compliance infrastructure needs to go. The technology exists. The patterns are proven in adjacent domains. What’s missing is integration. This is the system compliance needs to become.

The Velocity Mismatch

The old world of software was already hard on compliance. Humans writing code line by line could outpace annual audits easily enough. The new world makes the mismatch terminal.

If a third of all production code at the largest software companies is now AI-written, then code volume, change velocity, and dependency churn have all exploded. Modern development operates in hours and minutes, not quarters and years.

Compliance, by contrast, still moves at the speed of filing cabinets. Controls are cross-referenced manually. Policies live in static documents. Audits happen long after the fact, by which point the patient has either recovered or died. By the time anyone checks, the system has already changed again.

Drift follows. Exceptions pile up quietly. Compensating controls are scribbled into risk registers. Documentation diverges from practice. On paper, everything looks fine. In reality, the brakes don’t match the car.

It’s like running a Formula 1 car with horse cart brakes. You might get a few laps in. The car will move, and at first nothing looks wrong. But eventually the brakes fail, and when they do the crash looks sudden. The truth is that failure was inevitable from the moment someone strapped cart parts onto a race car.

Compliance today is a system designed for the pace of yesterday, now yoked to the speed of code. Drift isn’t a bug. It’s baked into the mismatch.

The Integration Gap

Compliance breaks at the integration point. When policies live in Confluence and code lives in version control, drift isn’t a defect. It’s physics. Disconnected systems diverge.

The gap between documentation and reality is where compliance becomes theater. PDFs can claim controls exist while repos tell a different story.

Annual audits sample: pull some code, check some logs, verify some procedures. Sampling only tells you what was true that instant, not whether controls remain in place tomorrow or were there yesterday before auditors arrived.

Eliminate the gap entirely.

Policies as Code

Version control becomes the shared foundation for both code and compliance.

Policies, procedures, runbooks, and playbooks become versioned artifacts in the same system where code lives. Not PDFs stored in SharePoint. Not wiki pages anyone can edit without review. Markdown files in repositories, reviewed through pull requests, with approval workflows and change history. Governance without version control is theater.

When a policy changes, you see the diff. When someone proposes an exception (a documented deviation from policy), it’s a commit with a reviewer. When an auditor asks for the access control policy that was in effect six months ago, you check it out from the repo. The audit trail is the git history. Reproducibility by construction.

Governance artifacts get the same discipline as code. Policies go through PR review. Changes require approvals from designated owners. Every modification is logged, attributed, and traceable. You can’t silently edit the past.

Once policies live in version control, compliance checks run against them automatically. Code and configuration changes get checked against the current policy state as they happen. Not quarterly, not at audit time, but at pull request time.

When policy changes, you immediately see what’s now out of compliance. New PCI requirement lands? The system diffs the old policy against the new one, scans your infrastructure, and surfaces what needs updating. Gap analysis becomes continuous, not an annual fire drill that takes two months and produces a 60-page spreadsheet no one reads.
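Continuous gap analysis of this kind can be sketched in a few lines. This is a minimal illustration, not a real tool: the policy wording is invented, and a production system would parse structured requirements rather than diff raw text.

```python
import difflib

# Two versions of a hypothetical policy. Wording is invented for illustration.
old_policy = """Keys MUST be rotated every 365 days.
Backups MUST be encrypted.
"""
new_policy = """Keys MUST be rotated every 90 days.
Backups MUST be encrypted.
Backups MUST be tested quarterly.
"""

def policy_diff(old, new):
    """Requirements added or tightened between two policy versions."""
    diff = difflib.unified_diff(old.splitlines(), new.splitlines(), lineterm="")
    return [line[1:].strip() for line in diff
            if line.startswith("+") and not line.startswith("+++")]

# Each changed requirement becomes a work item to scan infrastructure against.
changed = policy_diff(old_policy, new_policy)
```

Because the policies live in version control, the "old" and "new" texts are just two commits, and the diff itself is part of the audit trail.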

Risk acceptance becomes explicit and tracked. Not every violation is blocking, but every violation is visible. “We’re accepting this S3 bucket configuration until Q3 migration” becomes a tracked decision in the repo with an owner, an expiration date, and compensating controls. The weighted risk model has teeth because the risk decisions themselves are versioned and auditable.
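A tracked risk acceptance might look something like the following sketch. The record format and field names are invented for illustration; the point is that the exception has an owner, an expiration, and compensating controls, and that an expired exception automatically becomes a violation again.

```python
from datetime import date

# Hypothetical exception record, as it might live in an exceptions/ file
# in the repo. All field names are illustrative.
exception = {
    "id": "EX-2025-014",
    "policy": "data-at-rest-encryption.md",
    "resource": "s3://analytics-raw",
    "owner": "jane.doe",
    "expires": date(2025, 9, 30),  # the Q3 migration deadline
    "compensating_controls": ["bucket is VPC-only", "access logged to SIEM"],
}

def exception_status(record, today):
    """An expired exception is a violation again, not a standing waiver."""
    if today > record["expires"]:
        return "expired"
    return "active"

status = exception_status(exception, date(2025, 6, 1))
```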

Monitoring Both Sides of the Gap

Governance requirements evolve. Frameworks update. If you’re not watching, surprises arrive weeks before an audit.

Organizations treat this as inevitable, scrambling when SOC 2 adds trust service criteria or PCI-DSS publishes a new version. The fire drill becomes routine.

But these changes are public. Machines can monitor for updates, parse the diffs, and surface what shifted. A published framework change should never arrive as a surprise.

Combine external monitoring with internal monitoring and you close the loop. When a new requirement lands, you immediately see its impact on your actual code and configuration.

SOC 2 adds a requirement for encryption key rotation every 90 days? The system scans your infrastructure, identifies 12 services that rotate keys annually, and surfaces the gap months ahead. You have time to plan, size the effort, build it into the roadmap.
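That scan is simple to sketch. The service names and the rotation field are invented; the 90-day threshold comes from the example above.

```python
# A new (hypothetical) requirement lands: rotate keys every 90 days.
# Compare it against each service's declared rotation interval.
NEW_REQUIREMENT_DAYS = 90

services = [
    {"name": "billing",   "key_rotation_days": 365},
    {"name": "auth",      "key_rotation_days": 30},
    {"name": "reporting", "key_rotation_days": 365},
]

def rotation_gaps(services, max_days):
    """Services whose rotation interval exceeds the new requirement."""
    return [s["name"] for s in services if s["key_rotation_days"] > max_days]

# Surfacing the gap months ahead turns a fire drill into roadmap planning.
gaps = rotation_gaps(services, NEW_REQUIREMENT_DAYS)
```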

This transforms compliance from reactive to predictive. You see requirements as they emerge and measure their impact before they become mandatory. The planning horizon extends from weeks to quarters.

From Vibe Coding to Vibe Compliance

Developers have already adapted to AI-augmented work. They call it “vibe coding.” The AI fills in the routine structures and syntax while humans focus on steering, debugging edge cases, and deciding what matters. The job shifted from writing every line to shaping direction. The work moved from typing to choosing.

Compliance will follow the same curve. The rote work gets automated. Mapping requirements across frameworks, checklist validations, evidence collection. AI reads the policy docs, scans the codebase, flags the gaps, suggests remediations. What remains for humans is judgment: Is this evidence meaningful? Is this control reproducible? Is this risk acceptable given these compensating controls?

This doesn’t eliminate compliance professionals any more than AI eliminated engineers. It makes them more valuable. Freed from clerical box-checking, they become what they should have been all along: stewards of resilience rather than producers of audit artifacts.

The output changes too. The goal is no longer just producing an audit report to wave at procurement. The goal is producing telemetry showing whether the organization is actually healthy, whether controls are reproducible, whether drift is accumulating.

Continuous Verification

What does compliance infrastructure look like when it matches the speed of code?

A bot comments on pull requests. A developer changes an AWS IAM policy. Before the PR merges, an automated check runs: does this comply with the principle of least privilege defined in access-control.md? Does it match the approved exception for the analytics service? If not, the PR is flagged. The feedback is immediate, contextual, and actionable.
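A minimal version of that check might look like the following. The policy JSON is invented, and the wildcard-action test is a deliberately simplified stand-in for a real least-privilege analysis; the access-control.md name comes from the example above.

```python
import json

# Hypothetical IAM policy proposed in a pull request.
proposed_policy = json.loads("""
{
  "Statement": [
    {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::logs/*"},
    {"Effect": "Allow", "Action": "*", "Resource": "*"}
  ]
}
""")

def least_privilege_findings(policy):
    """Flag Allow statements with wildcard actions (simplified check)."""
    findings = []
    for i, stmt in enumerate(policy["Statement"]):
        actions = stmt["Action"]
        if isinstance(actions, str):
            actions = [actions]
        if stmt.get("Effect") == "Allow" and any("*" in a for a in actions):
            findings.append(f"Statement {i}: wildcard action violates access-control.md")
    return findings

# A bot would post each finding as a comment on the PR before merge.
findings = least_privilege_findings(proposed_policy)
```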

Deployment gates check compliance before code ships. A service tries to deploy without the required logging configuration. The pipeline fails with a clear message: “This deployment violates audit-logging-policy.md section 3.1. Either add structured logging or file an exception in exceptions/logging-exception-2025-q4.md.”
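Sketching that gate, with the policy file name and message shape taken from the example above and the required configuration keys invented for illustration:

```python
# Required logging keys are assumptions for this sketch, not a real standard.
REQUIRED_LOGGING_KEYS = {"structured", "destination"}

def deploy_gate(service_config):
    """Return (allowed, message) for a deployment attempt."""
    missing = REQUIRED_LOGGING_KEYS - set(service_config.get("logging", {}))
    if missing:
        return (False,
                "Deployment violates audit-logging-policy.md section 3.1: "
                f"missing logging keys {sorted(missing)}. Add structured "
                "logging or file an exception.")
    return (True, "logging policy satisfied")

# A service that configured a destination but not structured logging is blocked.
ok, message = deploy_gate({"logging": {"destination": "stdout"}})
```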

Dashboards update in real time, not once per quarter. Compliance posture is visible continuously. When drift occurs (when someone disables MFA on a privileged account, or when a certificate approaches expiration without renewal) it shows up immediately, not six months later during an audit.

Weighted risk with explicit compensating controls. Not binary red/green status, but a spectrum: fully compliant, compliant with approved exceptions, non-compliant with compensating controls and documented risk acceptance, non-compliant without mitigation. Boards see the shades of fragility. Practitioners see the specifics. Everyone works from the same signal, rendered at the right level of abstraction.
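The spectrum can be modeled directly. A toy classifier, with state names and field names assumed for illustration; the point is that status is graded, not binary red/green:

```python
def compliance_state(finding):
    """Map a finding to one of four graded compliance states."""
    if not finding["violations"]:
        return "fully compliant"
    if finding.get("approved_exception"):
        return "compliant with approved exceptions"
    if finding.get("compensating_controls") and finding.get("risk_accepted"):
        return "non-compliant, mitigated and risk-accepted"
    return "non-compliant without mitigation"

# One example per state, from healthiest to most fragile.
states = [
    compliance_state({"violations": []}),
    compliance_state({"violations": ["mfa"], "approved_exception": True}),
    compliance_state({"violations": ["mfa"],
                      "compensating_controls": ["ip allowlist"],
                      "risk_accepted": True}),
    compliance_state({"violations": ["mfa"]}),
]
```

A dashboard can then render the same four states at whatever level of abstraction the audience needs.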

The Maturity Path

Organizations don’t arrive at this state overnight. Most are still at Stage 1 or earlier, treating governance as static documents disconnected from their systems. The path forward has clear stages:

Stage 1: Baseline. Get policies, procedures, and runbooks into version-controlled repositories. Establish them as ground truth. Stop treating governance as static PDFs. This is where most organizations need to start.

Stage 2: Drift Detection. Automated checks flag when code and configuration diverge from policy. The checks run on-demand or on a schedule. Dashboards show gaps in real time. Compliance teams can see drift as it happens instead of discovering it during an audit. The feedback loop shrinks from months to days. Some organizations have built parts of this, but comprehensive drift detection remains rare.

Stage 3: Integration. Compliance checks move into the developer workflow. Bots comment on pull requests. Deployment pipelines run policy checks before shipping. The feedback loop shrinks from days to minutes. Developers see policy violations in context, in their tools, while changes are still cheap to fix. This is where the technology exists but adoption is still emerging.

Stage 4: Regulatory Watch. The system monitors upstream changes: new SOC 2 criteria, updated PCI-DSS requirements, revised GDPR guidance. When frameworks change, the system diffs the old version against the new, identifies affected controls, maps them to your current policies and infrastructure, and calculates impact. You see the size of the work, the affected systems, and the timeline before it becomes mandatory. Organizations stop firefighting and start planning quarters ahead. This capability is largely aspirational today.

Stage 5: Enforcement. Policies tie directly to what can deploy. Non-compliant changes require explicit exception approval. Risk acceptance decisions are versioned, tracked, and time-bound. The system makes the right path the easy path. Doing the wrong thing is still possible (you can always override) but the override itself becomes evidence, logged and auditable. Few organizations operate at this level today.

This isn’t about replacing human judgment with automation. It’s about making judgment cheaper to exercise. At Stage 1, compliance professionals spend most of their time hunting down evidence. At Stage 5, evidence collection is automatic, and professionals spend their time on the judgment calls: should we accept this risk? Is this compensating control sufficient? Is this policy still appropriate given how the system evolved?

The Objections

There are objections. The most common is that AI hallucinates, so how can you trust it with compliance?

Fair question. Naive AI hallucinates. But humans do too. They misread policies, miss violations, get tired, and skip steps. The compliance professional who has spent eight hours mapping requirements across frameworks makes mistakes in hour nine.

Structured AI with proper constraints works differently. Give it explicit sources, defined schemas, and clear validation rules, and it performs rote work more reliably than most humans. Not because it’s smarter, but because it doesn’t get tired, doesn’t take shortcuts, and checks every line the same way every time.

The bot that flags policy violations isn’t doing unconstrained text generation. It’s diffing your code against a policy document that lives in your repo, following explicit rules, and showing its work: “This violates security-policy.md line 47, committed by [email protected] on 2025-03-15.” That isn’t hallucination. That’s reproducible evidence.
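That kind of traceable citation is straightforward to produce. A sketch, with an invented policy file; a real bot would attach the git blame data (author, commit date) alongside the line number:

```python
# Hypothetical policy document, as it might live in the repo.
policy_md = """# Security Policy
## Encryption
All data at rest MUST be encrypted.
## Access
Root credentials MUST NOT be used for routine operations.
"""

def cite(policy_text, rule_fragment, filename="security-policy.md"):
    """Return a citation like 'security-policy.md line 5' for a matching rule."""
    for lineno, line in enumerate(policy_text.splitlines(), start=1):
        if rule_fragment in line:
            return f"{filename} line {lineno}"
    return None

# Every finding points at the exact policy line it enforces.
citation = cite(policy_md, "Root credentials")
```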

And it scales in ways humans never can. The human compliance team can review 50 pull requests a week if they’re fast. The bot reviews 500. When a new framework requirement drops, the human team takes weeks to manually map old requirements against new ones. The bot does it in minutes.

This isn’t about replacing human judgment. It’s about freeing humans from the rote work where structured AI performs better. Humans hallucinate on routine tasks. Machines don’t. Let machines do what they’re good at so humans can focus on what they’re good at: the judgment calls that actually matter.

The second objection is that tools can’t fix culture. Also true. But tools can make cultural decay visible earlier. They can force uncomfortable truths into the open.

When policies live in repos and compliance checks run on every PR, leadership can’t hide behind dashboards. If the policies say one thing and the code does another, the diff is public. If exceptions are piling up faster than they’re closed, the commit history shows it. If risk acceptance decisions keep getting extended quarter after quarter, the git log is evidence.

The system doesn’t fix culture, but it makes lying harder. Drift becomes visible in real time instead of hiding until audit season. Leaders who want to ignore compliance still can, but they have to do so explicitly, in writing, with attribution. That changes the incentive structure.

Culture won’t be saved by software. But it can’t be saved without seeing what’s real. Telemetry is the prerequisite for accountability.

The Bootstrapping Problem

If organizations are already decaying, if incentives are misaligned and compliance is already theater, how do they adopt this system?

Meet people where they are. Embed compliance in the tools developers already use.

Start with a bot that comments on pull requests. Pick one high-signal policy (the one that came up in the last audit, or the one that keeps getting violated). Write it in Markdown, commit it to a repo, add a simple check that flags violations in PRs. Feedback lands in the PR, where people already work.

This creates immediate value. Faster feedback. Issues caught before they ship. Less time in post-deployment remediation. The bot becomes useful, not bureaucratic overhead.

Once developers see value, expand coverage. Add more policies. Integrate more checks. Build the dashboard that shows posture in real time. Start with the point of maximum pain: the gap between what policies say and what code does.

Make the right thing easier than the wrong thing. That’s how you break equilibrium. Infrastructure change leads culture, not the other way around.

Flipping the Incentive Structure

Continuous compliance telemetry creates opportunities to flip the incentive structure.

The incentive problem is well-known. Corner-cutters get rewarded with velocity and lower costs. The people who invest in resilience pay the price in overhead and friction. By the time the bill comes due, the corner-cutters have moved on.

What if good compliance became economically advantageous in real time, not just insurance against future collapse?

Real-time, auditable telemetry makes compliance visible in ways annual reports never can. A cyber insurer can consume your compliance posture continuously instead of relying on a point-in-time questionnaire. Organizations that maintain strong controls get lower premiums. Rates adjust dynamically based on drift. Offer visibility into the metrics that matter and get buy-down points in return.

Customer due diligence changes shape. Vendor risk assessments that take weeks and rely on stale SOC 2 reports become real-time visibility into current compliance posture. Procurement accelerates. Contract cycles compress. Organizations that can demonstrate continuous control have competitive advantage.

Auditors spend less time collecting evidence and more time evaluating controls. When continuous compliance is demonstrable, scope reduces, costs drop, cycles shorten.

Partner onboarding that used to require months of back-and-forth security reviews happens faster when telemetry is already available. Certifications and integrations move at the speed of verification, not documentation.

The incentive structure inverts. Organizations that build continuous compliance infrastructure get rewarded immediately: lower insurance costs, faster sales cycles, reduced audit expense, easier partnerships. The people who maintain strong controls see economic benefit now, not just avoided pain later.

This is how you fix the incentive problem at scale. Make good compliance economically rational today.

The Choice Ahead

AI has already made coding a collaboration between people and machines. Compliance is next.

The routine work will become automated, fast, and good enough for the basics. That change is inevitable. The real question is what we do with the time it frees up.

Stop there, and compliance becomes theater with better graphics. Dashboards that look impressive but still tell you little about resilience.

Go further, and compliance becomes what it should have been all along: telemetry about reproducibility. A vital sign of whether the organization can sustain discipline when it matters. An early warning system that makes collapse look gradual instead of sudden.

If compliance is the vital sign of organizational decay, then this is the operating system that measures it at the speed of code.

The frameworks aren’t broken. The incentives are. The rhythms are. The integration is.

The technology to build this system exists. Version control is mature. CI/CD pipelines are ubiquitous. AI can parse policies and scan code. What’s missing is stitching the pieces together and treating compliance like production.

Compliance will change. The only question is whether it catches up to code or keeps trailing it until collapse looks sudden.

Gradually, Then Suddenly: Compliance as a Vital Sign of Organizational Decay

“How did you go bankrupt?” a character asks in Hemingway’s The Sun Also Rises.
“Two ways,” comes the reply. “Gradually, then suddenly.”

That is how organizations fail.

Decay builds quietly until, all at once, trust evaporates. The surprise is rarely the failure itself. The surprise is that the warning signs were ignored.

One of the clearest of those warning signs is compliance.

Compliance Isn’t Security

Security practitioners like to say, “compliance isn’t security.” They are right. Implementing a compliance framework does not make you secure.

SOC 2 shows why. It is a framework for attesting to controls, not for proving resilience. Yet many organizations treat it as a box-checking exercise: templated policies, narrow audits, point-in-time snapshots.

The result is an audit letter and seal that satisfies procurement but says little about how the company actually manages risk.

That is why security leaders often overlook compliance’s deeper value.

But doing so misses the point. Compliance is not proof of security. It is a vital sign of organizational health.

Compliance as a Vital Sign

Think of compliance like blood pressure. It does not guarantee health, but when it trends the wrong way, it signals that something deeper is wrong.

Organizational health has many dimensions. One of the most important is reproducibility, the ability to consistently do what you say you do.

That is what compliance is really about. Not proving security, but proving reproducibility.

Security outcomes flow from reproducible processes. Compliance is the discipline of showing those processes exist and can be repeated under scrutiny.

If you are not using your compliance program this way, as a vital sign of organizational health, there is a good chance you are doing it wrong.

Telemetry vs Point-in-Time Theater

Compliance only works as a vital sign if it is measured continually.

A one-time audit is like running an EKG after the patient has died. It may capture a signal, but it tells you nothing about resilience.

If your compliance telemetry only changes at audit time, you do not have telemetry at all. You have theater.

Healthy organizations use frameworks as scaffolding for living systems. They establish meaningful policies, connect them to real procedures, and measure whether those procedures are working. Over time, this produces telemetry that shows trends, not just snapshots.

Hollow organizations optimize for paperwork. They treat audits as annual fire drills, focus on appearances, and let compliance debt pile up out of sight.

On paper they look fine. In reality they are decaying.

Distrust Looks Sudden, but Never Is

The certificate authority ecosystem makes this pattern unusually visible.

Every distrusted CA had passing audit reports. Nearly all of them showed years of compliance issues before trust was revoked. Audit failures, unremediated findings, vague documentation, repeat exceptions. All accumulating gradually, all while auditors continued to issue clean opinions.

When the final decision came, it looked sudden. But in reality it was the inevitable climax of a long decline.

The frameworks were there: WebTrust, ETSI, CA/Browser Forum requirements. What failed was not the frameworks, but the way those CAs engaged with them.

Independent Verification, Aligned Incentives

The auditor problem mirrors the organizational one, and it appears across every regulated industry.

Auditors get paid by the organizations they audit. Clean reports retain clients. Reports full of findings create friction. The rational economic behavior is to be “reasonable” about what constitutes a violation.

Audits are scoped and priced competitively. Deep investigation is expensive. Surface verification of documented controls is cheaper. When clients optimize for cost and auditors work within fixed budgets, depth loses.

Auditors are often competent in frameworks and attestation but lack deep technical or domain expertise. They can verify a policy exists and that sampled evidence shows it was followed. They are less equipped to evaluate whether the control actually works, whether it can be bypassed, or whether the process remains reproducible under stress.

In the WebPKI, WebTrust auditors issued clean opinions while CA violations accumulated. In finance, auditors at Wirecard and Enron missed or downplayed systemic issues for years. In healthcare, device manufacturers pass ISO audits while quality processes degrade. The pattern repeats because the incentive structure is the same.

The audit becomes another layer of theater. Independent verification that optimizes for the same outcomes as the organization it is verifying.

The Pattern Repeats Everywhere

This dynamic is not limited to the WebPKI. The same pattern plays out everywhere.

Banks fined for AML or KYC failures rarely collapse overnight. Small violations and ignored remediation build up until regulators impose billion-dollar penalties or revoke licenses.

FDA warning letters and ISO 13485 or IEC 62304 violations accumulate quietly in healthcare and medical devices. Then, suddenly, a product is recalled, approval is delayed for a year, or market access is lost.

Utilities cited for NERC CIP non-compliance often show the same gaps for years. Then a blackout, a safety incident, or a regulatory penalty makes the cost undeniable.

SOC 2 and ISO 27001 in technology are often reduced to checklists. Weak practices are hidden until a breach forces disclosure, the SEC steps in, or customers walk away.

For years, auditors flagged accounting irregularities and opaque subsidiaries at Wirecard. The warnings were dismissed. Then suddenly €1.9 billion was missing and the company collapsed.

Enron perfected compliance theater, using complex structures and manipulated audits to look healthy. The gradual phase was tolerated exceptions and “creative” accounting. The sudden phase was exposure, bankruptcy, and a collapse of trust.

In security, the same pattern shows up when breaches happen at firms with repeat compliance findings around patching or access control. To outsiders the breach looks like bad luck. To insiders, the vital signs had been flashing red for years.

Different industries. Different frameworks. Same structural pattern: gradual non-conformance, ignored signals, sudden collapse.

Floor or Facade

The difference comes down to how organizations engage with frameworks.

Healthy compliance treats frameworks as minimums. Organizations design business-appropriate and system-appropriate security controls on top. Compliance provides evidence of real practices. It is reproducible.

Hollow compliance treats frameworks as the ceiling. Controls are mapped to audit templates. Documentation is produced to satisfy the letter of the requirement, not to reflect reality. It is performative.

Healthy compliance is a floor. Hollow compliance is a facade.

Which one are you building on?

Why Theater Wins

Compliance theater is not a knowledge problem. It is an incentive problem with a structural enforcement mechanism.

The people who bear the cost of real compliance (engineering time, operational friction, headcount) rarely bear the cost of compliance failure. By the time collapse happens, they have often moved on: promoted, departed, or insulated by organizational buffers.

Meanwhile, the people who face immediate consequences for not having an audit letter and seal (sales cannot close deals, partnerships stall, procurement rejects you) have every incentive to optimize for the artifact, not the reality.

The rational individual behavior at every level produces collectively irrational outcomes.

Sales needs SOC 2 by Q3 or loses the enterprise deal. Finance treats compliance as overhead to minimize. Engineering sees security theater while facing pressure to ship. The compliance team, caught between impossible demands, optimizes for passing the audit. Executives get rewarded for revenue growth and cost control, not for resilience that may only matter after they are gone.

Even when individuals want to do it right, organizational structure fights them.

Ownership fragments across the organization. Security owns controls, IT owns implementation, Legal owns policy, Compliance owns audits, Business owns risk acceptance. No one owns the system. Everyone optimizes their piece.

Organizations compound this with contradictory approaches to security and compliance. Security gets diffused under the banner that “security is everyone’s responsibility,” which sounds collaborative but becomes an excuse to avoid investing in specialists, dedicated teams, or proper career paths. When security is everyone’s job, it becomes no one’s priority.

Compliance suffers the opposite problem. Organizations try to isolate it, contain the overhead, and keep it from interfering with velocity. The compliance team becomes a service function that produces audit artifacts but has no authority over the processes it attests to. It documents what should happen while having no power to ensure it does.

Both patterns distribute responsibility without authority, then act surprised when accountability evaporates.

Time horizons misalign. Boards and executives operate on quarterly cycles. Compliance decay compounds over three-to-five-year horizons. By the time the bill comes due, the people who made the decisions have harvested their rewards and moved on.

At the top, executives rarely see true compliance health. Success is presented as green dashboards and completed audits. In the middle, compliance leaders want to be seen as delivering, so success is redefined as passing audits and collecting audit letters and seals. At the ground level, practitioners know the processes are brittle, but surfacing that truth conflicts with how success is measured. Everyone looks successful on their own terms, but the system as a whole decays.

Accountability diffuses. When collapse happens, it is framed as a “perfect storm” rather than the predictable outcome of accumulated decisions. Causation is plausibly deniable, so the individuals who created the conditions face no consequences.

The CA distrust pattern reveals this clearly. WebTrust audits happen annually. CA/B Forum violations accumulate gradually. But the CA’s business model rewards sales, not security or compliance.

The compliance team knows there are issues but lacks authority to halt issuance. Engineering knows the processes are brittle but gets rewarded for features. Leadership knows there are findings but faces pressure to maintain market share.

Everyone is locally rational. The system is globally fragile.

What Compliance Actually Predicts

Compliance failures do not directly cause security failures. But persistent compliance decay strongly correlates with organizational brittleness.

The specifics change: financial reporting, PKI audits, safety inspections. The pattern does not.

Gradual decay. Ignored signals. Then sudden collapse.

Compliance does not predict the exact failure you will face. But it does predict whether the organization has the culture and systems to sustain discipline when it matters.

That is why it is such a reliable leading indicator.

Organizations that suffer “sudden” compliance collapse are not unlucky. They are optimally designed for that outcome. The incentives reward short-term performance. The structure diffuses accountability. The measurement systems hide decay.

The surprising thing is not that it happens. It is that we keep pretending it is surprising.

Building Systems That See

Ignore your blood pressure long enough and the heart attack looks sudden. The same is true for organizations.

Compliance frameworks should not be dismissed as paperwork. They should be treated as telemetry, imperfect on their own but invaluable when tracked over time.

They are not the whole diagnosis, but they are the early warning system.

At its best, compliance is not about passing an audit. It is about showing you can consistently reproduce the controls and practices that keep the organization healthy.

If compliance is a vital sign, then what matters is not the paperwork but the telemetry. Organizations need systems that make compliance observable in real time, that prove reproducibility instead of just certifying it once a year, and that reveal patterns of decay before they turn into collapse.
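One way to make "reveal patterns of decay" concrete (a hedged sketch; the streak threshold and the idea of counting open findings per period are assumptions for illustration, not part of any standard): flag an organization when its unremediated findings climb for several consecutive reporting periods, long before any single period looks alarming on its own.

```python
def decaying(open_findings_per_period, streak=3):
    """Return True if open findings have held or risen across `streak`
    consecutive periods with net growth -- the gradual phase that
    precedes a 'sudden' collapse."""
    if len(open_findings_per_period) < streak + 1:
        return False
    window = open_findings_per_period[-(streak + 1):]
    no_improvement = all(b >= a for a, b in zip(window, window[1:]))
    return no_improvement and window[-1] > window[0]

# Quarterly counts of unremediated findings: flat, then quietly climbing.
history = [2, 2, 1, 2, 4, 5, 7]
print(decaying(history))          # True: three straight quarters of growth
print(decaying([5, 3, 4, 2, 3]))  # False: noisy, but not trending upward
```

No single quarter in that history would fail an audit; the trend is the finding. That is the difference between telemetry and a snapshot.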

Until we build those kinds of systems, most compliance programs will remain theater. As long as compliance is treated as paperwork rather than reproducibility, incentives and structure will always win out.

The frameworks are fine. What is missing is the ability to see, continuously, whether the organization is living up to them.

Ignore the vital signs, and collapse will always look sudden.