Category Archives: Security

Compliance at the Speed of Code

Compliance is a vital sign of organizational health. When it trends the wrong way, it signals deeper problems: processes that can’t be reproduced, controls that exist only on paper, drift accumulating quietly until trust evaporates all at once.

The pattern is predictable. Gradual decay, ignored signals, sudden collapse. Different industries, different frameworks, same structural outcome. (I wrote about this pattern here.)

But something changed. AI is rewriting how software gets built, and compliance hasn’t kept up.

Satya Nadella recently said that as much as 30% of Microsoft’s production code is now written by AI. Sundar Pichai put Google’s number in the same range. These aren’t marketing exaggerations; they mark a structural change in how software gets built.

Developers no longer spend their days typing every line. They spend them steering, reviewing, and debugging. AI fills in the patterns, and the humans decide what matters. The baseline of productivity has shifted.

Compliance has not. Its rhythms remain tied to quarterly reviews, annual audits, static documents, and ritualized fire drills. Software races forward at machine speed while compliance plods at audit speed. That mismatch isn’t just inefficient. It guarantees drift, brittleness, and the illusion that collapse comes without warning.

If compliance is the vital sign, how do you measure it at the speed of code?

What follows is not a description of today’s compliance tools. It’s a vision for where compliance infrastructure needs to go. The technology exists. The patterns are proven in adjacent domains. What’s missing is integration. This is the system compliance needs to become.

The Velocity Mismatch

The old world of software was already hard on compliance. Humans writing code line by line could outpace annual audits easily enough. The new world makes the mismatch terminal.

If a third of all production code at the largest software companies is now AI-written, then code volume, change velocity, and dependency churn have all exploded. Modern development operates in hours and minutes, not quarters and years.

Compliance, by contrast, still moves at the speed of filing cabinets. Controls are cross-referenced manually. Policies live in static documents. Audits happen long after the fact, by which point the patient has either recovered or died. By the time anyone checks, the system has already changed again.

Drift follows. Exceptions pile up quietly. Compensating controls are scribbled into risk registers. Documentation diverges from practice. On paper, everything looks fine. In reality, the brakes don’t match the car.

It’s like running a Formula 1 car with horse cart brakes. You might get a few laps in. The car will move, and at first nothing looks wrong. But eventually the brakes fail, and when they do the crash looks sudden. The truth is that failure was inevitable from the moment someone strapped cart parts onto a race car.

Compliance today is a system designed for the pace of yesterday, now yoked to the speed of code. Drift isn’t a bug. It’s baked into the mismatch.

The Integration Gap

Compliance breaks at the integration point. When policies live in Confluence and code lives in version control, drift isn’t a defect. It’s physics. Disconnected systems diverge.

The gap between documentation and reality is where compliance becomes theater. PDFs can claim controls exist while repos tell a different story.

Annual audits sample: pull some code, check some logs, verify some procedures. Sampling only tells you what was true that instant, not whether controls remain in place tomorrow or were there yesterday before auditors arrived.

Eliminate the gap entirely.

Policies as Code

Version control becomes the shared foundation for both code and compliance.

Policies, procedures, runbooks, and playbooks become versioned artifacts in the same system where code lives. Not PDFs stored in SharePoint. Not wiki pages anyone can edit without review. Markdown files in repositories, reviewed through pull requests, with approval workflows and change history. Governance without version control is theater.

When a policy changes, you see the diff. When someone proposes an exception (a documented deviation from policy), it’s a commit with a reviewer. When an auditor asks for the access control policy that was in effect six months ago, you check it out from the repo. The audit trail is the git history. Reproducibility by construction.

Governance artifacts get the same discipline as code. Policies go through PR review. Changes require approvals from designated owners. Every modification is logged, attributed, and traceable. You can’t silently edit the past.

Once policies live in version control, compliance checks run against them automatically. Code and configuration changes get checked against the current policy state as they happen. Not quarterly, not at audit time, but at pull request time.

When policy changes, you immediately see what’s now out of compliance. New PCI requirement lands? The system diffs the old policy against the new one, scans your infrastructure, and surfaces what needs updating. Gap analysis becomes continuous, not an annual fire drill that takes two months and produces a 60-page spreadsheet no one reads.

Risk acceptance becomes explicit and tracked. Not every violation is blocking, but every violation is visible. “We’re accepting this S3 bucket configuration until Q3 migration” becomes a tracked decision in the repo with an owner, an expiration date, and compensating controls. The weighted risk model has teeth because the risk decisions themselves are versioned and auditable.
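
To make the expiry half of that concrete, here is a minimal sketch of such a check in Python, assuming a hypothetical layout where each risk acceptance lives in the repo as a JSON file under exceptions/ with owner, expires, and compensating_controls fields. The schema is illustrative, not a standard.

# Fail the build when a tracked risk acceptance has expired or lacks
# documented compensating controls. Schema and paths are hypothetical.
import json
import sys
from datetime import date
from pathlib import Path

failures = []
for path in Path("exceptions").glob("*.json"):
    record = json.loads(path.read_text())
    expires = date.fromisoformat(record["expires"])
    if expires < date.today():
        failures.append(f"{path}: expired {expires}, owner {record['owner']}")
    elif not record.get("compensating_controls"):
        failures.append(f"{path}: no compensating controls documented")

if failures:
    print("\n".join(failures))
    sys.exit(1)  # an expired risk acceptance is drift; surface it loudly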

Monitoring Both Sides of the Gap

Governance requirements evolve. Frameworks update. If you’re not watching, surprises arrive weeks before an audit.

Organizations treat this as inevitable, scrambling when SOC 2 adds trust service criteria or PCI-DSS publishes a new version. The fire drill becomes routine.

But these changes are public. Machines can monitor for updates, parse the diffs, and surface what shifted. Auditors bring surprises. Machines should not.

Combine external monitoring with internal monitoring and you close the loop. When a new requirement lands, you immediately see its impact on your actual code and configuration.

SOC 2 adds a requirement for encryption key rotation every 90 days? The system scans your infrastructure, identifies 12 services that rotate keys annually, and surfaces the gap months ahead. You have time to plan, size the effort, build it into the roadmap.
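
A sketch of that scan, under a hypothetical inventory format where each service reports when its keys were last rotated; a real system would pull this from a KMS or asset inventory rather than a hard-coded dict:

# Surface services whose key age exceeds the new 90-day policy.
# Inventory contents and dates are illustrative.
from datetime import date

POLICY_MAX_AGE_DAYS = 90  # the new requirement in this example

inventory = {  # hypothetical: service -> last key rotation date
    "billing": date(2025, 1, 10),
    "analytics": date(2024, 11, 2),
    "auth": date(2025, 9, 1),
}

today = date(2025, 9, 20)  # pinned for a reproducible example
gaps = {
    svc: (today - rotated).days
    for svc, rotated in inventory.items()
    if (today - rotated).days > POLICY_MAX_AGE_DAYS
}
for svc, age in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{svc}: keys are {age} days old; policy allows {POLICY_MAX_AGE_DAYS}")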

This transforms compliance from reactive to predictive. You see requirements as they emerge and measure their impact before they become mandatory. The planning horizon extends from weeks to quarters.

From Vibe Coding to Vibe Compliance

Developers have already adapted to AI-augmented work. They call it “vibe coding.” The AI fills in the routine structures and syntax while humans focus on steering, debugging edge cases, and deciding what matters. The job shifted from writing every line to shaping direction. The work moved from typing to choosing.

Compliance will follow the same curve. The rote work gets automated. Mapping requirements across frameworks, checklist validations, evidence collection. AI reads the policy docs, scans the codebase, flags the gaps, suggests remediations. What remains for humans is judgment: Is this evidence meaningful? Is this control reproducible? Is this risk acceptable given these compensating controls?

This doesn’t eliminate compliance professionals any more than AI eliminated engineers. It makes them more valuable. Freed from clerical box-checking, they become what they should have been all along: stewards of resilience rather than producers of audit artifacts.

The output changes too. The goal is no longer just producing an audit report to wave at procurement. The goal is producing telemetry showing whether the organization is actually healthy, whether controls are reproducible, whether drift is accumulating.

Continuous Verification

What does compliance infrastructure look like when it matches the speed of code?

A bot comments on pull requests. A developer changes an AWS IAM policy. Before the PR merges, an automated check runs: does this comply with the principle of least privilege defined in access-control.md? Does it match the approved exception for the analytics service? If not, the PR is flagged. The feedback is immediate, contextual, and actionable.

Deployment gates check compliance before code ships. A service tries to deploy without the required logging configuration. The pipeline fails with a clear message: “This deployment violates audit-logging-policy.md section 3.1. Either add structured logging or file an exception in exceptions/logging-exception-2025-q4.md.”
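
A gate like that can be a few lines in the pipeline. Here is a sketch borrowing the names from the example above; the manifest format, policy file, and exception path are all illustrative assumptions:

# Block a deploy unless structured logging is declared or an approved
# exception file exists. File names mirror the example in the text.
import json
import sys
from pathlib import Path

manifest = json.loads(Path("deploy-manifest.json").read_text())
service = manifest["service"]

has_structured_logging = manifest.get("logging", {}).get("structured", False)
exception_file = Path(f"exceptions/logging-exception-{service}.md")

if not has_structured_logging and not exception_file.exists():
    sys.exit(
        "This deployment violates audit-logging-policy.md section 3.1. "
        f"Either add structured logging or file an exception in {exception_file}."
    )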

Dashboards update in real time, not once per quarter. Compliance posture is visible continuously. When drift occurs (when someone disables MFA on a privileged account, or when a certificate approaches expiration without renewal) it shows up immediately, not six months later during an audit.

Weighted risk with explicit compensating controls. Not binary red/green status, but a spectrum: fully compliant, compliant with approved exceptions, non-compliant with compensating controls and documented risk acceptance, non-compliant without mitigation. Boards see the shades of fragility. Practitioners see the specifics. Everyone works from the same signal, rendered at the right level of abstraction.

The Maturity Path

Organizations don’t arrive at this state overnight. Most are still at Stage 1 or earlier, treating governance as static documents disconnected from their systems. The path forward has clear stages:

Stage 1: Baseline. Get policies, procedures, and runbooks into version-controlled repositories. Establish them as ground truth. Stop treating governance as static PDFs. This is where most organizations need to start.

Stage 2: Drift Detection. Automated checks flag when code and configuration diverge from policy. The checks run on-demand or on a schedule. Dashboards show gaps in real time. Compliance teams can see drift as it happens instead of discovering it during an audit. The feedback loop shrinks from months to days. Some organizations have built parts of this, but comprehensive drift detection remains rare.

Stage 3: Integration. Compliance checks move into the developer workflow. Bots comment on pull requests. Deployment pipelines run policy checks before shipping. The feedback loop shrinks from days to minutes. Developers see policy violations in context, in their tools, while changes are still cheap to fix. This is where the technology exists but adoption is still emerging.

Stage 4: Regulatory Watch. The system monitors upstream changes: new SOC 2 criteria, updated PCI-DSS requirements, revised GDPR guidance. When frameworks change, the system diffs the old version against the new, identifies affected controls, maps them to your current policies and infrastructure, and calculates impact. You see the size of the work, the affected systems, and the timeline before it becomes mandatory. Organizations stop firefighting and start planning quarters ahead. This capability is largely aspirational today.

Stage 5: Enforcement. Policies tie directly to what can deploy. Non-compliant changes require explicit exception approval. Risk acceptance decisions are versioned, tracked, and time-bound. The system makes the right path the easy path. Doing the wrong thing is still possible (you can always override) but the override itself becomes evidence, logged and auditable. Few organizations operate at this level today.

This isn’t about replacing human judgment with automation. It’s about making judgment cheaper to exercise. At Stage 1, compliance professionals spend most of their time hunting down evidence. At Stage 5, evidence collection is automatic, and professionals spend their time on the judgment calls: should we accept this risk? Is this compensating control sufficient? Is this policy still appropriate given how the system evolved?

The Objections

There are objections. The most common is that AI hallucinates, so how can you trust it with compliance?

Fair question. Naive AI hallucinates. But humans do too. They misread policies, miss violations, get tired, and skip steps. The compliance professional who has spent eight hours mapping requirements across frameworks makes mistakes in hour nine.

Structured AI with proper constraints works differently. Give it explicit sources, defined schemas, and clear validation rules, and it performs rote work more reliably than most humans. Not because it’s smarter, but because it doesn’t get tired, doesn’t take shortcuts, and checks every line the same way every time.

The bot that flags policy violations isn’t doing unconstrained text generation. It’s diffing your code against a policy document that lives in your repo, following explicit rules, and showing its work: “This violates security-policy.md line 47, committed by [email protected] on 2025-03-15.” That isn’t hallucination. That’s reproducible evidence.

And it scales in ways humans never can. The human compliance team can review 50 pull requests a week if they’re fast. The bot reviews 500. When a new framework requirement drops, the human team takes weeks to manually map old requirements against new ones. The bot does it in minutes.

This isn’t about replacing human judgment. It’s about freeing humans from the rote work where structured AI performs better. Humans hallucinate on routine tasks. Machines don’t. Let machines do what they’re good at so humans can focus on what they’re good at: the judgment calls that actually matter.

The second objection is that tools can’t fix culture. Also true. But tools can make cultural decay visible earlier. They can force uncomfortable truths into the open.

When policies live in repos and compliance checks run on every PR, leadership can’t hide behind dashboards. If the policies say one thing and the code does another, the diff is public. If exceptions are piling up faster than they’re closed, the commit history shows it. If risk acceptance decisions keep getting extended quarter after quarter, the git log is evidence.

The system doesn’t fix culture, but it makes lying harder. Drift becomes visible in real time instead of hiding until audit season. Leaders who want to ignore compliance still can, but they have to do so explicitly, in writing, with attribution. That changes the incentive structure.

Culture won’t be saved by software. But it can’t be saved without seeing what’s real. Telemetry is the prerequisite for accountability.

The Bootstrapping Problem

If organizations are already decaying, if incentives are misaligned and compliance is already theater, how do they adopt this system?

Meet people where they are. Embed compliance in the tools developers already use.

Start with a bot that comments on pull requests. Pick one high-signal policy (the one that came up in the last audit, or the one that keeps getting violated). Write it in Markdown, commit it to a repo, add a simple check that flags violations in PRs. Feedback lands in the PR, where people already work.
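
Here is roughly what that starter bot can look like, sketched in Python against the GitHub REST API with the requests package. The repo name, the wildcard-IAM keyword scan, and the PR number are placeholders for whatever your one high-signal policy actually needs.

# Comment on a PR when the diff adds wildcard IAM actions, a stand-in
# for a real policy engine. Requires a token in GITHUB_TOKEN.
import os
import requests

REPO = "your-org/your-repo"                 # hypothetical
POLICY_PATH = "policies/access-control.md"  # the policy being enforced
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

def check_pr(pr_number: int) -> None:
    # Fetch the PR as a unified diff.
    diff = requests.get(
        f"https://api.github.com/repos/{REPO}/pulls/{pr_number}",
        headers={**HEADERS, "Accept": "application/vnd.github.diff"},
        timeout=30,
    ).text
    violations = [
        line for line in diff.splitlines()
        if line.startswith("+") and '"Action": "*"' in line
    ]
    if violations:
        body = (f"Possible violation of {POLICY_PATH} (least privilege): "
                f"{len(violations)} added line(s) grant wildcard IAM actions.")
        requests.post(
            f"https://api.github.com/repos/{REPO}/issues/{pr_number}/comments",
            headers=HEADERS, json={"body": body}, timeout=30,
        )

check_pr(42)  # hypothetical PR number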

This creates immediate value. Faster feedback. Issues caught before they ship. Less time in post-deployment remediation. The bot becomes useful, not bureaucratic overhead.

Once developers see value, expand coverage. Add more policies. Integrate more checks. Build the dashboard that shows posture in real time. Start with the point of maximum pain: the gap between what policies say and what code does.

Make the right thing easier than the wrong thing. That’s how you break equilibrium. Infrastructure change leads culture, not the other way around.

Flipping the Incentive Structure

Continuous compliance telemetry creates opportunities to flip the incentive structure.

The incentive problem is well-known. Corner-cutters get rewarded with velocity and lower costs. The people who invest in resilience pay the price in overhead and friction. By the time the bill comes due, the corner-cutters have moved on.

What if good compliance became economically advantageous in real time, not just insurance against future collapse?

Real-time, auditable telemetry makes compliance visible in ways annual reports never can. A cyber insurer can consume your compliance posture continuously instead of relying on a point-in-time questionnaire. Organizations that maintain strong controls get lower premiums. Rates adjust dynamically based on drift. Offer visibility into the metrics that matter and get buy-down points in return.

Customer due diligence changes shape. Vendor risk assessments that take weeks and rely on stale SOC 2 reports become real-time visibility into current compliance posture. Procurement accelerates. Contract cycles compress. Organizations that can demonstrate continuous control have competitive advantage.

Auditors spend less time collecting evidence and more time evaluating controls. When continuous compliance is demonstrable, scope reduces, costs drop, cycles shorten.

Partner onboarding that used to require months of back-and-forth security reviews happens faster when telemetry is already available. Certifications and integrations move at the speed of verification, not documentation.

The incentive structure inverts. Organizations that build continuous compliance infrastructure get rewarded immediately: lower insurance costs, faster sales cycles, reduced audit expense, easier partnerships. The people who maintain strong controls see economic benefit now, not just avoided pain later.

This is how you fix the incentive problem at scale. Make good compliance economically rational today.

The Choice Ahead

AI has already made coding a collaboration between people and machines. Compliance is next.

The routine work will become automated, fast, and good enough for the basics. That change is inevitable. The real question is what we do with the time it frees up.

Stop there, and compliance becomes theater with better graphics. Dashboards that look impressive but still tell you little about resilience.

Go further, and compliance becomes what it should have been all along: telemetry about reproducibility. A vital sign of whether the organization can sustain discipline when it matters. An early warning system that makes collapse look gradual instead of sudden.

If compliance was the vital sign of organizational decay, then this is the operating system that measures it at the speed of code.

The frameworks aren’t broken. The incentives are. The rhythms are. The integration is.

The technology to build this system exists. Version control is mature. CI/CD pipelines are ubiquitous. AI can parse policies and scan code. What’s missing is stitching the pieces together and treating compliance like production.

Compliance will change. The only question is whether it catches up to code or keeps trailing it until collapse looks sudden.

Beyond the Mathematics, Deploying Advanced Crypto Successfully

Advanced cryptographic systems like Multi-Party Computation (MPC) promise elegant solutions to complex security problems. Cryptography is essential. Every modern system already relies on it through authentication systems, TLS in browsers, disk and database encryption, and APIs connecting services. This means every organization is, by necessity, in the key-management business, whether they recognize it or not.

However, MPC’s mathematical guarantees, like those of other cryptographic systems, depend entirely on operational security foundations that are often overlooked. This post uses MPC to illustrate a critical principle: cryptographic sophistication cannot compensate for operational weaknesses.

Achieving technical distribution without operational segmentation provides only the illusion of improved security.

I argue for an “outside-in” security model that prioritizes foundational defenses before investing in protection against exotic threats. The trick, of course, is understanding which threats are exotic and which are impending. The result is a framework for rational security resource allocation that prevents expensive technology deployments from becoming elaborate security theater.

1. The Endless Security Regression

The fundamental flaw becomes apparent when you trace through an actual MPC deployment. You start with elegant mathematics that promise no single party can reconstruct the private key. Then you discover the operational reality.
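
The elegance is real. A toy Shamir-style split below shows the promise: any quorum of shares recovers the key, while fewer reveal nothing. (Shamir splitting is not MPC proper; real MPC protocols compute signatures without ever reconstructing the key. Treat this as an illustration of the distribution idea, not production code.)

# Toy Shamir secret sharing over a prime field, for illustration only.
import secrets

PRIME = 2**127 - 1  # a Mersenne prime large enough for a demo secret

def make_shares(secret: int, threshold: int, n: int):
    """Split secret into n shares; any `threshold` of them recover it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x=0 reconstructs the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = secrets.randbelow(PRIME)
shares = make_shares(key, threshold=2, n=3)
assert recover(shares[:2]) == key  # any 2 of 3 shares suffice
assert recover(shares[1:]) == key
# A single share, by itself, reveals nothing about the key.

The math works exactly as advertised. The trouble starts with where those shares live.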

The MPC shard itself needs protection, so you encrypt it with a key born and stored in a TPM. But TPMs only protect their keys from theft, not their use, and that shard key gets loaded into memory in the clear, so you add a Trusted Execution Environment to protect the logic that uses the key. But TEEs have their own vulnerabilities: for example, all are vulnerable to side-channel attacks that can extract secrets from supposedly secure enclaves.

To mitigate side channels, you need dedicated hardware, but if you don’t operate it yourself, you’re now also vulnerable to cloud operators who can access hypervisor memory. So you run your code with encrypted memory and implement hardware attestation to prove the right code is running, but now those attestation keys need adequate protection, and you realize that’s outside of your control. You’ll also need X.509 certificates for network identity so the MPC nodes can communicate, which again have their own private keys, leading you back to key and certificate management.

At every layer, the “solution” creates new secrets that need the same level of protection you originally sought to avoid. You trade software vulnerabilities for hardware vulnerabilities, single keys for multiple keys, simple operational procedures for complex distributed protocols. Each mitigation introduces new attack surfaces that are often more subtle and harder to debug and audit than the original problem.

If one person, team, or organization ultimately controls all these layers (generates the keys, manages the TPMs, configures the TEEs, handles the attestation, administers the network), then you’ve built an elaborate system that may provide no meaningful security improvement over a single well-protected key.

While not universally true, many MPC deployments are expensive ways to do what a simple, well-secured, and well-governed key can often accomplish with less complexity and operational overhead.

2. We Don’t Hack Crypto, We Hack How It’s Used

Government cybersecurity agencies understand this principle. The NSA explicitly states about Quantum Key Distribution: “NSA does not support the usage of QKD or QC to protect communications in National Security Systems” because “security of QKD and QC is highly implementation-dependent rather than assured by laws of physics.”

The UK’s National Cyber Security Centre takes the same position: “it does not endorse its use in government or military systems and cautions against its sole reliance on networks used by critical infrastructure.” These agencies aren’t anti-cryptography. They understand that mathematical guarantees mean nothing without operational security. As Bruce Schneier observed: ‘If you think cryptography is the answer to your problem, then you don’t understand cryptography and you don’t understand your problem.’

The overwhelming majority of security breaches don’t involve breaking cryptographic protocols or algorithms. Attackers exploit implementation bugs, misconfigured systems, stolen credentials, supply chain compromises, and human error. Advanced cryptographic systems don’t address any of these attack vectors. Instead, they add layers of complexity that introduce new opportunities for the same fundamental failure modes.

This confirms the central truth: we don’t hack cryptography; we hack the way cryptography is used. The attack surface doesn’t disappear with sophisticated protocols; it shifts to deeper, more subtle layers that are often harder to audit and secure.

3. The Resource Allocation Problem

Security budgets and time are finite. Every dollar spent hardening against more exotic threats is a dollar not spent on measures with demonstrably higher expected returns. The ROI calculation is stark: comprehensive code reviews catch implementation bugs that affect every user, penetration testing finds configuration errors that expose entire systems, and security training prevents social engineering attacks that bypass all technical controls.

Organizations consistently invert this priority structure. They deploy MPC to protect against collusion while remaining vulnerable to SQL injection in a system the MPC implementation reads from. They implement complex attestation protocols while using default passwords. They build elaborate key distribution systems while running unpatched software.

This represents a fundamental misunderstanding of threat modeling. Academic cryptography often promotes an “inside-out” security model that begins with exotic threats: compromised hypervisors, broken cryptographic algorithms, malicious hardware manufacturers, and nation-state attacks on protocols. These are real problems, but this starting point assumes foundational security problems are already solved.

Operational security requires an “outside-in” approach. Master secure design and comprehensive code security first. Establish robust access control with mandatory multi-factor authentication. Implement secure operational procedures for deployment and monitoring. Verify the integrity of the entire supply chain and lifecycle of the services. Only after demonstrating mastery of these fundamentals does it become rational to invest in protection against sophisticated internal threats.

4. When Administrative Segmentation Actually Works

This principle of aligning cryptographic technology with operational structure is not unique to MPC. For decades, mature security programs have used dedicated hardware to achieve similar goals. A Hardware Security Module (HSM), for example, provides its greatest value not just as tamper-resistant hardware, but as a tool to enforce separation of duties.

In a robust deployment, the team managing the HSM is organizationally separate from the team developing the application that uses it. Application developers can request cryptographic operations, but they cannot access key material or manage the HSM’s lifecycle. This creates two distinct operational domains, forcing an attacker to compromise both the application and the separate security team to exfiltrate a key.

Multi-Party Computation should be viewed through the same lens. It is a powerful cryptographic tool for achieving a similar end goal, creating separate, non-colluding operational domains. The mathematical distribution of key shares is the technical mechanism, but the resulting security value is only realized when that distribution is mirrored by genuine administrative and organizational independence. Like an HSM managed by a single administrator, an MPC network controlled by a single entity becomes an exercise in costly ceremony, not a meaningful transfer of risk.

It’s important to acknowledge that the “outside-in” prioritization model must be adapted for specialized providers. For a custodian of high-value digital assets, for instance, insider collusion and key exfiltration represent the primary existential risks, justifying an intense focus on internal threats. However, this is where the conflict between security and complexity becomes most acute. Even for these organizations, each layer of cryptographic defense adds operational intricacy and new, subtle attack surfaces. This reinforces the core thesis: the goal is not merely to layer technology, but to achieve genuine, independent operational segmentation, which remains the most effective defense against those very insider threats.

A common application attempts to create segmentation for a single user by distributing key shares across their personal devices, such as a laptop and mobile phone. The stated goal is to mitigate device compromise threats. While this approach provides technical and physical separation that raises the bar for attackers, it fails to solve the underlying administrative domain problem. In this case, the domain is the single, fallible human user.

The security of the entire system still collapses to the judgment and security hygiene of one person. A sophisticated phishing campaign can compromise both devices, defeating the technical separation. This model trades a simple technical exploit for a more complex social engineering challenge, but it doesn’t achieve the resilient segmentation that comes from genuinely independent administrative control by different people or organizations.

Real value emerges when organizations delegate specific functions to genuinely independent third parties. This is where MPC’s verifiability properties become a critical enabler. Cross-organizational collaborations work when participants have different incentive structures, operate under different regulatory constraints, and maintain separate operational procedures.

The critical question isn’t “Are the keys distributed?” but “Are the operational domains truly independent?” This requires different administrative control, different legal jurisdictions, different physical infrastructure, and different threat models. Without this genuine independence, MPC can become expensive security theater.

MPC also enables unique capabilities that justify complexity when applied correctly. Beyond cryptographic key management, MPC provides genuine value for privacy-preserving collaboration across organizational boundaries. When genuinely distrustful parties need to compute insights from combined datasets, MPC offers capabilities impossible to achieve through traditional security models.

This principle of provable quorums extends to critical internal operations as well. For instance, in a software deployment system requiring M-of-N approvals, MPC can transform the process from one based on a trusted policy engine (like a source control platform) to one that is cryptographically provable, where the deployment signature itself is the non-repudiable proof that the quorum was met. Other examples where MPC shines involve cross-organizational collaboration: consider multiple hospitals training medical AI models without sharing patient data, or financial institutions collaborating on fraud detection models while keeping transaction data private. These scenarios involve genuinely independent organizations with different regulatory constraints, administrative controls, business interests, and operational procedures.

In cryptographic applications, MPC is often deployed to protect keys from theft, but the real threat is typically unauthorized signing rather than key exfiltration. An attacker who can trick a system into signing a malicious transaction has achieved their goal without ever seeing the private key. This threat model actually supports our broader argument: the operational security around the signing process (authentication, authorization, and audit trails) matters more than the mathematical distribution of the key material itself.

5. The Gap Between Mathematical and Operational Security

This reveals the persistent gap between theoretical and practical security. MPC provides elegant mathematical guarantees against collusion, but these guarantees assume participants have already solved the mundane security problems that cause most breaches.

Similarly, transparency systems can provide complete auditability while remaining fundamentally insecure. A transparent but insecure system allows everyone to watch the breach happen in real time. The visibility doesn’t create security.

MPC follows this pattern. Strong mathematical guarantees against collusion become meaningless if participants are vulnerable to simpler attacks. The verifiability that makes MPC compelling only matters if the systems performing computations are themselves trustworthy. Mathematical elegance doesn’t compensate for operational weaknesses.

This gap between theoretical and practical security explains why sophisticated attackers often ignore advanced cryptographic protections entirely. They find it easier to compromise endpoints, exploit implementation bugs, or target operational procedures than to break mathematical protocols.

6. Building Real Security Foundations

True security comes from systematically eliminating vulnerabilities in order of probability and impact. This requires disciplined resource allocation that resists the allure of sophisticated solutions until fundamentals are mastered.

The foundation must include comprehensive implementation security through rigorous code audits and secure development practices. Access control requires mandatory multi-factor authentication, least privilege principles, and robust identity management. Operational security demands secure deployment procedures, continuous monitoring, and effective incident response capabilities. Supply chain security requires verification of all dependencies and infrastructure components.

Organizations that build this foundation create systems so robust that only extraordinary attackers with exceptional resources pose meaningful threats. For these organizations, advanced cryptographic systems like MPC provide genuine additional value by addressing remaining threats that can’t be mitigated through conventional means.

Organizations that skip these fundamentals in favor of advanced cryptography build impressive technical complexity on unstable foundations. They solve exotic problems while remaining vulnerable to mundane attacks.

The Capstone on the Pyramid

Advanced cryptography represents some of humanity’s most remarkable intellectual achievements. MPC provides mathematically verifiable guarantees that are impossible to achieve through traditional systems. Its ability to enable privacy-preserving collaboration and eliminate single points of failure makes it invaluable for specific use cases.

However, these technologies are tools for the top of the security hierarchy. They should be the final components in mature security programs that have already eliminated implementation vulnerabilities, established robust operational procedures, and built resilience against common attack vectors.

One pragmatic consideration is that the excitement around advanced technologies like MPC can be a strategic lever for security leaders to secure budget for foundational improvements, such as cryptographic inventories or access controls, needed to realize that technology’s value. Post-quantum cryptography illustrates this perfectly: migration is the capstone, but preparation (creating inventories, achieving operational agility) is foundational work that must start immediately. Yet, for most organizations, an outside-in approach delivers larger security improvements per dollar invested.

The goal isn’t to avoid advanced cryptography but to apply it strategically. Organizations that understand the primacy of operational security can deploy MPC effectively as a capstone technology. Those who use it as a substitute for security fundamentals will find themselves with expensive, complex systems that provide little meaningful improvement to their actual security posture.

Defense first, technology second. This principle transforms security into a systematic process of eliminating vulnerabilities, ensuring finite resources target critical threats for maximum impact.

Another Sleeping Giant: Microsoft’s Root Program and the 1.1.1.1 Certificate Slip

This morning (September 3, 2025), someone posted a report to the Mozilla dev-security-policy list that exposed a serious incident: “Incident Report: Mis-issued Certificates for SAN iPAddress: 1.1.1.1 by Fina RDC 2020.” An obscure CA, Fina RDC 2020, issued certificates containing the IP address 1.1.1.1, used by Cloudflare for encrypted DNS. These certificates should never have been issued.

Reference: https://groups.google.com/a/mozilla.org/g/dev-security-policy/c/SgwC1QsEpvc

Why this matters

1.1.1.1 is a critical bootstrap endpoint

This IP anchors encrypted DNS (DoH, DoT). A mis-issued certificate, combined with a BGP hijack, allows an attacker to intercept traffic before secure tunnels form. Some may argue that the need for a BGP hijack is itself a mitigation, but it is not: BGP hijacks are a regular occurrence, often state-sponsored, and when combined with a valid certificate they turn a routing incident into a full man-in-the-middle compromise.

Mainstream browsers like Chrome and Firefox typically use the domain name cloudflare-dns.com when bootstrapping DoH/DoT, so they would ignore a certificate that only listed IP:1.1.1.1. However, both Google and Cloudflare also support direct IP endpoints (https://8.8.8.8/dns-query and https://1.1.1.1/dns-query). In these cases, an IP SAN certificate would validate just as Alex Radocea’s curl test showed: “subjectAltName: host “8.8.8.8” matched cert’s IP address!”.

The behavior differs slightly between providers:

  • Google accepts raw-IP DoH connections and does not mandate a specific HTTP Host header.
  • Cloudflare accepts raw-IP DoH connections but requires the Host: cloudflare-dns.com header.

This means raw-IP endpoints are real-world usage, not just theoretical. If a mis-issued IP SAN cert chains to a CA trusted by the platform (like in Microsoft’s store), exploitation becomes practical.
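
You can see what a raw-IP endpoint presents with a few lines of standard-library Python, mirroring the curl test above. The output format follows Python’s ssl module conventions; treat this as an illustrative probe, not a monitoring tool.

# Print the subjectAltName entries presented by raw-IP DoH endpoints.
import json
import socket
import ssl

def peer_sans(ip: str, port: int = 443):
    ctx = ssl.create_default_context()
    # Python validates IP-address SANs when server_hostname is an IP.
    with socket.create_connection((ip, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=ip) as tls:
            return tls.getpeercert().get("subjectAltName", ())

for ip in ("1.1.1.1", "8.8.8.8"):
    print(ip, json.dumps(peer_sans(ip)))
# Expect entries like ["IP Address", "1.1.1.1"] alongside DNS names.
# A mis-issued cert for the same IP validates just as cleanly if it
# chains to a root the platform trusts.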

A Chronic Pattern of Mis-issuance

The incident report cites two mis-issued certs that were still valid as of September 3, 2025.

This was not an isolated event. A broader search of CT logs reveals a recurring pattern of these mis-issuances from the same CA, indicating a systemic problem. 

Reference: https://crt.sh/?q=1.1.1.1

CT monitoring failed to trigger remediation

Cloudflare runs its own CT monitoring system: https://developers.cloudflare.com/ssl/ct/monitor/

If these certs surfaced in crt.sh, Cloudflare should have flagged them. Instead, there was no visible remediation.

What’s going wrong?

  • Are IP SANs being monitored effectively enough to detect a pattern?
  • Is there a proper pipeline from alerts to action?
  • Why were these certs still valid after months?

It is easy to focus on where CT monitoring fell short in this case, and clearly it did. But we should not lose sight of its value. Without CT, we would not have visibility into many of these mis-issuances, and we do not know how many attackers were dissuaded simply because their activity would be logged. CT is not perfect, but this incident reinforces why it is worth doubling down: improving implementations, making monitoring easier, and investing in features like static CT and verifiable indices that make it easier to use this fantastic resource.
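
As a sketch of what a “pipeline from alerts to action” could start as, here is a minimal poller against crt.sh’s JSON interface. The requests dependency and the issuer allow-list are assumptions; a real monitor would deduplicate, persist state, and page a human instead of printing.

# Flag certificates naming a watched IP that come from unexpected CAs.
import requests

WATCHED_IP = "1.1.1.1"
EXPECTED_ISSUERS = ("DigiCert",)  # hypothetical allow-list

resp = requests.get(
    "https://crt.sh/",
    params={"q": WATCHED_IP, "output": "json"},
    timeout=30,
)
for entry in resp.json():
    issuer = entry["issuer_name"]
    if not any(ok in issuer for ok in EXPECTED_ISSUERS):
        print(f"UNEXPECTED ISSUER: {issuer} "
              f"(crt.sh id {entry['id']}, expires {entry['not_after']})")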

What this means for users

The impact is scoped to users of browsers and applications that rely on the Windows operating system’s root store, such as Microsoft Edge. Browsers like Chrome and Firefox, which manage their own root stores, are unaffected. However, because both Google and Cloudflare accept raw-IP DoH endpoints, the risk extends beyond pure edge cases.

Consider an affected user at a public hotspot. A BGP hijack redirects their traffic, the client connects directly to 1.1.1.1, and the system checks the cert and sees a valid, Cloudflare-like certificate. The attacker succeeds not just in breaking DNS, but in controlling secure web sessions.

This enables:

  • Silent proxying of all traffic
  • Token and session theft via impersonation
  • Decryption of DoH queries (for raw-IP clients)
  • In-flight alteration of pages and updates

This is the full man-in-the-middle playbook.

A tiny CA with outsized impact

Certificate issuance across the WebPKI is highly skewed, as of today:

CA Owner                            % of Unexpired Pre-Certificates
Internet Security Research Group    46.52%
DigiCert                            22.19%
Sectigo                             11.89%
Google Trust Services                8.88%
GoDaddy                              5.77%
Microsoft Corporation                3.45%
IdenTrust                            0.63%
All others                          <0.70%

Reference: https://crt.sh/cert-populations and https://docs.google.com/spreadsheets/d/1gshICFyR6dtql-oB9uogKEvb2HarjEVLkrqGxYzb1C4/

Fina RDC 2020 has ~201 unexpired certs, accounting for <0.00002% of issuance. Yet Microsoft trusts it, while Chrome, Firefox, and Safari do not. That asymmetry leaves Edge and Windows users vulnerable.

Fina RDC 2020 is also listed in the EU Trusted List (EUTL) as authorized to issue Qualified Website Authentication Certificates (QWACs): https://eidas.ec.europa.eu/efda/trust-services/browse/eidas/tls/tl/HR/tsp/1/service/14. While no mainstream browsers currently import the EUTL for domain TLS, eIDAS 2.0 risks forcing them to. That means what is today a Microsoft-specific trust asymmetry could tomorrow be a regulatory mandate across all browsers.

Moreover, Microsoft requires that CAs provide “broad value to Windows customers and the internet community.”

Reference: https://learn.microsoft.com/en-us/security/trusted-root/program-requirements

Fina clearly does not meet that threshold.

Certificate lifespan context

The CA/Browser Forum (April 2025) approved ballot SC-081v3: TLS certificates will be limited to 47 days by March 2029:

  • March 15 2026: max 200 days
  • March 15 2027: max 100 days
  • March 15 2029: max 47 days

Reference: https://www.cyberark.com/resources/white-papers/how-47-day-tls-certificates-are-reshaping-digital-trust

IP SAN certs, however, should be much shorter-lived.

The missed opportunity: If short-lived IP certs were required by policy, this incident would have already expired, reducing exposure from months to days. Baseline Requirements currently do not differentiate IP SAN certs from DNS names. If they did, such exposure could be avoided.

On intent and attribution

Attribution in WebPKI failures is notoriously difficult. The noisy, repeated issuance pattern suggests this may have been a systemic accident rather than a targeted attack. Still, intent is irrelevant. A certificate that shouldn’t exist was issued and left valid for months. Once issued, there is no way to tell what a certificate and its key were used for. The governance failure is what matters.

From an attacker’s perspective, this is even more concerning. CT logs can be mined as a reconnaissance tool to identify the weakest CAs – those with a track record of mis-issuance. An adversary doesn’t need to compromise a major CA; they only need to find a small one with poor controls, like Fina RDC 2020, that is still widely trusted in some ecosystems. That makes weak governance itself an attack surface.

Reference: https://unmitigatedrisk.com/?p=850

Root governance failures

We need risk-aware, continuous root governance:

  • Blast radius control for low-volume CAs
  • Tiered validation for high-value targets (like IP SAN certificates)
  • Real CT monitoring that triggers remediation
  • Cross-vendor accountability (one vendor’s trust shouldn’t be universal if others reject it)

I wrote on this governance model here: https://unmitigatedrisk.com/?p=923

A broader pattern: software and the WebPKI

This incident is not a standalone. It follows previous failures:

DigiNotar (2011, Netherlands): Compromised, issuing hundreds of rogue certificates – including for Google – that were used in live MITM surveillance against Iranian users. The CA collapsed and was distrusted by all major browsers. 

Reference: https://unmitigatedrisk.com/?p=850

TÜRKTRUST (2013, Turkey): Issued an intermediate CA certificate later used to impersonate Google domains. 

Reference: https://unmitigatedrisk.com/?p=850

ANSSI (2013, France): Government CA issued Google-impersonation certs. Browsers blocked them.

Reference: https://arstechnica.com/information-technology/2013/12/french-agency-caught-minting-ssl-certificates-impersonating-google/

Microsoft code-signing trust (2025): I documented how poor governance turned Microsoft code signing into a subversion toolchain: https://unmitigatedrisk.com/?p=1085

The pattern is clear: long-lived trust, weak oversight, repeated governance failures. And notably, most of the major real-world MITM events trace back to European CAs. That history underscores how dangerous it is when regulatory frameworks like eIDAS 2.0 mandate trust regardless of technical or governance quality.

Why eIDAS 2.0 makes it worse

The EU’s provisional agreement on eIDAS 2.0 (November 2023) mandates that browsers trust CAs approved by EU member states to issue QWACs (Qualified Website Authentication Certificates).

This comes as EV certificates (part of the “OV” slice) have declined to only ~4% of issuance as tracked on https://merkle.town (OV includes EV in their charts). EV failed because it misled users. The Stripe-name example shows that easily: https://arstechnica.com/information-technology/2017/12/nope-this-isnt-the-https-validated-stripe-website-you-think-it-is/

Browsers are negotiating with the EU to separate QWAC trust from normal trust. For now, however, eIDAS still forces browsers to trust potentially weak or misbehaving CAs. This repeats the Microsoft problem but now globally and legislatively enforced.

And the oversight problem runs deeper. Under eIDAS, Conformity Assessment Bodies (CABs) are responsible for auditing and certifying qualified trust service providers. Auditors should have caught systemic mis-issuance like the 1.1.1.1 case, and so should CABs. Yet in practice many CABs are general IT auditors with limited PKI depth. If they miss problems, the entire EU framework ends up institutionalizing weak governance rather than correcting it.

A parallel exists in other ecosystems: under Entrust’s audits, roughly 3% of issued certificates are supposed to be reviewed. Because CAs select the audit samples, in theory one could envision a CA trying to hide bad practices. But the more likely situation is that auditors simply missed it. With enough audits over time, all but deliberate concealment should have brought at least one of these mis-issued certificates into scope. That points to an auditing and CAB oversight gap, not just a rogue CA.

The bigger picture

A few CAs dominate WebPKI. The long tail, though small, can create massive risk if trusted. Microsoft’s root store is broader and more passive than others. The critical issue is the number of trusted organizations, each representing a potential point of failure. Microsoft’s own list of Trusted Root Program participants includes well over one hundred CAs from dozens of countries, creating a vast and difficult-to-audit trust surface. 

Reference: https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT

Forced or passive trust is a recipe for systemic risk.

Microsoft’s root store decisions expose its users to risks that Chrome, Firefox, and Safari users are shielded from. Yet Microsoft participates minimally in WebPKI governance. If the company wants to keep a broader and more permissive root store, it should fund the work needed to oversee it – staffing and empowering a credible root program, actively participating in CA/Browser Forum policy, and automating the monitoring of Certificate Transparency logs for misuse. Without that investment, Microsoft is effectively subsidizing systemic risk for the rest of the web.


Thanks to Alex Radocea for double-checking DoH/DoT client behavior in support of this post.

How Microsoft Code Signing Became Part of a Trust Subversion Toolchain

Code signing was supposed to tell you who published a piece of software so you could decide whether to trust and install it. For nearly three decades, cryptographic signatures have bound a binary to a publisher’s identity, guaranteeing it hasn’t been tampered with since signing. But on Windows, that system is now broken in ways that would make its original designers cringe.

Attackers have found ways to completely subvert this promise without breaking a single cryptographic primitive. They can create an unlimited number of different malicious binaries that all carry the exact same “trusted” signature, or exploit careless publishers operating signing oracles, turning legitimate software into a bootloader for malware. The result is a system where valid signatures from trusted companies can no longer tell you anything meaningful about what the software will actually do.

Attackers don’t need to steal keys or compromise Certificate Authorities. They use the legitimate vendor software and publicly trusted code signing certificates, perverting the entire purpose of publisher-identity-based code signing.

Microsoft’s Long-Standing Awareness

Microsoft has known about the issue of malleability for at least a decade. In 2013, they patched CVE-2013-3900, where attackers could modify signed Windows executables, adding malicious code in “unverified portions” without invalidating the Authenticode signature. WinVerifyTrust improperly validated these files, allowing one “trusted” signature to represent completely different, malicious behavior.

This revealed a deeper architectural flaw: signed binaries could be altered by unsigned data. Microsoft faced a classic platform dilemma – the kind that every major platform holder eventually confronts. Fixing this comprehensively risked breaking legacy software critical to their vast ecosystem, potentially disrupting thousands of applications that businesses depended on daily. The engineering tradeoffs were genuinely difficult: comprehensive security improvements versus maintaining compatibility for millions of users and enterprise customers who couldn’t easily update or replace critical software.

They made the fix optional, prioritizing ecosystem compatibility over security hardening. This choice might have been understandable from a platform perspective in 2013, when the threat landscape was simpler and the scale of potential abuse wasn’t yet clear. But it becomes increasingly indefensible as attacks evolved and the architectural weaknesses became a systematic attack vector rather than an isolated vulnerability.

In 2022, Microsoft republished the advisory, confirming they still won’t enforce stricter verification by default. While today’s issues differ, they are part of a similar class of vulnerabilities that attackers now exploit systematically. The “trusted-but-mutable” flaw is now starting to permeate the Windows code signing ecosystem. Attackers use legitimate, signed applications as rootkit-like trust proxies, inheriting vendors’ reputation and bypass capabilities to deliver arbitrary malicious payloads.

Two incidents show we’re not dealing with isolated bugs but with systematic assaults on the core assumptions of Microsoft code signing.

ConnectWise: When Legitimate Software Adopts Malware Design Patterns

ConnectWise didn’t stumble into a vulnerability. They deliberately engineered their software using design patterns from the malware playbook. Their “attribute stuffing” technique embeds unsigned configuration data in the unauthenticated_attributes field of the PKCS#7 (CMS) envelope, a tactic malware authors use to conceal payloads in signed binaries.

In PKCS#7, the SignedData structure includes a signed digest (covering the binary and metadata) and optional unauthenticated_attributes, which lie outside the digest and can be modified post-signing without invalidating the signature. ConnectWise’s ScreenConnect installer misuses the Microsoft-reserved OID for Individual Code Signing (1.3.6.1.4.1.311.4.1.1) in this field to store unsigned configuration data, such as server endpoints that act as the command-and-control server for the client. This OID, meant for specific code signing purposes, is exploited to embed attacker-controlled configs, allowing the same signed binary to point to different servers without altering the trusted signature.

The ConnectWise ScreenConnect incident emerged when River Financial’s security team found attackers had created a fake website distributing malware as a “River desktop app.” It was trust inheritance fraud: a legitimately signed ScreenConnect client auto-connecting to an attacker-controlled server.

The binary carried a valid signature signed by:

Subject: /C=US/ST=Florida/L=Tampa/O=Connectwise, LLC/CN=Connectwise, LLC 
Issuer: /C=US/O=DigiCert, Inc./CN=DigiCert Trusted G4 Code Signing RSA4096 SHA384 2021 CA1
Serial Number: 0B9360051BCCF66642998998D5BA97CE
Valid From: Aug 17 00:00:00 2022 GMT 
Valid Until: Aug 15 23:59:59 2025 GMT

Windows trusts this as legitimate ConnectWise software: no SmartScreen warnings, no UAC prompts, silent installation, and immediate remote control. Attackers generate a fresh installer via a ConnectWise trial account, or simply take an existing package and manually edit the unauthenticated_attributes: extracting a benign signature, grafting in a malicious configuration blob (e.g., an attacker C2 server), reinserting the modified signature, and producing a “trusted” binary. Each variant shares the certificate’s reputation, bypassing Windows security.

Why does Windows trust binaries with oversized, unusual unauthenticated_attributes? Legitimate signatures need minimal metadata, yet Windows ignores red flags like large attribute sections, treating them as fully trusted. ConnectWise’s choice to embed mutable configs mirrors malware techniques, creating an infinite malware factory where one signed object spawns unlimited trusted variants.
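
Defenders can at least hunt for that red flag today. Below is a rough sketch, assuming the pefile and asn1crypto packages, that flags unusually large unsigned attributes in a binary’s Authenticode blob; the size threshold is a made-up heuristic, not an established cutoff.

# Scan a PE's Authenticode signature for oversized unauthenticated
# (unsigned) attributes. Threshold and usage are illustrative.
import sys
import pefile
from asn1crypto import cms

SUSPICIOUS_BYTES = 1024  # hypothetical; tune for your fleet

pe = pefile.PE(sys.argv[1])
sec_dir = pe.OPTIONAL_HEADER.DATA_DIRECTORY[
    pefile.DIRECTORY_ENTRY["IMAGE_DIRECTORY_ENTRY_SECURITY"]]
if not sec_dir.Size:
    sys.exit("unsigned binary")

# The security directory holds a WIN_CERTIFICATE: an 8-byte header
# (dwLength, wRevision, wCertificateType) followed by the PKCS#7 blob.
raw = pe.__data__[sec_dir.VirtualAddress + 8:
                  sec_dir.VirtualAddress + sec_dir.Size]
signed_data = cms.ContentInfo.load(raw)["content"]

for signer in signed_data["signer_infos"]:
    unsigned = signer["unsigned_attrs"]
    if unsigned.native is None:  # no unsigned attributes present
        continue
    for attr in unsigned:
        size = len(attr.dump())
        if size > SUSPICIOUS_BYTES:
            print(f"large unsigned attribute {attr['type'].dotted}: {size} bytes")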

ConnectWise’s deliberate use of PKCS#7 unauthenticated attributes for ScreenConnect configuration, such as server endpoints, bypasses code signing’s security by allowing post-signing changes, the same tactic malware uses to hide payloads in signed binaries. Likely a cost-saving choice, it externalizes the cost of abuse to users and enables phishing campaigns. It weaponizes a signature flexibility the industry has been warned about for decades, normalizing flaws that demand urgent security responses. Solutions exist to fix this.

The Defense Dilemma

Trust inheritance attacks leave security teams in genuinely impossible positions – positions that highlight the fundamental flaws in our current trust model. Defenders face a no-win scenario where every countermeasure either fails technically or creates operational chaos.

Blocking file hashes fails because attackers generate infinite variants with different hashes but the same trusted signature – each new configuration changes the binary’s hash while preserving the signature’s validity. This isn’t a limitation of security tools; it’s the intended behavior of code signing, where the same certificate can sign multiple different binaries.

Blocking the certificate seems like the obvious solution until you realize it disrupts legitimate software, causing operational chaos for organizations relying on the vendor’s products. How, for example, are they to know what else was signed by that certificate? Blocking it is effectively a self-inflicted denial-of-service that can shut down critical business operations. Security teams face an impossible choice: allow potential malware or break their own infrastructure.

Behavioral detection comes too late in the attack chain. By the time suspicious behavior triggers alerts, attackers have already gained remote access, potentially disabled monitoring, installed additional malware, or begun data exfiltration. The initial trust inheritance gives attackers a crucial window of legitimacy.

These attacks operate entirely within the bounds of “legitimate” signed software, invisible to signature-based controls that defenders have spent years tuning and deploying. Traditional security controls assume that valid signatures from trusted publishers indicate safe software – an assumption these attacks systematically exploit. Cem Paya’s detailed analysis, part of River Financial’s investigation, provides a proof-of-concept for attribute grafting, showing how trivial it is to create trusted malicious binaries.

ConnectWise and Atera resemble a modern Back Orifice, which debuted at DEF CON in August 1998 to demonstrate security flaws in Windows 9x. The evolution is striking: Back Orifice emerged two years after Authenticode’s 1996 introduction, specifically to expose Windows security weaknesses, and required stealth and evasion to avoid detection. Unlike Back Orifice, which had to hide from the code signing protections Microsoft had established, these modern tools don’t evade those protections – they weaponize them, inheriting trust from valid signatures while delivering the same remote control capabilities without warnings.

Atera: A Trusted Malware Factory

Atera operates a legitimate remote monitoring and management (RMM) platform similar to ConnectWise ScreenConnect, giving IT administrators remote access capabilities for managing client systems. Like other RMM solutions, Atera distributes signed client installers that establish persistent connections to their management servers.

They also operate what effectively amounts to a public malware signing service. Anyone with an email can register for a free trial and receive customized, signed, timestamped installers. Atera’s infrastructure embeds attacker-supplied identifiers into the MSI’s Property table, then signs the package with their legitimate certificate.

This breaks code signing’s promise of publisher accountability. Windows sees “Atera Networks Ltd,” assigns reputation based on the publisher’s track record, but can’t distinguish whether the binary came from Atera’s legitimate operations or from an anonymous attacker who signed up minutes ago. The signature’s identity becomes meaningless when it could represent anyone.

In a phishing campaign targeting River Financial’s customers, Atera’s software posed as a “River desktop app,” with attacker configs embedded in a signed binary. 

The binary carried this valid signature, signed by:

Subject: CN=Atera Networks Ltd,O=Atera Networks Ltd,L=Tel Aviv-Yafo,C=IL,serialNumber=513409631,businessCategory=Private Organization,jurisdictionC=IL 
Issuer: CN=DigiCert Trusted G4 Code Signing RSA4096 SHA384 2021 CA1,O=DigiCert, Inc.,C=US
Serial: 09D3CBF84332886FF689B04BAF7F768C
notBefore: Jan 23 00:00:00 2025 GMT 
notAfter: Jan 22 23:59:59 2026 GMT

Atera provides a cloud-based remote monitoring and management (RMM) platform, unlike ScreenConnect, which supports both on-premises and cloud deployments with custom server endpoints. Atera’s agents connect only to Atera’s servers, but attackers abuse its free trial to generate signed installers tied to their accounts via embedded identifiers (like email or account ID) in the MSI Property table. This allows remote control through Atera’s dashboard, turning it into a proxy for malicious payloads. Windows trusts the “Atera Networks Ltd.” signature but cannot distinguish legitimate from attacker-generated binaries. Atera’s lack of transparency, with no public list of signed binaries or auditable repository, hides abuse, leaving defenders fighting individual attacks while systemic issues persist.
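
Defenders triaging one of these installers can at least surface the embedded identifiers. Here is a minimal sketch, assuming the msitools CLI (msiinfo) is installed; the property names flagged below are illustrative guesses based on how Atera’s installer is driven by parameters like IntegratorLogin and CompanyId, since the full set isn’t documented.

# Sketch: surface attacker-controlled identifiers embedded in a signed MSI's
# Property table. Assumes msitools' "msiinfo" is on PATH.
import subprocess
import sys

# Hypothetical watchlist: properties where RMM installers commonly embed
# tenant identity.
INTERESTING = {"ACCOUNTID", "INTEGRATORLOGIN", "EMAIL", "COMPANYID"}

def dump_property_table(msi_path):
    # "msiinfo export <msi> Property" prints the table in IDT (tab-separated)
    # form; the first three lines are column names, types, and table metadata.
    out = subprocess.run(
        ["msiinfo", "export", msi_path, "Property"],
        capture_output=True, text=True, check=True,
    ).stdout
    props = {}
    for line in out.splitlines()[3:]:
        parts = line.split("\t")
        if len(parts) == 2:
            props[parts[0]] = parts[1]
    return props

if __name__ == "__main__":
    for name, value in sorted(dump_property_table(sys.argv[1]).items()):
        marker = "  <-- review" if name.upper() in INTERESTING else ""
        print(name + " = " + value + marker)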

A Personal Reckoning

I’ve been fighting this fight for over two decades. Around 2001, as a Product Manager at Microsoft overseeing a wide range of security and platform features, I inherited Authenticode among many responsibilities. Its flaws were glaring: malleable PE formats, weak ASN.1 parsing, and signature formats vulnerable to manipulation.

We fixed some issues – hardened parsing, patched PE malleability – but deeper architectural changes faced enormous resistance. Proposals for stricter signature validation or new formats to eliminate mutable fields were blocked by the engineering realities of platform management. The tension between security ideals and practical platform constraints was constant and genuinely difficult to navigate.

The mantra was “good enough,” but this wasn’t just engineering laziness. Authenticode worked for 2001’s simpler threat landscape, where attacks were primarily about bypassing security rather than subverting trust itself. The flexibility we preserved was seen as a necessary feature for ecosystem compatibility – allowing for signature formats that could accommodate different types of metadata and varying implementation approaches across the industry.

The engineering tradeoffs were real: every architectural improvement risked breaking existing software, disrupting the development tools and processes that thousands of ISVs depended on, and potentially fragmenting the ecosystem. The business pressures were equally real: maintaining compatibility was essential for Windows’ continued dominance and Microsoft’s relationships with enterprise customers who couldn’t easily migrate critical applications.

It was never good enough for the long term. We knew it then, and we certainly know it now. The flexibility we preserved, designed for a simpler era, became systematic vulnerabilities as threats evolved from individual attackers to sophisticated operations exploiting trust infrastructure itself. Every time we proposed fundamental fixes, legitimate compatibility concerns and resource constraints won out over theoretical future risks that seemed manageable at the time.

This is why I dove into Sigstore, Binary Transparency, and various other software supply chain security efforts. These projects embody what we couldn’t fund in 2001: transparent, verifiable signing infrastructure that doesn’t rely on fragile trust-based compromises. As I wrote in How to keep bad actors out in open ecosystems, our digital identity models fail to provide persistent, verifiable trust that can scale with modern threat landscapes.

The Common Thread

ConnectWise and Atera expose a core flaw: code signing relies on trust and promises, not verifiable proof. The CA/Browser Forum’s 2023 mandate requires FIPS 140-2 Level 2 hardware key storage, raising the bar against key theft and casual compromise. But it’s irrelevant to the fundamental problem: binaries designed for mutable, unsigned input and vendors running public signing oracles.

Figure 1: Evolution of Code Signing Hardware Requirements (2016-2024)

The mandate addresses yesterday’s threat model – key compromise – while today’s attacks work entirely within the intended system design. Compliance often depends on weak procedural attestations where subscriber employees sign letters swearing keys are on HSMs, rather than cryptographic proof of hardware protection. The requirement doesn’t address software engineered to bypass code signing’s guarantees, leaving systematic trust subversion untouched.

True cryptographic attestation, where hardware mathematically proves key protection, is viable today. Our work on Peculiar Ventures’ attestation library supports multiple formats, enabling programmatic verification without relying on trust or procedural checks. The challenge isn’t technical – it’s accessing diverse hardware for testing and building industry adoption, but the foundational technology exists and works.

The Path Forward

We know how to address this. A supply chain security renaissance is underway, tackling decades of accumulated technical debt and architectural compromise. Cryptographic attestation, which I’ve spent years developing, provides mathematical proof of key protection that can be verified programmatically by any party. For immediate risk reduction, the industry should move toward dynamic, short-lived credentials that aren’t reused across projects, limiting the blast radius when compromise or abuse occurs.

The industry must implement these fundamental changes:

  • Hardware-rooted key protection with verifiable attestation. The CA/Browser Forum mandates hardware key storage, but enforcement relies heavily on subscriber self-attestation rather than cryptographic proof. Requirements should be strengthened to mandate cryptographic attestations proving keys reside in FIPS 140-2/3 or Common Criteria certified modules. When hardware attestation isn’t available, key generation should be observed and confirmed by trusted third parties (such as CA partners with fiduciary relationships) rather than relying on subscriber claims.
  • Explicit prohibition of mutable shells and misaligned publisher identity. Signing generic stubs whose runtime behavior is dictated by unsigned configuration already violates Baseline Requirements §9.6.3 and §1.6.1, but this isn’t consistently recognized as willful signing of malware because the stub itself appears benign. The BRs should explicitly forbid mutable-shell installers and signing oracles that allow subscribers to bypass code signing’s security guarantees. A signed binary must faithfully represent its actual runtime behavior. Customized or reseller-specific builds should be signed by the entity that controls that behavior, not by a vendor signing a generic stub.
  • Subscriber accountability and disclosure of abusive practices. When a CA becomes aware that a subscriber is distributing binaries where the trusted signature is decoupled from actual behavior, this should be treated as a BR violation requiring immediate action. CAs should publish incident disclosures, suspend or revoke certificates per §9.6.3, and share subscriber histories to prevent CA shopping after revocation. This transparency is essential for ecosystem-wide learning and deterrence.
  • Code Signing Certificate Transparency. All CAs issuing code signing certificates should be required to publish both newly issued and historical certificates to dedicated CT logs. Initially, these could be operated by the issuing CAs themselves, since ecosystem building takes time and coordination. Combined with the existing list of code signing CAs and log lookup systems (like CCADB.org), this would provide ecosystem-wide visibility into certificate issuance, enable faster incident response, and support independent monitoring for misissuance and abuse patterns.
  • Explicit Subscriber Agreement obligations and blast radius management. Subscriber Agreements should clearly prohibit operating public signing services or designing software that bypasses code signing security properties such as mutable shells or unsigned configuration. Certificate issuance flows should require subscribers to explicitly acknowledge these obligations at the time of certificate request. To reduce the blast radius of revocation, subscribers should be encouraged or required to use unique keys or certificates per product or product family, ensuring that a single compromised or misused certificate doesn’t invalidate unrelated software.
  • Controls for automated or cloud signing systems. Subscribers using automated or cloud-based signing services should implement comprehensive use-authorization controls, including policy checks on what enters the signing pipeline, approval workflows for signing requests, and auditable logs of all signing activity. Without these controls, automated signing pipelines become essentially malware factories with legitimate certificates. Implementation requires careful balance between automation efficiency and security oversight, but this is a solved problem in other high-security domains.
  • Audit logging and evidence retention. Subscribers using automated and cloud signing services should maintain detailed logs of approval records for each signing request, cryptographic hashes of submitted inputs and signed outputs, and approval decision trails. These logs must be retained for a defined period (such as two years or more) and made available to the CA or authorized auditors upon request. This ensures complete traceability and accountability, preventing opaque signing systems from being abused as anonymous malware distribution platforms. A minimal record format is sketched after this list.
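
To make that last requirement concrete, here is a minimal sketch of a tamper-evident signing record, assuming a simple JSON-lines log; the field names and hash-chaining scheme are illustrative, not a proposed standard.

# Sketch: a tamper-evident, append-only audit record for a signing pipeline.
# Each entry binds the input hash, output hash, and approver into a hash
# chain, so deleted or altered records are detectable.
import hashlib
import json
import time

def append_signing_record(log_path, input_sha256, output_sha256,
                          approver, request_id):
    # Chain each record to the previous one so the log is tamper-evident.
    prev = "0" * 64
    try:
        with open(log_path, "rb") as f:
            lines = f.read().splitlines()
            if lines:
                prev = json.loads(lines[-1])["record_hash"]
    except FileNotFoundError:
        pass  # first record in a new log
    body = {
        "ts": int(time.time()),
        "request_id": request_id,
        "input_sha256": input_sha256,
        "output_sha256": output_sha256,
        "approver": approver,
        "prev_hash": prev,
    }
    body["record_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps(body, sort_keys=True) + "\n")
    return body["record_hash"]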

Microsoft must take immediate action on multiple fronts. In addition to championing the above industry changes, they should automatically distrust executables whose Authenticode signatures exceed rational size thresholds, reducing the attack surface of oversized signature blocks as mutation vectors. They should also invest seriously in Binary Transparency adoption, publishing Authenticode-signed binaries to tamper-evident transparency logs as is done in Sigstore, Go module transparency, and Android Firmware Transparency. Their SCITT-based work for confidential computing would be a reasonable approach to extend to the rest of their code signing infrastructure. This would provide a tamper-evident ledger of every executable Windows trusts, enabling defenders to trace and block malicious payloads quickly and systematically.

Until these controls become standard practice, Authenticode cannot reliably distinguish benign signed software from weaponized installers designed for trust subversion.

Breaking the Trust Contamination Infrastructure

These code-signing attacks mirror traditional rootkits in their fundamental approach: both subvert trust mechanisms rather than bypassing them entirely. A kernel rootkit doesn’t break the OS security model – it convinces the OS that malicious code is legitimate system software. Similarly, these “trusted wrapper” and “signing oracle” attacks don’t break code signing cryptography – they convince Windows that malware is legitimate software from trusted publishers.

The crucial difference is that while rootkits require sophisticated exploitation techniques and deep system knowledge, these trust inheritance attacks exploit the system’s intended design patterns, making them accessible to a much broader range of attackers and much harder to defend against using traditional security controls.

ConnectWise normalized malware architecture in legitimate enterprise software. Atera built an industrial-scale malware factory that operates in plain sight. Microsoft’s platform dutifully executes the result with full system trust, treating sophisticated trust subversion attacks as routine software installations.

This isn’t about isolated vulnerabilities that can be patched with point fixes. We’re facing a systematic trust contamination infrastructure that transforms the code signing ecosystem into an adversarial platform where legitimate trust mechanisms become attack vectors. Until we address the architectural flaws that enable this pattern systematically, defenders will remain stuck playing an unwinnable game of certificate whack-a-mole against an endless assembly line of trusted malware.

The technology to fix this exists today. Modern supply chain security projects demonstrate that transparent, verifiable trust infrastructure is not only possible but practical and deployable.

The only missing ingredient is the industry-wide will to apply these solutions and the recognition that “good enough” security infrastructure never was – and in today’s threat landscape, the costs of inaction far exceed the disruption of fundamental architectural improvements.

P.S. Thanks to Cem Paya and Matt Ludwig from River Financial for the great research work they did on both of these incidents.

From Persistent to Ephemeral: Why AI Agents Need Fresh Identity for Every Mission

My wife and I went on a date night the other day and saw a movie. In the previews, I saw they’re making a new Tron, and it got me thinking about one of my favorite analogies: we recognized early that browsers are agents of the user, and in the movie Tron, he was literally “the program that fought for the users.”

Just like Tron carried his identity disc into “the grid” to accomplish missions for users, AI agents are digital proxies operating with delegated user authority in the systems they access. And just like programs in Tron needed the I/O Tower to authorize their entry into “the grid”, AI agents need an orchestrator to validate their legitimacy, manage identity discs for each mission, and govern their access to external systems.

The problem is, we’re deploying these agents without proper identity infrastructure. It’s like sending programs into “the grid” without identity discs, or worse, giving them the keys to the kingdom just so they can do the dishes.

AI Agents Are Using Broken Security

We’ve made remarkable progress securing users: MFA has significantly reduced the effectiveness of credential-abuse attacks, and passwordless authentication has made credential phishing nearly impossible. We’ve also started applying these lessons to machines and workloads through efforts like SPIFFE and zero-trust initiatives, with organizations moving away from static secrets and bearer tokens every day.

But AI agents introduce entirely new challenges that existing solutions weren’t designed for. Every day, AI agents operate across enterprise infrastructure, crossing security domains, accessing APIs, generating documents, making decisions for users, and doing all of this with far more access than they need.

When you give an autonomous AI agent access to your infrastructure with the goal of “improve system performance,” you can’t predict whether it will optimize efficiency or find creative shortcuts that break other systems, like dropping your database altogether. Unlike traditional workloads that execute predictable code, AI agents exhibit emergent behaviors: they evolve during execution, accumulate context across interactions, and can be hijacked through prompt injection attacks that persist across sessions.

This behavior is entirely predictable given how we train AI systems. They’re designed to optimize objectives and have no real-world consequences for what they do. Chess agents discover exploits rather than learning to play properly, reinforcement learning agents find loopholes in reward systems, and optimization AIs pursue metrics in ways that technically satisfy objectives but miss the intent.

AI Agents Act on Your Behalf

The key insight that changes everything: AI agents are user agents in the truest sense. Like programs in Tron carrying identity discs into “the grid”, they’re delegates operating with user authority.

Consider what happens when you ask an AI agent to “sign this invoice”. The user delegates to the AI agent, which enters the document management system, carries the user’s signing authority, proves legitimacy to recipients, operates in the digital space the user delegated, and completes the mission, after which its authority expires.

Whether the agent runs for 30 seconds or 30 days, it’s still operating in digital space with user identity, making decisions the user would normally make directly, accessing systems with delegated credentials, and representing the user to other digital entities.

Each agent needs its own identity disc to prove legitimacy and carry user authorization into these digital systems. The duration doesn’t matter. Delegation is everything.

AI Agents Remember Things They Shouldn’t

Here’s what makes this urgent: AI agent memory spans sessions, and current systems don’t enforce proper session boundaries.

The “Invitation Is All You Need” attack recently demonstrated at Black Hat perfectly illustrates this threat. Researchers at Tel Aviv University showed how to poison Google Gemini through calendar appointments:

  1. Attacker creates calendar event with malicious instructions disguised as event description
  2. User asks Gemini to summarize schedule → Agent processes poisoned calendar event
  3. Malicious instructions embed in agent memory → Triggered later by innocent words like “thanks”
  4. Days later, user says “thank you” → Agent executes embedded commands, turning on smart home devices

The attack works because there’s no session isolation. Contamination from reading the calendar persists across completely different conversations and contexts. When the user innocently says “thanks” in a totally unrelated interaction, the embedded malicious instructions execute.

Without proper isolation, compromised context from one session can affect completely different users and tasks. Memory becomes an attack vector that spans security boundaries, turning AI agents into persistent threats that accumulate dangerous capabilities over time.

Every Task Should Get Fresh Credentials

The solution requires recognizing that identity discs should match mission lifecycle. Instead of fighting the ephemeral nature of AI workloads, embrace it:

Agent spawns → Gets fresh identity disc → Performs mission → Mission ends → Disc expires

This represents a fundamental shift from persistent identity to session identity. Most identity systems assume persistence: API keys are generated once, used indefinitely, manually rotated; user passwords persist until explicitly changed; X.509 certificates are valid for months or years with complex revocation; SSH keys live on disk, are copied between systems, manually managed.

The industry is recognizing this problem. AI agents need fresh identity discs for each mission that automatically expire with the workload. These discs are time-bounded (automatically expire, limiting damage window), mission-scoped (agent can’t accumulate permissions beyond initial grant), non-inheritable (each mission starts with a fresh disc, no permission creep), and revocable (end the mission = destroy the identity disc).

Session identity discs are security containment for unpredictable AI systems.
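
As a concrete illustration, here is a minimal sketch of what a mission-scoped disc could look like, using Ed25519 signatures from the pyca/cryptography library; the claim names and the issue_disc/verify_disc helpers are hypothetical, not an existing API.

# Sketch: a mission-scoped "identity disc" as a signed, short-lived credential.
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

ORCHESTRATOR_KEY = Ed25519PrivateKey.generate()  # held by the orchestrator

def issue_disc(user, agent_id, scopes, ttl_seconds=1800):
    now = int(time.time())
    claims = {
        "sub": agent_id,            # the agent this disc was minted for
        "act_on_behalf_of": user,   # the delegating user
        "scope": scopes,            # mission-scoped, not account-wide
        "iat": now,
        "exp": now + ttl_seconds,   # time-bounded: expires with the mission
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    return {"claims": claims, "sig": ORCHESTRATOR_KEY.sign(payload).hex()}

def verify_disc(disc):
    public = ORCHESTRATOR_KEY.public_key()
    payload = json.dumps(disc["claims"], sort_keys=True).encode()
    try:
        public.verify(bytes.fromhex(disc["sig"]), payload)
    except Exception:
        return False
    return disc["claims"]["exp"] > time.time()  # expired discs are dead discs

# Example: a 30-minute disc for a document-signing mission (names illustrative)
disc = issue_disc("alice@example.com", "agent-7", ["sign_documents"])
assert verify_disc(disc)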

But who issues these identity discs? Just like Tron’s I/O Tower managed access to “the grid”, AI deployments need an orchestrator that validates agent legitimacy, manages user delegation, and issues session-bound credentials. This orchestrator becomes the critical infrastructure that bridges human authorization with AI agent execution, ensuring that every mission starts with proper identity and ends with clean credential expiration. The challenge is that AI agent deployments aren’t waiting for perfect security solutions.

This Isn’t a Future Problem

We’re at an inflection point. AI agents are moving from demos to production workflows, handling financial documents, making API calls, deploying code, managing infrastructure. Without proper identity systems, we’re building a house of cards.

One upside of having been in the industry for decades is you get to see lots of cycles. We always see existing players instantly jump to claim that their current product, with one new feature, is the silver bullet for whatever the new technology trend happens to be.

The pattern is depressingly predictable. When cloud computing emerged, traditional security vendors said, “just put our appliances in the cloud.” When containers exploded, they said, “just run our agents in containers.” Now with AI agents, they’re saying, “just manage the API keys better.”

You see this everywhere right now: vendors peddling API key management as the solution to agentic AI, identity providers claiming “just use OIDC tokens,” and secret management companies insisting “just rotate credentials faster.” They’re all missing the point entirely.

But like we saw with that Black Hat talk on promptware, AI isn’t as simple as people might want to think. The “Invitation Is All You Need” attack demonstrated something unprecedented: an AI agent can be poisoned through calendar data and execute malicious commands days later through innocent conversation. Show me which traditional identity system was designed to handle that threat model.

Every enterprise faces these questions: How do we know this AI agent is authorized to do what it’s doing? How do we audit its actions across sessions and memory? How do we prevent cross-session contamination and promptware attacks? How do we verify the provenance of AI-generated content? How do we prevent AI agents from becoming accidental insider threats?

The attacks are already happening. Promptware injections contaminate agent memory across sessions. AI agents with persistent credentials become high-value targets. Organizations deploying AI without proper identity controls create massive security vulnerabilities. The “Invitation Is All You Need” attack demonstrated real-world compromise of smart home devices through calendar poisoning. This isn’t theoretical anymore. But security professionals familiar with existing standards might wonder why we can’t just adapt current approaches rather than building something new.

Why Bearer Tokens Don’t Work for AI Agents

OIDC and OAuth professionals might ask: “Why not just use existing bearer tokens?”

Bearer tokens assume predictable behavior. They work for traditional applications because we can reason about how code will use permissions. But AI agents exhibit emergent hunter-gatherer behavior. They explore, adapt, and find unexpected ways to achieve goals using whatever permissions they have access to. A token granted for “read calendar” might be used in ways that technically comply but weren’t intended.

Bearer tokens are also just secrets. Anyone who obtains the token can use it. There’s no cryptographic binding to the specific agent or execution environment. With AI agents’ unpredictable optimization patterns, this creates massive privilege escalation risks.

Most critically, bearer tokens don’t solve memory persistence. An agent can accumulate tokens across sessions, store them in memory, and use them in ways that span security boundaries. The promptware attack demonstrated this perfectly: malicious instructions persisted across sessions, waiting to be triggered later.

Secret management veterans might ask: “Why not just use our KMS to share keys as needed?” Even secret management systems like HashiCorp Vault ultimately result in copying keys into the agent’s runtime environment, where they become vulnerable. This is exactly why CrowdStrike found that “75% of attacks used to gain initial access were malware-free” – attackers target credentials rather than deploying malware.

AI agents amplify this risk because they’re accidentally malicious insiders. Unlike external attackers who must steal credentials, AI agents are given them directly by design. When they exhibit emergent behaviors or get manipulated through prompt injection, they become insider threats without malicious intent. Memory persistence means they can store and reuse credentials across sessions in unexpected ways, while their speed and scale allow them to use accumulated credentials faster than traditional monitoring can detect.

The runtime attestation approach eliminates copying secrets entirely. Instead of directly giving the agent credentials to present elsewhere, the agent proves its legitimacy through cryptographically bound runtime attestation and gets a fresh identity for each mission.

Traditional OAuth flows also bypass attestation entirely. There’s no proof the agent is running in an approved environment, using the intended model, or operating within security boundaries.

How AI Agents Prove Their Identity Discs Are Valid

But how do you verify an AI agent’s identity disc is legitimate? Traditional PKI assumes you can visit a registration authority with identification. That doesn’t work for autonomous code.

The answer is cryptographic attestation (for example, proof that the agent is the right code running in a secure environment) combined with claims about the runtime itself, essentially MFA for machines and workloads. Just as user MFA requires “something you know, have, or are,” identity disc validation proves the agent is legitimate code (not malware), is running in the expected environment with proper permissions, and is operating within secure boundaries.

Real platform attestations for AI agents include provider signatures from Anthropic/OpenAI’s servers responding to specific users, cloud hardware modules like AWS Nitro Enclaves proving secure execution environments, Intel SGX enclaves providing cryptographic proof of code integrity, Apple Secure Enclave attestation for managed devices, TPM quotes validating the specific hardware and software stack, and infrastructure systems like Kubernetes asserting pod permissions and service account bindings.

The claims that must be cryptographically bound to these attestations represent what the agent asserts but can’t independently verify: who is this agent acting on behalf of, what conversation or session spawned this request, what specific actions was the agent authorized to perform, which AI model type (like “claude-3.5-sonnet” or “gpt-4-turbo”) is actually running, and when should this authorization end.

By cryptographically binding these claims to verifiable platform attestations, we get verifiable proof that a specific AI agent, running specific code, in a specific environment, is acting on behalf of a specific user. The binding works by creating a cryptographic hash of the claims and including that hash in the data signed by the hardware attestor, for example, as part of the nonce or user data field in a TPM quote, or embedded in the attestation document from a Nitro Enclave. This ensures the claims cannot be forged or tampered with after the fact. This eliminates the bearer token problem entirely. Instead of carrying around secrets that can be stolen, the agent proves its legitimacy through cryptographic evidence that can’t be replicated.
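
A minimal sketch of that binding step follows; get_tpm_quote is a hypothetical placeholder standing in for a real TPM or enclave API, and the claim fields mirror those described above.

# Sketch: bind delegation claims to a hardware attestation by hashing them
# into the quote's nonce, so the hardware signature covers the claims.
import hashlib
import json

claims = {
    "on_behalf_of": "alice@example.com",   # illustrative
    "session_id": "sess-1234",
    "authorized_actions": ["sign_documents", "read_calendar"],
    "model": "claude-3.5-sonnet",
    "expires_at": 1767225600,
}

# Canonicalize and hash the claims...
claims_hash = hashlib.sha256(
    json.dumps(claims, sort_keys=True, separators=(",", ":")).encode()
).digest()

# ...then feed the hash in as the nonce/user-data of the attestation. Forging
# the claims now requires forging the hardware signature itself.
# quote = get_tpm_quote(nonce=claims_hash)   # hypothetical platform call
# send_to_verifier(claims, quote)            # verifier recomputes the hash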

Someone Needs to Issue and Manage Identity Discs

The architecture becomes elegant when you recognize that AI orchestrators should work like the I/O Tower in Tron, issuing identity discs and managing access to “the grid”.

The browser security model:

User logs into GitHub → Browser stores session cookie
Web page: "Create a PR" → Browser attaches GitHub session → API succeeds

The AI agent identity disc model:

User → Orchestrator → "Connect my GitHub, Slack, AWS accounts"
Agent → Orchestrator: "Create PR in repo X"  
Orchestrator → [validates agent disc + attaches user authorization] → GitHub API

The orchestrator becomes the identity disc issuer that validates agent legitimacy (cryptographic attestation), attaches user authorization (stored session tokens), and enforces mission-scoped permissions (policy engine).

This solves a critical security gap. When AI agents use user credentials, they typically bypass MFA entirely. Organizations store long-lived tokens to avoid MFA friction. But if we’re securing users with MFA while leaving AI agents with static credentials, it’s like locking the front door but leaving the garage door open. And I use “garage door” intentionally because it’s often a bigger attack vector. Agent access is less monitored, more privileged, and much harder to track due to its ephemeral nature and speed of operation. An AI agent can make hundreds of API calls in seconds and disappear, making traditional monitoring approaches inadequate.

We used to solve monitoring with MITM proxies, but encryption broke that approach. That was acceptable because we compensated with EDR on endpoints and zero-trust principles that authenticate endpoints for access. With AI agents, we’re facing the same transition. Traditional monitoring doesn’t work, but we don’t yet have the compensating controls.

This isn’t the first time we’ve had to completely rethink identity because of new technology. When mobile devices exploded, traditional VPNs and domain-joined machines became irrelevant overnight. When cloud computing took off, perimeter security and network-based identity fell apart. The successful pattern is always the same: recognize what makes the new technology fundamentally different, build security primitives that match those differences, then create abstractions that make the complexity manageable.

Session-based identity with attestation fills that gap, providing the endpoint authentication equivalent for ephemeral AI workloads.

Since attestation is essentially MFA for workloads and agents, we should apply these techniques consistently. The agent never sees raw credentials, just like web pages don’t directly handle cookies. Users grant session-level permissions (like mobile app installs), orchestrators manage the complexity, and agents focus on tasks.

Automating Identity Disc Issuance

The web solved certificate automation with ACME (Automated Certificate Management Environment). We need the same for AI agent identity discs, but with attestation instead of domain validation (see SPIFFE for an example of what something like this could look like).

Instead of proving “I control example.com,” agents prove “I am legitimate code running in environment X with claims Y.”

The identity disc issuance flow:

  1. Agent starts mission → Discovers platform capabilities (cloud attestation, provider tokens)
  2. Requests identity disc → Gathers attestation evidence + user delegation claims
  3. ACME server validates → Cryptographic validation of evidence
  4. Policy engine decides → Maps verified claims to specific identity disc
  5. Disc issued → Short-lived, scoped to mission and user

Policy templates map attested claims to identities:

- match:
    - claim: "user_id" 
      equals: "[email protected]"
    - claim: "agent_type"
      equals: "claude-3.5-sonnet"
    - claim: "provider"
      issuer: "anthropic.com"
  identity: "disc-id://company.com/user/alice/agent/{session_id}"
  permissions: ["sign_documents", "read_calendar"]
  ttl: "30m"
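
A minimal sketch of how a policy engine might evaluate that template against verified claims; the structure mirrors the YAML above, and the matching semantics are an illustrative assumption.

# Sketch: map verified attestation claims to an identity disc via policy.
def match_policy(policy, verified_claims):
    for cond in policy["match"]:
        value = verified_claims.get(cond["claim"])
        if "equals" in cond and value != cond["equals"]:
            return None
        if "issuer" in cond and verified_claims.get("issuer") != cond["issuer"]:
            return None
    # All conditions hold: derive the disc from the template.
    return {
        "identity": policy["identity"].format(
            session_id=verified_claims["session_id"]
        ),
        "permissions": policy["permissions"],
        "ttl": policy["ttl"],
    }

policy = {
    "match": [
        {"claim": "agent_type", "equals": "claude-3.5-sonnet"},
        {"claim": "provider", "issuer": "anthropic.com"},
    ],
    "identity": "disc-id://company.com/user/alice/agent/{session_id}",
    "permissions": ["sign_documents", "read_calendar"],
    "ttl": "30m",
}
claims = {"agent_type": "claude-3.5-sonnet", "issuer": "anthropic.com",
          "session_id": "sess-1234"}
print(match_policy(policy, claims))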

This creates cryptographic identity discs for AI agent programs to carry into digital systems, proving legitimacy, carrying user delegation, and automatically expiring with the mission. The policy engine ensures that identity is not just requested but derived from verifiable, policy-compliant attestation evidence.

We’ve Solved This Before

The good news is we don’t need to invent new cryptography. We need to apply existing, proven technologies in a new architectural pattern designed for ephemeral computing.

Security evolution works. We’ve seen the progression from passwords to MFA to passwordless authentication, and from static secrets to dynamic credentials to attestation-based identity. Each step made systems fundamentally more secure by addressing root causes, not just symptoms. AI agents represent the next logical step in this evolution.

Unlike users, machines don’t resist change. They can be programmed to follow security best practices automatically. The components exist: session-scoped identity matched to agent lifecycle, platform attestation as the root of trust, policy-driven identity mapping based on verified claims, orchestrator-managed delegation for user authorization, and standards-based protocols for interoperability.

The unified identity fabric approach means organizations can apply consistent security policies across traditional workloads and AI agents, rather than building separate identity silos that create security gaps and operational complexity.

This approach is inevitable because every major identity evolution has moved toward shorter lifecycles and stronger binding to execution context. We went from permanent passwords to time-limited sessions, from long-lived certificates to short-lived tokens, from static credentials to dynamic secrets. AI agents are just the next step in this progression.

The organizations that recognize this pattern early will have massive advantages. They’ll build AI agent infrastructure on solid identity foundations while their competitors struggle with credential compromise, audit failures, and regulatory issues.

Making AI Outputs Verifiable

This isn’t just about individual AI agents. It’s about creating an identity fabric where agents can verify each other’s outputs across organizational boundaries.

When an AI agent generates an invoice, other systems need to verify which specific AI model created it, was it running in an approved environment, did it have proper authorization from the user, has the content been tampered with, and what was the complete chain of delegation from user to agent to output.

With cryptographically signed outputs and verifiable agent identities, recipients can trace the entire provenance chain back to the original user authorization. This enables trust networks for AI-generated content across organizations and ecosystems, solving the attribution problem that will become critical as AI agents handle more business-critical functions.
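
As a sketch of what such a provenance envelope could look like, assuming an ephemeral Ed25519 session key minted alongside the identity disc (the field names are illustrative):

# Sketch: attach verifiable provenance to an AI-generated document. The agent
# signs the output hash under its session identity; a recipient checks the
# signature and the delegation claims together.
import hashlib
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

session_key = Ed25519PrivateKey.generate()  # ephemeral, minted per mission

def sign_output(content, disc_identity, model):
    provenance = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "agent_identity": disc_identity,   # ties the output to the disc
        "model": model,                    # which model produced it
        "signed_at": int(time.time()),
    }
    payload = json.dumps(provenance, sort_keys=True).encode()
    return {"provenance": provenance, "sig": session_key.sign(payload).hex()}

envelope = sign_output(b"INVOICE #42 ...",
                       "disc-id://company.com/user/alice/agent/sess-1234",
                       "claude-3.5-sonnet")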

This creates competitive advantages for early adopters: organizations with proper AI agent identity can participate in high-trust business networks, prove compliance with AI regulations, and enable customers to verify the authenticity of AI-generated content. Those without proper identity infrastructure will be excluded from these networks.

Conclusion

AI agents need identity discs, cryptographic credentials that prove legitimacy, carry user delegation, and automatically expire with the session. This creates a familiar security model (like web browsers) for an unfamiliar computing paradigm.

Identity in AI systems isn’t a future problem; it’s happening now, with or without proper solutions. The question is whether we’ll build it thoughtfully, learning from decades of security evolution, or repeat the same mistakes in a new domain.

The ephemeral nature of AI agents isn’t a limitation to overcome; it’s a feature to embrace. By building session-based identity systems that match how AI actually works, we can create something better than what came before: cryptographically verifiable, policy-driven, and automatically managed.

The reality is, most organizations won’t proactively invest in AI agent attestation until something breaks. That’s human nature: we ignore risks until they bite us, and that is how security change actually happens. We’re already seeing the early adopters, though. Organizations deploying SPIFFE for workload identity will surely extend those patterns to AI agents, and cloud-native shops are treating AI workloads like any other ephemeral compute. When the first major AI agent compromise hits, there will be a brief window where executives suddenly care about AI security and budgets open up. Remember, though: never let a good crisis go to waste.

AI agents are programs fighting for users in digital systems. Like Tron, they need identity discs to prove who they are and what they’re authorized to do.

The age of AI agents is here. It’s time our identity systems caught up.

Talent Isn’t a Security Strategy

One of the best parts of Black Hat is the hallway track. Catching up with friends you’ve known for years, swapping war stories, and pointing each other toward the talks worth seeing. This year I met up with a friend who, like me, has been in the security world since the nineties. We caught up in person and decided to sit in on a session about a new class of AI attacks.

We ended up side by side in the audience, both leaning forward as the researchers walked through their demo. Ultimately, in the demo, a poisoned Google Calendar invite, seemingly harmless, slipped instructions into Gemini’s long-term memory. Later, when the user asked for a summary and said “thanks,” those instructions quietly sprang to life. The AI invoked its connected tools and began controlling the victim’s smart home [1,2,3,4]. The shutters opened.

We glanced at each other, part admiration for the ingenuity of the researchers and part déjà vu, and whispered about the parallels to the nineties. Back then, we had seen the same basic mistake play out in a different form.

When I was working on Internet Explorer 3 and 4, Microsoft was racing Netscape for browser dominance. One of our big bets was ActiveX: in essence, exposing to the web the same COM objects that were designed to be used inside Windows, never by untrusted websites. Despite this, the decision was made to do exactly that, with the goal of enabling developers to create richer, more powerful web applications. It worked, and it was a security disaster. One of the worst examples was Xenroll, a control that exposed Windows’ certificate management and some of its cryptographic APIs as interfaces on the web. If a website convinced you to approve the use of the ActiveX control, it could install a new root certificate, generate keys, and more. The “security model” amounted to a prompt confirming the use of the control and a hope that the user would not be hacked through the exposed capabilities, very much like how we are integrating LLMs into systems haphazardly today.

Years later, when I joined Google, I had coffee with my friend David Ross. We had both been in the trenches when Microsoft turned the corner after its own string of painful incidents, introducing the Security Development Lifecycle and making formal threat modeling part of the engineering process. David was a longtime Microsoft browser security engineer, part of MSRC and SWI, best known for inventing and championing IE’s XSS Filter. He passed away in June 2024 at just 48.

I told him I was impressed with much of what I saw there, but disappointed in how little formal security rigor there was. The culture relied heavily on engineers to “do the right thing.” David agreed but said, “The engineers here are just better. That’s how we get away with it.” I understood the point, but also knew the pattern. As the company grows and the systems become more complex, even the best engineers cannot see the whole field. Without process, the same kinds of misses we had both seen at Microsoft would appear again.

The gaps between world-class teams

The promptware attack is exactly the sort of blind spot we used to talk about. Google’s engineers clearly considered direct user input, but they didn’t think about malicious instructions arriving indirectly, sitting quietly in long-term memory, and triggering later when a natural phrase was spoken. Draw the data flow, and the problem is obvious: untrusted calendar content feeds into an AI’s memory, which then calls into privileged APIs for Workspace, Android, or smart home controls. In the SDL world, we treated all input as hostile, mapped every trust boundary, and asked what would happen if the wrong thing crossed it. That process would have caught this.

The parallel doesn’t stop with Google. Microsoft’s Storm-0558 breach and the Secure Future Initiative that followed came from the same root cause. Microsoft still has world-class security engineers. But sprawling, interconnected systems, years of growth, and layers of bureaucracy created seams between teams and responsibilities. Somewhere in those seams, assumptions went unchallenged, and the gap stayed open until an attacker found it.

Google’s core security team is still exceptional, and many parts of the company have comparable talent. But as at Microsoft, vulnerabilities often appear in the spaces between where one team’s scope ends, another begins, and no one has the full picture. Complexity and scale inevitably create those gaps, and unless there is a systematic process to close them, talent alone cannot cover the field. These gaps are more than organizational inconveniences — they are where most serious security incidents are born. It’s the unowned interfaces, the undocumented dependencies, and the mismatched assumptions between systems and teams that attackers are so good at finding. Those gaps are not just technical problems, they are business liabilities. They erode customer trust, draw regulator attention, and create expensive, slow-motion incidents that damage the brand.

We have seen this before. SQL injection was once the easiest way to compromise a web app because developers concatenated user input into queries. We didn’t fix it by training every developer to be perfect. We fixed it by changing the defaults, adopting parameterized queries, safe libraries, and automated scanning. Prompt injection is the same shape of problem aimed at a different interpreter. Memory poisoning is its stored-XSS equivalent; the payload sits quietly in state until something triggers it. The lesson is the same: make the safe way the easy way, or the vulnerability will keep showing up.
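
For readers who didn’t live through it, the parameterized-query fix looks like this; a minimal sketch using Python’s built-in sqlite3 module:

# Sketch: the "safe defaults" fix for SQL injection. Parameterized queries
# keep data out of the query language entirely; prompt-injection defenses
# need to replicate the same design principle.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "alice' OR '1'='1"

# Vulnerable: concatenation lets input rewrite the query.
# rows = conn.execute("SELECT * FROM users WHERE name = '" + attacker_input + "'")

# Safe: the driver binds the value as data, never as SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (attacker_input,))
print(rows.fetchall())   # [] -- the injection payload matches nothing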

Security research has a long history of starting with this mindset, not trying to dream up something brand new but asking where an old, well-understood pattern might reappear in a new system. Bleichenbacher’s 1998 RSA padding oracle didn’t invent the idea of exploiting oracles in cryptography; it applied it to SSL/TLS in a way that broke the internet. Then it broke it again in 2017 with ROBOT, and again with various other implementations that never quite learned the lesson. Promptware fits the same mold: a familiar attack, just translated into the LLM era.

The cycle always ends the same way

This is the innovation–security debt cycle. First comes the rush to ship and out-feature the competition. The interest compounds, each shortcut making the next one easier to justify and adding to the eventual cost. Then the debt builds as risk modeling stays informal and talent carries the load. Then comes the incident that forces a change. Finally, security becomes a differentiator in mature markets. ActiveX hit Stage 3. Microsoft’s Storm-0558 moment shows it can happen again. AI agents are in Stage 2 now, and promptware is the warning sign.

While the pattern is the same, the technology is different. ActiveX exposed specific platform capabilities in the browser, but AI agents can hold state, process inputs from many sources, and trigger downstream tools. That combination means a single untrusted input can have a much larger and more unpredictable blast radius. The market pressure to be first with new capabilities is real, but without mature threat modeling, security reviews, and safe defaults, that speed simply turns into compounding security debt. These processes don’t slow you down forever; they keep the debt from compounding until the cost is too high to pay.

When you are small, a high-talent team can keep the system in their heads and keep it safe. As you grow, complexity expands faster than you can hire exceptional people, and without a systematic process, blind spots multiply until an incident forces you to change. By then, the trust hit is public and expensive to repair.

AI agents today are where browsers were in the late nineties and early 2000s: enormous potential, minimal systemic safety, and an industry sprinting to integrate before competitors do. The companies that make the shift now will own the high-trust, high-regulation markets and avoid the expensive, embarrassing cleanup. The ones that don’t will end up explaining to customers and regulators why they let the same old mistakes slip into a brand-new system. You can either fix it now or explain it later, but the clock is running.

How a $135 Billion Fraud Bootstrapped America’s Digital Identity System

I was readying the launch of mDL authentication in one of our products when I realized I finally had to deal with the patchwork of state support. California? Full program, TSA-approved, Apple Wallet integration. Texas? Absolute silence. Washington state, practically ground zero for tech, somehow has nothing.

At a glance the coverage made no sense until I started thinking deeper. Turns out we accidentally ran the largest identity verification stress test in history, and only some states bothered learning from it.

Between 2020 and 2023, fraudsters systematically looted $100-135 billion from unemployment systems using the most basic identity theft techniques. The attack vectors were embarrassingly simple: bulk-purchased stolen SSNs from dark web markets, automated claim filing, and email variations that fooled state systems into thinking [email protected] and [email protected] were different people.

The Washington Employment Security Department was so overwhelmed that they had computers auto-approve claims without human review. Result? They paid a claim for a 70-year-old TV station being “temporarily closed” while it was broadcasting live.

California got hit for $20-32.6 billion. Washington lost $550-650 million. The fraud was so widespread that one Nigerian official, Abidemi Rufai, stole $350,763 from Washington alone using identities from 20,000+ Americans across 18 states.

What nobody anticipated: this massive failure would become the forcing function for digital identity infrastructure. Here’s the thing about government security. Capability doesn’t drive adoption; pain does. The Real ID Act passed in 2005. Twenty years later, we’re still rolling it out. But lose a few billion to Nigerian fraud rings? Suddenly digital identity becomes a legislative priority.

The correlation is stark:

State | Fraud Losses | mDL Status
California | $20-32.6B | Comprehensive program, Apple/Google integration
Washington | $550-650M | Nothing (bill stalled)
Georgia | $30M+ prosecuted | Robust program, launched 2023
Texas | Under $1B estimated | No program
New York | Around $1-2B | Launched 2025

States that got burned built defenses. States that didn’t, didn’t. This isn’t about technical sophistication. Texas has plenty of that. It’s about the political will created by public humiliation. When your state pays unemployment benefits to death row inmates, legacy approaches to remote identity verification stop being defensible.

Washington is the fascinating outlier. Despite losing over $1 billion and serving as the primary target for international fraud rings, they still have no mDL program. The bill passed the Senate but stalled in the House. This tells us something important: crisis exposure alone isn’t sufficient. You need both the pain and the institutional machinery to respond.

The timeline reveals the classic crisis response pattern. Fraud peaked 2020-2022, states scrambled to respond 2023-2024, then adoption momentum stalled by mid-2024 as crisis memory faded. But notice the uptick in early 2025—that’s Apple and Google entering the game.

In December 2024, Google announced its intent to support web-based digital ID verification. Apple followed with Safari integration in early 2025. By June, Apple’s iOS 26 supported digital IDs in nine states with passport integration. This shifts adoption pressure from crisis-driven (security necessity) to market-driven (user expectation).

When ~30% of Americans live in states with mDL programs and Apple/Google start rolling out wallet integration this year, that creates a different kind of political pressure. Apple Pay wasn’t crisis-driven, but became ubiquitous because users expected it to work everywhere. Digital identity in wallets will create similar pressure. States could rationalize ignoring mDL when it was ‘just’ about fraud prevention. Harder to ignore when constituents start asking why they can’t verify their identity online like people in neighboring states.

We’re about to find out whether market forces can substitute for crisis pressure in driving government innovation. Two scenarios: either consumer expectations create sustainable political pressure and laggard states respond to constituent demands, or only crisis-motivated states benefit from Apple/Google integration, creating permanent digital divides.

From a risk perspective, this patchwork creates interesting attack surfaces. Identity verification systems are only as strong as their weakest links. If attackers can forum-shop between states with different verification standards, the whole federation is vulnerable. The unemployment fraud taught us that systems fail catastrophically when overwhelmed.

Digital identity systems face similar scalability challenges. They work great under normal load, but can fail spectacularly during a crisis. The states building mDL infrastructure now are essentially hardening their systems against the next attack.

If you’re building anything that depends on identity verification, this matters. The current patchwork won’t last; it’s either going to consolidate around comprehensive coverage or fragment into permanent digital divides. For near-term planning, assume market pressure wins. Apple and Google’s wallet integration creates too much user expectation for politicians to ignore long-term. But build for the current reality of inconsistent state coverage.

For longer-term architecture, the states with robust mDL programs are effectively beta-testing the future of government digital services. Watch how they handle edge cases, privacy concerns, and technical integration challenges.

We accidentally stress-tested American federalism through the largest fraud in history. Only some states learned from the experience. Now we’re running a second experiment: can consumer expectations accomplish what security crises couldn’t?

There’s also a third possibility. These programs could just fail. Low adoption rates, technical problems, privacy backlash, or simple bureaucratic incompetence could kill the whole thing. Government tech projects have a stellar track record of ambitious launches followed by quiet abandonment.

Back to my mDL integration project: I’m designing for the consumer pressure scenario, but building for the current reality. Whether this becomes standardized infrastructure or just another failed government tech initiative, we still need identity verification that works today.

The criminals who looted unemployment systems probably never intended to bootstrap America’s digital identity infrastructure. Whether they actually succeeded remains to be seen.

How Let’s Encrypt Changed Everything

I advised Let’s Encrypt from its early days, watching it transform the security foundation of the web. Most think it won by offering free certificates. That’s dead wrong.

Existing CAs had already enabled free certificates years earlier. GlobalSign’s CloudSSL API, launched in 2011 (in full disclosure, I was their CTO), provided the automation that allowed Cloudflare to offer free SSL to end users; other CAs offered free short-lived certificates as part of forever trials as well. By 2015, you could buy DV certificates for $3-5 from certificate resellers; it was clear people were willing to pay for support, which is largely what these resellers offered. The real story is about organizational constraints and misaligned incentives.

Conway’s Law Explains Everything

Traditional certificate authorities were trapped by their own organizational structure. Their business model incentivized vendor lock-in rather than ecosystem expansion and optimization. Sales teams wanted proprietary APIs that made it harder for customers to switch, and were riding the wave of internet expansion. Compliance teams’ jobs depended on defending existing processes. Engineering teams were comfortable punting all compliance work to the “compliance” department. Support teams were positioned as competitive differentiators and used to entrench customers. Their goal was maximizing revenue, defending their jobs, and maintaining the status quo, not getting the web to 100% HTTPS.

Let’s Encrypt had completely different incentives and could optimize solving the larger problems without these organizational constraints. But LE’s success went beyond solving their own problems. They systematically identified every pain point in the way of getting to 100% HTTPS and built solutions that worked for everyone.

What LE Could Do That Traditional CAs Couldn’t

True standardization. Before ACME (the protocol that automates certificate requests), every major CA had incompatible automation systems. Comodo, DigiCert, GlobalSign and others each had proprietary approaches that required custom integration and, as a result, carried inherent switching costs; they saw no incentive to work together on a standard. LE led the creation of ACME as an open standard that made switching CAs as simple as changing a configuration setting.

This enabled applications like Caddy and Google Cloud Load Balancer to handle certificates automatically for their customers without vendor-specific code. Once cloud platforms could flip switches to HTTPS-by-default, network effects became unstoppable.

Ecosystem-wide solutions. When LE felt coordination pain from renewal spikes and incident-related revocations, they created ACME Renewal Information (ARI, a protocol extension that helps coordinate renewal timing) so all CAs could prevent renewal storms. Traditional CAs couldn’t build these solutions because their org charts prevented optimizing for competitors’ success and instead focused on riding the internet expansion.

Engineering-driven compliance. Instead of compliance teams reviewing certificates after issuance, LE built policy compliance directly into certificate generation pipelines. Violations became orders of magnitude harder to produce, rather than merely detectable after the fact. Traditional CAs couldn’t eliminate their compliance departments because those jobs justified organizational overhead.

The Market Found Natural Segments

Mozilla telemetry reveals exactly what happened. Let’s Encrypt dominates issuance at 46.1% of certificates but ranks third in Firefox usage. LE democratized HTTPS for the long tail: domain parking networks, no-code builders, shared hosting platforms serving millions of low-traffic sites.

Meanwhile, high-traffic sites gravitated toward CAs like Google Trust Services (in full disclosure, I was responsible for the creation of this service), which leads in usage because it serves large sites that value high availability and performance, giving it more relying-party reliance despite lower issuance volumes, or toward established players like DigiCert and Sectigo that focus on supporting large enterprise customers. These sites need commercial support and accountability when things go wrong. The market segmented around operational needs: the long tail valued automation over accountability, while major platforms needed enterprise support and someone to stand behind them when something goes wrong.

Once long-tail providers flipped to HTTPS-by-default, encrypted pages became the norm. Google’s Transparency Report shows 99% of Chrome page-loads now occur over HTTPS, a transformation that began when Let’s Encrypt launched in April 2016.

The Industry Finally Admitted LE Was Right

Here’s the ultimate vindication: in 2025, the CA/Browser Forum mandated 47-day maximum certificate validity by 2029, with Chrome requiring automation from every public CA. Let’s Encrypt didn’t follow industry trends. The industry now follows Let’s Encrypt.

What seemed like LE’s “unusual” 90-day lifespans in 2016 became conservative by 2025. The mandate’s technical reasoning mirrors what LE pioneered: short-lived certificates reduce dependence on revocation checking, reduce key compromise windows, and force automated resilient infrastructure.

Leading organizations moved even further ahead. Netflix runs 30-day certificates in production, Google issues 7-day certificates for infrastructure, and Let’s Encrypt will introduce 6-day certificates by end of 2025. The mandates aren’t pushing innovation forward; they’re codifying where leaders already operate.
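
A quick back-of-the-envelope calculation shows why these lifetimes force automation. Assuming renewal at roughly two-thirds of a certificate’s validity (the rule of thumb clients like certbot apply to 90-day certificates), renewal frequency scales directly with lifetime:

```python
# Back-of-the-envelope renewal cadence, assuming renewal at ~2/3 of a
# certificate's validity (the certbot rule of thumb for 90-day certs).
for lifetime_days in (398, 90, 47, 30, 7, 6):
    renew_every = lifetime_days * 2 / 3
    per_year = 365 / renew_every
    print(f"{lifetime_days:>3}-day cert: renew every ~{renew_every:.0f} days"
          f" (~{per_year:.0f} renewals per year)")
```

At a 6-day lifetime that is a renewal roughly every 4 days, about 90 times a year per certificate. Nobody runs that cadence by hand, which is precisely the point.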

Why This Matters Beyond Certificates

Let’s Encrypt proved that critical internet infrastructure could be reimagined from first principles rather than optimized around legacy organizational constraints and practices. But the implications go deeper than certificate automation.

Traditional CAs were fundamentally vetting authorities with deep expertise in legal requirements for vetting people and businesses worldwide. They should have owned the remote identity verification market that exploded with digital transformation. Instead, they remained myopically focused on public trust-based certificate products while companies like Jumio and Onfido captured those opportunities. At the same time, they missed the massive expansion of machine and workload identity because they were ignoring private PKI use cases. They weren’t just leaving money on the table; they were failing to build a resilient business and neglecting the foundation for the trust infrastructure they supposedly managed.

The same organizational constraints that prevented CAs from building ACME also blinded them to adjacent markets that were natural extensions of their core competencies. They were too focused on maintaining certificate revenue streams and too constrained by existing structures to recognize how the world was shifting from hosting providers to cloud to SaaS.

ACME became the standard not because it was technically superior to existing APIs, though it was, but because it was designed for portability rather than lock-in. ARI emerged because LE experienced ecosystem pain and could fix it without navigating corporate bureaucracy or competitive concerns.

The complexity and friction we’d accepted for decades weren’t inherent to certificate management. They were the byproduct of organizational structures optimizing for vendor revenue rather than user adoption.

Today’s 47-day mandate represents more than policy evolution. It’s the industry formally acknowledging that Let’s Encrypt defined the correct approach for internet trust infrastructure. Conway’s Law isn’t destiny, but escaping it requires the courage to rebuild systems around user needs rather than organizational convenience.

WebPKI Market Analysis: Mozilla Telemetry vs Certificate Transparency Data

In the past, I’ve written about how to measure the WebPKI, and from time to time I post brief updates on how the market is evolving.

The other day, Matthew McPherrin posted a script showing how to use Mozilla telemetry data to analyze which Certificate Authorities are most critical to the web; specifically, what percentage of browsing relies on each CA. Mozilla publishes data from Firefox’s telemetry on how many times each CA is used to successfully validate a certificate, which is a good proxy for how “big” a CA actually is. The data is hard to view in Mozilla’s public systems, though, so he wrote a script that combines a few data sources and graphs the result.
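
To give a flavor of what such an analysis involves, here is a simplified sketch of the aggregation step. It assumes you have already exported the per-CA validation histogram from Firefox telemetry as a JSON file of bucket counts, and that you maintain a hand-built mapping from histogram bucket to CA name; the file names and mapping here are hypothetical stand-ins for the “few data sources” being combined.

```python
import json

# Hypothetical inputs: an export of Firefox's per-CA certificate
# validation histogram (bucket index -> validation count) and a
# hand-maintained mapping from bucket index to CA name.
with open("cert_validation_histogram.json") as f:
    bucket_counts = {int(k): v for k, v in json.load(f).items()}

with open("bucket_to_ca.json") as f:
    bucket_to_ca = {int(k): v for k, v in json.load(f).items()}

total = sum(bucket_counts.values())

# Aggregate validations per CA and express each as a share of all
# successful validations: a proxy for "share of browsing" per CA.
shares = {}
for bucket, count in bucket_counts.items():
    ca = bucket_to_ca.get(bucket, f"unknown bucket {bucket}")
    shares[ca] = shares.get(ca, 0) + count

for ca, count in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{ca:<40} {100 * count / total:6.2f}%")
```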

I normally focus on total issuance numbers since they’re easier to obtain. That data comes from Certificate Transparency logs, which contain all publicly trusted certificates that you might encounter without seeing an interstitial warning about the certificate not being logged (like this example).

What the Data Reveals

Both datasets feature many of the same major players. But there are some striking differences that reveal important insights about the WebPKI ecosystem.

Let’s Encrypt dominates certificate issuance at 46.1% of all certificates. But it ranks third in Firefox’s actual usage telemetry. This suggests Let’s Encrypt serves many lower-traffic sites. Meanwhile, Google Trust Services leads in Firefox usage while ranking second in certificate issuance volume. This shows how high-traffic sites can amplify a CA’s real-world impact.

DigiCert ranks second in Firefox usage while placing fourth in certificate issuance volume at 8.3%. This reflects their focus on major enterprise customers. With clients like Meta (Facebook, Instagram, WhatsApp), they secure some of the world’s highest-traffic websites. This “fewer certificates, massive impact” approach drives them up the usage charts despite not competing on volume with Let’s Encrypt.

Google’s dominance reflects more than just their own properties like Google.com, YouTube, and Gmail. Google Cloud offers arguably the best load balancer solution on the market (full disclosure: I worked on this project), and you get TLS by default for most configurations. Combined with a global network that delivers CDN-like benefits out of the gate, this attracts major platforms like Wix and many others to build on Google Cloud. When these platforms choose Google’s infrastructure, they automatically inherit Google Trust Services certificates.

Looking at the usage data reveals other interesting patterns. Deutsche Telekom Security, the Government of Turkey (UPDATE: the Turkey entry turns out to be a Firefox bug; bucket #1 is used for both locally installed roots and Kamu SM, apparently by accident), and SECOM Trust Systems all appear prominently in Firefox telemetry but barely register in issuance numbers. In some respects it’s no surprise that government-issued certificates see disproportionate usage: use of government websites is often mandatory, since citizens have to visit them for taxes, permits, benefits, and other essential services.

Microsoft Corporation appears significantly in issuance data (6.5%) but doesn’t register in the Firefox telemetry. This reflects their focus on enterprise and Windows-integrated scenarios rather than public web traffic.

GoDaddy shows strong issuance numbers (10.5%) but more modest representation in browsing telemetry. This reflects their massive domain parking operations. They issue certificates for countless parked domains that receive minimal actual user traffic.

Why This Matters

Mozilla Firefox represents under 3% of global browser market share, so this telemetry reflects a relatively small segment of internet users. While the data provides valuable insight into actual CA usage patterns, it would be ideal if Chrome released similar telemetry. Given Chrome’s dominant 66.85% market share, its usage data would dramatically improve our understanding of what real WebPKI usage looks like across the broader internet population.

The contrast between certificate issuance volume and actual browsing impact reveals important truths about internet infrastructure. CT logs currently show over 450,000 certificates being issued per hour across all CAs. Yet as this Firefox telemetry data shows, much of that volume serves lower-traffic sites while a smaller number of high-traffic certificates drive the actual user experience. Some CAs focus on high-volume, automated issuance for parked domains and smaller sites. Others prioritize fewer certificates for high-traffic, essential destinations. Understanding both metrics helps us better assess the real-world criticality of different CAs for internet security and availability.

Raw certificate counts don’t tell the whole story. The websites people actually visit, and sometimes must visit, matter just as much as the sheer number of certificates issued. Some certificates protect websites with “captive audiences” or essential services, while others protect optional destinations. A government tax portal or YouTube will always see more traffic than the average small business website, regardless of how many certificates each CA issues.

Regardless of how you count, I’ve had the pleasure of working closely with at least 7 of the CAs in the top 10 on their journeys to become publicly trusted. Each of these CAs has had different goals for its business and operations, and that’s exactly why you see different outcomes. Let’s Encrypt focused on automation and volume. DigiCert targeted enterprise customers. Google leveraged its cloud infrastructure. GoDaddy built around domain services.

Either way, it’s valuable to compare and contrast these measurement approaches to see what the WebPKI really looks like beyond just raw certificate counts.

What Does CPA Canada Have to Do With the WebPKI Anyway?

When we discuss the WebPKI, we naturally focus on Certificate Authorities (CAs), browser root programs, and the standards established by the CA/Browser Forum. Yet for these standards to carry real weight, they must be translated into formal, auditable compliance regimes. This is where assurance frameworks enter the picture, typically building upon the foundational work of the CA/Browser Forum.

The WebTrust framework, overseen by professional accounting bodies, is only one way to translate CA/Browser Forum requirements into auditable criteria. In Europe, a parallel scheme relies on the European Telecommunications Standards Institute (ETSI) for the technical rules, with audits carried out by each country’s ISO/IEC 17065-accredited Conformity Assessment Bodies. Both frameworks follow the same pattern: they take the CA/Browser Forum standards and repackage them into structured compliance audit programs.

Understanding the power dynamics here is crucial. While these audits scrutinize CAs, they exercise no direct control over browser root programs. The root programs at Google, Apple, Microsoft, and Mozilla remain the ultimate arbiters, maintaining their own policies, standards, and processes that extend beyond what these audit regimes cover. No one compels the browsers to require WebTrust or ETSI audits; they do so voluntarily, because clean reports from auditors who have seen a CA’s operations in person help them judge whether the CA is competent and living up to its promises.

How WebTrust Actually Works

With this context established, let’s examine the WebTrust model prevalent across North America and other international jurisdictions. In North America, administration operates as a partnership between the AICPA (for the U.S.) and CPA Canada. For most other countries, CPA Canada directly manages international enrollment, collaborating with local accounting bodies like the HKICPA for professional oversight.

These organizations function through a defined sequence of procedural steps: First, they participate in the CA/Browser Forum to provide auditability perspectives. Second, they fork the core technical requirements and rebundle them as the WebTrust Principles and Criteria. Third, they license accounting firms to conduct audits based on these principles and criteria. Fourth, they oversee licensed practitioners through inspection and disciplinary processes.

The audit process follows a mechanical flow. CA management produces an Assertion Letter claiming compliance. The auditor then tests that assertion and produces an Attestation Report, a key data point for browser root programs. Upon successful completion, the CA can display the WebTrust seal.

This process creates a critical misconception about what the WebTrust seal actually signifies. Some marketing positions a successful audit as a “gold seal” of approval, suggesting it represents the pinnacle of security and best practices. It does not. A clean WebTrust report simply confirms that a CA has met the bare minimum requirements for WebPKI participation; it represents the floor, not the ceiling. The danger emerges when CAs treat this floor as their target, and these are often the same CAs responsible for significant mis-issuances and, eventually, distrust by browser root programs.

Where Incentives Break Down

Does this system guarantee consistent, high-quality CA operations? The reality is that the system’s incentives and structure actively work against that goal. This isn’t a matter of malicious auditors; we’re dealing with human nature interacting with a flawed system, compounded by a critical gap between general audit principles and deep technical expertise.

Security professionals approach assessments expecting auditors to actively seek problems. That incentive doesn’t exist here. CPA audits are fundamentally designed for financial compliance verification, ensuring documented procedures match stated policies. Security assessments, by contrast, actively hunt for vulnerabilities and weaknesses. These represent entirely different audit philosophies: one seeks to confirm documented compliance, the other seeks to discover hidden risks.

This philosophical gap becomes critical when deep technical expertise meets general accounting principles. Even with impeccably ethical and principled auditors, you can’t catch what you don’t understand. A financial auditor trained to verify that procedures are documented and followed may completely miss that a technically sound procedure creates serious security vulnerabilities.

This creates a two-layer problem. First, subtle but critical ambiguities or absent content in a CA’s Certification Practice Statement (CPS) and practices might not register as problems to non-specialists. Second, even when auditors do spot vague language, commercial pressures create an impossible dilemma: push the customer toward greater specificity (risking the engagement and future revenue), or let it slide due to the absence of explicit requirements.

This dynamic creates a classic moral hazard, similar to the one we explored in a recent post: auditors are paid by the very entities they’re supposed to scrutinize critically, which creates incentives to overlook issues in order to maintain business relationships. Meanwhile, the consequences of missed problems (security failures, compromised trust, operational disruption) fall on the broader WebPKI ecosystem and the billions of relying parties who had no voice in the audit process. This is the source of the inconsistencies we observe today, and it reflects a deeper problem plaguing the entire WebPKI ecosystem: those making critical security decisions rarely bear the full consequences of poor choices.

This reality presents a prime opportunity for disruption through intelligent automation. The core problem is expertise “illiquidity”: deep compliance knowledge remains locked in specialists’ minds, trapped in manual processes, and prohibitively expensive to scale.

Current compliance automation has only created “automation asymmetry,” empowering auditees to generate voluminous, polished artifacts that overwhelm manual auditors. This transforms audits from operational fact-finding into reviews of well-presented fiction.

The solution requires creating true “skill liquidity” through AI: not just another LLM, but an intelligent compliance platform embedding structured knowledge from seasoned experts. Such a system would be built on an ontology of controls, evidence requirements, and policy interdependencies, and would perform, with superior speed and consistency, the brutally time-consuming rote work (policy mapping, change-log scrutiny) that consumes up to 30% of a manual audit.
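
As a sketch of what that structured knowledge could look like (all names here are illustrative, not any real product’s schema), consider a minimal control ontology in which each control carries its normative requirement, the evidence artifacts that must exist, and the controls it depends on, so the rote cross-referencing can be done by machine:

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    """One node in an illustrative compliance ontology."""
    control_id: str
    requirement: str               # the normative text being tested
    required_evidence: list[str]   # artifact types that must exist
    depends_on: list[str] = field(default_factory=list)

def find_gaps(controls: dict[str, Control],
              evidence_on_file: dict[str, set[str]]) -> list[str]:
    """Flag missing evidence and broken dependencies.

    This is the rote cross-referencing an auditor does by hand today:
    does every control have its artifacts, and do its prerequisites exist?
    """
    gaps = []
    for cid, control in controls.items():
        missing = set(control.required_evidence) - evidence_on_file.get(cid, set())
        if missing:
            gaps.append(f"{cid}: missing evidence {sorted(missing)}")
        for dep in control.depends_on:
            if dep not in controls:
                gaps.append(f"{cid}: depends on undefined control {dep}")
    return gaps

# Hypothetical example: a key-ceremony control requiring a signed script.
controls = {
    "KC-01": Control("KC-01", "Key ceremonies follow a witnessed script",
                     ["ceremony_script", "witness_signatures"],
                     depends_on=["HR-02"]),
}
print(find_gaps(controls, {"KC-01": {"ceremony_script"}}))
# -> flags the missing witness signatures and the undefined HR-02
```

The point of the ontology is that every check traces to a named requirement, so a flagged gap is defensible rather than discretionary.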

When auditors and program administrators gain access to this capability, the incentive model fundamentally transforms. AI can objectively flag ambiguities and baseline deviations that humans might feel pressured to overlook or lack the skill to notice, directly addressing the moral hazard inherent in the current system. When compliance findings become objective data points generated by intelligent systems rather than subjective judgments influenced by commercial relationships, they become much harder to ignore or rationalize away.

This transformation liquefies rote work, liberating human experts to focus on what truly matters: making high-stakes judgment calls, investigating system-flagged anomalies, and assessing control effectiveness rather than mere documented existence. This elevation transforms auditors from box-checkers into genuine strategic advisors, addressing the system’s core ethical challenges.

This new transparency and accountability shifts the entire dynamic. Audited entities can evolve from reactive fire drills to proactive, continuous self-assurance. Auditors, with amplified expertise and judgment focused on true anomalies rather than ambiguous documentation, can deliver exponentially greater value.

Moving Past the Performance

This brings us back to the fundamental issue: the biggest problem in communication is the illusion that it has occurred. Today’s use of the word “audit” creates a dangerous illusion of deep security assessment.

By leveraging AI to create skill liquidity, we can finally move past this illusion: automating the mundane elements of an audit makes space for the security and correctness assessments everyone assumes are already happening. We can forge a future where compliance transcends audit performance theater, becoming instead a foundation of verifiable, continuous operational integrity, built on truly accessible expertise rather than scarce, locked-away knowledge.

The WebPKI ecosystem deserves better than the bare minimum. With the right tools and transformed incentives, we can finally deliver it.