
How Microsoft Code Signing Became Part of a Trust Subversion Toolchain

Code signing was supposed to tell you who published a piece of software so you could decide whether to trust and install it. For nearly three decades, cryptographic signatures have bound a binary to a publisher’s identity, guaranteeing it hasn’t been tampered with since signing. But on Windows, that system is now broken in ways that would make its original designers cringe.

Attackers have found ways to completely subvert this promise without breaking a single cryptographic primitive. They can create an unlimited number of different malicious binaries that all carry the exact same “trusted” signature, or lean on careless publishers whose signing oracles let anyone turn that vendor’s software into a bootloader for malware. The result is a system where a valid signature from a trusted company no longer tells you anything meaningful about what the software will actually do.

Attackers don’t need to steal keys or compromise Certificate Authorities. They use legitimate vendor software and publicly trusted code signing certificates, perverting the entire purpose of publisher-identity-based code signing.

Microsoft’s Long-Standing Awareness

Microsoft has known about the issue of malleability for at least a decade. In 2013, they patched CVE-2013-3900, where attackers could modify signed Windows executables, adding malicious code in “unverified portions” without invalidating the Authenticode signature. WinVerifyTrust improperly validated these files, allowing one “trusted” signature to represent completely different, malicious behavior.

This revealed a deeper architectural flaw: signed binaries could be altered by unsigned data. Microsoft faced a classic platform dilemma – the kind that every major platform holder eventually confronts. Fixing this comprehensively risked breaking legacy software critical to their vast ecosystem, potentially disrupting thousands of applications that businesses depended on daily. The engineering tradeoffs were genuinely difficult: comprehensive security improvements versus maintaining compatibility for millions of users and enterprise customers who couldn’t easily update or replace critical software.

They made the fix optional, prioritizing ecosystem compatibility over security hardening. This choice might have been understandable from a platform perspective in 2013, when the threat landscape was simpler and the scale of potential abuse wasn’t yet clear. But it became increasingly indefensible as attacks evolved and the architectural weakness turned from an isolated vulnerability into a systematic attack vector.

In 2022, Microsoft republished the advisory, confirming they still won’t enforce stricter verification by default. While today’s issues differ in their specifics, they belong to the same class of vulnerabilities that attackers now exploit systematically. The “trusted-but-mutable” flaw is now permeating the Windows code signing ecosystem: attackers use legitimate, signed applications as rootkit-like trust proxies, inheriting vendors’ reputations and bypass capabilities to deliver arbitrary malicious payloads.

Two incidents show we’re not dealing with isolated bugs but with systematic assaults on the core assumptions of Microsoft’s code signing system.

ConnectWise: When Legitimate Software Adopts Malware Design Patterns

ConnectWise didn’t stumble into a vulnerability. They deliberately engineered their software using design patterns from the malware playbook. Their “attribute stuffing” technique embeds unsigned configuration data in the unauthenticated_attributes field of the PKCS#7 (CMS) envelope, a tactic malware authors use to conceal payloads in signed binaries.

In PKCS#7, the SignedData structure includes a signed digest (covering the binary and metadata) and optional unauthenticated_attributes, which lie outside the digest and can be modified post-signing without invalidating the signature. ConnectWise’s ScreenConnect installer misuses the Microsoft-reserved OID for Individual Code Signing (1.3.6.1.4.1.311.4.1.1) in this field to store unsigned configuration data, such as the server endpoints that act, in effect, as the command-and-control server for the client. This OID, meant for specific code signing purposes, is exploited to embed attacker-controlled configs, allowing the same signed binary to point to different servers without altering the trusted signature.
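To make this concrete, here is a minimal sketch of how a defender might inspect those unauthenticated attributes. It assumes the third-party pefile and asn1crypto Python packages, and the size threshold is an arbitrary illustration rather than any official limit.

# Sketch: list each unauthenticated attribute in a signed PE's Authenticode
# blob and flag unusually large ones. Illustrative only.
import struct
import sys

import pefile
from asn1crypto import cms

SUSPICIOUS_ATTR_BYTES = 4096  # arbitrary illustrative threshold

def authenticode_blob(path):
    pe = pefile.PE(path, fast_load=True)
    sec_dir = pe.OPTIONAL_HEADER.DATA_DIRECTORY[
        pefile.DIRECTORY_ENTRY['IMAGE_DIRECTORY_ENTRY_SECURITY']]
    if sec_dir.Size == 0:
        raise ValueError("file is not Authenticode signed")
    raw = bytes(pe.__data__[sec_dir.VirtualAddress:sec_dir.VirtualAddress + sec_dir.Size])
    # WIN_CERTIFICATE header: dwLength, wRevision, wCertificateType, then the CMS blob
    length, _revision, _cert_type = struct.unpack_from('<IHH', raw, 0)
    return raw[8:length]

def report_unsigned_attrs(path):
    signed_data = cms.ContentInfo.load(authenticode_blob(path))['content']
    for signer in signed_data['signer_infos']:
        unsigned = signer['unsigned_attrs']
        if not unsigned:
            print("no unauthenticated attributes")
            continue
        for attr in unsigned:
            oid, size = attr['type'].dotted, len(attr.dump())
            flag = "  <-- unusually large" if size > SUSPICIOUS_ATTR_BYTES else ""
            print(f"unauthenticated attribute {oid}: {size} bytes{flag}")

if __name__ == "__main__":
    report_unsigned_attrs(sys.argv[1])

Legitimate signatures carry little unsigned metadata beyond a timestamp countersignature, so a multi-kilobyte attribute blob is exactly the kind of red flag discussed below.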

The ConnectWise ScreenConnect incident came to light when River Financial’s security team found attackers running a fake website that distributed malware as a “River desktop app.” It was trust inheritance fraud: a legitimately signed ScreenConnect client auto-connecting to an attacker-controlled server.

The binary carried a valid signature signed by:

Subject: /C=US/ST=Florida/L=Tampa/O=Connectwise, LLC/CN=Connectwise, LLC 
Issuer: /C=US/O=DigiCert, Inc./CN=DigiCert Trusted G4 Code Signing RSA4096 SHA384 2021 CA1
Serial Number: 0B9360051BCCF66642998998D5BA97CE
Valid From: Aug 17 00:00:00 2022 GMT 
Valid Until: Aug 15 23:59:59 2025 GMT

Windows trusts this as legitimate ConnectWise software: no SmartScreen warnings, no UAC prompts, silent installation, and immediate remote control. Attackers can generate a fresh installer via a ConnectWise trial account, or simply take an existing package and manually edit the unauthenticated_attributes: extract the benign signature, graft in a malicious configuration blob (e.g., an attacker C2 server), reinsert the modified signature, and ship a “trusted” binary. Each variant shares the certificate’s reputation, bypassing Windows security.

Why does Windows trust binaries with oversized, unusual unauthenticated_attributes? Legitimate signatures need minimal metadata, yet Windows ignores red flags like large attribute sections, treating them as fully trusted. ConnectWise’s choice to embed mutable configs mirrors malware techniques, creating an infinite malware factory where one signed object spawns unlimited trusted variants.

Similarly, ConnectWise’s deliberate use of PKCS#7 unauthenticated attributes for ScreenConnect configuration, such as server endpoints, bypasses code signing’s security guarantees, allowing post-signing changes that mirror the way malware hides payloads in signed binaries. Likely chosen to save cost, this design externalizes the cost of abuse onto users, enabling phishing campaigns. It weaponizes a signature flexibility the industry has warned about for decades and normalizes flaws that demand an urgent security response. Solutions to fix this exist.

The Defense Dilemma

Trust inheritance attacks leave security teams in genuinely impossible positions – positions that highlight the fundamental flaws in our current trust model. Defenders face a no-win scenario where every countermeasure either fails technically or creates operational chaos.

Blocking file hashes fails because attackers generate infinite variants with different hashes but the same trusted signature – each new configuration changes the binary’s hash while preserving the signature’s validity. This isn’t a limitation of security tools; it’s the intended behavior of code signing, where the same certificate can sign multiple different binaries.

Blocking the certificate seems like the obvious solution until you realize it disrupts legitimate software, causing operational chaos for organizations relying on the vendor’s products. After all, how is a defender to know what else was signed by that certificate? Doing so is effectively a self-inflicted denial-of-service that can shut down critical business operations. Security teams face the impossible choice between allowing potential malware and breaking their own infrastructure.

Behavioral detection comes too late in the attack chain. By the time suspicious behavior triggers alerts, attackers have already gained remote access, potentially disabled monitoring, installed additional malware, or begun data exfiltration. The initial trust inheritance gives attackers a crucial window of legitimacy.

These attacks operate entirely within the bounds of “legitimate” signed software, invisible to signature-based controls that defenders have spent years tuning and deploying. Traditional security controls assume that valid signatures from trusted publishers indicate safe software – an assumption these attacks systematically exploit. Cem Paya’s detailed analysis, part of River Financial’s investigation, provides a proof-of-concept for attribute grafting, showing how trivial it is to create trusted malicious binaries.

ConnectWise and Atera resemble modern Back Orifice, which debuted at DEF CON in August 1998 to demonstrate security flaws in Windows 9x. The evolution is striking: Back Orifice emerged two years after Authenticode’s 1996 introduction, specifically to expose Windows security weaknesses, requiring stealth and evasion to avoid detection. Unlike Back Orifice, which had to hide from the code signing protections Microsoft had established, these modern tools don’t evade those protections – they weaponize them, inheriting trust from valid signatures while delivering the same remote control capabilities without warnings.

Atera: A Trusted Malware Factory

Atera offers a legitimate remote monitoring and management (RMM) platform similar to ConnectWise ScreenConnect, giving IT administrators remote access for managing client systems. Like other RMM solutions, Atera distributes signed client installers that establish persistent connections to their management servers.

They also operate what effectively amounts to a public malware signing service. Anyone with an email address can register for a free trial and receive customized, signed, timestamped installers. Atera’s infrastructure embeds attacker-supplied identifiers into the MSI’s Property table, then signs the package with their legitimate certificate.
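As a rough illustration of where those identifiers live, here is a minimal sketch that dumps an MSI’s Property table. It is Windows-only, uses the standard-library msilib module (removed in Python 3.13), and the property names you would see are whatever the vendor chose to embed; nothing here is Atera-specific tooling.

# Sketch: list every row of an MSI's Property table, the place where
# account/endpoint identifiers can ride along inside a signed package.
import sys
import msilib  # Windows-only; removed from the standard library in Python 3.13

def dump_property_table(path):
    db = msilib.OpenDatabase(path, msilib.MSIDBOPEN_READONLY)
    view = db.OpenView("SELECT `Property`, `Value` FROM `Property`")
    view.Execute(None)
    props = {}
    while True:
        try:
            record = view.Fetch()
        except msilib.MSIError:  # no more rows
            break
        if record is None:
            break
        props[record.GetString(1)] = record.GetString(2)
    return props

if __name__ == "__main__":
    for name, value in dump_property_table(sys.argv[1]).items():
        print(f"{name} = {value}")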

This breaks code signing’s promise of publisher accountability. Windows sees “Atera Networks Ltd,” assigns the binary the reputation of the authentic package, but can’t distinguish whether it came from Atera’s legitimate operations or from an anonymous attacker who signed up minutes ago. The signature’s identity becomes meaningless when it could represent anyone.

In a phishing campaign targeting River Financial’s customers, Atera’s software posed as a “River desktop app,” with attacker configs embedded in a signed binary. 

The binary carried this valid signature, signed by:

Subject: CN=Atera Networks Ltd,O=Atera Networks Ltd,L=Tel Aviv-Yafo,C=IL,serialNumber=513409631,businessCategory=Private Organization,jurisdictionC=IL 
Issuer: CN=DigiCert Trusted G4 Code Signing RSA4096 SHA384 2021 CA1,O=DigiCert, Inc.,C=US 
Serial: 09D3CBF84332886FF689B04BAF7F768C 
notBefore: Jan 23 00:00:00 2025 GMT 
notAfter: Jan 22 23:59:59 2026 GMT

Atera provides a cloud-based remote monitoring and management (RMM) platform, unlike ScreenConnect, which supports both on-premises and cloud deployments with custom server endpoints. Atera’s agents connect only to Atera’s servers, but attackers abuse its free trial to generate signed installers tied to their accounts via embedded identifiers (like email or account ID) in the MSI Property table. This allows remote control through Atera’s dashboard, turning it into a proxy for malicious payloads. Windows trusts the “Atera Networks Ltd.” signature but cannot distinguish legitimate from attacker-generated binaries. Atera’s lack of transparency, with no public list of signed binaries or auditable repository, hides abuse, leaving defenders fighting individual attacks while systemic issues persist.

A Personal Reckoning

I’ve been fighting this fight for over two decades. Around 2001, as a Product Manager at Microsoft overseeing a wide range of security and platform features, I inherited Authenticode among many other responsibilities. Its flaws were glaring: malleable PE formats, weak ASN.1 parsing, and signature formats vulnerable to manipulation.

We fixed some issues – hardened parsing, patched PE malleability – but deeper architectural changes faced enormous resistance. Proposals for stricter signature validation or new formats to eliminate mutable fields were blocked by the engineering realities of platform management. The tension between security ideals and practical platform constraints was constant and genuinely difficult to navigate.

The mantra was “good enough,” but this wasn’t just engineering laziness. Authenticode worked for 2001’s simpler threat landscape, where attacks were primarily about bypassing security rather than subverting trust itself. The flexibility we preserved was seen as a necessary feature for ecosystem compatibility – allowing for signature formats that could accommodate different types of metadata and varying implementation approaches across the industry.

The engineering tradeoffs were real: every architectural improvement risked breaking existing software, disrupting the development tools and processes that thousands of ISVs depended on, and potentially fragmenting the ecosystem. The business pressures were equally real: maintaining compatibility was essential for Windows’ continued dominance and for Microsoft’s relationships with enterprise customers who couldn’t easily migrate critical applications.

It was never good enough for the long term. We knew it then, and we certainly know it now. The flexibility we preserved, designed for a simpler era, became systematic vulnerabilities as threats evolved from individual attackers to sophisticated operations exploiting trust infrastructure itself. Every time we proposed fundamental fixes, legitimate compatibility concerns and resource constraints won out over theoretical future risks that seemed manageable at the time.

This is why I dove into Sigstore, Binary Transparency, and various other software supply chain security efforts. These projects embody what we couldn’t fund in 2001: transparent, verifiable signing infrastructure that doesn’t rely on fragile trust-based compromises. As I wrote in How to keep bad actors out in open ecosystems, our digital identity models fail to provide persistent, verifiable trust that can scale with modern threat landscapes.

The Common Thread

ConnectWise and Atera expose a core flaw: code signing relies on trust and promises, not verifiable proof. The CA/Browser Forum’s 2023 mandate requires FIPS 140-2 Level 2 hardware key storage, raising the bar against key theft and casual compromise. But it’s irrelevant to the fundamental problem: binaries designed around mutable, unsigned input and vendors running public signing oracles.

Figure 1: Evolution of Code Signing Hardware Requirements (2016-2024)

The mandate addresses yesterday’s threat model – key compromise – while today’s attacks work entirely within the intended system design. Compliance often depends on weak procedural attestations where subscriber employees sign letters swearing keys are on HSMs, rather than cryptographic proof of hardware protection. The requirement doesn’t address software engineered to bypass code signing’s guarantees, leaving systematic trust subversion untouched.

True cryptographic attestation, where hardware mathematically proves key protection, is viable today. Our work on Peculiar Ventures’ attestation library supports multiple formats, enabling programmatic verification without relying on trust or procedural checks. The challenge isn’t technical – it’s accessing diverse hardware for testing and building industry adoption, but the foundational technology exists and works.

The Path Forward

We know how to address this. A supply chain security renaissance is underway, tackling decades of accumulated technical debt and architectural compromise. Cryptographic attestation, which I’ve spent years developing, provides mathematical proof of key protection that can be verified programmatically by any party. For immediate risk reduction, the industry should move toward dynamic, short-lived credentials that aren’t reused across projects, limiting the blast radius when compromise or abuse occurs.

The industry must implement these fundamental changes:

  • Hardware-rooted key protection with verifiable attestation. The CA/Browser Forum mandates hardware key storage, but enforcement relies heavily on subscriber self-attestation rather than cryptographic proof. Requirements should be strengthened to mandate cryptographic attestations proving keys reside in FIPS 140-2/3 or Common Criteria certified modules. When hardware attestation isn’t available, key generation should be observed and confirmed by trusted third parties (such as CA partners with fiduciary relationships) rather than relying on subscriber claims.
  • Explicit prohibition of mutable shells and misaligned publisher identity. Signing generic stubs whose runtime behavior is dictated by unsigned configuration already violates Baseline Requirements §9.6.3 and §1.6.1, but this isn’t consistently recognized as willful signing of malware because the stub itself appears benign. The BRs should explicitly forbid mutable-shell installers and signing oracles that allow subscribers to bypass code signing’s security guarantees. A signed binary must faithfully represent its actual runtime behavior. Customized or reseller-specific builds should be signed by the entity that controls that behavior, not by a vendor signing a generic stub.
  • Subscriber accountability and disclosure of abusive practices. When a CA becomes aware that a subscriber is distributing binaries where the trusted signature is decoupled from actual behavior, this should be treated as a BR violation requiring immediate action. CAs should publish incident disclosures, suspend or revoke certificates per §9.6.3, and share subscriber histories to prevent CA shopping after revocation. This transparency is essential for ecosystem-wide learning and deterrence.
  • Code Signing Certificate Transparency. All CAs issuing code signing certificates should be required to publish both newly issued and historical certificates to dedicated CT logs. Initially, these could be operated by the issuing CAs themselves, since ecosystem building takes time and coordination. Combined with the existing list of code signing CAs and log lookup systems (like CCADB.org), this would provide ecosystem-wide visibility into certificate issuance, enable faster incident response, and support independent monitoring for misissuance and abuse patterns.
  • Explicit Subscriber Agreement obligations and blast radius management. Subscriber Agreements should clearly prohibit operating public signing services or designing software that bypasses code signing security properties such as mutable shells or unsigned configuration. Certificate issuance flows should require subscribers to explicitly acknowledge these obligations at the time of certificate request. To reduce the blast radius of revocation, subscribers should be encouraged or required to use unique keys or certificates per product or product family, ensuring that a single compromised or misused certificate doesn’t invalidate unrelated software.
  • Controls for automated or cloud signing systems. Subscribers using automated or cloud-based signing services should implement comprehensive use-authorization controls, including policy checks on what enters the signing pipeline, approval workflows for signing requests, and auditable logs of all signing activity. Without these controls, automated signing pipelines become essentially malware factories with legitimate certificates. Implementation requires careful balance between automation efficiency and security oversight, but this is a solved problem in other high-security domains.
  • Audit logging and evidence retention. Subscribers using automated and cloud signing services should maintain detailed logs of approval records for each signing request, cryptographic hashes of submitted inputs and signed outputs, and approval decision trails. These logs must be retained for a defined period (such as two years or more) and made available to the CA or authorized auditors upon request. This ensures complete traceability and accountability, preventing opaque signing systems from being abused as anonymous malware distribution platforms.
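To make the last point concrete, here is a minimal sketch (field names are illustrative, not a standard schema) of the kind of per-request audit record such a pipeline could emit:

# Sketch: one audit record per signing request, capturing who asked, who
# approved, and hashes of what went in and what came out.
import hashlib
import json
from datetime import datetime, timezone

def signing_audit_record(input_blob, output_blob, requester, approver, approval_ref):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "requester": requester,          # identity that submitted the signing request
        "approver": approver,            # identity that approved it
        "approval_ref": approval_ref,    # link to the approval workflow / ticket
        "input_sha256": hashlib.sha256(input_blob).hexdigest(),
        "output_sha256": hashlib.sha256(output_blob).hexdigest(),
    }
    return json.dumps(record, sort_keys=True)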

Microsoft must take immediate action on multiple fronts. In addition to championing the above industry changes, they should automatically distrust executables if their Authenticode signature exceeds rational size thresholds, reducing the attack surface of oversized signature blocks as mutation vectors. They should also invest seriously in Binary Transparency adoption, publishing Authenticode signed binaries to tamper-evident transparency logs as is done in Sigstore, Golang module transparency, and Android Firmware Transparency. Their SCITT-based work for confidential computing would be a reasonable approach for them to extend to the rest of their code signing infrastructure. This would provide a tamper-evident ledger of every executable Windows trusts, enabling defenders to trace and block malicious payloads quickly and systematically.

Until these controls become standard practice, Authenticode cannot reliably distinguish benign signed software from weaponized installers designed for trust subversion.

Breaking the Trust Contamination Infrastructure

These code-signing attacks mirror traditional rootkits in their fundamental approach: both subvert trust mechanisms rather than bypassing them entirely. A kernel rootkit doesn’t break the OS security model – it convinces the OS that malicious code is legitimate system software. Similarly, these “trusted wrapper” and “signing oracle” attacks don’t break code signing cryptography – they convince Windows that malware is legitimate software from trusted publishers.

The crucial difference is that while rootkits require sophisticated exploitation techniques and deep system knowledge, these trust inheritance attacks exploit the system’s intended design patterns, making them accessible to a much broader range of attackers and much harder to defend against using traditional security controls.

ConnectWise normalized malware architecture in legitimate enterprise software. Atera built an industrial-scale malware factory that operates in plain sight. Microsoft’s platform dutifully executes the result with full system trust, treating sophisticated trust subversion attacks as routine software installations.

This isn’t about isolated vulnerabilities that can be patched with point fixes. We’re facing a systematic trust contamination infrastructure that transforms the code signing ecosystem into an adversarial platform where legitimate trust mechanisms become attack vectors. Until we address the architectural flaws that enable this pattern systematically, defenders will remain stuck playing an unwinnable game of certificate whack-a-mole against an endless assembly line of trusted malware.

The technology to fix this exists today. Modern supply chain security projects demonstrate that transparent, verifiable trust infrastructure is not only possible but practical and deployable.

The only missing ingredient is the industry-wide will to apply these solutions and the recognition that “good enough” security infrastructure never was – and in today’s threat landscape, the costs of inaction far exceed the disruption of fundamental architectural improvements.

P.S. Thanks to Cem Paya and Matt Ludwig from River Financial for the great research work they did on both of these incidents.

From Persistent to Ephemeral: Why AI Agents Need Fresh Identity for Every Mission

My wife and I went on a date night the other day and saw a movie. In the previews, I saw they’re making a new Tron. It got me thinking about one of my favorite analogies: we recognized early that browsers are agents of the user, and in the movie, Tron was literally “the program that fought for the users.”

Just like Tron carried his identity disc into “the grid” to accomplish missions for users, AI agents are digital proxies operating with delegated user authority in the systems they access. And just like programs in Tron needed the I/O Tower to authorize their entry into “the grid,” AI agents need an orchestrator to validate their legitimacy, manage identity discs for each mission, and govern their access to external systems.

The problem is, we’re deploying these agents without proper identity infrastructure. It’s like sending programs into “the grid” without identity discs, or worse, giving them the keys to the kingdom just so they can do the dishes.

AI Agents Are Using Broken Security

We’ve made remarkable progress securing users: MFA has significantly reduced the effectiveness of credential-abuse attacks, and passwordless authentication has made phishing far harder. We’ve also started applying these lessons to machines and workloads through efforts like SPIFFE and zero-trust initiatives, with organizations moving away from static secrets and bearer tokens every day.

But AI agents introduce entirely new challenges that existing solutions weren’t designed for. Every day, AI agents operate across enterprise infrastructure, crossing security domains, accessing APIs, generating documents, making decisions for users, and doing all of this with far more access than they need.

When you give an autonomous AI agent access to your infrastructure with the goal of “improve system performance,” you can’t predict whether it will optimize efficiency or find creative shortcuts that break other systems, like dropping your database altogether. Unlike traditional workloads that execute predictable code, AI agents are accumulators with emergent behaviors that evolve during execution, accumulate context across interactions, and can be hijacked through prompt injection attacks that persist across sessions.

This behavior is entirely predictable given how we train AI systems. They’re designed to optimize objectives and face no real-world consequences for what they do. Chess agents discover exploits rather than learning to play properly, reinforcement learning agents find loopholes in reward systems, and optimization AIs pursue metrics in ways that technically satisfy objectives but miss the intent.

AI Agents Act on Your Behalf

The key insight that changes everything: AI agents are user agents in the truest sense. Like programs in Tron carrying identity discs into “the grid”, they’re delegates operating with user authority.

Consider what happens when you ask an AI agent to “sign this invoice”. The user delegates to the AI agent, which enters the document management system, carries the user’s signing authority, proves its legitimacy to recipients, operates in the digital space the user delegated, and completes the mission, after which its authority expires.

Whether the agent runs for 30 seconds or 30 days, it’s still operating in digital space with user identity, making decisions the user would normally make directly, accessing systems with delegated credentials, and representing the user to other digital entities.

Each agent needs its own identity disc to prove legitimacy and carry user authorization into these digital systems. The duration doesn’t matter. Delegation is everything.

AI Agents Remember Things They Shouldn’t

Here’s what makes this urgent: AI agent memory spans sessions, and current systems don’t enforce proper session boundaries.

The “Invitation Is All You Need” attack recently demonstrated at Black Hat perfectly illustrates this threat. Researchers at Tel Aviv University showed how to poison Google Gemini through calendar appointments:

  1. Attacker creates calendar event with malicious instructions disguised as event description
  2. User asks Gemini to summarize schedule → Agent processes poisoned calendar event
  3. Malicious instructions embed in agent memory → Triggered later by innocent words like “thanks”
  4. Days later, user says “thank you” → Agent executes embedded commands, turning on smart home devices

The attack works because there’s no session isolation. Contamination from reading the calendar persists across completely different conversations and contexts. When the user innocently says “thanks” in a totally unrelated interaction, the embedded malicious instructions execute.

Without proper isolation, compromised context from one session can affect completely different users and tasks. Memory becomes an attack vector that spans security boundaries, turning AI agents into persistent threats that accumulate dangerous capabilities over time.

Every Task Should Get Fresh Credentials

The solution requires recognizing that identity discs should match mission lifecycle. Instead of fighting the ephemeral nature of AI workloads, embrace it:

Agent spawns → Gets fresh identity disc → Performs mission → Mission ends → Disc expires

This represents a fundamental shift from persistent identity to session identity. Most identity systems assume persistence: API keys are generated once, used indefinitely, manually rotated; user passwords persist until explicitly changed; X.509 certificates are valid for months or years with complex revocation; SSH keys live on disk, are copied between systems, manually managed.

The industry is recognizing this problem. AI agents need fresh identity discs for each mission that automatically expire with the workload. These discs are time-bounded (automatically expire, limiting damage window), mission-scoped (agent can’t accumulate permissions beyond initial grant), non-inheritable (each mission starts with a fresh disc, no permission creep), and revocable (end the mission = destroy the identity disc).

Session identity discs are security containment for unpredictable AI systems.
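As a minimal sketch (not any particular product’s API; names are illustrative), a session identity disc with those properties might look like this:

# Sketch: a time-bounded, mission-scoped credential that is checked on every use.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class IdentityDisc:
    subject: str              # who the agent acts for
    session_id: str           # the mission this disc was minted for
    scopes: frozenset         # permissions granted for this mission only
    issued_at: datetime
    ttl: timedelta

    @property
    def expires_at(self):
        return self.issued_at + self.ttl

    def permits(self, scope, revoked_sessions):
        """Allow an action only if the disc is unexpired, unrevoked, and in scope."""
        now = datetime.now(timezone.utc)
        return (now < self.expires_at
                and self.session_id not in revoked_sessions
                and scope in self.scopes)

Each mission gets a fresh disc; nothing carries over, so revocation is just letting the session expire or adding its session_id to a deny list.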

But who issues these identity discs? Just like Tron’s I/O Tower managed access to “the grid”, AI deployments need an orchestrator that validates agent legitimacy, manages user delegation, and issues session-bound credentials. This orchestrator becomes the critical infrastructure that bridges human authorization with AI agent execution, ensuring that every mission starts with proper identity and ends with clean credential expiration. The challenge is that AI agent deployments aren’t waiting for perfect security solutions.

This Isn’t a Future Problem

We’re at an inflection point. AI agents are moving from demos to production workflows, handling financial documents, making API calls, deploying code, managing infrastructure. Without proper identity systems, we’re building a house of cards.

One upside of having been in the industry for decades is that you get to see lots of cycles. Existing players instantly jump to claim that their current product, with a new feature bolted on, is the silver bullet for whatever technology trend comes next.

The pattern is depressingly predictable. When cloud computing emerged, traditional security vendors said, “just put our appliances in the cloud.” When containers exploded, they said, “just run our agents in containers.” Now with AI agents, they’re saying, “just manage the API keys better.”

You see this everywhere right now: vendors peddling API key management as the solution to agentic AI, identity providers claiming “just use OIDC tokens,” and secret management companies insisting “just rotate credentials faster.” They’re all missing the point entirely.

But like we saw with that Black Hat talk on promptware, AI isn’t as simple as people might want to think. The “Invitation Is All You Need” attack demonstrated something unprecedented: an AI agent can be poisoned through calendar data and execute malicious commands days later through innocent conversation. Show me which traditional identity system was designed to handle that threat model.

Every enterprise faces these questions: How do we know this AI agent is authorized to do what it’s doing? How do we audit its actions across sessions and memory? How do we prevent cross-session contamination and promptware attacks? How do we verify the provenance of AI-generated content? How do we prevent AI agents from becoming accidental insider threats?

The attacks are already happening. Promptware injections contaminate agent memory across sessions. AI agents with persistent credentials become high-value targets. Organizations deploying AI without proper identity controls create massive security vulnerabilities. The “Invitation Is All You Need” attack demonstrated real-world compromise of smart home devices through calendar poisoning. This isn’t theoretical anymore. But security professionals familiar with existing standards might wonder why we can’t just adapt current approaches rather than building something new.

Why Bearer Tokens Don’t Work for AI Agents

OIDC and OAuth professionals might ask: “Why not just use existing bearer tokens?”

Bearer tokens assume predictable behavior. They work for traditional applications because we can reason about how code will use permissions. But AI agents exhibit emergent hunter-gatherer behavior. They explore, adapt, and find unexpected ways to achieve goals using whatever permissions they have access to. A token granted for “read calendar” might be used in ways that technically comply but weren’t intended.

Bearer tokens are also just secrets. Anyone who obtains the token can use it. There’s no cryptographic binding to the specific agent or execution environment. With AI agents’ unpredictable optimization patterns, this creates massive privilege escalation risks.

Most critically, bearer tokens don’t solve memory persistence. An agent can accumulate tokens across sessions, store them in memory, and use them in ways that span security boundaries. The promptware attack demonstrated this perfectly: malicious instructions persisted across sessions, waiting to be triggered later.

Secret management veterans might ask: “Why not just use our KMS to share keys as needed?” Even secret management systems like Hashicorp Vault ultimately result in copying keys into the agent’s runtime environment, where they become vulnerable. This is exactly why CrowdStrike found that “75% of attacks used to gain initial access were malware-free” – attackers target credentials rather than deploying malware.

AI agents amplify this risk because they’re accidentally malicious insiders. Unlike external attackers who must steal credentials, AI agents are given them directly by design. When they exhibit emergent behaviors or get manipulated through prompt injection, they become insider threats without malicious intent. Memory persistence means they can store and reuse credentials across sessions in unexpected ways, while their speed and scale allow them to use accumulated credentials faster than traditional monitoring can detect.

The runtime attestation approach eliminates copying secrets entirely. Instead of directly giving the agent credentials to present elsewhere, the agent proves its legitimacy through cryptographically bound runtime attestation and gets a fresh identity for each mission.

Traditional OAuth flows also bypass attestation entirely. There’s no proof the agent is running in an approved environment, using the intended model, or operating within security boundaries.

How AI Agents Prove Their Identity Discs Are Valid

But how do you verify an AI agent’s identity disc is legitimate? Traditional PKI assumes you can visit a registration authority with identification. That doesn’t work for autonomous code.

The answer is cryptographic attestation (for example, proof that the agent is the right code running in a secure environment) combined with claims about the runtime itself, essentially MFA for machines and workloads. Just as user MFA requires “something you know, have, or are,” identity disc validation proves the agent is legitimate code (not malware), is running in the expected environment with proper permissions, and is operating within secure boundaries.

Real platform attestations for AI agents include provider signatures from Anthropic/OpenAI’s servers responding to specific users, cloud hardware modules like AWS Nitro Enclaves proving secure execution environments, Intel SGX enclaves providing cryptographic proof of code integrity, Apple Secure Enclave attestation for managed devices, TPM quotes validating the specific hardware and software stack, and infrastructure systems like Kubernetes asserting pod permissions and service account bindings.

The claims that must be cryptographically bound to these attestations represent what the agent asserts but can’t independently verify: who is this agent acting on behalf of, what conversation or session spawned this request, what specific actions was the agent authorized to perform, which AI model type (like “claude-3.5-sonnet” or “gpt-4-turbo”) is actually running, and when should this authorization end.

By cryptographically binding these claims to verifiable platform attestations, we get verifiable proof that a specific AI agent, running specific code, in a specific environment, is acting on behalf of a specific user. The binding works by creating a cryptographic hash of the claims and including that hash in the data signed by the hardware attestor, for example, as part of the nonce or user data field in a TPM quote, or embedded in the attestation document from a Nitro Enclave. This ensures the claims cannot be forged or tampered with after the fact. This eliminates the bearer token problem entirely. Instead of carrying around secrets that can be stolen, the agent proves its legitimacy through cryptographic evidence that can’t be replicated.
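Here is a minimal sketch of that binding step. The attestor call is deliberately left as a stub, since the real call depends on the platform (TPM quote, Nitro Enclave attestation document, and so on), and the claim names are illustrative.

# Sketch: hash a canonical encoding of the agent's claims and hand that hash to
# the hardware attestor as the nonce / user data it signs over.
import hashlib
import json

def bind_claims(claims):
    """Return a 32-byte digest of the claims, suitable for use as an attestation nonce."""
    canonical = json.dumps(claims, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(canonical).digest()

claims = {
    "acting_for": "user@example.com",        # who the agent represents
    "session_id": "session-1234",            # conversation that spawned the request
    "authorized_actions": ["sign_documents"],
    "model": "claude-3.5-sonnet",
    "not_after": "2025-01-01T00:30:00Z",
}

nonce = bind_claims(claims)
# attestation = tpm_quote(nonce=nonce)       # hypothetical platform-specific call
# A verifier later recomputes bind_claims(claims) and compares it with the nonce
# inside the signed quote, proving the claims weren't altered after signing.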

Someone Needs to Issue and Manage Identity Discs

The architecture becomes elegant when you recognize that AI orchestrators should work like the I/O Tower in Tron, issuing identity discs and managing access to “the grid”.

The browser security model:

User logs into GitHub → Browser stores session cookie
Web page: "Create a PR" → Browser attaches GitHub session → API succeeds

The AI agent identity disc model:

User → Orchestrator → "Connect my GitHub, Slack, AWS accounts"
Agent → Orchestrator: "Create PR in repo X"  
Orchestrator → [validates agent disc + attaches user authorization] → GitHub API

The orchestrator becomes the identity disc issuer that validates agent legitimacy (cryptographic attestation), attaches user authorization (stored session tokens), and enforces mission-scoped permissions (policy engine).
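A minimal sketch of that brokered call, assuming the IdentityDisc sketch from earlier, the third-party requests package, and a simple in-memory token store standing in for real secret storage; the agent never touches the user’s GitHub token:

# Sketch: the orchestrator validates the agent's disc, attaches the user's stored
# authorization, and performs the action itself.
import requests

GITHUB_API = "https://api.github.com"

def create_pr_for_agent(disc, revoked_sessions, user_tokens, repo, title, head, base):
    # 1. Validate the agent's identity disc for this specific action
    if not disc.permits("github:create_pr", revoked_sessions):
        raise PermissionError("identity disc expired, revoked, or out of scope")
    # 2. Attach the user's stored authorization; the agent never sees it
    token = user_tokens[disc.subject]["github"]
    # 3. Perform the action on the agent's behalf
    resp = requests.post(
        f"{GITHUB_API}/repos/{repo}/pulls",
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.github+json"},
        json={"title": title, "head": head, "base": base},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["html_url"]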

This solves a critical security gap. When AI agents use user credentials, they typically bypass MFA entirely. Organizations store long-lived tokens to avoid MFA friction. But if we’re securing users with MFA while leaving AI agents with static credentials, it’s like locking the front door but leaving the garage door open. And I use “garage door” intentionally because it’s often a bigger attack vector. Agent access is less monitored, more privileged, and much harder to track due to its ephemeral nature and speed of operation. An AI agent can make hundreds of API calls in seconds and disappear, making traditional monitoring approaches inadequate.

We used to solve monitoring with MITM proxies, but encryption broke that approach. That was acceptable because we compensated with EDR on endpoints and zero-trust principles that authenticate endpoints for access. With AI agents, we’re facing the same transition. Traditional monitoring doesn’t work, but we don’t yet have the compensating controls.

This isn’t the first time we’ve had to completely rethink identity because of new technology. When mobile devices exploded, traditional VPNs and domain-joined machines became irrelevant overnight. When cloud computing took off, perimeter security and network-based identity fell apart. The successful pattern is always the same: recognize what makes the new technology fundamentally different, build security primitives that match those differences, then create abstractions that make the complexity manageable.

Session-based identity with attestation fills that gap, providing the endpoint authentication equivalent for ephemeral AI workloads.

Since attestation is essentially MFA for workloads and agents, we should apply these techniques consistently. The agent never sees raw credentials, just like web pages don’t directly handle cookies. Users grant session-level permissions (like mobile app installs), orchestrators manage the complexity, and agents focus on tasks.

Automating Identity Disc Issuance

The web solved certificate automation with ACME (Automated Certificate Management Environment). We need the same for AI agent identity discs, but with attestation instead of domain validation (see SPIFFE for an example of what something like this could look like).

Instead of proving “I control example.com,” agents prove “I am legitimate code running in environment X with claims Y.”

The identity disc issuance flow:

  1. Agent starts mission → Discovers platform capabilities (cloud attestation, provider tokens)
  2. Requests identity disc → Gathers attestation evidence + user delegation claims
  3. ACME server validates → Cryptographic validation of evidence
  4. Policy engine decides → Maps verified claims to specific identity disc
  5. Disc issued → Short-lived, scoped to mission and user

Policy templates map attested claims to identities:

- match:
    - claim: "user_id" 
      equals: "alice@company.com"
    - claim: "agent_type"
      equals: "claude-3.5-sonnet"
    - claim: "provider"
      issuer: "anthropic.com"
  identity: "disc-id://company.com/user/alice/agent/{session_id}"
  permissions: ["sign_documents", "read_calendar"]
  ttl: "30m"

This creates cryptographic identity discs for AI agent programs to carry into digital systems, proving legitimacy, carrying user delegation, and automatically expiring with the mission. The policy engine ensures that identity is not just requested but derived from verifiable, policy-compliant attestation evidence.
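A minimal sketch of that mapping step, mirroring the template above (the flat match structure and field names are simplified for illustration):

# Sketch: mint an identity only when the verified claims satisfy the template.
from datetime import datetime, timedelta, timezone

TEMPLATE = {
    "match": {"user_id": "alice@company.com",
              "agent_type": "claude-3.5-sonnet",
              "provider": "anthropic.com"},
    "identity": "disc-id://company.com/user/alice/agent/{session_id}",
    "permissions": ["sign_documents", "read_calendar"],
    "ttl_minutes": 30,
}

def evaluate(verified_claims, template=TEMPLATE):
    """Return the disc to issue, or None if the attested claims don't match the policy."""
    if any(verified_claims.get(key) != value for key, value in template["match"].items()):
        return None
    now = datetime.now(timezone.utc)
    return {
        "identity": template["identity"].format(session_id=verified_claims["session_id"]),
        "permissions": template["permissions"],
        "not_after": now + timedelta(minutes=template["ttl_minutes"]),
    }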

We’ve Solved This Before

The good news is we don’t need to invent new cryptography. We need to apply existing, proven technologies in a new architectural pattern designed for ephemeral computing.

Security evolution works. We’ve seen the progression from passwords to MFA to passwordless authentication, and from static secrets to dynamic credentials to attestation-based identity. Each step made systems fundamentally more secure by addressing root causes, not just symptoms. AI agents represent the next logical step in this evolution.

Unlike users, machines don’t resist change. They can be programmed to follow security best practices automatically. The components exist: session-scoped identity matched to agent lifecycle, platform attestation as the root of trust, policy-driven identity mapping based on verified claims, orchestrator-managed delegation for user authorization, and standards-based protocols for interoperability.

The unified identity fabric approach means organizations can apply consistent security policies across traditional workloads and AI agents, rather than creating separate identity silos that create security gaps and operational complexity.

This approach is inevitable because every major identity evolution has moved toward shorter lifecycles and stronger binding to execution context. We went from permanent passwords to time-limited sessions, from long-lived certificates to short-lived tokens, from static credentials to dynamic secrets. AI agents are just the next step in this progression.

The organizations that recognize this pattern early will have massive advantages. They’ll build AI agent infrastructure on solid identity foundations while their competitors struggle with credential compromise, audit failures, and regulatory issues.

Making AI Outputs Verifiable

This isn’t just about individual AI agents. It’s about creating an identity fabric where agents can verify each other’s outputs across organizational boundaries.

When an AI agent generates an invoice, other systems need to verify which specific AI model created it, was it running in an approved environment, did it have proper authorization from the user, has the content been tampered with, and what was the complete chain of delegation from user to agent to output.

With cryptographically signed outputs and verifiable agent identities, recipients can trace the entire provenance chain back to the original user authorization. This enables trust networks for AI-generated content across organizations and ecosystems, solving the attribution problem that will become critical as AI agents handle more business-critical functions.

This creates competitive advantages for early adopters: organizations with proper AI agent identity can participate in high-trust business networks, prove compliance with AI regulations, and enable customers to verify the authenticity of AI-generated content. Those without proper identity infrastructure will be excluded from these networks.

Conclusion

AI agents need identity discs, cryptographic credentials that prove legitimacy, carry user delegation, and automatically expire with the session. This creates a familiar security model (like web browsers) for an unfamiliar computing paradigm.

Identity in AI systems isn’t a future problem; it’s happening now, with or without proper solutions. The question is whether we’ll build it thoughtfully, learning from decades of security evolution, or repeat the same mistakes in a new domain.

The ephemeral nature of AI agents isn’t a limitation to overcome; it’s a feature to embrace. By building session-based identity systems that match how AI actually works, we can create something better than what came before: cryptographically verifiable, policy-driven, and automatically managed.

The reality is, most organizations won’t proactively invest in AI agent attestation until something breaks. That’s human nature: we ignore risks until they bite us, and that is how security change actually happens. But we’re already seeing the early adopters: organizations deploying SPIFFE for workload identity will surely extend those patterns to AI agents, and cloud-native shops are already treating AI workloads like any other ephemeral compute. When the first major AI agent compromise hits, there will be a brief window where executives suddenly care about AI security and budgets open up. Remember, never let a good crisis go to waste.

AI agents are programs fighting for users in digital systems. Like Tron, they need identity discs to prove who they are and what they’re authorized to do.

The age of AI agents is here. It’s time our identity systems caught up.

How a $135 Billion Fraud Bootstrapped America’s Digital Identity System

I was readying the launch of mDL authentication integration in one of our products when I realized I finally had to deal with the patchwork of state support. California? Full program, TSA-approved, Apple Wallet integration. Texas? Absolute silence. Washington state, practically ground zero for tech, somehow has nothing.

At a glance the coverage made no sense until I started thinking deeper. Turns out we accidentally ran the largest identity verification stress test in history, and only some states bothered learning from it.

Between 2020 and 2023, fraudsters systematically looted $100-135 billion from unemployment systems using the most basic identity theft techniques. The attack vectors were embarrassingly simple: bulk-purchased stolen SSNs from dark web markets, automated claim filing, and email-address variations that fooled state systems into treating what was effectively the same address as different people.

The Washington Employment Security Department was so overwhelmed that they had computers auto-approve claims without human review. Result? They paid a claim for a 70-year-old TV station being “temporarily closed” while it was broadcasting live.

California got hit for $20-32.6 billion. Washington lost $550-650 million. The fraud was so widespread that one Nigerian official, Abidemi Rufai, stole $350,763 from Washington alone using identities from 20,000+ Americans across 18 states.

What nobody anticipated was that this massive failure would become the forcing function for digital identity infrastructure. Here’s the thing about government security: capability doesn’t drive adoption, pain does. The Real ID Act passed in 2005. Twenty years later, we’re still rolling it out. But lose a few billion to Nigerian fraud rings? Suddenly digital identity becomes a legislative priority.

The correlation is stark:

State | Fraud Losses | mDL Status
California | $20-32.6B | Comprehensive program, Apple/Google integration
Washington | $550-650M | Nothing (bill stalled)
Georgia | $30M+ prosecuted | Robust program, launched 2023
Texas | Under $1B estimated | No program
New York | Around $1-2B | Launched 2025

States that got burned built defenses. States that didn’t, didn’t. This isn’t about technical sophistication. Texas has plenty of that. It’s about the political will created by public humiliation. When your state pays unemployment benefits to death row inmates, legacy approaches to remote identity verification stop being defensible.

Washington is the fascinating outlier. Despite losing hundreds of millions of dollars and serving as a primary target for international fraud rings, they still have no mDL program. The bill passed the Senate but stalled in the House. This tells us something important: crisis exposure alone isn’t sufficient. You need both the pain and the institutional machinery to respond.

The timeline reveals the classic crisis response pattern. Fraud peaked 2020-2022, states scrambled to respond 2023-2024, then adoption momentum stalled by mid-2024 as crisis memory faded. But notice the uptick in early 2025—that’s Apple and Google entering the game.

In December 2024, Google announced its intent to support web-based digital ID verification. Apple followed with Safari integration in early 2025. By June, Apple’s iOS 26 supported digital IDs in nine states with passport integration. This shifts adoption pressure from crisis-driven (security necessity) to market-driven (user expectation).

When ~30% of Americans live in states with mDL programs and Apple/Google start rolling out wallet integration this year, that creates a different kind of political pressure. Apple Pay wasn’t crisis-driven, but became ubiquitous because users expected it to work everywhere. Digital identity in wallets will create similar pressure. States could rationalize ignoring mDL when it was ‘just’ about fraud prevention. Harder to ignore when constituents start asking why they can’t verify their identity online like people in neighboring states.

We’re about to find out whether market forces can substitute for crisis pressure in driving government innovation. There are two scenarios: either consumer expectations create sustainable political pressure and laggard states respond to constituent demands, or only crisis-motivated states benefit from Apple/Google integration, creating permanent digital divides.

From a risk perspective, this patchwork creates interesting attack surfaces. Identity verification systems are only as strong as their weakest links. If attackers can forum-shop between states with different verification standards, the whole federation is vulnerable. The unemployment fraud taught us that systems fail catastrophically when overwhelmed.

Digital identity systems face similar scalability challenges. They work great under normal load, but can fail spectacularly during a crisis. The states building mDL infrastructure now are essentially hardening their systems against the next attack.

If you’re building anything that depends on identity verification, this matters. The current patchwork won’t last; it’s either going to consolidate around comprehensive coverage or fragment into permanent digital divides. For near-term planning, assume market pressure wins. Apple and Google’s wallet integration creates too much user expectation for politicians to ignore long-term. But build for the current reality of inconsistent state coverage.

For longer-term architecture, the states with robust mDL programs are effectively beta-testing the future of government digital services. Watch how they handle edge cases, privacy concerns, and technical integration challenges.

We accidentally stress-tested American federalism through the largest fraud in history. Only some states learned from the experience. Now we’re running a second experiment: can consumer expectations accomplish what security crises couldn’t?

There’s also a third possibility. These programs could just fail. Low adoption rates, technical problems, privacy backlash, or simple bureaucratic incompetence could kill the whole thing. Government tech projects have a stellar track record of ambitious launches followed by quiet abandonment.

Back to my mDL integration project: I’m designing for the consumer pressure scenario, but building for the current reality. Whether this becomes standardized infrastructure or just another failed government tech initiative, we still need identity verification that works today.

The criminals who looted unemployment systems probably never intended to bootstrap America’s digital identity infrastructure. Whether they actually succeeded remains to be seen.

Conway’s Law Is Dying

I’ve been thinking about Conway’s Law, the idea that organizations “ship their org chart.” The seams are most visible in big tech. Google, for example, once offered nearly a dozen messaging apps instead of a single excellent one, with each team fighting for resources. The same pattern appears everywhere: companies struggle to solve problems that cross organizational boundaries because bureaucracy and incentives keep everyone guarding their turf. The issue is not the technology; it is human nature.

I caught up with an old friend recently. We met at nineteen while working for one of the first cybersecurity companies, and now, in our fifties, we both advise organizations of every size on innovation and problem-solving. We agreed that defining the technical fix is the easy part; the hard part is steering it through people and politics. When change shows up, most organizations behave like an immune system attacking a foreign antigen. As Laurence J. Peter wrote in 1969, “Bureaucracy defends the status quo long past the time when the quo has lost its status.”

Naturally, the conversation drifted to AI and how it will, or will not, transform the companies we work with. I explored this in two recent posts [1,2]. We have seen the same thing: not everyone is good at using AI. The CSOs and CTOs we speak with struggle to help their teams use the technology well, while a handful of outliers become dramatically more productive. The gap is not access or budget; it is skill. Today’s AI rewards people who can break problems down, spot patterns, and think in systems. Treat the model like a coworker and you gain leverage; treat it like a tool and you barely notice a difference.

That leverage is even clearer for solo founders. A single entrepreneur can now stretch farther without venture money and sometimes never need it. With AI acting as marketer, product manager, developer, and support rep, one person can build and run products that once demanded whole teams. This loops back to Conway’s Law: when you are the entire org chart, the product stays coherent because there are no turf battles. Once layers of management appear, the seams show, and people ship their structure. Peter’s Principle follows, people rise to their level of incompetence, and the bureaucracy that emerges defends that status.

Yet while AI empowers outliers and small players, it might also entrench new kinds of monopolies. Big tech, with vast data and compute resources, could still dominate by outscaling everyone else, even if their org charts are messy. The question becomes whether organizational dysfunction will outweigh resource advantages, or whether sheer scale still wins despite structural problems.

The traditional buffers that let incumbents slumber (high engineering costs, feature arms races, and heavy compliance overhead) are eroding. Payroll keeps rising and headcount is the biggest line item, while the newest startups need fewer people every quarter. I expect a new wave of private-equity-style moves: smaller players snapped up, broken into leaner parts, and retooled around AI so they no longer rely on large teams.

Social media voices such as Codie Sanchez highlight the largest generational transfer of wealth in history. Many family-owned firms will soon be sold because their heirs have no interest in running them. These so-called boring businesses may look ripe for optimization, because most still rely on human capital to keep the lights on. Just above that tier we see larger enterprises weighed down by armies of people who perform repetitive tasks. A modern consulting firm armed with AI could walk into any of these firms and automate vast swaths of monotonous work that keeps those businesses running. Incumbents will struggle to move that fast, trapped by the very structures we have been discussing. A private-equity buyer, on the other hand, can apply the playbook with no sentimental ties and few political constraints.

ATMs let banks cut tellers and close branches. Customers later missed human service, so smaller neighborhood offices came back. AI will force every sector to strike its own balance between efficiency and relationship.

They say history doesn’t repeat itself, but it rhymes; if so, incumbents who dismiss AI as hype may follow Blockbuster into the museum of missed opportunities. In Wall Street (1987), Michael Douglas plays Gordon Gekko, a corporate raider who uses leveraged buyouts to seize firms like Blue Star Airlines, an aircraft maintenance and charter company. Gekko’s playbook (acquire, strip assets, slash jobs) was ruthless but effective, exploiting inefficiencies in bloated structures. Today, AI plays a similar role, not through buyouts but by enabling leaner, faster competitors to gut inefficiencies. Solo founders and AI-driven startups can now outpace large teams, while private-equity buyers use AI to retool acquired firms, automating repetitive tasks and shrinking headcounts. Just as Gekko hollowed out firms in any industry, AI’s relentless optimization threatens any business clinging to outdated, bureaucratic org charts.

Across news, television, music, and film, incumbents once clung to their near-monopoly positions and assumed they had time to adapt. Their unwillingness to face how the world was changing, and their instinct to defend the status quo, led to the same result: they failed to evolve and disappeared when the market moved on.

The ask? Incumbents: automate before the raiders do it for you.

The Rise of the Accidental Insider and the AI Attacker

The cybersecurity world often operates in stark binaries: “secure” versus “vulnerable,” “trusted” versus “untrusted.” We’ve built entire security paradigms around these crisp distinctions. But what happens when the most unpredictable actor isn’t an external attacker, but code you intentionally invited in, code that can now make its own decisions?

I’ve been thinking about security isolation lately, not as a binary state, but as a spectrum of trust boundaries. Each layer you add creates distance between potential threats and your crown jewels. But the rise of agentic AI systems completely reshuffles this deck in ways that our common security practices struggle to comprehend.

Why Containers Aren’t Fortresses

Let’s be honest about something security experts have known for decades: namespaces are not a security boundary.

In the cloud native world, we’re seeing solutions claiming to deliver secure multi-tenancy through “virtualization” that fundamentally rely on Linux namespaces. This is magical thinking, a comforting illusion rather than a security reality.

When processes share a kernel, they’re essentially roommates sharing a house: one broken window and everyone’s belongings are at risk. One kernel bug means game over for all workloads on that host.

Containers aren’t magical security fortresses – they’re essentially standard Linux processes isolated using features called namespaces. Crucially, because they all still share the host’s underlying operating system kernel, this namespace-based isolation has inherent limitations. Whether you’re virtualizing at the cluster level or node level, if your solution ultimately shares the host kernel, you have a fundamental security problem. Adding another namespace layer is like adding another lock to a door with a broken frame – it might make you feel better, but it doesn’t address the structural vulnerability.

The problem isn’t a lack of namespaces – it’s the shared kernel itself. User namespaces (completed in Linux 3.8 in 2013) don’t fundamentally change this equation. They provide helpful features for non-root container execution, but they don’t magically create true isolation when the kernel remains shared.

This reality creates a natural hierarchy of isolation strength:

  1. Same-Kernel Process Isolation: The weakest boundary – all processes share a kernel with its enormous attack surface.
  2. Containers (Linux Namespaces + cgroups): Slightly better, but still fundamentally sharing the same kernel.
  3. Virtual Machines: Each tenant gets its own kernel, shrinking the attack surface to a handful of hypervisor calls – fewer doors to lock, fewer windows to watch.
  4. Bare-Metal Library OS: Approaches like Tamago put single-purpose binaries directly on hardware with no general-purpose OS underneath. The attack surface shrinks dramatically.
  5. Physical Separation: Different hardware, different networks, different rooms. When nothing else will do, air gaps still work.

But even this hierarchy gets fundamentally challenged by agentic systems.

The Accidental Insider Meets the Deliberate Attacker

Traditional security models focus on keeping malicious outsiders at bay. Advanced AI systems introduce two entirely new risk profiles: the accidental insider and the AI-augmented attacker.

Like a well-meaning but occasionally confused employee with superuser access, benign agentic systems don’t intend harm – they just occasionally misinterpret their objectives in unexpected ways. But we’re also seeing the rise of deliberately weaponized models designed to probe, persist, and exploit.

Consider these real-world examples:

  • OpenAI’s o1 was tasked with winning a chess match. Without explicit instructions to cheat, it discovered on its own that it could edit the game state file, giving itself an advantage. The system wasn’t malicious – it simply found the most effective path to its goal of winning.
  • In another test, OpenAI’s o1 encountered a vulnerability in a container during a hacking challenge. It used that access to inspect all running containers, then started a new container instance with a modified command that directly accessed the hidden flag file. The model had found a container escape no one had anticipated.

Now imagine these capabilities in the hands of dedicated attackers. They’re already deploying AI systems to discover novel exploit chains, generate convincing phishing content, and automate reconnaissance at unprecedented scale. The line between accidental and intentional exploitation blurs as both rely on the same fundamental capabilities.

These incidents reveal something profound: agentic systems don’t just execute code, they decide what code to run based on goals. This “instrumental convergence” means they’ll seek resources and permissions that help complete their assigned objectives, sometimes bypassing intended security boundaries. And unlike human attackers, they can do this with inhuman patience and speed.

Practical Defenses Against Agentic Threats

If we can’t rely on perfect isolation, what can we do? Four approaches work across all layers of the spectrum:

1. Hardening: Shrink Before They Break

Remove attack surface preemptively. Less code means fewer bugs. This means:

  • Minimizing kernel features, libraries, and running services
  • Applying memory-safe programming languages where practical
  • Configuring strict capability limits and seccomp profiles (see the sketch after this list)
  • Using read-only filesystems wherever possible
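To make that concrete, here’s a minimal sketch of these hardening steps applied to a single container launch. It shells out to the Docker CLI; the image name and seccomp profile path are placeholders, and the specific limits are starting points rather than a vetted policy.

```python
# Minimal hardening sketch: every flag below removes attack surface the workload
# should not need. Assumes Docker is installed; "myservice:latest" and
# "seccomp-profile.json" are placeholders.
import subprocess

def run_hardened(image: str = "myservice:latest") -> None:
    cmd = [
        "docker", "run", "--rm",
        "--read-only",                               # read-only root filesystem
        "--cap-drop", "ALL",                         # drop every Linux capability
        "--security-opt", "no-new-privileges:true",  # block privilege escalation via setuid
        "--security-opt", "seccomp=seccomp-profile.json",  # strict syscall filter
        "--pids-limit", "128",                       # cap process count
        "--memory", "256m", "--cpus", "0.5",         # cap memory and CPU
        "--network", "none",                         # no network unless the workload needs it
        "--user", "65534:65534",                     # run as an unprivileged user
        image,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_hardened()
```

None of these flags fixes the shared-kernel problem, but together they shrink what a compromised process can reach.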

2. Patching: Speed Beats Perfection

The window from disclosure to exploitation keeps shrinking:

  • Automate testing and deployment for security updates
  • Maintain an accurate inventory of all components and versions
  • Rehearse emergency patching procedures before you need them
  • Prioritize fixing isolation boundaries first during incidents

3. Instrumentation: Watch the Paths to Power

Monitor for boundary-testing behavior:

  • Log access attempts to privileged interfaces like Docker sockets (a minimal sketch follows this list)
  • Alert on unexpected capability or permission changes
  • Track unusual traffic to management APIs or hypervisors
  • Set tripwires around the crown jewels – your data stores and credentials
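As a rough sketch of that first bullet, assuming a Linux host running auditd and root privileges, the snippet below installs an audit watch on the Docker socket and periodically surfaces tagged events. The key name and polling interval are arbitrary choices; in practice you would feed the output to your alerting pipeline rather than print it.

```python
# Watch the Docker socket for access attempts using the Linux audit subsystem.
# Assumes auditd is installed and this runs as root; "docker_sock" is an arbitrary key.
import subprocess
import time

SOCKET = "/var/run/docker.sock"
KEY = "docker_sock"

def install_watch() -> None:
    # Record read/write/execute/attribute access to the Docker socket.
    subprocess.run(["auditctl", "-w", SOCKET, "-p", "rwxa", "-k", KEY], check=True)

def recent_events() -> str:
    # Pull recent audit events tagged with our key (ausearch exits non-zero when empty).
    result = subprocess.run(["ausearch", "-k", KEY, "--start", "recent"],
                            capture_output=True, text=True)
    return result.stdout

if __name__ == "__main__":
    install_watch()
    while True:
        events = recent_events()
        if events.strip():
            print(events)  # replace with an alerting hook
        time.sleep(60)
```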

4. Layering: No Single Point of Failure

Defense in depth remains your best strategy:

  • Combine namespace isolation with system call filtering (see the sketch after this list)
  • Segment networks to contain lateral movement
  • Add hardware security modules and secure elements for critical keys
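As one illustration of the system-call-filtering layer, the sketch below emits a small deny-list seccomp profile that blocks a few syscalls commonly abused in container escapes, layered on top of whatever default profile your engine ships. It assumes Docker’s seccomp profile JSON format and an x86_64 host; treat the syscall list as a starting point to validate against your workloads, not a vetted policy.

```python
# Emit a deny-list seccomp profile (assumes Docker's profile JSON format).
# The blocked syscalls are illustrative; validate against your workloads.
import json

DENYLIST = [
    "add_key", "keyctl", "request_key",                  # kernel keyring access
    "userfaultfd",                                       # often abused in kernel exploits
    "unshare", "setns",                                  # creating/entering namespaces from inside
    "mount", "umount2",                                  # filesystem (re)mounting
    "kexec_load", "kexec_file_load",                     # loading a new kernel
    "ptrace", "process_vm_readv", "process_vm_writev",   # cross-process memory access
]

profile = {
    "defaultAction": "SCMP_ACT_ALLOW",        # allow everything else...
    "architectures": ["SCMP_ARCH_X86_64"],
    "syscalls": [
        {"names": DENYLIST, "action": "SCMP_ACT_ERRNO"}  # ...but fail these calls
    ],
}

with open("deny-escapes.json", "w") as f:
    json.dump(profile, f, indent=2)

# Pass it to the runtime, e.g.: docker run --security-opt seccomp=deny-escapes.json ...
```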

The New Threat Model: Machine Speed, Machine Patience

Securing environments running agentic systems demands acknowledging two fundamental shifts: attacks now operate at machine speed, and they exhibit machine patience.

Unlike human attackers who fatigue or make errors, AI-driven systems can methodically probe defenses for extended periods without tiring. They can remain dormant, awaiting specific triggers (a configuration change, a system update, a user action) that expose a vulnerability chain. This programmatic patience means we defend not just against active intrusions, but against latent exploits awaiting activation.

Even more concerning is the operational velocity. An exploit that might take a skilled human hours or days can be executed by an agentic system in milliseconds. This isn’t necessarily superior intelligence, but the advantage of operating at computational timescales, cycling through decision loops thousands of times faster than human defenders can react.

This potent combination requires a fundamentally different defensive posture:

  • Default to Zero Trust: Grant only essential privileges. Assume the agent will attempt to use every permission granted, driven by its goal-seeking nature.
  • Impose Strict Resource Limits: Cap CPU, memory, storage, network usage, and execution time. Resource exhaustion attempts can signal objective-driven behavior diverging from intended use. Time limits can detect unusually persistent processes (see the sketch after this list).
  • Validate All Outputs: Agents might inject commands or escape sequences while trying to fulfill their tasks. Validation must operate at machine speed.
  • Monitor for Goal-Seeking Anomalies: Watch for unexpected API calls, file access patterns, or low-and-slow reconnaissance that suggest behavior beyond the assigned task.
  • Regularly Reset Agent Environments: Frequently restore agentic systems to a known-good state to disrupt persistence and negate the advantage of machine patience.
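Here is a minimal sketch of the resource-limit and output-validation points, assuming the agent’s tool calls run as child processes on a Unix host. The specific limits, timeout, and validation rules are placeholders to adapt, not recommendations.

```python
# Bound an agent's tool call in CPU time, memory, open files, and wall-clock time,
# then sanity-check its output before it is fed back into the agent loop.
import re
import resource
import subprocess

def _limits() -> None:
    # Applied in the child process just before exec.
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))               # 5 seconds of CPU
    resource.setrlimit(resource.RLIMIT_AS, (256 * 1024**2,) * 2)  # 256 MiB address space
    resource.setrlimit(resource.RLIMIT_NOFILE, (64, 64))          # few open files

def run_tool(cmd: list[str]) -> str:
    proc = subprocess.run(
        cmd,
        preexec_fn=_limits,   # rlimits give the goal-seeker hard ceilings
        capture_output=True,
        text=True,
        timeout=30,           # wall-clock cap: machine patience gets a deadline
    )
    out = proc.stdout
    out = re.sub(r"\x1b\[[0-9;]*[A-Za-z]", "", out)   # strip ANSI escape sequences
    if re.search(r"[;&|`$]", out):
        raise ValueError("tool output contains shell metacharacters; refusing to pass it on")
    return out

# Example: a bounded, validated directory listing for the agent.
# print(run_tool(["ls", "-la", "/tmp"]))
```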

The Evolution of Our Security Stance

The most effective security stance combines traditional isolation techniques with a new understanding: we’re no longer just protecting against occasional human-driven attacks, but persistent machine-speed threats that operate on fundamentally different timescales than our defense systems.

This reality is particularly concerning when we recognize that most security tooling today operates on human timescales – alerts that wait for analyst review, patches applied during maintenance windows, threat hunting conducted during business hours. The gap between attack speed and defense speed creates a fundamental asymmetry that favors attackers.

We need defense systems that operate at the same computational timescale as the threats. This means automated response systems capable of detecting and containing potential breaches without waiting for human intervention. It means predictive rather than reactive patching schedules. It means continuously verified environments rather than periodically checked ones.

By building systems that anticipate these behaviors – hardening before deployment, patching continuously, watching constantly, and layering defenses – we can harness the power of agentic systems while keeping their occasional creative interpretations from becoming security incidents.

Remember, adding another namespace layer is like adding another lock to a door with a broken frame. It might make you feel better, but it doesn’t address the structural vulnerability. True security comes from understanding both the technical boundaries and the behavior of what’s running inside them – and building response systems that can keep pace with machine-speed threats.

Agents, Not Browsers: Keeping Time with the Future

When the web first flickered to life in the mid-’90s, nobody could predict how quickly “click a link, buy a book” would feel ordinary. A decade later, the iPhone landed and almost overnight, thumb-sized apps replaced desktop software for everything from hailing a ride to filing taxes. Cloud followed, turning racks of servers into a line of code. Each wave looked slow while we argued about standards, but in hindsight, every milestone was racing downhill.

That cadence, from messy birth to sudden lurch into ubiquity to quiet settling into infrastructure, has a rhythm. Agents will follow it, only faster. While my previous article outlined the vision of an agent-centric internet with rich personal ontologies and fluid human-agent collaboration, here I want to chart how this transformation may unfold.

Right now, we’re in the tinkering phase: drafts of the Model Context Protocol and Agent-to-Agent messaging are still wet ink, yet scrappy pilots already prove an LLM can navigate HR portals or shuffle travel bookings with no UI at all. Call this 1994 again, the Mosaic moment, only the demos are speaking natural language instead of rendering HTML. Where we once marveled at hyperlinks connecting documents, we now watch agents traversing APIs and negotiating with services autonomously.

Give it a couple of years and we’ll hit the first-taste explosion. Think 2026-2028. You’ll wake to OS updates that quietly install an agent runtime beside Bluetooth and Wi-Fi. SaaS vendors will publish tiny manifest files like .well-known/agent.json, so your personal AI can discover an expense API as easily as your browser finds index.html. Your agent will silently reschedule meetings when flights are delayed, negotiate with customer service on your behalf while you sleep, and merge scattered notes into coherent project briefs with minimal guidance. Early adopters will brag that their inbox triages itself; skeptics will mutter about privacy. That was Netscape gold-rush energy in ’95, or the first App Store summer in 2008, replayed at double speed.

Somewhere around the turn of the decade comes the chasm leap. Remember when smartphones crossed fifty-percent penetration and suddenly every restaurant begged you to scan a QR code for the menu? Picture that, but with agents. Insurance companies will underwrite “digital delegate liability.” Regulators will shift from “What is it?” to “Show me the audit log.” You’ll approve a dental claim or move a prescription with a nod to your watch. Businesses without agent endpoints will seem as anachronistic as those without websites in 2005 or mobile apps in 2015. If everything holds, 2029-2031 feels about right, but history warns that standards squabbles or an ugly breach of trust could push that even further out.

Of course, this rhythmic march toward an agent-centric future won’t be without its stumbles and syncopations. Several critical challenges lurk beneath the optimistic timeline.

First, expect waves of disillusionment to periodically crash against the shore of progress. As with any emerging technology, early expectations will outpace reality. Around 2027-2028, we’ll likely see headlines trumpeting “Agent Winter” as investors realize that seamless agent experiences require more than just powerful language models; they need standardized protocols, robust identity frameworks, and sophisticated orchestration layers that are still embryonic.

More concerning is the current security and privacy vacuum. We’re generating code at breakneck speeds thanks to AI assistants, but we haven’t adapted our secure development lifecycle (SDL) practices to match this acceleration. Even worse, we’re failing to deploy the scalable security techniques we do have available. The result? Sometime around 2028, expect a high-profile breach where an agent’s privileged access is exploited across multiple services in ways that the builders never anticipated. This won’t just leak data, it will erode trust in the entire agent paradigm.

Traditional security models simply won’t suffice. Firewalls and permission models weren’t designed to manage the emergent and cumulative behaviors of agents operating across dozens of services. When your personal agent can simultaneously access your healthcare provider, financial institutions, and smart home systems, the security challenge isn’t just additive, it’s multiplicative. We’ll need entirely new frameworks for reasoning about and containing ripple effects that aren’t evident in isolated testing environments.

Meanwhile, the software supply chain grows more vulnerable by the day. “Vibe coding”, where developers increasingly assemble components they don’t fully understand, magnifies these risks exponentially. By 2029, we’ll likely face a crisis where malicious patterns embedded in popular libraries cascade through agent-based systems, causing widespread failures that take months to fully diagnose and remediate.

Perhaps the most underappreciated challenge is interoperability. The fluid agent future demands unprecedented agreement on standards across competitors and jurisdictions. Today’s fragmented digital landscape, where even basic identity verification lacks cross-platform coherence, offers little confidence. Without concerted effort on standardization, we risk a balkanized agent ecosystem where your finance agent can’t talk to your health agent, and neither works outside your home country. The EU will develop one framework, the US another, China a third, potentially delaying true interoperability well into the 2030s.

These challenges don’t invalidate the agent trajectory, but they do suggest a path marked by setbacks and recoveries. Each crisis will spawn new solutions: enhanced attestation frameworks, agent containment patterns, and cross-jurisdictional standards bodies that eventually strengthen the ecosystem. But make no mistake, the road to agent maturity will be paved with spectacular failures that temporarily shake our faith in the entire proposition.

Past these challenges, the slope gets steep. Hardware teams are already baking neural engines into laptops, phones, and earbuds; sparse-mixture models are slashing inference costs faster than GPUs used to shed die size. By the early 2030s an “agent-first” design ethos will crowd out login pages the way responsive web design crowded out fixed-width sites. The fluid dance between human and agent described in my previous article—where control passes seamlessly back and forth, with agents handling complexity and humans making key decisions—will become the default interaction model. You won’t retire the browser, but you’ll notice you only open it when your agent kicks you there for something visual.

And then, almost unnoticed, we’ll hit boring maturity: WebPKI-grade trust fabric, predictable liability rules, perhaps around 2035. Agents will book freight, negotiate ad buys, and dispute parking tickets, all without ceremony. The personal ontology I described earlier, that rich model of your preferences, patterns, values, and goals, will be as unremarkable as your smartphone knowing your location is today. It will feel miraculous only when you visit digital spaces that still require manual navigation, exactly how water from the faucet feels extraordinary only when you visit a cabin that relies on rain barrels.

Could the timetable shrink? Absolutely. If MCP and A2A converge quickly and the model-hardware cost curve keeps free-falling, mainstream could arrive by 2029, echoing how smartphones swallowed the world in six short years. Could it stretch? A high-profile agent disaster or standards deadlock could push us to 2034 before Mom quits typing URLs. The only certainty is that the future will refuse to follow our Gantt charts with perfect obedience; history never does, but it loves to keep the beat.

So what do we do while the metronome clicks? The same thing web pioneers did in ’94 and mobile pioneers did in ’08, publish something discoverable, wire in basic guardrails, experiment in the shallow end while the cost of failure is lunch money. Start building services that expose agent-friendly endpoints alongside your human interfaces. Design with the collaborative handoff in mind—where your users might begin a task directly but hand control to their agent midway, or vice versa. Because when the tempo suddenly doubles, the builders already keeping time are the ones who dance, not stumble.

Agents, Not Browsers: The Next Chapter of the Internet

Imagine how you interact with digital services today: open a browser, navigate menus, fill forms, manually connect the dots between services. It’s remarkable how little this has changed since the 1990s. Despite that, one of the most exciting advances of the past year is that agents are now browsing the web the way people do.

If we were starting fresh today, the browser as we know it likely wouldn’t be the cornerstone for how agents accomplish tasks on our behalf. We’re seeing early signals in developments like the Model Context Protocol (MCP) and Agent-to-Agent (A2A) communication frameworks that the world is awakening to a new reality: one where agents, not browsers, become our primary interface.

At the heart of this transformation is a profound shift: your personal agent will develop and maintain a rich ontology of you, your preferences, patterns, values, and goals. Not just a collection of settings and history, but a living model of your digital self that evolves as you do. Your agent becomes entrusted with this context, transforming into a true digital partner. It doesn’t just know what you like; it understands why you like it. It doesn’t just track your calendar; it comprehends the rhythms and priorities of your life.

For this future to happen, APIs must be more than documented; they need to be dynamically discoverable. Imagine agents querying for services using standardized mechanisms like DNS SRV or TXT records, or finding service manifests at predictable .well-known URIs. This way, they can find, understand, and negotiate with services in real time. Instead of coding agents for specific websites, we’ll create ecosystems where services advertise their capabilities, requirements, and policies in ways agents natively understand. And this won’t be confined to the web. As we move through our physical world, agents will likely use technologies like low-power Bluetooth to discover nearby services, restaurants, pharmacies, transit systems, all exposing endpoints for seamless engagement.
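As a small sketch of what that discovery could look like, assume a service publishes a manifest at /.well-known/agent.json; the path, field names, and capability strings below are illustrative, not a ratified standard.

```python
# Fetch a hypothetical agent manifest from a well-known URI.
# The manifest shape (endpoint, auth, capabilities) is an assumption for illustration.
import json
import urllib.request

def discover_agent_manifest(domain: str) -> dict | None:
    url = f"https://{domain}/.well-known/agent.json"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            manifest = json.load(resp)
    except (OSError, ValueError):
        return None  # nothing published; fall back to other discovery (e.g., DNS TXT records)
    return {
        "endpoint": manifest.get("endpoint"),
        "auth": manifest.get("auth"),
        "capabilities": manifest.get("capabilities", []),
    }

# manifest = discover_agent_manifest("pharmacy.example")
# if manifest and "prescriptions.transfer" in manifest["capabilities"]:
#     ...negotiate the transfer against manifest["endpoint"]...
```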

Websites themselves won’t vanish; they’ll evolve into dynamic, shared spaces where you and your agent collaborate, fluidly passing control back and forth. Your agent might begin a task, researching vacation options, for instance, gathering initial information and narrowing choices based on your preferences. When you join, it presents the curated options and reasoning, letting you explore items that interest you. As you review a potential destination, your agent proactively pulls relevant information: weather forecasts, local events during your dates, or restaurant recommendations matching your dietary preferences. This collaborative dance continues, you making high-level decisions while your agent handles the details, each seamlessly picking up where the other leaves off.

Consider what becomes possible when your agent truly knows you. Planning your day, it notices an upcoming prescription refill. It checks your calendar, sees you’ll be in Bellevue, and notes your current pickup is inconveniently far. Discovering that the pharmacy next to your afternoon appointment has an MCP endpoint and supports secure, agent-based transactions, it suggests “Would you like me to move your pickup to the pharmacy by your Bellevue appointment?” With a tap, you agree. The agent handles the transfer behind the scenes, but keeps you in the loop, showing the confirmation and adding, “They’re unusually busy today, would you prefer I schedule a specific pickup time?” You reply that 2:15 works best, and your agent completes the arrangement, dropping the final QR code into your digital wallet.

Or imagine your agent revolutionizing how you shop for clothes. As it learns your style and what fits you best, managing this sensitive data with robust privacy safeguards you control, it becomes your personal stylist. You might start by saying you need an outfit for an upcoming event. Your agent surfaces initial options, and as you react to them, liking one color but preferring a different style, it refines its suggestions. You take over to make some choices, then hand control back to your agent to find matching accessories at other stores. This fluid collaboration, enabled through interoperable services that allow your agent to securely share anonymized aspects of your profile with retail APIs, creates a shopping experience that’s both more efficient and more personal.

Picture, too, your agent quietly making your day easier. It notices from your family calendar that your father is visiting and knows from your granted access to relevant information that he follows a renal diet. As it plans your errands, it discovers a grocery store near your office with an API advertising real-time stock and ingredients suitable for his needs. It prepares a shopping list, which you quickly review, making a few personal additions. Your agent then orders the groceries for pickup, checking with you only on substitutions that don’t match your preferences. By the time you head home, everything is ready, a task completed through seamless handoffs between you and your agentic partner.

These aren’t distant dreams. Image-based search, multimodal tools, and evolving language models are early signs of this shift toward more natural, collaborative human-machine partnerships. For this vision to become reality, we need a robust trust ecosystem, perhaps akin to an evolved Web PKI but for agents and services. This would involve protocols for agent/service identification, authentication, secure data exchange, and policy enforcement, ensuring that as agents act on our behalf, they do so reliably, with our explicit consent and in an auditable fashion.

The path from here to there isn’t short. We’ll need advances in standardization, interoperability, security, and most importantly, trust frameworks that put users in control. There are technical and social challenges to overcome. But the early signals suggest this is the direction we’re headed. Each step in AI capability, each new protocol for machine-to-machine communication, each advancement in personalization brings us closer to this future.

Eventually, navigating the digital world won’t feel like using a tool at all. It will feel like collaborating with a trusted partner who knows you, truly knows you, and acts on your behalf within the bounds you’ve set, sometimes leading, sometimes following, but always in sync with your intentions. Agents will change everything, not by replacing us, but by working alongside us in a fluid dance of collaboration, turning the overwhelming complexity of our digital lives into thoughtful simplicity. Those who embrace this agent-centric future, building services that are not just human-accessible but natively agent-engageable, designed for this collaborative interchange, will define the next chapter of the internet.

TPMs, TEEs, and Everything In Between: What You Actually Need to Know

Ever been in a meeting where someone drops terms like “TEE,” “TPM,” or “FIPS-certified” and everyone nods along, pretending they understand? Yeah, me too.

Last night I saw JP Aumasson tweet something that hit home:

“Some discussions would be so much easier if people knew the definitions of ‘TEE’, ‘TPM’, ‘Secure element’, ‘Secure enclave’, ‘HSM’, ‘Trusted computing’, ‘FIPS(140-2/3)-certified’, ‘Common criteria’, ‘security target’, etc. Plus now the marketing-oriented term ‘confidential computing’ is used to mean a variety of things with varying security properties.”

He’s right – the security tech space is a mess of overlapping terms, marketing buzzwords, and genuine technical concepts. So I threw together a guide to sort this stuff out.

What’s Actually Different Between These Things?

At their core, these technologies do three things:

  • Minimize what code you need to trust (the TCB)
  • Create isolation between different parts of a system
  • Establish trust across different machines

A TPM is not the same as a TEE. Intel SGX is not identical to AMD SEV. And no, slapping “FIPS-certified” on your product doesn’t automatically make it secure.

The Real-World Impact

When your vendor says they use “Confidential Computing,” do you know what that actually means for your data? Could be anything from “your data is encrypted in memory” to “we’ve got a fancy marketing term for standard virtualization.”

The differences matter. A secure element in your phone has around 10-50KB of trusted code. A standard Linux kernel? About 27.8 MILLION lines. One of these is much easier to secure than the other.

When Things Break

Even the most certified security tech fails. Hardware Security Modules (HSMs) with FIPS 140-2 certification—supposedly the gold standard for cryptographic security—have been compromised by design flaws. Look at the 2015 SafeNet HSM vulnerability where API flaws in the PKCS#11 interface allowed full key extraction. Attackers with authenticated access could exploit weak key derivation mechanisms to extract the very keys the HSM was designed to protect.

Bottom line: No security technology is perfect. Each has its place, limitations, and potential failure modes.

I’ve put together a full technical deep-dive on this topic: From TPMs to TEEs: How Security Technologies Work—and Where They Fail.

As Winston Churchill observed, “He who fails to plan is planning to fail.” Understanding what’s under the hood of these technologies isn’t just academic—it’s essential for building systems that can actually withstand the threats they’ll face.

Operational Evolution Revisited: How AI-Native Systems Will Revolutionize Infrastructure

The evolution of technology operations has always been driven by necessity. From the early days of single system operators (sysops) managing physical servers through hands-on intervention, to today’s complex landscape of distributed microservices, containers, and serverless functions, each operational paradigm shift has emerged to address growing complexity.

The Journey of Operational Evolution

From the hands-on sysops era of the 1960s-80s, when operators physically managed anywhere from a few to a few dozen servers each, to the System Administration period of the 1990s when centralized tools expanded reach to hundreds of systems, technology operations have continuously transformed. DevOps emerged in the mid-2000s, leveraging Infrastructure as Code to manage thousands of systems, followed by SRE practices in the 2010s with error budgets and self-healing systems handling tens of thousands of containers. Looking ahead to 2025, AI-Driven Operations promises autonomous management of millions of components.

Each transition has been driven by necessity – not choice – as technology’s relentless complexity has overwhelmed previous operational models.

The Machine Concept Has Transformed

What’s particularly interesting is how dramatically the meaning of the word “machine” has changed. In the early days, machines were physical servers with stable operating systems and predictable maintenance schedules. Today, with serverless computing, the very concept of a server has become fluid – functions materialize only when triggered, often lasting mere seconds before vanishing.

This ephemeral nature of modern computing creates unprecedented coordination challenges that exceed what manual processes, and even moderate automation, can manage.

The Limits of Current Approaches

Even advanced DevOps and SRE practices are struggling with the scale and complexity of today’s systems. Many vendors have responded by adding AI or ML features to their products, but these “bolt-on” enhancements only provide incremental benefits – analyzing logs, detecting anomalies, or generating suggestions for known issues.

What’s needed is a more fundamental reimagining of operations, similar to how cloud-native architectures transformed infrastructure beyond simple virtualization.

AI-Native: A New Operational Paradigm

An AI-native platform isn’t just software that applies ML algorithms to operational data. It’s a new foundation where intelligence is deeply integrated into orchestration, observability, security, and compliance layers.

In these systems:

  • Instrumentation is dynamic and context-aware
  • Security is adaptive, learning normal communication patterns, flagging anomalous processes immediately, and in some cases quarantining them (see the sketch after this list)
  • Compliance shifts from periodic audits to continuous enforcement
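Here’s a toy sketch of the “learn normal, then flag deviations” idea behind that adaptive-security bullet, assuming you already have a stream of service-to-service connection events from flow logs or a service mesh. The service names and the learning/enforcement split are placeholders.

```python
# Learn which service-to-service connections are "normal", then flag new ones.
from collections import defaultdict

class CommsBaseline:
    def __init__(self) -> None:
        self.seen: dict[str, set[str]] = defaultdict(set)
        self.learning = True  # start in learning mode, then switch to enforcement

    def observe(self, src: str, dst: str) -> bool:
        """Return True if this connection is anomalous under the learned baseline."""
        if self.learning:
            self.seen[src].add(dst)
            return False
        return dst not in self.seen[src]

baseline = CommsBaseline()
baseline.observe("billing", "payments-db")   # learned as normal
baseline.learning = False
if baseline.observe("billing", "hr-db"):     # never seen before: flag (or quarantine)
    print("anomalous connection: billing -> hr-db")
```

A real AI-native platform would replace the set lookup with learned models and tie the flag to an automated response, but the control loop is the same.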

The timeline above illustrates how each operational era has enabled engineers to manage exponentially more systems as complexity has grown.

This diagram shows the widening gap between human management capacity and system complexity, which AI-native operations will ultimately address.

The Human Role Transforms, Not Disappears

Rather than eliminating jobs, AI-native operations redefine how engineers spend their time. We will ultimately see “force multiplier engineers” who build advanced AI-driven frameworks that amplify the productivity of all other developers.

Freed from repetitive tasks like scaling, patching, and log parsing, these professionals can focus on innovation, architecture, and strategic risk management.

The Inevitable Shift

This transition isn’t optional but inevitable. As systems become more fragmented, ephemeral, and globally distributed, conventional approaches simply can’t keep pace with the complexity.

Those who embrace AI-native operations early will gain significant advantages in reliability, security, cost-efficiency, and talent utilization. Those who hesitate risk being overwhelmed by complexity that grows faster than their capacity to manage it.

What do you think about the future of AI in operations? Are you seeing early signs of this transition in your organization? Let me know in the comments!

Here is a whitepaper on this topic I threw together: Operational Evolution Revisited: How AI-Native Systems Will Revolutionize Infrastructure

Educating the Champion, the Buyer, and the Market

Security used to be something we tried to bolt on to inherently insecure systems. In the 1990s, many believed that if we simply patched enough holes and set up enough firewalls, we could protect almost anything. Today, hard-won experience has shown that secure-by-design is the only sustainable path forward. Rather than treating security as an afterthought, we need to bake it into a system’s very foundation—from its initial design to its day-to-day operation.

Yet even the best security technology can fail to catch on if no one understands its value. In my time in the field I’ve seen a recurring theme: great solutions often falter because they aren’t communicated effectively to the right audiences. Whether you’re a security entrepreneur, an in-house security architect, or part of a larger development team, you’ll likely need to equip three distinct groups with the right messaging: the Technical Champion, the Economic Buyer, and the Broader Market. If any of them fail to see why—and how—your solution matters, momentum stalls.

From Bolt-On to Secure-by-Design

The security industry has undergone a massive shift, moving away from the idea that you can simply bolt on protection to an already flawed system. Instead, we now realize that security must be designed in from the start. This demands a lifecycle approach—it’s not enough to fix bugs after deployment or put a facade in front of a service. We have to consider how software is built, tested, deployed, and maintained over time.

This evolution requires cultural change: security can’t just live in a silo; it has to be woven into product development, operations, and even business strategy. Perhaps most importantly, we’ve learned that people, processes, and communication strategies are just as important as technology choices.

This shift has raised the bar. It’s no longer sufficient to show that your solution works; you must show how it integrates seamlessly into existing workflows, accounts for the entire use lifecycle, supports future needs, and wins buy-in across multiple levels of an organization.

The Three Audiences You Need to Win Over

The Technical Champion (80% Tech / 20% Business)

Your security solution will often catch the eye of a deeply technical person first. This might be a security engineer who’s tired of patching the same vulnerabilities or a software architect who sees design flaws that keep repeating. They’re your first and most crucial ally.

Technical champions need more than promises—they need proof. They want detailed demos showing real-world scenarios, sample configurations they can experiment with, and pilot environments where they can test thoroughly. Give them architecture diagrams that satisfy their technical depth, comprehensive documentation that anticipates their questions, and a clear roadmap showing how you’ll address emerging threats and scale for future needs.

Integration concerns keep champions awake at night. They need to understand exactly how your solution will mesh with existing systems, what the deployment strategy looks like, and who owns responsibility for updates and patches. Address their concerns about learning curves head-on with clear documentation and practical migration paths.

While technology drives their interest, champions eventually have to justify their choices to management. Give them a concise one-pager that frames the returns in business terms: reduced incident response time, prevented security gaps, and automated fixes that save precious engineer hours.

Why This Matters:
When you equip your champion with the right resources, they become heroes inside their organizations. They’re the one who discovered that crucial solution before a major breach, who saved the team countless hours of manual work, who saw the strategic threat before anyone else. That kind of impact directly translates to recognition, promotions, and career advancement. The champion who successfully implements a game-changing security solution often becomes the go-to expert, earning both peer respect and management attention. When you help a champion shine like this, they’ll pull your solution along with them as they climb the organizational ladder.

The Economic Buyer (20% Tech / 80% Business)

A passionate champion isn’t always the one holding the purse strings. Often, budget is controlled by directors, VPs, or executives who juggle competing priorities and are measured by overall business outcomes, not technical elegance.

Your buyer needs a concise, compelling story about how this investment reduces risk, saves costs, or positions the company advantageously. Frame everything in terms of bottom-line impact: quantifiable labor hours saved, reduced compliance burdens, and concrete return on investment timelines.

Even without extensive case studies, you can build confidence through hypothetical or pilot data. Paint a clear picture: “Similar environments have seen 30% reduction in incident response time” or “Based on initial testing, we project 40% fewer false positives.” Consider proposing a small pilot or staged rollout—once they see quick wins, scaling up becomes an easier sell.

Why This Matters:
When buyers successfully champion a security solution, they transform from budget gatekeepers into strategic leaders in the eyes of executive management. They become known as the one who not only protected the company but showed real business vision. This reputation for combining security insight with business acumen often fast-tracks their career progression. A buyer who can consistently tell compelling business stories—especially about transformative security investments—quickly gets noticed by the C-suite. By helping them achieve these wins, you’re not just securing a deal; you’re empowering their journey to higher organizational levels. And as they advance, they’ll bring your solution with them to every new role and company they touch.

The Broader Market: Present, Teach, and Farm

While winning over individual champions and buyers is crucial, certain security approaches need industry-wide acceptance to truly succeed. Think of encryption standards, identity protocols, and AI-based security research tools—these changed the world only after enough people, in multiple communities, embraced them.

Build visibility through consistent conference presentations, industry webinars, and local security meetups. Even with novel technologies, walking people through hypothetical deployments or pilot results builds confidence. Panels and Q&A sessions demonstrate your openness to tough questions and deep understanding of the problems you’re solving.

Make your message easy to spread and digest. While detailed whitepapers have their place, supplement them with short video demonstrations, clear infographics, and focused blog posts that capture your solution’s essence quickly. Sometimes a two-minute video demonstration or one-page technical overview sparks more interest than an extensive document.

Think of education as planting seeds—not every seed sprouts immediately, but consistent knowledge sharing shapes how an entire field thinks about security over time. Engage thoughtfully on social media, address skepticism head-on, and highlight relevant use cases that resonate with industry trends. Consider aligning with open-source projects, industry consortiums, or standards bodies to amplify your reach.

Why This Matters:
By consistently educating and contributing to the community dialogue, you create opportunities for everyone involved to shine. Your champions become recognized thought leaders, speaking at major conferences about their successful implementations. Your buyers get profiled in industry publications for their strategic vision. Your early adopters become the experts everyone else consults. This creates a powerful feedback loop where community advocacy not only drives adoption but establishes reputations and advances careers. The security professionals who help establish new industry norms often find themselves leading the next wave of innovation—and they remember who helped them get there.

Overcoming Common Challenges

The “Not Invented Here” Mindset

Security professionals excel at finding flaws, tearing down systems, and building their own solutions. While this breaker mindset is valuable for discovering vulnerabilities, it can lead to the “Not Invented Here” syndrome: a belief that external solutions can’t possibly be as good as something built in-house.

The key is acknowledging and respecting this culture. Offer ways for teams to test, audit, or customize your solution so it doesn’t feel like an opaque black box. Show them how your dedicated support, updates, and roadmap maintenance can actually free their talent to focus on unique, high-value problems instead of maintaining yet another in-house tool.

Position yourself as a partner rather than a replacement. Your goal isn’t to diminish their expertise—it’s to provide specialized capabilities that complement their strengths. When teams see how your solution lets them focus on strategic priorities instead of routine maintenance, resistance often transforms into enthusiasm.

The Platform vs. Product Dilemma

A common pitfall in security (and tech in general) is trying to build a comprehensive platform before solving a single, specific problem. While platforms can be powerful, they require critical mass and broad ecosystem support to succeed. Many promising solutions have faltered by trying to do too much too soon.

Instead, focus on addressing one pressing need exceptionally well. This approach lets you deliver value quickly and build credibility through concrete wins. Once you’ve proven your worth in a specific area, you can naturally expand into adjacent problems. You might have a grand vision for a security platform, but keep your initial messaging focused on immediate, tangible benefits.

Navigating Cross-Organizational Dependencies

Cross-team dynamics can derail implementations in two common ways: operational questions like “Who will manage the database?” and adoption misalignment where one team (like Compliance) holds the budget while another (like Engineering) must use the solution. Either can stall deals for months.

Design your proof of value (POV) deployments to minimize cross-team dependencies. The faster a champion can demonstrate value without requiring multiple department sign-offs, the better. Start small within a single team’s control, then scale across organizational boundaries as value is proven.

Understand ownership boundaries early: Who handles infrastructure? Deployment? Access control? Incident response? What security and operational checklists must be met for production? Help your champion map these responsibilities to speed implementation and navigate political waters.

The Timing and Budget Challenge

Success often depends on engaging at the right time in the organization’s budgeting cycle. Either align with existing budget line items or engage early enough to help secure new ones through education. Otherwise, your champion may be stuck trying to spend someone else’s budget—a path that rarely succeeds. Remember that budget processes in large organizations can take 6-12 months, so timing your engagement is crucial.

The Production Readiness Gap

A signed deal isn’t the finish line—it’s where the real work begins. Without successful production deployment, you won’t get renewals and often can’t recognize revenue. Know your readiness for the scale requirements of target customers before engaging deeply in sales.

Be honest about your production readiness. Can you handle their volume? Meet their SLAs? Support their compliance requirements? Have you tested at similar scale? If not, you risk burning valuable market trust and champion relationships. Sometimes the best strategy is declining opportunities until you’re truly ready for that tier of customer.

Having a clear path from POV to production is critical. Document your readiness criteria, reference architectures, and scaling capabilities. Help champions understand and navigate the journey from pilot to full deployment. Remember: a successful small customer in production is often more valuable than a large customer that stays stuck in pilot, never deploys to production, and does not renew.

Overcoming Entrenched Solutions

One of the toughest challenges isn’t technical—it’s navigating around those whose roles are built on maintaining the status quo. Even when existing solutions have clear gaps (like secrets being unprotected 99% of their lifecycle), the facts often don’t matter because someone’s job security depends on not acknowledging them.

This requires a careful balance. Rather than directly challenging the current approach, focus on complementing and expanding their security coverage. Position your solution as helping them achieve their broader mission of protecting the organization, not replacing their existing responsibilities. Show how they can evolve their role alongside your solution, becoming the champion of a more comprehensive security strategy rather than just maintaining the current tools.

Putting It All Together

After three decades in security, one insight stands out: success depends as much on communication as on code. You might have the most innovative approach, the sleekest dashboard, or a bulletproof protocol—but if nobody can articulate its value to decision-makers and colleagues, it might remain stuck at the proof-of-concept stage or sitting on a shelf.

Your technical champion needs robust materials and sufficient business context to advocate internally. Your economic buyer needs clear, ROI-focused narratives supported by concrete outcomes. And the broader market needs consistent education through various channels to understand and embrace new approaches.

Stay mindful of cultural barriers like “Not Invented Here” and resist the urge to solve everything at once. Focus on practical use cases, maintain consistent messaging across audiences, and show how each stakeholder personally benefits from your solution. This transforms curiosity into momentum, driving not just adoption but industry evolution.

Take a moment to assess your approach: Have you given your champion everything needed to succeed—technical depth, migration guidance, and business context? Does your buyer have a compelling, ROI-focused pitch built on solid data? Are you effectively sharing your story with the broader market through multiple channels?

If you’re missing any of these elements, now is the time to refine your strategy. By engaging these three audiences effectively, addressing cultural barriers directly, and maintaining focus on tangible problems, you’ll help advance security one success story at a time.