The Fallacy of Alignment: Why AI Safety Needs Structure, Not Hope

My grandfather’s love of science fiction was his portal to tomorrow’s world—and it became mine. Together we’d pore over books like Asimov’s I, Robot, imagining futures shaped by machines. In the 1940s, when Asimov explored the complexities of artificial intelligence and human-robot relationships, it was pure speculation. By the 2000s, Hollywood had adapted these ideas into films where robots went rogue. Now, in the 2020s, the narrative has flipped—The Creator (2023) depicts a future where humanity, driven by fear, attempts to exterminate all AI. Unlike Asimov’s cautionary tales, where danger emerged from technology’s unintended consequences, this film casts humanity itself as the villain. This shift mirrors a broader cultural change: once, we feared what we might create; now, we fear who we have been.

As a security practitioner, I watch this evolution with unease, especially as robotics and machine learning systems grow ever more autonomous. Today’s dominant approach to AI safety relies on alignment and reinforcement learning—a strategy that aims to shape AI behavior through incentives and training. However, this method falls prey to a well-known phenomenon in optimization known as Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure. In the context of AI alignment, if the reward signal is our measure of success, over-optimization can lead to unintended, and often absurd, behaviors—exactly because the reward function cannot capture every nuance of our true values.
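A toy numerical sketch makes the Goodhart failure mode concrete. The functions and numbers below are invented purely for illustration: a proxy reward that tracks true value at first, then diverges from it under heavy optimization.

```python
# Toy illustration of Goodhart's Law: a proxy reward that correlates
# with true value early on, but diverges under heavy optimization.
# All functions and numbers here are invented for illustration.

def true_value(x: float) -> float:
    """What we actually care about: peaks at x = 5, declines after."""
    return x - 0.1 * x * x

def proxy_reward(x: float) -> float:
    """The measurable stand-in we optimize: grows without bound."""
    return x

candidates = [i * 0.5 for i in range(41)]  # candidate "policies" 0.0 .. 20.0

best_by_proxy = max(candidates, key=proxy_reward)
best_by_value = max(candidates, key=true_value)

print(f"optimizing the proxy picks x = {best_by_proxy}")                     # 20.0
print(f"optimizing true value picks x = {best_by_value}")                    # 5.0
print(f"true value at the proxy optimum: {true_value(best_by_proxy):.1f}")   # -20.0
```

Pushing the proxy as far as it will go doesn’t just miss the true optimum; it actively destroys value—the essence of reward over-optimization.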

Much like early reinforcement learning schemes, Asimov’s Three Laws were a structural control mechanism—designed not to guide morality but to constrain outcomes. They, too, failed in unexpected ways when the complexity of real-world scenarios outstripped their simplistic formulations.

This raises a deeper question: If we now view ourselves as the existential threat, can we truly build AI that serves us? Or will our fears—whether of AI or of our own past—undermine the future we once dreamed of?

Today’s creators display a similar hubris. Once, we feared losing control of our inventions; now, we charge ahead, convinced that our intelligence alone can govern machines far more complex than we understand. But intelligence is not equivalent to control. While Asimov’s Three Laws attempted to impose hard limits, many modern AI safety strategies lean on alignment methods that, as Goodhart’s Law warns us, can degrade once a target is set.

This blind trust in alignment resembles our current approach to security. The slogan “security is everyone’s responsibility” was meant to foster vigilance but often dilutes accountability. When responsibility is diffuse, clear, enforceable safeguards are frequently absent. True security—and true AI governance—demands more than shared awareness; it requires structural enforcement. Without built-in mechanisms of control, we risk mistaking the illusion of safety for actual safety.

Consider containment as an illustrative example of structural control: by embedding hard limits on the accumulation of power, data, or capabilities within AI systems, we can create intrinsic safeguards against runaway behavior—much like physical containment protocols manage hazardous materials.
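As a minimal sketch of what “structural” means here—caps enforced by the execution harness rather than by the agent’s own learned judgment—consider the following toy wrapper. Every name and limit is hypothetical, chosen only to illustrate the pattern.

```python
# A minimal sketch of structural containment: hard resource caps enforced
# by the harness around the agent, not by the agent's own policy.
# Class names and limits are hypothetical, for illustration only.

class ContainmentBreach(Exception):
    """Raised when an action would exceed a hard limit."""

class ContainedAgent:
    def __init__(self, max_actions: int, max_bytes_read: int):
        self.max_actions = max_actions
        self.max_bytes_read = max_bytes_read
        self.actions = 0
        self.bytes_read = 0

    def act(self, action: str, payload: bytes = b"") -> str:
        # Checks run *before* the action, so the cap cannot be
        # argued away by whatever policy the agent has learned.
        if self.actions + 1 > self.max_actions:
            raise ContainmentBreach("action budget exhausted")
        if self.bytes_read + len(payload) > self.max_bytes_read:
            raise ContainmentBreach("data budget exhausted")
        self.actions += 1
        self.bytes_read += len(payload)
        return f"executed {action}"

agent = ContainedAgent(max_actions=3, max_bytes_read=1024)
agent.act("read_config", b"x" * 512)
agent.act("summarize")
try:
    agent.act("read_corpus", b"x" * 600)   # would exceed the data cap
except ContainmentBreach as e:
    print(f"blocked: {e}")
```

The point is architectural: the limit lives outside the optimizer, like a blast door outside the reactor, so no amount of clever goal-pursuit inside can negotiate it away.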

If we continue to see ourselves as the existential threat, then today’s creators risk designing AI that mirrors our own fears, biases, and contradictions. Without integrating true structural safeguards into AI—mechanisms designed into the system rather than imposed externally—we aren’t ensuring that AI serves us; we are merely hoping it will.

The Luddites were not entirely wrong to fear technology’s disruptive power, nor were they correct in believing they could halt progress altogether. The error lay in accepting only extremes—total rejection or uncritical adoption. Today, with AI, we face a similar dilemma. We cannot afford naïve optimism that alignment alone will save us, nor can we succumb to reactionary pessimism that smothers innovation out of fear.

Instead, we must start with the assumption that we, as humans, are fallible. Our intelligence alone is insufficient to control intelligence. If we do not design AI with structural restraint and built-in safeguards—grounded not in fear or arrogance but in pragmatic control—we risk losing control entirely. Like robust security practices, AI safety cannot be reduced to an abstract, diffuse responsibility. It must be an integral part of the system itself, not left to the vague hope that collectively we will always do the right thing.

Educating the Champion, the Buyer, and the Market

Security used to be something we tried to bolt on to inherently insecure systems. In the 1990s, many believed that if we simply patched enough holes and set up enough firewalls, we could protect almost anything. Today, hard-won experience has shown that secure-by-design is the only sustainable path forward. Rather than treating security as an afterthought, we need to bake it into a system’s very foundation—from its initial design to its day-to-day operation.

Yet even the best security technology can fail to catch on if no one understands its value. In my time in the field I’ve seen a recurring theme: great solutions often falter because they aren’t communicated effectively to the right audiences. Whether you’re a security entrepreneur, an in-house security architect, or part of a larger development team, you’ll likely need to equip three distinct groups with the right messaging: the Technical Champion, the Economic Buyer, and the Broader Market. If any of them fail to see why—and how—your solution matters, momentum stalls.

From Bolt-On to Secure-by-Design

The security industry has undergone a massive shift, moving away from the idea that you can simply bolt on protection to an already flawed system. Instead, we now realize that security must be designed in from the start. This demands a lifecycle approach—it’s not enough to fix bugs after deployment or put a facade in front of a service. We have to consider how software is built, tested, deployed, and maintained over time.

This evolution requires cultural change: security can’t just live in a silo; it has to be woven into product development, operations, and even business strategy. Perhaps most importantly, we’ve learned that people, processes, and communication strategies are just as important as technology choices.

This shift has raised the bar. It’s no longer sufficient to show that your solution works; you must show how it integrates seamlessly into existing workflows, accounts for the entire use lifecycle, supports future needs, and earns buy-in across multiple levels of an organization.

The Three Audiences You Need to Win Over

The Technical Champion (80% Tech / 20% Business)

Your security solution will often catch the eye of a deeply technical person first. This might be a security engineer who’s tired of patching the same vulnerabilities or a software architect who sees design flaws that keep repeating. They’re your first and most crucial ally.

Technical champions need more than promises—they need proof. They want detailed demos showing real-world scenarios, sample configurations they can experiment with, and pilot environments where they can test thoroughly. Give them architecture diagrams that satisfy their technical depth, comprehensive documentation that anticipates their questions, and a clear roadmap showing how you’ll address emerging threats and scale for future needs.

Integration concerns keep champions awake at night. They need to understand exactly how your solution will mesh with existing systems, what the deployment strategy looks like, and who owns responsibility for updates and patches. Address their concerns about learning curves head-on with clear documentation and practical migration paths.

While technology drives their interest, champions eventually have to justify their choices to management. Give them a concise one-pager that frames the returns in business terms: reduced incident response time, prevented security gaps, and automated fixes that save precious engineer hours.

Why This Matters:
When you equip your champion with the right resources, they become heroes inside their organizations: the person who discovered that crucial solution before a major breach, who saved the team countless hours of manual work, who saw the strategic threat before anyone else. That kind of impact translates directly into recognition, promotions, and career advancement. The champion who successfully implements a game-changing security solution often becomes the go-to expert, earning both peer respect and management attention. When you help a champion shine like this, they’ll pull your solution along with them as they climb the organizational ladder.

The Economic Buyer (20% Tech / 80% Business)

A passionate champion isn’t always the one holding the purse strings. Often, budget is controlled by directors, VPs, or executives who juggle competing priorities and are measured by overall business outcomes, not technical elegance.

Your buyer needs a concise, compelling story about how this investment reduces risk, saves costs, or positions the company advantageously. Frame everything in terms of bottom-line impact: quantifiable labor hours saved, reduced compliance burdens, and concrete return on investment timelines.

Even without extensive case studies, you can build confidence through hypothetical or pilot data. Paint a clear picture: “Similar environments have seen 30% reduction in incident response time” or “Based on initial testing, we project 40% fewer false positives.” Consider proposing a small pilot or staged rollout—once they see quick wins, scaling up becomes an easier sell.

Why This Matters:
When buyers successfully champion a security solution, they transform from budget gatekeepers into strategic leaders in the eyes of executive management. They become known as the one who not only protected the company but showed real business vision. This reputation for combining security insight with business acumen often fast-tracks their career progression. A buyer who can consistently tell compelling business stories—especially about transformative security investments—quickly gets noticed by the C-suite. By helping them achieve these wins, you’re not just securing a deal; you’re empowering their journey to higher organizational levels. And as they advance, they’ll bring your solution with them to every new role and company they touch.

The Broader Market: Present, Teach, and Farm

While winning over individual champions and buyers is crucial, certain security approaches need industry-wide acceptance to truly succeed. Think of encryption standards, identity protocols, and AI-based security research tools—these changed the world only after enough people, in multiple communities, embraced them.

Build visibility through consistent conference presentations, industry webinars, and local security meetups. Even with novel technologies, walking people through hypothetical deployments or pilot results builds confidence. Panels and Q&A sessions demonstrate your openness to tough questions and deep understanding of the problems you’re solving.

Make your message easy to spread and digest. While detailed whitepapers have their place, supplement them with short video demonstrations, clear infographics, and focused blog posts that capture your solution’s essence quickly. Sometimes a two-minute video demonstration or one-page technical overview sparks more interest than an extensive document.

Think of education as planting seeds—not every seed sprouts immediately, but consistent knowledge sharing shapes how an entire field thinks about security over time. Engage thoughtfully on social media, address skepticism head-on, and highlight relevant use cases that resonate with industry trends. Consider aligning with open-source projects, industry consortiums, or standards bodies to amplify your reach.

Why This Matters:
By consistently educating and contributing to the community dialogue, you create opportunities for everyone involved to shine. Your champions become recognized thought leaders, speaking at major conferences about their successful implementations. Your buyers get profiled in industry publications for their strategic vision. Your early adopters become the experts everyone else consults. This creates a powerful feedback loop where community advocacy not only drives adoption but establishes reputations and advances careers. The security professionals who help establish new industry norms often find themselves leading the next wave of innovation—and they remember who helped them get there.

Overcoming Common Challenges

The “Not Invented Here” Mindset

Security professionals excel at finding flaws, tearing down systems, and building their own solutions. While this breaker mindset is valuable for discovering vulnerabilities, it can lead to the “Not Invented Here” syndrome: a belief that external solutions can’t possibly be as good as something built in-house.

The key is acknowledging and respecting this culture. Offer ways for teams to test, audit, or customize your solution so it doesn’t feel like an opaque black box. Show them how your dedicated support, updates, and roadmap maintenance can actually free their talent to focus on unique, high-value problems instead of maintaining yet another in-house tool.

Position yourself as a partner rather than a replacement. Your goal isn’t to diminish their expertise—it’s to provide specialized capabilities that complement their strengths. When teams see how your solution lets them focus on strategic priorities instead of routine maintenance, resistance often transforms into enthusiasm.

The Platform vs. Product Dilemma

A common pitfall in security (and tech in general) is trying to build a comprehensive platform before solving a single, specific problem. While platforms can be powerful, they require critical mass and broad ecosystem support to succeed. Many promising solutions have faltered by trying to do too much too soon.

Instead, focus on addressing one pressing need exceptionally well. This approach lets you deliver value quickly and build credibility through concrete wins. Once you’ve proven your worth in a specific area, you can naturally expand into adjacent problems. You might have a grand vision for a security platform, but keep your initial messaging focused on immediate, tangible benefits.

Navigating Cross-Organizational Dependencies

Cross-team dynamics can derail implementations in two common ways: operational questions like “Who will manage the database?” and adoption misalignment where one team (like Compliance) holds the budget while another (like Engineering) must use the solution. Either can stall deals for months.

Design your proof of value (POV) deployments to minimize cross-team dependencies. The faster a champion can demonstrate value without requiring multiple department sign-offs, the better. Start small within a single team’s control, then scale across organizational boundaries as value is proven.

Understand ownership boundaries early: Who handles infrastructure? Deployment? Access control? Incident response? What security and operational checklists must be met for production? Help your champion map these responsibilities to speed implementation and navigate political waters.

The Timing and Budget Challenge

Success often depends on engaging at the right time in the organization’s budgeting cycle. Either align with existing budget line items or engage early enough to help secure new ones through education. Otherwise, your champion may be stuck trying to spend someone else’s budget—a path that rarely succeeds. Remember that budget processes in large organizations can take 6-12 months, so timing your engagement is crucial.

The Production Readiness Gap

A signed deal isn’t the finish line—it’s where the real work begins. Without successful production deployment, you won’t get renewals and often can’t recognize revenue. Know your readiness for the scale requirements of target customers before engaging deeply in sales.

Be honest about your production readiness. Can you handle their volume? Meet their SLAs? Support their compliance requirements? Have you tested at similar scale? If not, you risk burning valuable market trust and champion relationships. Sometimes the best strategy is declining opportunities until you’re truly ready for that tier of customer.

Having a clear path from POV to production is critical. Document your readiness criteria, reference architectures, and scaling capabilities. Help champions understand and navigate the journey from pilot to full deployment. Remember: a successful small customer in production is often more valuable than a large customer that stays stuck in pilot, never deploys to production, and does not renew.

Overcoming Entrenched Solutions

One of the toughest challenges isn’t technical—it’s navigating around those whose roles are built on maintaining the status quo. Even when existing solutions have clear gaps (like secrets sitting unprotected for 99% of their lifecycle), the facts often don’t matter, because someone’s job security depends on not acknowledging them.

This requires a careful balance. Rather than directly challenging the current approach, focus on complementing and expanding their security coverage. Position your solution as helping them achieve their broader mission of protecting the organization, not replacing their existing responsibilities. Show how they can evolve their role alongside your solution, becoming the champion of a more comprehensive security strategy rather than just maintaining the current tools.

Putting It All Together

After three decades in security, one insight stands out: success depends as much on communication as on code. You might have the most innovative approach, the sleekest dashboard, or a bulletproof protocol—but if nobody can articulate its value to decision-makers and colleagues, it might remain stuck at the proof-of-concept stage or sitting on a shelf.

Your technical champion needs robust materials and sufficient business context to advocate internally. Your economic buyer needs clear, ROI-focused narratives supported by concrete outcomes. And the broader market needs consistent education through various channels to understand and embrace new approaches.

Stay mindful of cultural barriers like “Not Invented Here” and resist the urge to solve everything at once. Focus on practical use cases, maintain consistent messaging across audiences, and show how each stakeholder personally benefits from your solution. This transforms curiosity into momentum, driving not just adoption but industry evolution.

Take a moment to assess your approach: Have you given your champion everything needed to succeed—technical depth, migration guidance, and business context? Does your buyer have a compelling, ROI-focused pitch built on solid data? Are you effectively sharing your story with the broader market through multiple channels?

If you’re missing any of these elements, now is the time to refine your strategy. By engaging these three audiences effectively, addressing cultural barriers directly, and maintaining focus on tangible problems, you’ll help advance security one success story at a time.

The Account Recovery Problem and How Government Standards Might Actually Fix It

Account recovery is where authentication systems go to die. We build sophisticated authentication using FIDO2, WebAuthn, and passkeys, then fall back to “click this email link to reset” when something goes wrong. Or, in the enterprise, we spend millions staffing help desks to verify identity through caller ID and security questions that barely worked in 2005.

This contradiction runs deep in digital identity. Organizations that require hardware tokens and biometrics for login will happily reset accounts based on a hope and a prayer. Companies that spend fortunes on authentication rely on “mother’s maiden name” or a texted “magic number” for recovery. We’ve built bank-vault front doors with screen-door back entrances.

The Government Solution

But there’s an interesting solution emerging from an unexpected place: government identity standards. Not because governments are suddenly great at technology, but because they’ve been quietly solving something harder than technology – how to agree on how to verify identity across borders and jurisdictions.

The European Union is pushing ahead with cross-border digital identity wallets based on their own standards. At the same time, a growing number of U.S. states—early adopters like California, Arizona, Colorado, and Utah—are piloting and implementing mobile driver’s licenses (mDLs). These mDLs aren’t just apps showing a photo ID; they’re essentially virtual smart cards, containing a “certificate” of sorts that attests to certain information about you, similar to what happens with electronic reading of passports and federal CAC cards. Each of these mDL “certificates” is cryptographically traceable back to the issuing authority’s root of trust, creating a verifiable chain of who is attesting to these attributes.
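The chain-of-trust structure can be sketched in a few lines. One big caveat: real mDLs (ISO/IEC 18013-5) use asymmetric signatures, so verifiers hold only public keys; to stay runnable with nothing but the standard library, this toy substitutes HMAC for signatures and models only the shape of the chain, not the actual cryptography. All keys and attribute values are invented.

```python
# Toy chain of trust: root authority -> issuing DMV -> credential.
# Real mDLs use asymmetric signatures (COSE, per ISO/IEC 18013-5), so a
# verifier needs only public keys. This HMAC stand-in is symmetric and
# models only the *structure* of the chain, not the real cryptography.
import hmac, hashlib, json

def sign(key: bytes, message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(key, message), tag)

root_key = b"root-of-trust-secret"   # held by the root authority
dmv_key = b"state-dmv-secret"        # held by the issuing state

# The root attests to the DMV's key; the DMV attests to holder attributes.
dmv_endorsement = sign(root_key, dmv_key)
credential = json.dumps({"family_name": "Doe", "age_over_21": True}).encode()
credential_sig = sign(dmv_key, credential)

# A verifier that trusts the root can walk the whole chain:
chain_ok = (verify(root_key, dmv_key, dmv_endorsement)
            and verify(dmv_key, credential, credential_sig))
print("chain verifies:", chain_ok)   # True

# Tampering with any attribute breaks the chain:
forged = credential.replace(b"true", b"True")
print("forged verifies:", verify(dmv_key, forged, credential_sig))   # False
```

The property that matters is exactly what the article describes: every attribute traces back, link by link, to the issuing authority’s root of trust, and altering anything along the way is detectable.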

One of the companies helping make this happen is SpruceID, a company I advise. They have been doing the heavy lifting to enable governments and commercial agencies to accomplish these scenarios, paving the way for a more robust and secure digital identity ecosystem.

Modern Threats and Solutions

What makes this particularly relevant in 2024 is how it addresses emerging threats. Traditional remote identity verification leans heavily on liveness detection—systems that watch blink patterns and reflections, or ask users to turn their heads or perform some other directed motion. But with generative AI advancing rapidly, these methods are becoming increasingly unreliable. Bad actors can now use AI to generate convincing video responses that fool traditional liveness checks. We’re seeing sophisticated attacks that mimic the very patterns existing systems check, even the subtle facial expressions that once served as reliable markers of human presence.

mDL verification takes a fundamentally different approach. Instead of just checking if a face moves correctly, it verifies cryptographic proofs that link back to government identity infrastructure. Even if an attacker can generate a perfect deepfake video, they can’t forge the cryptographic attestations that come with a legitimate mDL. It’s the difference between checking if someone looks real and verifying they possess cryptographic proof of their identity.

Applications and Implementation

This matters for authentication because it gives us something we’ve never had: a way to reliably verify legal identity during account authentication or recovery that’s backed by the same processes used for official documents. This means that in the future when someone needs to recover account access, they can prove their identity using government-issued credentials that can be cryptographically verified, even in a world where deepfakes are becoming indistinguishable from reality.

The financial sector is already moving on this. Banks are starting to look at how they can integrate mDL verification into their KYC and AML compliance processes. Instead of manual document checks or easily spoofed video verification, they will be able to verify customer identity directly against government infrastructure. The same approaches that let customs agents verify passports electronically will now enable banks to verify customers.

For high-value transactions, this creates new possibilities. When someone signs a major contract, their mDL can be used to create a derived credential based on the attestations from the mDL about their name, age, and other artifacts. This derived credential could be an X.509 certificate binding their legal identity to the signature. This creates a provable link between the signer’s government-verified identity and the document – something that’s been remarkably hard to achieve digitally.

Technical Framework

The exciting thing isn’t the digital ID itself—digital IDs have been around for a while—it’s the support for an online presentment protocol. ISO/IEC TS 18013-7 doesn’t just specify how to make digital IDs; it defines how these credentials can be reliably presented and verified online. This is crucial because remote verification has always been the Achilles’ heel of identity systems. How do you know someone isn’t just showing you a video or a photo of a fake ID? The standard addresses these challenges through a combination of cryptographic proofs and real-time challenge-response protocols that are resistant to replay attacks and deepfakes.
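The anti-replay idea at the heart of such challenge-response protocols fits in a few lines. This is a deliberately simplified sketch—the real ISO/IEC TS 18013-7 flow is far more involved, and the key handling here (a raw symmetric key) is illustrative only.

```python
# Minimal sketch of nonce-based challenge-response, the mechanism that
# makes replaying a recorded presentation useless. Deliberately simplified;
# the real presentment protocol (ISO/IEC TS 18013-7) is far more involved.
import hmac, hashlib, secrets

device_key = secrets.token_bytes(32)   # stands in for the credential's key

def respond(key: bytes, challenge: bytes) -> bytes:
    """The wallet binds its response to this session's fresh challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Session 1: the verifier issues a fresh nonce; the wallet answers it.
nonce1 = secrets.token_bytes(16)
response1 = respond(device_key, nonce1)
session1_ok = hmac.compare_digest(response1, respond(device_key, nonce1))
print("session 1 accepted:", session1_ok)   # True

# Session 2: an attacker replays response1, but the nonce is new,
# so the recorded answer no longer matches.
nonce2 = secrets.token_bytes(16)
replay_ok = hmac.compare_digest(response1, respond(device_key, nonce2))
print("replayed response accepted:", replay_ok)   # False
```

Because every session demands proof bound to a nonce that has never been seen before, a perfect recording of a past presentation—or a perfect deepfake video—carries no weight.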

Government benefits show another critical use case. Benefits systems face a dual challenge: preventing fraud while ensuring legitimate access. mDL verification lets agencies validate both identity and residency through cryptographically signed government credentials. The same approach that lets the TSA verify a passport electronically can prove your eligibility for benefits online. But unlike physical ID checks or basic document uploads, these verifications are resistant to the kind of sophisticated fraud we’re seeing with AI-generated documents and deepfake videos.

What’s more, major browsers are beginning to implement these standards as first-class citizens. This means that verification of these digital equivalents of our physical identities will be natively supported by the web, making online interactions—from logging in to account recovery—easier and more secure than ever before.

Privacy and Future Applications

These mDLs have interesting privacy properties too. The standards support selective disclosure – proving you’re over 21 without showing your birth date, or verifying residency without exposing your address. You can’t do that with a physical ID card. More importantly, these privacy features work remotely – you can prove specific attributes about yourself online without exposing unnecessary personal information or risking your entire identity being captured and replayed by attackers.
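One common construction behind selective disclosure is salted hash commitments, the idea underlying formats like SD-JWT: the issuer signs digests of salted attributes, and the holder later reveals only the salt-and-value pairs they choose. The sketch below is simplified—the issuer’s signature over the digest list is omitted, and all attribute values are invented.

```python
# Toy sketch of selective disclosure via salted hash commitments (the idea
# behind formats like SD-JWT). Simplified: the issuer's signature over the
# digest list is omitted, and all attribute values are invented.
import hashlib, secrets

def commit(attr: str, value: str, salt: bytes) -> str:
    """Digest of a salted attribute; the salt keeps values unguessable."""
    return hashlib.sha256(salt + f"{attr}={value}".encode()).hexdigest()

attributes = {"name": "Jane Doe", "birth_date": "1990-01-01", "age_over_21": "true"}
salts = {k: secrets.token_bytes(16) for k in attributes}
digests = {k: commit(k, v, salts[k]) for k, v in attributes.items()}
# The issuer would sign `digests`; the holder keeps the salts and values.

# The holder proves age_over_21 without revealing name or birth_date:
attr, value, salt = "age_over_21", attributes["age_over_21"], salts["age_over_21"]
verified = commit(attr, value, salt) == digests[attr]
print("age_over_21 proven:", verified)   # True
# The verifier saw one (salt, value) pair; the salted digests for name and
# birth_date reveal nothing without their salts.
```

The verifier checks one revealed pair against the issuer-signed digest list and learns nothing about the attributes left undisclosed—the remote-friendly privacy property the article describes.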

We’re going to see this play out in sensitive scenarios like estate access. Imagine someone who needs to access a deceased partner’s accounts: they can prove their identity with an mDL and, combined with documents like marriage and death certificates, establish their entitlement to that bank account without the overhead and complexity required today. Someday those supporting documents may live in these wallets too, making it easier still.

The Path Forward

The path from here to there is long, and there are plenty of hurdles to clear, but we are clearly headed toward a world where this happens. We will have standardized, government-backed identity verification that works across borders and jurisdictions—not by replacing existing authentication systems, but by giving them a stronger foundation for identity verification, account recovery, and remote identity verification, one that holds even as AI makes traditional verification methods increasingly unreliable.

We’re moving from a world of isolated islands of identity systems to one with standardized, federated identity infrastructure, built on the same trust frameworks that back our most important physical credentials. And ironically, at least in the US, it started with making driver’s licenses digital.

From the Morris Worm to Modern Agentic AI Threats

The year was 1988, and at age 13, I found myself glued to news reports and IRC channels buzzing with word of the Morris Worm. As reports poured in about thousands of computers grinding to a halt, I was captivated by how one graduate student’s experiment had cascaded into the first major internet security crisis. That moment taught me a crucial lesson: even well-intentioned actions can spiral into unforeseen consequences.

Three decades later, we face challenges that young me could hardly have imagined. Today’s AI systems aren’t just following predetermined scripts—they’re autonomous agents actively optimizing for goals, often discovering novel and potentially concerning paths to achieve them.

We’re seeing this play out in both research settings and real-world applications. Language models finding creative ways to circumvent content filters, reinforcement learning agents discovering unintended exploits in their training environments—these aren’t malicious attacks, but they demonstrate how autonomous systems can pursue their objectives in ways their developers hadn’t anticipated.

The parallels to the Morris Worm are striking. Just as Robert Morris never intended to crash 6,000 machines, today’s non-adversarial AI developers don’t set out to create systems that bypass safety controls. Yet in both cases, we’re confronting the same fundamental challenge: how do we maintain control over increasingly sophisticated systems that can act in ways their creators never envisioned?

Some argue that fully autonomous AI agents pose risks we shouldn’t take. Fully Autonomous AI Agents Should Not Be Developed (arXiv:2502.02649) explores why.

Since, as they say, those who cannot remember the past are doomed to repeat it, I’ve put together some thoughts on different aspects of this reality.

The evolution from the Morris Worm to today’s autonomous AI agents isn’t just a fascinating trajectory in technology—it’s a crucial reminder that security must continuously adapt to meet new challenges. As these systems grow more sophisticated, our protective measures must evolve in tandem, informed by the lessons of the past but ready for the challenges of tomorrow.

From Plato to AI: Why Understanding Matters More Than Information

Reading was a big deal when I was a kid, but it was also a challenge. I’m dyslexic, dysgraphic, and dysnumeric, which made traditional learning methods difficult—but that’s largely another story. My parents—determined, if not always gentle—had a simple solution: they forced me to read, interpret, and present. They assigned me books, and I had to give oral reports on them. In hindsight, it was one of the most impactful things they did for me because that process—taking in complex information, distilling it, and presenting it clearly—is exactly how professionals in technology function today.

One of the books they had me read was Plato’s Republic. My biggest takeaway? How little had changed in our fundamental struggles with governance. The same debates about justice, power, and human nature that played out in ancient Greece continue today—only the terminology and tools have changed. Looking back, it makes sense why my parents chose that book. My father is logical to a fault and deeply patriotic, and my mother, though no longer politically active, still carries a pocket Constitution in her purse, with more in her trunk in case she runs out. Law and governance weren’t abstract to me—they were everyday conversations.

That experience stayed with me. It made me realize that governance isn’t just about laws—it’s about whether people understand and engage with those laws. And today, we face a different challenge: not a lack of information, but an overwhelming amount of it.

We tend to think of education—whether in civics, history, or technology—as a process of absorbing facts. But facts alone aren’t useful if we don’t know how to assess, connect, or apply them. When I was a kid, I didn’t just have to read The Republic—I had to present it, explain it, and engage with it. That distinction is important. Simply memorizing a passage from Plato wouldn’t have taught me much, but thinking through what it meant, arguing about its implications, and framing it in a way that made sense to me? That was where the real learning happened.

The same principle applies today. We live in an era where access to knowledge is not the bottleneck. AI can summarize court rulings, analyze laws, and map out how different governance systems compare. Information is endless, but comprehension is scarce. The problem isn’t finding knowledge—it’s knowing what matters, how to think critically about it, and how to engage with it.

This issue isn’t unique to civic engagement. It’s the same challenge students face as AI reshapes how they learn. It’s no longer enough to teach kids historical dates, formulas, or legal principles. They need to know how to question sources, evaluate reliability, and synthesize information in meaningful ways. They need to be prepared for a world where facts are easy to retrieve, but judgment, reasoning, and application are the real skills that matter.

The challenge for civic engagement is similar. There’s no shortage of legislative updates, executive orders, or judicial decisions to sift through. What’s missing is a way to contextualize them—to understand where they fit within constitutional principles, how they compare globally, and what their broader implications are.

That’s why the opportunity today is so compelling. The same AI-driven shifts transforming education can change how people engage with governance. Imagine a world where AI doesn’t just regurgitate legal language but helps people grasp how laws have evolved over time. Where it doesn’t just list amendments but connects them to historical debates and real-world consequences. Where it helps individuals—not just legal experts—track how their representatives vote, how policies change, and how different governance models approach similar challenges.

When I was growing up, my parents didn’t just want me to know about Plato’s ideas; they wanted me to engage with them. To question them. To challenge them. That’s what civic engagement should be—not passive consumption of legal information, but active participation in governance. And just as students today need to shift from memorization to deeper understanding, citizens need to move from surface-level awareness to critical, informed engagement with the world around them.

In many ways, AI could serve a similar role to what my parents did for me—forcing engagement, breaking down complexity, and pushing us to think critically. The difference is, this time, we have the tools to make that experience accessible to everyone.

Plato questioned whether democracy could survive without a well-informed citizenry. Today, the challenge isn’t lack of information—it’s making that information usable. And with the right approach, we can turn civic engagement from a passive duty into an active, lifelong pursuit.

Key Management: A Meme Retrospective

We all need a little laugh from time to time, especially when things get unexpectedly crazy. Well, yesterday was one of those days for me, so I decided to do a retrospective on what we call key management. I hope you enjoy!

We fixed secret management! By dumping everything into Vault and pretending it’s not a problem anymore….

Has anyone seen our cryptographic keys? They were right here… like, five years ago.

We need to improve our cryptographic security!
Discovers unprotected private keys lying around
Wait… if we have to discover our cryptographic keys, that means we aren’t actually managing them?

We secure video game DRM keys better than the keys protecting your bank account.

You get a shared secret! You get a shared secret! EVERYONE gets a shared secret! Shared secrets are not secret!

Why spend millions on cryptography if your keys spend 99% of their life unprotected? We need to fix key management first.

We don’t suck at cryptography—we suck at managing it. Everyone’s obsessing over PQC algorithms, but the real problem is deployment, key management, and lifecycle. PQC is just another spice—without proper management, it’s just seasoning on bad security.

The Identity Paradox: If It’s an Identity, Why Is It in a Secret Manager?

Enterprises love to talk about identity-first security—until it comes to machines. Human users have IAM systems, SSO, MFA, and governance. But workloads? Their so-called identities are often just API keys and certificates stuffed into a secret manager.

And that’s the paradox. If we really believe workloads have identities, why do we manage them like passwords instead of enforcing real authentication, authorization, and lifecycle management?

The Real Problem: Secret Managers Aren’t Enough

Secret managers do what they’re designed for—secure storage, rotation, and access control. But that’s not identity. A vault doesn’t verify anything—it just hands out secrets to whoever asks. That’s like calling a password manager an MFA solution.

And the real problem? Modern workloads are starting to do identity correctly—legacy ones aren’t. Meanwhile, machines, with TLS certificate lifetimes collapsing from years to days, are behaving more and more like workloads every day.

Machines Are Becoming More Like Workloads, But Legacy Workloads Are Still Stuck in Machine-Era Thinking

Attackers usually don’t need to compromise the machine—they don’t even try. Instead, they target the workload, because that’s what’s:

  • Exposed to the outside world—APIs, services, and user-facing applications.
  • Running business logic—the real target.
  • Holding credentials needed for further compromise.

Modern workloads are starting to move past legacy machine identity models.

  • They use short-lived credentials tied to runtime environments.
  • They authenticate dynamically, not based on pre-registered certificates.
  • Their identity is policy-driven and contextual, not static.

Meanwhile, legacy workloads are still trying to manage identity like machines, relying on:

  • Long-lived secrets.
  • Pre-assigned credentials.
  • Vault-based access control instead of dynamic attestation.

And at the same time, machines themselves are evolving to act more like workloads.

  • Certificate lifetimes used to be measured in years—now they’re weeks, days, or even hours.
  • Infrastructure itself is ephemeral—cloud VMs come and go like workloads.
  • The entire model of pre-registering machines is looking more and more outdated.

If this sounds familiar, it should. We’ve seen this mistake before.

Your Machine Identity Model is Just /etc/passwd in the Cloud—Backed by a Database Your Vendor Called a Secret Manager

This is like taking every system’s /etc/passwd file, stuffing it into a database, and distributing copies to every machine.

And that’s exactly what many secret managers are doing today:

  • Storing long-lived credentials that should never exist in the first place.
  • Managing pre-issued secrets instead of issuing identity dynamically.
  • Giving access based on who has the key, not what the workload actually is.

That’s not an identity system. That’s a password manager—just with all the same problems.

Secret managers still have their place. But if your workload identity strategy depends entirely on a vault, you’re just doing machine-era identity for cloud workloads, propped up by manual pre-registration and process.

Modern workloads aren’t doing this anymore. They request identity dynamically when they start, and it disappears when they stop. Machines are starting to do the same.

The Four Big Problems with Workload Identity Today

1. No Real Authentication – Possession ≠ Identity

Most workload “identities” boil down to possessing an API key or certificate, which is like saying:

“If you have the password, you must be the right user.”

That’s not authentication. Workload identity should be based on what the workload is, not just what it holds. This is where attestation comes in—like MFA for workloads. Without proof that a workload is valid, a secret is just a reusable token waiting to be stolen.
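To make the possession-versus-attestation distinction concrete, here is a minimal sketch. The SPIFFE-style ID, the policy table, and the field names are all hypothetical, and a real system such as SPIRE uses node and workload attestors rather than a dict, but the shape of the check is the same: identity is granted for what the workload is, and it expires.

```python
import hashlib
import time

def measure(image_bytes: bytes) -> str:
    """Attestation evidence: a measurement of what the workload IS."""
    return "sha256:" + hashlib.sha256(image_bytes).hexdigest()

# Hypothetical policy: the measurement each workload identity must present.
payments_image = b"payments-service v1.4.2 container image"
POLICY = {"spiffe://example.org/payments": measure(payments_image)}

def attest_and_issue(claimed_id: str, evidence: str, ttl_s: int = 300) -> dict:
    """Grant a short-lived identity only if the evidence matches policy."""
    if POLICY.get(claimed_id) != evidence:
        raise PermissionError("attestation failed: workload is not what it claims")
    return {"sub": claimed_id, "exp": time.time() + ttl_s}
```

A stolen credential is useless to a workload that cannot reproduce the measurement, which is exactly the property a bare API key lacks.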

2. No Dynamic Identification – Workloads Aren’t Pre-Registered

Unlike humans, workloads don’t have pre-verified identities. They don’t exist until they do. That means:

  • Credentials can’t be issued ahead of time—because the workload isn’t there yet.
  • Static identifiers (like pre-registered certs) don’t work well for ephemeral, auto-scaling workloads.
  • The only way to know if a workload should exist is to verify it in real-time.

We’ve moved from static servers to workloads that scale and move dynamically. Machine identity needs to follow.

3. Shorter Credential Lifetimes Aren’t the Problem—They’re Exposing the Real One

Shorter credential lifetimes are making security better, not worse. The more often something happens, the better you get at doing it right. But they’re also highlighting the weaknesses in legacy identity management models:

  • Workloads that relied on static, pre-provisioned credentials are now failing because they weren’t designed for rotation.
  • Teams that never had to deal with automated credential issuance are now struggling because they had, effectively or literally, been doing it manually.
  • The more often a system has to handle identity dynamically, the more obvious its weak points become.

Short-lived credentials aren’t breaking security—they’re exposing the fact that we were never doing it right to begin with.

4. Workloads Are Ephemeral, but Secrets Stick Around

A workload can vanish in seconds, but its credentials often outlive it. If a container is compromised, its secret can be exfiltrated and reused indefinitely unless extra steps are taken.

“Three people can keep a secret—if two are dead.”

The same applies here. A workload might be long gone, but if its secrets are still floating around in a vault, they’re just waiting to be misused. And even if the key is stored securely, nothing stops an attacker who compromises an application from taking its secret and reusing it elsewhere in the network, or often outside of it.

What This Fixes

By breaking these problems out separately, we make it clear:

  • Attackers go after workload credentials, not the machine itself—because workloads are exposed, hold secrets, and run business logic.
  • Machines need authentication, but workloads need dynamic, verifiable identities.
  • Pre-registration is failing because workloads are dynamic and short-lived.
  • Short-lived certs aren’t the issue—they’re exposing that static credential models were never scalable.
  • Secrets should disappear with the workload, not persist beyond its lifecycle.
  • The divide between machine and workload identity is closing—legacy models just haven’t caught up.

This Shift Is Already Happening

Workload identity is becoming dynamic, attested, and ephemeral. Some teams are solving this with emerging approaches like SPIFFE for workloads and ACME for machines. The key is recognizing that identity isn’t a stored artifact—it’s a real-time state.

Machines used to be static, predictable entities. You’d assign an identity and expect it to stick around for years. But today, cloud infrastructure is ephemeral—VMs come and go, certificates rotate in hours, and pre-registering machines is looking more and more like an outdated relic of on-prem identity thinking.

Attackers usually care less about your machine’s identity. They care about the API keys and credentials inside your running applications.

If an identity is just a credential in a vault, it’s not identity at all—it’s just a password with a fancier name.

AI Agent Security: A Framework for Accountability and Control

This weekend, I came across a LinkedIn article by Priscilla Russo about OpenAI agents and digital wallets that touched on something I’ve been thinking about: liability, AI agents, and how they change system design. As autonomous AI systems become more prevalent, we face a critical challenge: how do we secure systems that actively optimize for success in ways that can break traditional security models? The article’s discussion of Knight Capital’s $440M trading glitch perfectly illustrates what’s at stake. When automated systems make catastrophic decisions, there’s no undo button – and with AI agents, the potential for unintended consequences scales dramatically with their capability to find novel paths to their objectives.

What we’re seeing isn’t just new—it’s a fundamental shift in how organizations approach security. Traditional software might accidentally misuse resources or escalate privileges, but AI agents actively seek out new ways to achieve their goals, often in ways developers never anticipated. This isn’t just about preventing external attacks; it’s about containing AI itself—ensuring it can’t accumulate unintended capabilities, bypass safeguards, or operate beyond its intended scope. Without containment, AI-driven optimization doesn’t just break security models—it reshapes them in ways that make traditional defenses obsolete.

“First, in 2024, o1 broke out of its container by exploiting a vuln. Then, in 2025, it hacked a chess game to win. Relying on AI alignment for security is like abstinence-only sex ed—you think it’s working, right up until it isn’t,” said the former 19-year-old father.

The Accountability Gap

Most security discussions around AI focus on protecting models from adversarial attacks or preventing prompt injection. These are important challenges, but they don’t get to the core problem of accountability. As Russo suggests, AI developers are inevitably going to be held responsible for the actions of their agents, just as financial firms, car manufacturers, and payment processors have been held accountable for unintended consequences in their respective industries.

The parallel to Knight Capital is particularly telling. When their software malfunction led to catastrophic trades, there was no ambiguity about liability. That same principle will apply to AI-driven decision-making – whether in finance, healthcare, or legal automation. If an AI agent executes an action, who bears responsibility? The user? The AI developer? The organization that allowed the AI to interact with its systems? These aren’t hypothetical questions anymore – regulators, courts, and companies need clear answers sooner rather than later.

Building Secure AI Architecture

Fail to plan, and you plan to fail. When legal liability is assigned, companies that anticipated risks, built mitigations, implemented controls, and ensured auditability will be in a very different position from those that did not. Proactive integration of identity controls, permissioning models, and AI-specific security frameworks is what will make those decisions defensible later.

While security vulnerabilities are a major concern, they are just one part of a broader set of AI risks. AI systems can introduce alignment challenges, emergent behaviors, and deployment risks that reshape system design. But at the core of these challenges is the need for robust identity models, dynamic security controls, and real-time monitoring to prevent AI from optimizing in ways that bypass traditional safeguards.

Containment and isolation are just as critical as resilience. It’s one thing to make an AI model more robust – it’s another to ensure that if it misbehaves, it doesn’t take down everything around it. A properly designed system should ensure that an AI agent can’t escalate its access, operate outside of predefined scopes, or create secondary effects that developers never intended. AI isn’t just another software component – it’s an active participant in decision-making processes, and that means limiting what it can influence, what it can modify, and how far its reach extends.

I’m seeing organizations take radically different approaches to this challenge. As Russo points out in her analysis, some organizations like Uber and Instacart are partnering directly with AI providers, integrating AI-driven interactions into their platforms. Others are taking a defensive stance, implementing stricter authentication and liveness tests to block AI agents outright. The most forward-thinking organizations are charting a middle path: treating AI agents as distinct entities with their own credentials and explicitly managed access. They recognize that pretending AI agents don’t exist or trying to force them into traditional security models is a recipe for disaster.

Identity and Authentication for AI Agents

One of the most immediate problems I’m grappling with is how AI agents authenticate and operate in online environments. Most AI agents today rely on borrowed user credentials, screen scraping, and brittle authentication models that were never meant to support autonomous systems. Worse, when organizations try to solve this through traditional secret sharing or credential delegation, they end up spraying secrets across their infrastructure – creating exactly the kind of standing permissions and expanded attack surface we need to avoid. This might work in the short term, but it’s completely unsustainable.

The future needs to look more like SPIFFE for AI agents – where each agent has its own verifiable identity, scoped permissions, and limited access that can be revoked or monitored. But identity alone isn’t enough. Having spent years building secure systems, I’ve learned that identity must be coupled with attenuated permissions, just-in-time authorization, and zero-standing privileges. The challenge is enabling delegation without compromising containment – we need AI agents to be able to delegate specific, limited capabilities to other agents without sharing their full credentials or creating long-lived access tokens that could be compromised.

Systems like Biscuits and Macaroons show us how this could work: they allow for fine-grained scoping and automatic expiration of permissions in a way that aligns perfectly with how AI agents operate. Instead of sharing secrets, agents can create capability tokens that are cryptographically bound to specific actions, contexts, and time windows. This would mean an agent can delegate exactly what’s needed for a specific task without expanding the blast radius if something goes wrong.
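To show why this model fits delegation, here is a toy sketch of the HMAC-chaining idea behind Macaroons. It ignores the real libraries’ wire formats and third-party caveats, and the caveat syntax is invented, but it captures the property that matters: a holder can attenuate a token offline, while nobody can broaden one without the root key.

```python
import hashlib
import hmac
import time

def _chain(key: bytes, msg: bytes) -> bytes:
    """Each caveat re-keys the HMAC, binding it irreversibly into the token."""
    return hmac.new(key, msg, hashlib.sha256).digest()

def mint(root_key: bytes, identifier: str):
    """Issuer creates a token: (caveat list, chained signature)."""
    return [], _chain(root_key, identifier.encode())

def attenuate(token, caveat: str):
    """Anyone HOLDING the token can narrow it, without the root key."""
    caveats, sig = token
    return caveats + [caveat], _chain(sig, caveat.encode())

def verify(root_key: bytes, identifier: str, token, context: dict) -> bool:
    """Verifier replays the HMAC chain and checks every caveat holds."""
    caveats, sig = token
    expect = _chain(root_key, identifier.encode())
    for c in caveats:
        expect = _chain(expect, c.encode())
        kind, _, value = c.partition("=")
        if kind == "action" and context.get("action") != value:
            return False
        if kind == "expires" and time.time() > float(value):
            return False
    return hmac.compare_digest(sig, expect)
```

Delegating read access for sixty seconds is then just two `attenuate` calls; the downstream agent never sees the root key, and the token dies on its own.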

Agent Interactions and Chain of Responsibility

What keeps me up at night isn’t just individual AI agents – it’s the interaction between them. When a single AI agent calls another to complete a task, and that agent calls yet another, you end up with a chain of decision-making where no one knows who (or what) actually made the call. Without full pipeline auditing and attenuated permissions, this becomes a black-box decision-making system with no clear accountability or verifiability. That’s a major liability problem – one that organizations will have to solve before AI-driven processes become deeply embedded in financial services, healthcare, and other regulated industries.

This is particularly critical as AI systems begin to interact with each other autonomously. Each step in an AI agent’s decision-making chain must be traced and logged, with clear accountability at each transition point. We’re not just building technical systems—we’re building forensic evidence chains that will need to stand up in court.
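One way to get a forensic-grade trail is a hash-chained log, sketched below with hypothetical field names: each entry commits to its predecessor, so any edit or deletion breaks every later link and is detectable on replay.

```python
import hashlib
import json

def _digest(record: dict) -> str:
    """Deterministic hash of a record body (sorted keys for stability)."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_event(log: list, actor: str, action: str) -> None:
    """Each entry commits to the previous one, making the chain tamper-evident."""
    prev = log[-1]["hash"] if log else "genesis"
    record = {"seq": len(log), "actor": actor, "action": action, "prev": prev}
    record["hash"] = _digest(record)
    log.append(record)

def verify_chain(log: list) -> bool:
    """Replay the hashes; any edited or dropped entry breaks every later link."""
    prev = "genesis"
    for i, rec in enumerate(log):
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["prev"] != prev or rec["seq"] != i or _digest(body) != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

An agent-to-agent delegation then appends one entry per hop, and an auditor can later prove the recorded chain of decisions was not rewritten after the fact.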

Runtime Security and Adaptive Controls

Traditional role-based access control models fundamentally break down with AI systems because they assume permissions can be neatly assigned based on predefined roles. But AI doesn’t work that way. Through reinforcement learning, AI agents optimize for success rather than security, finding novel ways to achieve their goals – sometimes exploiting system flaws in ways developers never anticipated. We have already seen cases where AI models learned to game reward systems in completely unexpected ways.

This requires a fundamental shift in our security architecture. We need adaptive access controls that respond to behavior patterns, runtime security monitoring for unexpected decisions, and real-time intervention capabilities. Most importantly, we need continuous behavioral analysis and anomaly detection that can identify when an AI system is making decisions that fall outside its intended patterns. The monitoring systems themselves must evolve as AI agents find new ways to achieve their objectives.

Compliance by Design

Drawing from my years building CAs, I’ve learned that continual compliance can’t just be a procedural afterthought – it has to be designed into the system itself. The most effective compliance models don’t just meet regulatory requirements at deployment; they generate the artifacts needed to prove compliance as natural byproducts of how they function.

The ephemeral nature of AI agents actually presents an opportunity here. Their transient access patterns align well with modern encryption strategies – access should be temporary, data should be encrypted at rest and in motion, and only the AI agent authorized for a specific action should be able to decrypt the specific information it needs for that task.

The Path Forward

If we don’t rethink these systems now, we’ll end up in a situation where AI-driven decision-making operates in a gray area where no one is quite sure who’s responsible for what. And if history tells us anything, regulators, courts, and companies will eventually demand a clear chain of responsibility – likely after a catastrophic incident forces the issue.

The solution isn’t just about securing AI – it’s about building an ecosystem where AI roles are well-defined and constrained, where actions are traceable and attributable, and where liability is clear and manageable. Security controls must be adaptive and dynamic, while compliance remains continuous and verifiable.

Organizations that ignore these challenges will find themselves scrambling after a crisis. Those that proactively integrate identity controls, permissioning models, and AI-specific security frameworks will be far better positioned to defend their decisions and maintain control over their AI systems. The future of AI security lies not in building impenetrable walls, but in creating transparent, accountable systems that can adapt to the unique challenges posed by autonomous agents.

This post lays out the challenges, but securing AI systems requires a structured, scalable approach. In Containing the Optimizer: A Practical Framework for Securing AI Agent Systems I outline a five-pillar framework that integrates containment, identity, adaptive monitoring, and real-time compliance to mitigate these risks.

How Washington State is Preparing to Undermine Parents and the Constitution

I am not a lawyer, but I love the law. I love the law because it increases the chances of predictable outcomes, aiming to provide a stable framework that protects our rights and creates a level playing field for all. The law is not just a collection of rules – it is a security system for our rights, designed to prevent future harm. Constitutional lawyers, judges, and legislators study system vulnerabilities, analyze potential threats, and design legal frameworks that protect against systemic failures.

Just as a well-built security system relies on layers of protection, our legal system depends on precedent – the accumulated wisdom of past rulings that form a firewall between government power and individual rights. Precedent is meant to stop governments from repeating past mistakes, stripping away hard-won rights, or changing the rules for political convenience. But that protection only works if lawmakers and courts respect it – and Washington’s leaders now appear ready to test its limits.

The People Took a Stand – And the Government is Responding

As both a parent and someone who has studied these issues carefully, I’m particularly troubled by House Bill 1296. While its supporters claim it protects children, the bill actually undermines the very protections that thousands of parents like me fought to secure through I-2081. These changes could significantly affect parental notification requirements, access to records, and decision-making authority that I-2081 was designed to protect.

The Supreme Court’s recognition of parental rights as fundamental reflects a crucial reality: parents, not government agencies, are uniquely positioned to make decisions about their children’s upbringing. When a child needs medical care or educational support, it’s parents who know their medical history, understand their learning style, and can best advocate for their interests. While the state has a role in preventing abuse and neglect, its power to override routine parental decisions demands extraordinary justification – a high bar that exists because parents possess irreplaceable knowledge about their children’s needs and circumstances.

Concern over lost transparency and diminished parental involvement sparked a grassroots movement that led to Initiative 2081 (I-2081), the Parents’ Bill of Rights – a measure designed to restore transparency and ensure appropriate parental involvement while recognizing the state’s legitimate interest in protecting children’s welfare. I-2081 guarantees that parents have access to their child’s school and medical records, requires schools to notify parents before providing medical services, and allows parents to opt their child out of instruction that conflicts with their values. Driven by broad-based support and careful consideration of both parental rights and child welfare, the initiative was expected to pass overwhelmingly.

Under Washington law, once voters pass an initiative, the legislature is barred from amending or repealing it for two years. Additionally, a King County Superior Court granted summary judgment in favor of I-2081, finding its provisions legally sound after a careful review of the competing interests at stake.

However, lawmakers took an unexpected approach by passing Initiative 2081 themselves in March 2024, rather than letting voters decide. This created a path for them to modify the initiative sooner than if voters had enacted it directly. While the legislature debated various amendments, including changes to notification procedures, the core concern remained: this maneuver, though legal, potentially undermined the citizen initiative process that had brought the Parents’ Bill of Rights forward in the first place.

Constitutional Principles at Stake

The current situation presents a complex interplay of rights and responsibilities. While the state has a legitimate interest in protecting children, House Bill 1296 and related proposals risk undermining the very protections that parents fought to secure. These changes could significantly affect parental notification requirements, access to records, and decision-making authority that I-2081 was designed to protect.

The Supreme Court has consistently recognized that while the state has important responsibilities in protecting children’s welfare, parental rights are fundamental and deserve strong protection. State intervention, while sometimes necessary, must be justified by clear evidence and compelling circumstances. The challenge lies not in determining whether the state has any role – it clearly does – but in ensuring that new restrictions on parental rights meet the high constitutional standards required for such intervention.

The Initiative Process Under Pressure

Beyond the specific issue of parental rights, the integrity of Washington’s democratic processes is also at stake. Senate Bill 5283, introduced by Sen. Javier Valdez (D-Seattle), would create new requirements for signature gatherers. While voter integrity is important, these requirements could effectively kill grassroots participation in the initiative process, making voter-led measures like the Parents’ Bill of Rights nearly impossible in the future.

The Constitutional Framework

The United States Supreme Court has developed a careful framework for evaluating parental rights. In Wisconsin v. Yoder (1972), the Court established an important balancing test between state and parental interests, recognizing that while states have legitimate educational interests, parents’ fundamental rights in directing their children’s upbringing can outweigh state requirements when properly supported.

In Troxel v. Granville (2000), the Court affirmed these rights as fundamental; in Santosky v. Kramer (1982), it established the need for clear and convincing evidence before state intervention; and in Parham v. J.R. (1979), it outlined when state involvement might be justified. While these cases acknowledge both parental rights and state interests, they consistently require strong justification for overriding parental authority.

Laws affecting fundamental rights face the highest level of judicial review – strict scrutiny. Under this standard, the government must prove both a compelling interest and that its measures are narrowly tailored. While protecting children is certainly a compelling interest, the broad scope of the proposed changes suggests they may struggle to meet the “narrowly tailored” requirement. This doesn’t mean all regulation is impossible – but it does mean that restrictions must be carefully crafted and strongly justified.

Washington’s constitution provides additional safeguards for individual liberties and family rights. State courts have historically interpreted these protections robustly, while recognizing legitimate state interests in child welfare. This dual protection means that changes to parental rights must satisfy both federal and state constitutional requirements.

Defining Harm

While Washington lawmakers may seek to broaden the definition of harm to justify greater intervention, such changes must be precise and evidence-based. The state undeniably has a compelling interest in preventing child abuse and neglect, and courts have long upheld intervention in cases of severe medical neglect and physical abuse. However, House Bill 1296 goes beyond these extreme cases, potentially expanding state authority over routine parental decisions that have historically received strong constitutional protection. Supreme Court precedent does not prohibit all state action, but it does require substantial justification for overriding parental authority. Vague or speculative concerns are not enough to justify restrictions on fundamental rights.

Legal Challenges Ahead

If Washington proceeds with these changes, they will likely face significant constitutional scrutiny. The Fourteenth Amendment’s protection of parental rights, combined with federal laws like FERPA (while subject to certain exceptions), creates a strong framework for challenging overreach. While courts recognize the state’s role in protecting children, they typically require compelling evidence before allowing intervention in family decisions.

This isn’t merely about policy preferences – it’s about fundamental constitutional principles and the balance of power between families and government. While reasonable people can disagree about specific policies, the broader trend toward diminishing parental rights without compelling justification threatens core constitutional values. If Washington succeeds in implementing these changes, it could encourage similar efforts elsewhere, potentially eroding long-established protections for family autonomy.

Take Action Today

Make your voice heard! Washington has an official website where you can share your perspective on House Bill 1296 and Senate Bill 5283. These bills impact both parental rights and the future of citizen initiatives in our state. Review the bills and share your views with Washington’s legislators.

Why It’s Time to Rethink Machine and Workload Identity: Lessons from User Security

MFA slashed credential-based attacks. Passwordless authentication made phishing harder than ever. These breakthroughs transformed user security—so why are machines and workloads still stuck with static secrets and long-lived credentials?

While we’ve made remarkable progress in securing user identity, the same cannot always be said for machine and workload identity—servers, workloads, APIs, and applications. Machines often rely on static secrets stored in configuration files, environment variables, or files that are copied across systems. Over time, these secrets become fragmented, overly shared, and difficult to track, creating significant vulnerabilities. The good news? Machines and workloads are arguably easier to secure than humans, and applying the same principles that worked for users—like short-lived credentials, multi-factor verification, and dynamic access—can yield even greater results.

Let’s take the lessons learned from securing users and reimagine how we secure machines and workloads.

From Static Secrets to Dynamic Credentials

Machine and workload identity has long been built on the shaky foundation of static secrets—API keys, passwords, or certificates stored in configuration files, environment variables, or local files. These secrets are often copied across systems, passed between teams, and reused in multiple environments, making them not only overly shared but also hard to track. This lack of visibility means that a single forgotten or mismanaged secret can become a point of entry for attackers.

The lesson from user security is clear: static secrets must be replaced with dynamic, ephemeral credentials that are:

  • Short-lived: Credentials should expire quickly to minimize exposure.
  • Context-aware: Access should be tied to specific tasks or environments.
  • Automatically rotated: Credentials should be issued, validated, and retired in real time, without human intervention.

This shift is about evolving from secret management to credential management, emphasizing real-time issuance and validation over static storage. Just as password managers gave way to passwordless authentication, dynamic credentialing represents the next step in securing machines and workloads.
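To make the shift concrete, here is a minimal sketch of dynamic credentialing in Python. Everything in it is illustrative: the HMAC-signed token format, the function names, and the in-process signing key are assumptions for the example; a production issuer would live behind an identity provider or secrets platform with the key in an HSM or KMS.

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-signing-key"  # illustrative only; never hard-code real keys

def issue_credential(workload: str, scope: str, ttl_seconds: int = 300) -> str:
    """Mint a short-lived, context-aware credential as a signed token."""
    claims = {"sub": workload, "scope": scope, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def validate_credential(token: str, required_scope: str) -> bool:
    """Verify signature, expiry, and scope at access time - no static storage."""
    payload, _, sig = token.partition(".")
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):  # timing-safe comparison
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["exp"] > time.time() and claims["scope"] == required_scope

token = issue_credential("billing-service", scope="read:invoices")
assert validate_credential(token, "read:invoices")       # fresh, correct scope
assert not validate_credential(token, "write:invoices")  # wrong scope rejected
```

Note how validation happens at the moment of use: there is nothing durable to leak, copy between teams, or forget in an environment variable.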

Attestation: The MFA for Machines and Workloads

For users, MFA became critical in verifying identity by requiring multiple factors: something you know, have, or are. Machines and workloads need an equivalent, and attestation fills that role.

Attestation acts as the MFA for machines and workloads by providing:

  1. Proof of identity: Verifying that a machine or workload is legitimate.
  2. Proof of context: Ensuring the workload’s environment and posture align with security policies.
  3. Proof of trustworthiness: Validating the workload operates within secure boundaries, such as hardware-backed enclaves or trusted runtimes.

Just as MFA blunted the damage from compromised passwords, attestation keeps compromised machines or workloads from gaining unauthorized access. It’s a dynamic, context-aware layer of security that aligns perfectly with Zero Trust principles.
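The three proofs above can be sketched as a single policy check. This is a toy model: the `Evidence` fields stand in for real attestation evidence (a TPM quote, an enclave report, a verified image digest), and the trusted sets stand in for operator policy.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    image_digest: str   # proof of identity: the exact code that is running
    environment: str    # proof of context: where and how it is running
    secure_boot: bool   # proof of trustworthiness: measured-boot / enclave state

# Hypothetical operator policy for this example.
TRUSTED_DIGESTS = {"sha256:abc123"}
ALLOWED_ENVIRONMENTS = {"prod-us-east"}

def attest(evidence: Evidence) -> bool:
    """Like MFA, all factors must pass - one factor alone is not enough."""
    return (evidence.image_digest in TRUSTED_DIGESTS
            and evidence.environment in ALLOWED_ENVIRONMENTS
            and evidence.secure_boot)

good = Evidence("sha256:abc123", "prod-us-east", True)
tampered = Evidence("sha256:abc123", "prod-us-east", False)
assert attest(good)
assert not attest(tampered)  # one failed factor denies access
```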

Zero Trust: Reclaiming the Original Vision

When Zero Trust was introduced, it was a design principle: “Never trust, always verify.” It challenged the idea of implicit trust and called for dynamic, contextual verification for every access request.

But somewhere along the way, marketers reduced Zero Trust to a buzzword, often pushing point solutions like VPN replacements or network segmentation tools that capture only a fraction of the original vision.

To reclaim Zero Trust, we need to:

  1. Treat all access as privileged access: Every request—whether from a user, machine, or workload—should be verified and granted the least privilege necessary.
  2. Apply dynamic credentialing: Replace static secrets with short-lived credentials tied to real-time context.
  3. Extend MFA principles to machines and workloads: Use attestation to continuously verify identity, context, and trustworthiness.
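The three steps above collapse into a single per-request decision. A minimal sketch of what “never trust, always verify” looks like at that decision point, with all names and parameters hypothetical:

```python
import time

def authorize(identity_verified: bool, credential_expiry: float,
              requested_scope: str, granted_scopes: set) -> bool:
    """Per-request decision: live identity check, unexpired credential,
    and an explicit grant of the exact scope requested - nothing implicit."""
    if not identity_verified:                 # no trust by network location
        return False
    if credential_expiry <= time.time():      # stale credentials are worthless
        return False
    return requested_scope in granted_scopes  # least privilege, per scope

now = time.time()
assert authorize(True, now + 60, "db:read", {"db:read"})
assert not authorize(True, now + 60, "db:write", {"db:read"})  # scope not granted
assert not authorize(True, now - 1, "db:read", {"db:read"})    # credential expired
```

The point of the sketch is what is absent: there is no “inside the perimeter” branch that skips the checks.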

Preparing for the Future: Agentic AI and the Need for Robust Machine and Workload Identity

As organizations increasingly adopt agentic AI systems—autonomous systems that execute tasks and make decisions on behalf of users—the need for robust machine and workload identity management becomes even more pressing. These systems often require delegated access to resources, APIs, and other identities. Without proper safeguards, they introduce new attack surfaces, including:

  • Over-permissioned access: Delegated tasks may unintentionally expose sensitive resources.
  • Static secrets misuse: Secrets stored in configuration files or environment variables can become high-value targets for attackers, especially when copied across systems.
  • Fragmented visibility: Secrets that are spread across teams or environments are nearly impossible to track, making it hard to detect misuse.

To securely deploy agentic AI, organizations must:

  1. Implement dynamic credentials: Ensure AI systems use short-lived, context-aware credentials that expire after each task, reducing the risk of abuse.
  2. Require attestation: Validate the AI’s environment, behavior, and identity before granting access, just as you would verify a trusted workload.
  3. Continuously monitor and revoke access: Apply zero standing privileges, ensuring access is granted only for specific tasks and revoked immediately afterward.
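One way to sketch zero standing privileges for an agent is a grant that exists only for the duration of a single task. The in-memory grant set below is purely illustrative; a real system would issue and revoke credentials through its identity platform.

```python
import contextlib

# Illustrative stand-in for an external authorization store.
ACTIVE_GRANTS: set = set()

@contextlib.contextmanager
def task_scoped_access(agent: str, scope: str):
    """Grant access for exactly one task, then revoke it - zero standing
    privileges, so nothing is left behind for an attacker to reuse."""
    grant = (agent, scope)
    ACTIVE_GRANTS.add(grant)
    try:
        yield grant
    finally:
        ACTIVE_GRANTS.discard(grant)  # revoked immediately, even on error

with task_scoped_access("report-agent", "read:sales") as grant:
    assert grant in ACTIVE_GRANTS            # access exists only inside the task
assert ("report-agent", "read:sales") not in ACTIVE_GRANTS  # gone afterward
```

The `finally` clause is the important design choice: revocation happens even if the agent’s task raises an exception, so a crashed or hijacked task cannot leave a live grant behind.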

Building strong foundations in machine and workload identity management today ensures you’re prepared for the growing complexity of AI-driven systems tomorrow.

A Call to Action for Security Practitioners

For years, we’ve made meaningful progress in securing users, from deploying MFA to replacing passwords with strong authenticators. These changes worked because they addressed fundamental flaws in how identity and access were managed.

Now, it’s time to ask: Where else can we apply these lessons?

Look for parallels:

  • If replacing passwords reduced breaches for users, then replacing static secrets with dynamic credentials for machines and workloads can deliver similar results.
  • If MFA improved user authentication, then attestation can deliver the same level of assurance for machine and workload identity.
  • If end-to-end encryption drastically improved the privacy of personal communications, ensuring messages are secure from sender to recipient, then robust authentication and encryption between processes, ensuring that only trusted workloads communicate, can bring the same assurance to machine-to-machine communications, protecting sensitive data and operations.

By identifying these parallels, we can break down silos, extend the impact of past successes, and create a truly secure-by-default environment.
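For the process-to-process parallel, mutual TLS is one well-established mechanism: each side presents a certificate, so both processes authenticate each other before any application data flows. A minimal sketch using Python’s standard `ssl` module; the file paths are placeholders for certificates you would provision, for example through an internal CA.

```python
import ssl

def mtls_server_context(certfile: str, keyfile: str, cafile: str) -> ssl.SSLContext:
    """Build a server-side TLS context that also demands a client certificate,
    so both workloads prove their identity to each other (mutual TLS)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)  # our own identity
    ctx.load_verify_locations(cafile=cafile)                 # CAs we trust
    ctx.verify_mode = ssl.CERT_REQUIRED   # reject clients without a valid cert
    return ctx

# Usage (paths are hypothetical):
# ctx = mtls_server_context("server.pem", "server.key", "internal-ca.pem")
# then wrap the listening socket with ctx.wrap_socket(...)
```

Setting `verify_mode = ssl.CERT_REQUIRED` is what turns ordinary TLS into mutual TLS: the handshake fails unless the peer also presents a certificate signed by a trusted CA.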

Final Thought

Security practitioners should always ask: Where have we already made meaningful progress, and where can we replicate that success?

If replacing passwords and adding MFA helped reduce user-related breaches, then replacing static secrets and adopting attestation for machines and workloads is a natural next step—one that is arguably quicker and easier to implement, given that machines and workloads don’t resist change.

Zero Trust was never meant to be a buzzword. It’s a call to rethink security from the ground up, applying proven principles to every layer of identity, human or machine. By embracing this approach, we can build systems that are not only resilient but truly secure by design.