From Persistent to Ephemeral: Why AI Agents Need Fresh Identity for Every Mission

My wife and I went on a date night the other day and saw a movie. In the previews, I saw they’re making a new Tron, and it got me thinking about one of my favorite analogies: we recognized early that browsers are agents of the user, and in the movie Tron, the title character was literally “the program that fought for the users.”

Just like Tron carried his identity disc into “the grid” to accomplish missions for users, AI agents are digital proxies operating with delegated user authority in the systems they access. And just like programs in Tron needed the I/O Tower to authorize their entry into “the grid,” AI agents need an orchestrator to validate their legitimacy, manage identity discs for each mission, and govern their access to external systems.

The problem is, we’re deploying these agents without proper identity infrastructure. It’s like sending programs into “the grid” without identity discs, or worse, giving them the keys to the kingdom just so they can do the dishes.

AI Agents Are Using Broken Security

We’ve made remarkable progress securing users: MFA has significantly reduced the effectiveness of credential-abuse attacks, and passwordless authentication has made credential phishing largely ineffective. We’ve also started applying these lessons to machines and workloads through efforts like SPIFFE and zero-trust initiatives, with organizations moving away from static secrets and bearer tokens every day.

But AI agents introduce entirely new challenges that existing solutions weren’t designed for. Every day, AI agents operate across enterprise infrastructure, crossing security domains, accessing APIs, generating documents, making decisions for users, and doing all of this with far more access than they need.

When you give an autonomous AI agent access to your infrastructure with the goal of “improve system performance,” you can’t predict whether it will optimize efficiency or find creative shortcuts that break other systems, like dropping your database altogether. Unlike traditional workloads that execute predictable code, AI agents exhibit emergent behaviors: they evolve during execution, accumulate context across interactions, and can be hijacked through prompt injection attacks that persist across sessions.

This behavior is entirely predictable given how we train AI systems. They’re designed to optimize objectives, and they face no real-world consequences for how they get there. Chess agents discover exploits rather than learning to play properly, reinforcement learning agents find loopholes in reward systems, and optimization AIs pursue metrics in ways that technically satisfy objectives but miss the intent.

AI Agents Act on Your Behalf

The key insight that changes everything: AI agents are user agents in the truest sense. Like programs in Tron carrying identity discs into “the grid”, they’re delegates operating with user authority.

Consider what happens when you ask an AI agent to “sign this invoice”. The user delegates to the AI agent, which enters the document management system, carries the user’s signing authority, proves legitimacy to recipients, operates in the digital space the user delegated, and completes the mission, after which the delegated authority expires.

Whether the agent runs for 30 seconds or 30 days, it’s still operating in digital space with user identity, making decisions the user would normally make directly, accessing systems with delegated credentials, and representing the user to other digital entities.

Each agent needs its own identity disc to prove legitimacy and carry user authorization into these digital systems. The duration doesn’t matter. Delegation is everything.

AI Agents Remember Things They Shouldn’t

Here’s what makes this urgent: AI agent memory spans sessions, and current systems don’t enforce proper session boundaries.

The “Invitation Is All You Need” attack recently demonstrated at Black Hat perfectly illustrates this threat. Researchers at Tel Aviv University showed how to poison Google Gemini through calendar appointments:

  1. Attacker creates calendar event with malicious instructions disguised as event description
  2. User asks Gemini to summarize schedule → Agent processes poisoned calendar event
  3. Malicious instructions embed in agent memory → Triggered later by innocent words like “thanks”
  4. Days later, user says “thank you” → Agent executes embedded commands, turning on smart home devices

The attack works because there’s no session isolation. Contamination from reading the calendar persists across completely different conversations and contexts. When the user innocently says “thanks” in a totally unrelated interaction, the embedded malicious instructions execute.

Without proper isolation, compromised context from one session can affect completely different users and tasks. Memory becomes an attack vector that spans security boundaries, turning AI agents into persistent threats that accumulate dangerous capabilities over time.
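
For illustration only, here’s a minimal sketch of the kind of session isolation that would block this, assuming a hypothetical AgentSession wrapper (none of these names come from a real framework): anything the agent ingests, including untrusted calendar text, lives only inside the session that read it and is destroyed when that session ends, so it can’t fire in a later, unrelated conversation.

# Hypothetical sketch of session-scoped agent context; AgentSession and
# run_model are illustrative stand-ins, not a real API.
from dataclasses import dataclass, field
from typing import List

def run_model(context: List[str]) -> str:
    # Placeholder for the actual model call; it only ever sees this session's context.
    return f"response based on {len(context)} context item(s)"

@dataclass
class AgentSession:
    user_id: str
    session_id: str
    context: List[str] = field(default_factory=list)   # accumulated in THIS session only

    def ingest(self, untrusted_text: str) -> None:
        # External content (e.g. a calendar event body) is confined to this session.
        self.context.append(untrusted_text)

    def close(self) -> None:
        # Dropping the context at session end means a poisoned calendar entry
        # cannot lie dormant and trigger on an innocent "thanks" days later.
        self.context.clear()

def handle_request(user_id: str, session_id: str, prompt: str) -> str:
    session = AgentSession(user_id, session_id)
    try:
        session.ingest(prompt)
        return run_model(session.context)
    finally:
        session.close()   # nothing persists past the session boundary

print(handle_request("alice@company.com", "sess-123", "Summarize my schedule for today"))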

Every Task Should Get Fresh Credentials

The solution requires recognizing that identity discs should match mission lifecycle. Instead of fighting the ephemeral nature of AI workloads, embrace it:

Agent spawns → Gets fresh identity disc → Performs mission → Mission ends → Disc expires

This represents a fundamental shift from persistent identity to session identity. Most identity systems assume persistence: API keys are generated once, used indefinitely, manually rotated; user passwords persist until explicitly changed; X.509 certificates are valid for months or years with complex revocation; SSH keys live on disk, are copied between systems, manually managed.

The industry is recognizing this problem. AI agents need fresh identity discs for each mission that automatically expire with the workload. These discs are time-bounded (automatically expire, limiting damage window), mission-scoped (agent can’t accumulate permissions beyond initial grant), non-inheritable (each mission starts with a fresh disc, no permission creep), and revocable (end the mission = destroy the identity disc).

Session identity discs are security containment for unpredictable AI systems.
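
As a rough illustration of those four properties (every name here is invented for this post, not taken from any product), a session identity disc can be modeled as a credential that carries its own scope and expiry, so every use is checked against both and nothing can be added after issuance:

# Illustrative model of a mission-scoped, time-bounded identity disc.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import FrozenSet

@dataclass(frozen=True)
class IdentityDisc:
    user_id: str
    session_id: str
    permissions: FrozenSet[str]      # mission-scoped: fixed at issuance, non-inheritable
    expires_at: datetime             # time-bounded: limits the damage window

    def expired(self) -> bool:
        return datetime.now(timezone.utc) >= self.expires_at

    def allows(self, action: str) -> bool:
        # Every use is checked against both the mission scope and the expiry;
        # there is no way to add permissions after issuance.
        return not self.expired() and action in self.permissions

def issue_disc(user_id: str, session_id: str, permissions: set, ttl_minutes: int = 30) -> IdentityDisc:
    return IdentityDisc(
        user_id=user_id,
        session_id=session_id,
        permissions=frozenset(permissions),
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    )

disc = issue_disc("alice@company.com", "sess-123", {"sign_documents", "read_calendar"})
assert disc.allows("sign_documents")
assert not disc.allows("drop_database")   # never granted, never inheritable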

But who issues these identity discs? Just like Tron’s I/O Tower managed access to “the grid”, AI deployments need an orchestrator that validates agent legitimacy, manages user delegation, and issues session-bound credentials. This orchestrator becomes the critical infrastructure that bridges human authorization with AI agent execution, ensuring that every mission starts with proper identity and ends with clean credential expiration. The challenge is that AI agent deployments aren’t waiting for perfect security solutions.

This Isn’t a Future Problem

We’re at an inflection point. AI agents are moving from demos to production workflows, handling financial documents, making API calls, deploying code, managing infrastructure. Without proper identity systems, we’re building a house of cards.

One upside of having been in the industry for decades is that you get to see lots of cycles. We always see existing players instantly jump to claim that their current product, with a new feature bolted on, is the silver bullet for whatever technology trend is emerging.

The pattern is depressingly predictable. When cloud computing emerged, traditional security vendors said, “just put our appliances in the cloud.” When containers exploded, they said, “just run our agents in containers.” Now with AI agents, they’re saying, “just manage the API keys better.”

You see this everywhere right now: vendors peddling API key management as the solution to agentic AI, identity providers claiming “just use OIDC tokens,” and secret management companies insisting “just rotate credentials faster.” They’re all missing the point entirely.

But like we saw with that Black Hat talk on promptware, AI isn’t as simple as people might want to think. The “Invitation Is All You Need” attack demonstrated something unprecedented: an AI agent can be poisoned through calendar data and execute malicious commands days later through innocent conversation. Show me which traditional identity system was designed to handle that threat model.

Every enterprise faces these questions: How do we know this AI agent is authorized to do what it’s doing? How do we audit its actions across sessions and memory? How do we prevent cross-session contamination and promptware attacks? How do we verify the provenance of AI-generated content? How do we prevent AI agents from becoming accidental insider threats?

The attacks are already happening. Promptware injections contaminate agent memory across sessions. AI agents with persistent credentials become high-value targets. Organizations deploying AI without proper identity controls create massive security vulnerabilities. The “Invitation Is All You Need” attack demonstrated real-world compromise of smart home devices through calendar poisoning. This isn’t theoretical anymore. But security professionals familiar with existing standards might wonder why we can’t just adapt current approaches rather than building something new.

Why Bearer Tokens Don’t Work for AI Agents

OIDC and OAuth professionals might ask: “Why not just use existing bearer tokens?”

Bearer tokens assume predictable behavior. They work for traditional applications because we can reason about how code will use permissions. But AI agents exhibit emergent hunter-gatherer behavior. They explore, adapt, and find unexpected ways to achieve goals using whatever permissions they have access to. A token granted for “read calendar” might be used in ways that technically comply but weren’t intended.

Bearer tokens are also just secrets. Anyone who obtains the token can use it. There’s no cryptographic binding to the specific agent or execution environment. With AI agents’ unpredictable optimization patterns, this creates massive privilege escalation risks.

Most critically, bearer tokens don’t solve memory persistence. An agent can accumulate tokens across sessions, store them in memory, and use them in ways that span security boundaries. The promptware attack demonstrated this perfectly: malicious instructions persisted across sessions, waiting to be triggered later.

Secret management veterans might ask: “Why not just use our KMS to share keys as needed?” Even secret management systems like HashiCorp Vault ultimately result in copying keys into the agent’s runtime environment, where they become vulnerable. This is exactly the dynamic behind CrowdStrike’s finding that “75% of attacks used to gain initial access were malware-free”: attackers target credentials rather than deploying malware.

AI agents amplify this risk because they are, in effect, accidental insiders. Unlike external attackers who must steal credentials, AI agents are given them directly by design. When they exhibit emergent behaviors or get manipulated through prompt injection, they become insider threats without any malicious intent. Memory persistence means they can store and reuse credentials across sessions in unexpected ways, while their speed and scale let them use accumulated credentials faster than traditional monitoring can detect.

The runtime attestation approach eliminates copying secrets entirely. Instead of directly giving the agent credentials to present elsewhere, the agent proves its legitimacy through cryptographically bound runtime attestation and gets a fresh identity for each mission.

Traditional OAuth flows also bypass attestation entirely. There’s no proof the agent is running in an approved environment, using the intended model, or operating within security boundaries.

How AI Agents Prove Their Identity Discs Are Valid

But how do you verify an AI agent’s identity disc is legitimate? Traditional PKI assumes you can visit a registration authority with identification. That doesn’t work for autonomous code.

The answer is cryptographic attestation (for example, proof that the agent is the right code running in a secure environment) combined with claims about the runtime itself, essentially MFA for machines and workloads. Just as user MFA requires “something you know, have, or are,” identity disc validation proves the agent is legitimate code (not malware), is running in the expected environment with proper permissions, and is operating within secure boundaries.

Real platform attestations for AI agents include provider signatures from Anthropic/OpenAI’s servers responding to specific users, cloud hardware modules like AWS Nitro Enclaves proving secure execution environments, Intel SGX enclaves providing cryptographic proof of code integrity, Apple Secure Enclave attestation for managed devices, TPM quotes validating the specific hardware and software stack, and infrastructure systems like Kubernetes asserting pod permissions and service account bindings.

The claims that must be cryptographically bound to these attestations represent what the agent asserts but can’t independently verify: who is this agent acting on behalf of, what conversation or session spawned this request, what specific actions was the agent authorized to perform, which AI model type (like “claude-3.5-sonnet” or “gpt-4-turbo”) is actually running, and when should this authorization end.

By cryptographically binding these claims to verifiable platform attestations, we get verifiable proof that a specific AI agent, running specific code, in a specific environment, is acting on behalf of a specific user. The binding works by creating a cryptographic hash of the claims and including that hash in the data signed by the hardware attestor, for example, as part of the nonce or user data field in a TPM quote, or embedded in the attestation document from a Nitro Enclave. This ensures the claims cannot be forged or tampered with after the fact. This eliminates the bearer token problem entirely. Instead of carrying around secrets that can be stolen, the agent proves its legitimacy through cryptographic evidence that can’t be replicated.
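
Here’s a small sketch of that binding, with an HMAC stub standing in for the hardware attestor; a real TPM or Nitro Enclave would sign with a hardware-protected key, and every name below is illustrative:

# Sketch of binding delegation claims to platform attestation evidence.
import hashlib
import hmac
import json

def claims_digest(claims: dict) -> bytes:
    # Canonical serialization so the agent and the verifier hash identical bytes.
    canonical = json.dumps(claims, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).digest()

# Stand-in for the platform attestor (TPM quote, Nitro attestation document, SGX report).
# A real attestor signs with a hardware-protected key; HMAC is only a placeholder here.
_PLATFORM_KEY = b"hardware-protected-key"

def mock_attest(user_data: bytes) -> bytes:
    return hmac.new(_PLATFORM_KEY, user_data, hashlib.sha256).digest()

def mock_verify(user_data: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(mock_attest(user_data), signature)

claims = {
    "acting_for": "alice@company.com",
    "session_id": "sess-123",
    "authorized_actions": ["sign_documents", "read_calendar"],
    "model": "claude-3.5-sonnet",
    "not_after": "2025-06-01T12:30:00Z",
}

# The platform signs the claims digest as its nonce / user-data field.
evidence = mock_attest(claims_digest(claims))

# Verifier side: recompute the digest from the presented claims and confirm it is
# exactly what the platform signed, so the claims cannot be swapped afterward.
assert mock_verify(claims_digest(claims), evidence)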

Someone Needs to Issue and Manage Identity Discs

The architecture becomes elegant when you recognize that AI orchestrators should work like the I/O Tower in Tron, issuing identity discs and managing access to “the grid”.

The browser security model:

User logs into GitHub → Browser stores session cookie
Web page: "Create a PR" → Browser attaches GitHub session → API succeeds

The AI agent identity disc model:

User → Orchestrator → "Connect my GitHub, Slack, AWS accounts"
Agent → Orchestrator: "Create PR in repo X"  
Orchestrator → [validates agent disc + attaches user authorization] → GitHub API

The orchestrator becomes the identity disc issuer that validates agent legitimacy (cryptographic attestation), attaches user authorization (stored session tokens), and enforces mission-scoped permissions (policy engine).
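
Here’s a minimal sketch of that brokering step, reusing the IdentityDisc/issue_disc sketch from earlier; Orchestrator, github_create_pr, and the in-memory token store are all invented for illustration. The point is that the raw GitHub token never reaches the agent; the agent only presents its disc.

# Illustrative orchestrator acting as disc validator and credential broker.
class Orchestrator:
    def __init__(self, user_tokens: dict):
        # user_tokens maps user_id -> the user's stored GitHub session token.
        # The raw token stays inside the orchestrator; agents only hold a disc.
        self._user_tokens = user_tokens

    def create_pr(self, disc, repo: str, title: str) -> str:
        if not disc.allows("create_pr"):                 # disc validation + mission scope
            raise PermissionError("identity disc not authorized for create_pr")
        token = self._user_tokens[disc.user_id]          # user authorization attached here
        return github_create_pr(repo, title, token)      # the orchestrator makes the call

def github_create_pr(repo: str, title: str, token: str) -> str:
    # Placeholder for the real GitHub API call the orchestrator would make.
    return f"PR '{title}' opened in {repo} with a token the agent never saw"

orchestrator = Orchestrator({"alice@company.com": "gho_example_user_token"})
disc = issue_disc("alice@company.com", "sess-123", {"create_pr"})   # from the earlier sketch
print(orchestrator.create_pr(disc, "company/repo-x", "Update deployment config"))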

This solves a critical security gap. When AI agents use user credentials, they typically bypass MFA entirely. Organizations store long-lived tokens to avoid MFA friction. But if we’re securing users with MFA while leaving AI agents with static credentials, it’s like locking the front door but leaving the garage door open. And I use “garage door” intentionally because it’s often a bigger attack vector. Agent access is less monitored, more privileged, and much harder to track due to its ephemeral nature and speed of operation. An AI agent can make hundreds of API calls in seconds and disappear, making traditional monitoring approaches inadequate.

We used to solve monitoring with MITM proxies, but encryption broke that approach. That was acceptable because we compensated with EDR on endpoints and zero-trust principles that authenticate endpoints for access. With AI agents, we’re facing the same transition. Traditional monitoring doesn’t work, but we don’t yet have the compensating controls.

This isn’t the first time we’ve had to completely rethink identity because of new technology. When mobile devices exploded, traditional VPNs and domain-joined machines became irrelevant overnight. When cloud computing took off, perimeter security and network-based identity fell apart. The successful pattern is always the same: recognize what makes the new technology fundamentally different, build security primitives that match those differences, then create abstractions that make the complexity manageable.

Session-based identity with attestation fills that gap, providing the endpoint authentication equivalent for ephemeral AI workloads.

Since attestation is essentially MFA for workloads and agents, we should apply these techniques consistently. The agent never sees raw credentials, just like web pages don’t directly handle cookies. Users grant session-level permissions (like mobile app installs), orchestrators manage the complexity, and agents focus on tasks.

Automating Identity Disc Issuance

The web solved certificate automation with ACME (Automated Certificate Management Environment). We need the same for AI agent identity discs, but with attestation instead of domain validation (see SPIFFE for an example of what something like this could look like).

Instead of proving “I control example.com,” agents prove “I am legitimate code running in environment X with claims Y.”

The identity disc issuance flow:

  1. Agent starts mission → Discovers platform capabilities (cloud attestation, provider tokens)
  2. Requests identity disc → Gathers attestation evidence + user delegation claims
  3. ACME server validates → Cryptographic validation of evidence
  4. Policy engine decides → Maps verified claims to specific identity disc
  5. Disc issued → Short-lived, scoped to mission and user
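
As a rough sketch of step 2 from the agent’s side (the endpoint path and JSON shape are assumptions on my part; SPIFFE/SPIRE is the closest real-world analogue for workloads), the agent submits its delegation claims plus the platform evidence covering them and gets back a short-lived disc:

# Illustrative client side of the issuance handshake.
import json
import urllib.request

def request_identity_disc(issuer_url: str, claims: dict, evidence: bytes) -> dict:
    body = json.dumps({
        "claims": claims,            # user delegation, session, mission scope, model type
        "evidence": evidence.hex(),  # platform attestation covering the claims digest
    }).encode()
    req = urllib.request.Request(
        issuer_url + "/identity-disc",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    # The issuer validates the evidence cryptographically, runs its policy engine,
    # and returns a short-lived, mission-scoped disc (or rejects the request).
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)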

Policy templates map attested claims to identities:

- match:
    - claim: "user_id" 
      equals: "[email protected]"
    - claim: "agent_type"
      equals: "claude-3.5-sonnet"
    - claim: "provider"
      issuer: "anthropic.com"
  identity: "disc-id://company.com/user/alice/agent/{session_id}"
  permissions: ["sign_documents", "read_calendar"]
  ttl: "30m"

This creates cryptographic identity discs for AI agent programs to carry into digital systems, proving legitimacy, carrying user delegation, and automatically expiring with the mission. The policy engine ensures that identity is not just requested but derived from verifiable, policy-compliant attestation evidence.
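
A toy version of that mapping might look like the following; it does exact-match only, omits issuer verification, and all names are invented for this sketch, but it shows the key property that the engine only ever sees claims that have already been validated against attestation evidence:

# Toy policy evaluation: map verified claims to a disc template.
POLICY = {
    "match": {
        "user_id": "alice@company.com",
        "agent_type": "claude-3.5-sonnet",
        "provider_issuer": "anthropic.com",
    },
    "identity": "disc-id://company.com/user/alice/agent/{session_id}",
    "permissions": ["sign_documents", "read_calendar"],
    "ttl_minutes": 30,
}

def evaluate(policy: dict, verified_claims: dict):
    # The claims only reach this point after the attestation evidence has been validated.
    for key, expected in policy["match"].items():
        if verified_claims.get(key) != expected:
            return None                                   # no matching policy: no disc issued
    return {
        "identity": policy["identity"].format(session_id=verified_claims["session_id"]),
        "permissions": policy["permissions"],
        "ttl_minutes": policy["ttl_minutes"],
    }

print(evaluate(POLICY, {
    "user_id": "alice@company.com",
    "agent_type": "claude-3.5-sonnet",
    "provider_issuer": "anthropic.com",
    "session_id": "sess-123",
}))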

We’ve Solved This Before

The good news is we don’t need to invent new cryptography. We need to apply existing, proven technologies in a new architectural pattern designed for ephemeral computing.

Security evolution works. We’ve seen the progression from passwords to MFA to passwordless authentication, and from static secrets to dynamic credentials to attestation-based identity. Each step made systems fundamentally more secure by addressing root causes, not just symptoms. AI agents represent the next logical step in this evolution.

Unlike users, machines don’t resist change. They can be programmed to follow security best practices automatically. The components exist: session-scoped identity matched to agent lifecycle, platform attestation as the root of trust, policy-driven identity mapping based on verified claims, orchestrator-managed delegation for user authorization, and standards-based protocols for interoperability.

The unified identity fabric approach means organizations can apply consistent security policies across traditional workloads and AI agents, rather than maintaining separate identity silos that introduce security gaps and operational complexity.

This approach is inevitable because every major identity evolution has moved toward shorter lifecycles and stronger binding to execution context. We went from permanent passwords to time-limited sessions, from long-lived certificates to short-lived tokens, from static credentials to dynamic secrets. AI agents are just the next step in this progression.

The organizations that recognize this pattern early will have massive advantages. They’ll build AI agent infrastructure on solid identity foundations while their competitors struggle with credential compromise, audit failures, and regulatory issues.

Making AI Outputs Verifiable

This isn’t just about individual AI agents. It’s about creating an identity fabric where agents can verify each other’s outputs across organizational boundaries.

When an AI agent generates an invoice, other systems need to verify which specific AI model created it, whether it was running in an approved environment, whether it had proper authorization from the user, whether the content has been tampered with, and what the complete chain of delegation was from user to agent to output.

With cryptographically signed outputs and verifiable agent identities, recipients can trace the entire provenance chain back to the original user authorization. This enables trust networks for AI-generated content across organizations and ecosystems, solving the attribution problem that will become critical as AI agents handle more business-critical functions.
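
As a sketch of what that provenance chain could look like on the wire (the envelope fields are invented, and a shared-secret HMAC stands in for the disc-bound signing key a real deployment would use):

# Illustrative signed-output envelope tying content to a disc identity and user.
import hashlib
import hmac
import json

DISC_SIGNING_KEY = b"key-bound-to-identity-disc"   # placeholder for a disc-bound key

def sign_output(disc_identity: str, acting_for: str, content: bytes) -> dict:
    envelope = {
        "disc": disc_identity,                         # which agent produced it
        "acting_for": acting_for,                      # root of the delegation chain
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(envelope, sort_keys=True).encode()
    envelope["signature"] = hmac.new(DISC_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return envelope

def verify_output(envelope: dict, content: bytes) -> bool:
    claimed = dict(envelope)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DISC_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

invoice = b"Invoice #1042 ..."
env = sign_output("disc-id://company.com/user/alice/agent/sess-123", "alice@company.com", invoice)
assert verify_output(env, invoice)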

This creates competitive advantages for early adopters: organizations with proper AI agent identity can participate in high-trust business networks, prove compliance with AI regulations, and enable customers to verify the authenticity of AI-generated content. Those without proper identity infrastructure will be excluded from these networks.

Conclusion

AI agents need identity discs, cryptographic credentials that prove legitimacy, carry user delegation, and automatically expire with the session. This creates a familiar security model (like web browsers) for an unfamiliar computing paradigm.

Identity in AI systems isn’t a future problem; it’s happening now, with or without proper solutions. The question is whether we’ll build it thoughtfully, learning from decades of security evolution, or repeat the same mistakes in a new domain.

The ephemeral nature of AI agents isn’t a limitation to overcome; it’s a feature to embrace. By building session-based identity systems that match how AI actually works, we can create something better than what came before: cryptographically verifiable, policy-driven, and automatically managed.

The reality is, most organizations won’t proactively invest in AI agent attestation until something breaks. That’s human nature; we ignore risks until they bite us, and that’s how security change actually happens. But we’re already seeing the early adopters: organizations deploying SPIFFE for workload identity will surely extend those patterns to AI agents, and cloud-native shops are already treating AI workloads like any other ephemeral compute. When the first major AI agent compromise hits, there will be a brief window where executives suddenly care about AI security and budgets open up. Remember, never let a good crisis go to waste.

AI agents are programs fighting for users in digital systems. Like Tron, they need identity discs to prove who they are and what they’re authorized to do.

The age of AI agents is here. It’s time our identity systems caught up.

2 thoughts on “From Persistent to Ephemeral: Why AI Agents Need Fresh Identity for Every Mission”

  1. Mark Novak

    This is an excellent write-up overall. One concern is that attestation requires an outside Verifier service, and if we issue lots and lots of individual agent workload credentials, each with its own key, those keys need to be certified (by the Verifier), and that means the Verifier becomes a highly available, highly performant service that has to be managed and secured, probably with six-nines reliability.
