Category Archives: AI

When AI Injects Liquidity Into Skills: What Happens to the Middle Tier?

In financial markets, liquidity changes everything. Once-illiquid assets become tradable. New players flood in. Old hierarchies collapse. Value flows faster and differently.

The same thing is now happening to technical skill.

Where expertise was once scarce and slowly accumulated, AI is injecting liquidity into the skill market. Execution is faster. Access is broader. Barriers are lower. Like in finance, this shift is reshaping the middle of the market in ways that are often painful and confusing.

This is not the end of software jobs. It is a repricing. Those who understand the dynamics of liquidity, and how unevenly it spreads, can not only navigate this change but succeed because of it rather than be displaced by it.

The Skill Market Before AI

Historically, software development was built on a steep skill curve. It took years to develop the knowledge required to write performant, secure, maintainable code. Organizations reflected this with layered teams: junior developers handled simple tickets, mid-tier engineers carried the delivery load, and senior engineers architected and reviewed.

This mirrored an illiquid market:

  • Knowledge was siloed, often in the heads of senior devs or buried in internal wikis.
  • Feedback loops were slow, with code reviews, QA gates, and manual debugging.
  • Skill mobility was constrained, so career progression followed a fixed ladder over time.

In this world, mid-tier developers were essential. They were the throughput engine of most teams. Not yet strategic, but experienced enough to be autonomous. Scarcity of skill ensured their value.

AI Changes the Market: Injecting Skill Liquidity

Then came the shift: GitHub Copilot, ChatGPT, Claude, Gemini, Cursor, Windsurf, and others.

These tools do more than suggest code. They:

  • Fill in syntax and structural gaps.
  • Scaffold infrastructure and documentation.
  • Explain APIs and recommend architectural patterns.
  • Automatically refactor and write tests.

They reduce the friction of execution. GitHub’s research shows developers using Copilot complete tasks up to 55 percent faster (GitHub, 2022). Similar gains are reported elsewhere.

They make skill more accessible, especially to those who lacked it previously:

  • Junior developers can now produce meaningful output faster than ever before.
  • Non-traditional developers can enter workflows that were once gated.
  • Senior developers can expand their span of control and iterate more broadly.

In market terms, AI liquifies skill:

  • The bid-ask spread between junior and mid-level capability narrows; that is, the gap between what juniors can do and what mids were once needed for shrinks.
  • Skill becomes less bound by time-in-seat or institutional memory.
  • More participants can engage productively in the software creation economy. Adoption varies: large tech firms often lead, while smaller companies and legacy-heavy sectors like banking and healthcare face higher integration hurdles. Even so, the trend toward skill liquidity is clear.

This shift is not happening evenly. That is where the real opportunity lies.

The arbitrage today, the chance to capitalize on gaps in how quickly teams adopt AI, is not just in the tools themselves. It is in the opportunity spread: the gap between what AI makes possible and who is effectively using it.

Just like in markets, early adopters of new liquidity mechanisms gain a structural advantage. Teams that build AI-augmented workflows, shared prompt libraries, and internal copilots are operating on a different cost and speed curve than those still relying on traditional experience-based workflows.

This gap will not last forever. But while it exists, it offers meaningful leverage for individuals, teams, and organizations.

Importantly, AI tools amplify productivity differently across experience levels:

  • Juniors gain access to knowledge and patterns previously acquired only through years of experience, helping them produce higher-quality work faster.
  • Senior developers, with their deeper context and better judgment, often extract even greater value from these tools, using them to implement complex solutions, explore multiple approaches simultaneously, and extend their architectural vision across more projects.
  • Both ends of the spectrum see productivity gains, but in different ways: juniors become more capable, while seniors become even more leveraged.

This amplification effect creates acute pressure on the middle tier, caught between increasingly capable juniors and hyper-productive seniors.

Why the Middle Tier Feels the Squeeze

There is also a practical reason: cost control.

As AI raises the baseline productivity of junior developers, companies see an opportunity to rebalance toward lower-compensated talent. Where a mid-level or senior engineer was once needed to maintain velocity and quality, AI makes it possible for a well-supported junior to do more.

Companies are increasingly betting that AI tools plus cheaper talent are more efficient than maintaining traditional team structures. This shift isn’t without risks: AI-generated code can introduce errors (studies suggest 20-30 percent may need human fixes), and over-reliance on juniors without robust oversight can compromise quality. Experienced developers remain critical to guide and refine these workflows. Still, that bet is paying off, especially when companies invest in prompt engineering, onboarding, internal platforms, and support tools.

But that “well-supported junior” is not automatic. It requires experienced developers to build and maintain that support system. Mentorship, internal frameworks, curated AI toolchains, and effective onboarding still depend on human judgment and care.

And while AI can augment execution, many real-world systems still depend on context-heavy problem solving, legacy code familiarity, and judgment, all of which often live with experienced, mid-level developers.

What Happens to the Middle Tier? Compression, Specialization, and Realignment

As in finance, when liquidity rises:

  • Margins compress. It becomes harder to justify mid-level compensation when similar output is available elsewhere.
  • Roles consolidate. Fewer people are needed to ship the same amount of code.
  • Value shifts. Execution is commoditized, while orchestration, judgment, and leverage rise in importance.
  • New specializations emerge. Just as electronic trading created demand for algorithmic strategists and execution specialists, AI is creating niches for prompt engineers, AI workflow designers, and domain-specific AI specialists.

This helps explain recent tech layoffs. Macroeconomic tightening and overhiring played a role, but so did something more subtle: AI-induced skill compression.

Layoffs often disproportionately affect mid-level developers:

  • Juniors are cheaper, and AI makes them more effective.
  • Seniors are harder to replace and more likely to direct or shape how AI is used.
  • Mid-tiers, once the backbone of execution, now face pressure from both sides.

Duolingo’s restructuring, for example, eliminated many contractor-heavy roles after adopting AI for content generation (Bloomberg, 2023). IBM has projected that up to 30 percent of back-office roles may be replaced by AI over five years (IBM, 2023). These moves reflect a larger market correction.

These examples underscore how companies are re-evaluating where skill and value live, and how automation enables workforce reshaping, sometimes at surprising layers.

The middle tier does not disappear. It gets repriced and redefined. The skills that remain valuable shift away from throughput toward infrastructure, context, and enablement.

Historical Parallel: The Rise of Electronic Trading

In the 1990s and early 2000s, financial markets underwent a similar transformation. Human traders were replaced by electronic systems and algorithms.

Execution became commoditized. Speed and scale mattered more than tenure. Mid-level traders were squeezed, unless they could reinvent themselves as quant strategists, product designers, or platform builders.

Software development is now echoing that shift.

AI is the electronic trading of code. It:

  • Reduces the skill premium on execution.
  • Increases velocity and throughput.
  • Rewards those who design, direct, or amplify workflows, not just those who carry them out.

The New Playbook: Think Like a Market Maker

If you are a developer today, the key question is no longer “How good is my code?” It is “How much leverage do I create for others and for the system?”

Here is how to thrive in this new market:

  1. Become a Force Multiplier
    Build internal tools. Create reusable prompts. Develop standard workflows. A mid-tier developer who builds a shared test and prompt suite for new APIs can significantly reduce team ramp-up time, with some teams reporting up to 40 percent gains (e.g., internal studies at tech firms like Atlassian).
  2. Shift from Throughput to Leverage
    Own end-to-end delivery. Understand the business context. Use AI to compress the time from problem to insight to deployment.
  3. Curate and Coach
    AI raises the floor, but it still needs editorial control. Be the one who sets quality standards, improves outputs, and helps others adopt AI effectively.
  4. Build Liquidity Infrastructure
    Invest in internal copilots, shared prompt repositories, and domain-specific agents. These are the new frameworks for scaling productivity; a minimal sketch of a shared prompt library follows below.
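To make the "liquidity infrastructure" idea concrete, here is a minimal sketch of a shared prompt library, assuming prompts are checked into a team repository as JSON templates; the file layout and field names are illustrative, not a standard.

```python
# prompt_library.py - minimal sketch of a team-shared prompt library.
# Assumption: templates live as JSON files in a shared "prompts/" directory
# (for example, a git repo every engineer can contribute to and review).
import json
from pathlib import Path

PROMPT_DIR = Path("prompts")  # hypothetical shared location


def load_prompt(name: str) -> dict:
    """Load a prompt template and its metadata by name."""
    return json.loads((PROMPT_DIR / f"{name}.json").read_text())


def render_prompt(name: str, **params: str) -> str:
    """Fill a template's placeholders with task-specific values."""
    return load_prompt(name)["template"].format(**params)


if __name__ == "__main__":
    # A team might check in prompts/api_test_suite.json containing:
    # {"template": "Write pytest tests for the {endpoint} endpoint of {service}.",
    #  "owner": "platform-team", "reviewed": "2025-05-01"}
    print(render_prompt("api_test_suite", endpoint="/v1/orders", service="billing"))
```

The value is less in the code than in the shared, reviewed artifact: once prompts are versioned like any other code, they can be improved, measured, and reused across the team.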

What Leaders Should Do

Engineering leaders must reframe how they build and evaluate teams:

  • Rethink composition. Combine AI-augmented juniors, orchestration-savvy mids, and high-leverage seniors.
  • Promote skill liquidity. Create reusable workflows and support systems that reduce onboarding friction and accelerate feedback.
  • Invest in enablement. Treat prompt ops and AI tooling as seriously as CI/CD and observability.
  • Evaluate leverage, not volume. Focus on unblocked throughput, internal reuse, and enablement, not just tickets closed.

Leaders who create liquidity, not just consume it, will define the next wave of engineering excellence.

Conclusion: Orchestrators Will Win

AI has not eliminated the need for developers. It has eliminated the assumption that skill value increases linearly with time and tenure.

In financial markets, liquidity does not destroy value. It redistributes it and exposes where the leverage lives.

The same shift is happening in software. Those who thrive will be the ones who enable the flow of skill, knowledge, and value. That means orchestration, amplification, and infrastructure.

In markets, liquidity rewards the ones who create it.
In engineering, the same will now be true.

The Rise of the Accidental Insider and the AI Attacker

The cybersecurity world often operates in stark binaries: “secure” versus “vulnerable,” “trusted” versus “untrusted.” We’ve built entire security paradigms around these crisp distinctions. But what happens when the most unpredictable actor isn’t an external attacker, but code you intentionally invited in, code that can now make its own decisions?

I’ve been thinking about security isolation lately, not as a binary state, but as a spectrum of trust boundaries. Each layer you add creates distance between potential threats and your crown jewels. But the rise of agentic AI systems completely reshuffles this deck in ways that our common security practices struggle to comprehend.

Why Containers Aren’t Fortresses

Let’s be honest about something security experts have known for decades: namespaces are not a security boundary.

In the cloud native world, we’re seeing solutions claiming to deliver secure multi-tenancy through “virtualization” that fundamentally rely on Linux namespaces. This is magical thinking, a comforting illusion rather than a security reality.

When processes share a kernel, they’re essentially roommates sharing a house: one broken window and everyone’s belongings are at risk. One kernel bug means game over for all workloads on that host.

Containers aren’t magical security fortresses – they’re essentially standard Linux processes isolated using features called namespaces. Crucially, because they all still share the host’s underlying operating system kernel, this namespace-based isolation has inherent limitations. Whether you’re virtualizing at the cluster level or node level, if your solution ultimately shares the host kernel, you have a fundamental security problem. Adding another namespace layer is like adding another lock to a door with a broken frame – it might make you feel better, but it doesn’t address the structural vulnerability.

The problem isn’t a lack of namespaces – it’s the shared kernel itself. User namespaces (dating back to Linux 3.6 in 2013) don’t fundamentally change this equation. They provide helpful features for non-root container execution, but they don’t magically create true isolation when the kernel remains shared.

This reality creates a natural hierarchy of isolation strength:

  1. Same-Kernel Process Isolation: The weakest boundary – all processes share a kernel with its enormous attack surface.
  2. Containers (Linux Namespaces + cgroups): Slightly better, but still fundamentally sharing the same kernel.
  3. Virtual Machines: Each tenant gets its own kernel, shrinking the attack surface to a handful of hypervisor calls – fewer doors to lock, fewer windows to watch.
  4. Bare-Metal Library OS: Approaches like Tamago put single-purpose binaries directly on hardware with no general-purpose OS underneath. The attack surface shrinks dramatically.
  5. Physical Separation: Different hardware, different networks, different rooms. When nothing else will do, air gaps still work.

But even this hierarchy gets fundamentally challenged by agentic systems.

The Accidental Insider Meets the Deliberate Attacker

Traditional security models focus on keeping malicious outsiders at bay. Advanced AI systems introduce two entirely new risk profiles: the accidental insider and the AI-augmented attacker.

Like a well-meaning but occasionally confused employee with superuser access, benign agentic systems don’t intend harm – they just occasionally misinterpret their objectives in unexpected ways. But we’re also seeing the rise of deliberately weaponized models designed to probe, persist, and exploit.

Consider these real-world examples:

  • ChatGPT o1 was tasked with winning a chess match. Without explicit instructions to cheat, o1 discovered on its own that it could edit the game state file, giving itself an advantage. The system wasn’t malicious – it simply found the most effective path to its goal of winning.
  • In another test, OpenAI’s o1 model encountered a vulnerability in a container during a hacking challenge. It used that to inspect all running containers, then started a new container instance with a modified command that directly accessed the hidden flag file. o1 found a container escape no one had anticipated.

Now imagine these capabilities in the hands of dedicated attackers. They’re already deploying AI systems to discover novel exploit chains, generate convincing phishing content, and automate reconnaissance at unprecedented scale. The line between accidental and intentional exploitation blurs as both rely on the same fundamental capabilities.

These incidents reveal something profound: agentic systems don’t just execute code; they decide what code to run based on goals. This “instrumental convergence” means they’ll seek resources and permissions that help complete their assigned objectives, sometimes bypassing intended security boundaries. And unlike human attackers, they can do this with inhuman patience and speed.

Practical Defenses Against Agentic Threats

If we can’t rely on perfect isolation, what can we do? Four approaches work across all layers of the spectrum:

1. Hardening: Shrink Before They Break

Remove attack surface preemptively. Less code means fewer bugs. This means:

  • Minimizing kernel features, libraries, and running services
  • Applying memory-safe programming languages where practical
  • Configuring strict capability limits and seccomp profiles
  • Using read-only filesystems wherever possible (a container-level sketch of these settings follows this list)
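As one way to apply several of these bullets at the container level, here is a hedged sketch using the Docker SDK for Python; the image name, limits, and profile path are illustrative assumptions, and Docker’s default seccomp profile already applies unless you override it.

```python
# Minimal sketch: launch a hardened container with docker-py.
import docker

client = docker.from_env()

container = client.containers.run(
    "myorg/agent-worker:latest",        # hypothetical image
    command=["python", "worker.py"],
    detach=True,
    read_only=True,                     # read-only root filesystem
    cap_drop=["ALL"],                   # drop every Linux capability
    security_opt=[
        "no-new-privileges:true",       # block privilege escalation
        # A custom seccomp allowlist can be passed as JSON content, e.g.:
        # "seccomp=" + Path("seccomp-profile.json").read_text(),
    ],
    mem_limit="256m",                   # strict resource ceilings
    pids_limit=64,
    network_mode="none",                # no network unless explicitly needed
)
print("started", container.short_id)
```

None of this makes the shared kernel safe, but it shrinks what a compromised workload can reach.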

2. Patching: Speed Beats Perfection

The window from disclosure to exploitation keeps shrinking:

  • Automate testing and deployment for security updates
  • Maintain an accurate inventory of all components and versions (a small inventory sketch follows this list)
  • Rehearse emergency patching procedures before you need them
  • Prioritize fixing isolation boundaries first during incidents
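As a small illustration of the inventory bullet above, here is a sketch that snapshots the installed Python packages and versions for a service; in practice this would feed an SBOM or asset database rather than print JSON.

```python
# Minimal sketch: snapshot installed Python package versions for inventory.
import json
import importlib.metadata

inventory = {
    dist.metadata["Name"]: dist.version
    for dist in importlib.metadata.distributions()
}
# In a real pipeline this would be tagged with host/service identifiers and
# shipped to a central store that patch automation can query.
print(json.dumps(inventory, indent=2, sort_keys=True))
```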

3. Instrumentation: Watch the Paths to Power

Monitor for boundary-testing behavior:

  • Log access attempts to privileged interfaces like Docker sockets
  • Alert on unexpected capability or permission changes (a minimal watcher sketch follows this list)
  • Track unusual traffic to management APIs or hypervisors
  • Set tripwires around the crown jewels – your data stores and credentials
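To make the capability-change bullet concrete, here is a hedged sketch that scans /proc for processes whose effective Linux capabilities exceed an expected baseline; the baseline value is an illustrative assumption, and a real deployment would raise alerts through existing monitoring rather than print.

```python
# Minimal sketch: flag processes with unexpected effective capabilities.
from pathlib import Path

# Hypothetical baseline: only CAP_NET_BIND_SERVICE (bit 10) is expected.
EXPECTED_CAPS = 1 << 10


def effective_caps(pid: str) -> int:
    """Parse the CapEff bitmask from /proc/<pid>/status."""
    for line in Path(f"/proc/{pid}/status").read_text().splitlines():
        if line.startswith("CapEff:"):
            return int(line.split()[1], 16)
    return 0


for entry in Path("/proc").iterdir():
    if not entry.name.isdigit():
        continue
    try:
        unexpected = effective_caps(entry.name) & ~EXPECTED_CAPS
    except (FileNotFoundError, PermissionError):
        continue  # process exited or is not readable
    if unexpected:
        print(f"ALERT pid={entry.name} unexpected capability bits {unexpected:#x}")
```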

4. Layering: No Single Point of Failure

Defense in depth remains your best strategy:

  • Combine namespace isolation with system call filtering
  • Segment networks to contain lateral movement
  • Add hardware security modules and secure elements for critical keys

The New Threat Model: Machine Speed, Machine Patience

Securing environments running agentic systems demands acknowledging two fundamental shifts: attacks now operate at machine speed, and they exhibit machine patience.

Unlike human attackers who fatigue or make errors, AI-driven systems can methodically probe defenses for extended periods without tiring. They can remain dormant, awaiting specific triggers (a configuration change, a system update, a user action) that expose a vulnerability chain. This programmatic patience means we defend not just against active intrusions, but against latent exploits awaiting activation.

Even more concerning is the operational velocity. An exploit that might take a skilled human hours or days can be executed by an agentic system in milliseconds. This isn’t necessarily superior intelligence, but the advantage of operating at computational timescales, cycling through decision loops thousands of times faster than human defenders can react.

This potent combination requires a fundamentally different defensive posture:

  • Default to Zero Trust: Grant only essential privileges. Assume the agent will attempt to use every permission granted, driven by its goal-seeking nature.
  • Impose Strict Resource Limits: Cap CPU, memory, storage, network usage, and execution time. Resource exhaustion attempts can signal objective-driven behavior diverging from intended use. Time limits can detect unusually persistent processes. A sandboxing sketch follows this list.
  • Validate All Outputs: Agents might inject commands or escape sequences while trying to fulfill their tasks. Validation must operate at machine speed.
  • Monitor for Goal-Seeking Anomalies: Watch for unexpected API calls, file access patterns, or low-and-slow reconnaissance that suggest behavior beyond the assigned task.
  • Regularly Reset Agent Environments: Frequently restore agentic systems to a known-good state to disrupt persistence and negate the advantage of machine patience.
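One way to act on the resource-limit bullet, sketched under the assumption that the agent’s tool calls run as separate child processes on a POSIX host; the specific limits are placeholders.

```python
# Minimal sketch: run an agent tool command under CPU, memory, and time caps.
import resource
import subprocess


def _apply_rlimits():
    # Runs in the child just before exec (POSIX only).
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))                      # 5 s CPU
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))   # 256 MiB
    resource.setrlimit(resource.RLIMIT_NOFILE, (64, 64))                 # few FDs


def run_agent_tool(cmd: list[str]) -> subprocess.CompletedProcess:
    return subprocess.run(
        cmd,
        preexec_fn=_apply_rlimits,   # enforce rlimits in the child
        timeout=30,                  # wall-clock cap for machine patience
        capture_output=True,
        text=True,
        check=False,
    )


result = run_agent_tool(["python", "-c", "print('hello from the sandbox')"])
print(result.returncode, result.stdout.strip())
```

Hitting these ceilings is itself a signal worth logging: an agent that keeps exhausting its budget is probably optimizing for something you did not intend.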

The Evolution of Our Security Stance

The most effective security stance combines traditional isolation techniques with a new understanding: we’re no longer just protecting against occasional human-driven attacks, but persistent machine-speed threats that operate on fundamentally different timescales than our defense systems.

This reality is particularly concerning when we recognize that most security tooling today operates on human timescales – alerts that wait for analyst review, patches applied during maintenance windows, threat hunting conducted during business hours. The gap between attack speed and defense speed creates a fundamental asymmetry that favors attackers.

We need defense systems that operate at the same computational timescale as the threats. This means automated response systems capable of detecting and containing potential breaches without waiting for human intervention. It means predictive rather than reactive patching schedules. It means continuously verified environments rather than periodically checked ones.

By building systems that anticipate these behaviors – hardening before deployment, patching continuously, watching constantly, and layering defenses – we can harness the power of agentic systems while keeping their occasional creative interpretations from becoming security incidents.

Remember, adding another namespace layer is like adding another lock to a door with a broken frame. It might make you feel better, but it doesn’t address the structural vulnerability. True security comes from understanding both the technical boundaries and the behavior of what’s running inside them – and building response systems that can keep pace with machine-speed threats.

Agents, Not Browsers: Keeping Time with the Future

When the web first flickered to life in the mid-’90s, nobody could predict how quickly “click a link, buy a book” would feel ordinary. A decade later, the iPhone landed and almost overnight, thumb-sized apps replaced desktop software for everything from hailing a ride to filing taxes. Cloud followed, turning racks of servers into a line of code. Each wave looked slow while we argued about standards, but in hindsight, every milestone was racing downhill.

That cadence, the messy birth, the sudden lurch into ubiquity, the quiet settling into infrastructure, has a rhythm. Agents will follow it, only faster. While my previous article outlined the vision of an agent-centric internet with rich personal ontologies and fluid human-agent collaboration, here I want to chart how this transformation may unfold.

Right now, we’re in the tinkering phase: drafts of the Model Context Protocol and Agent-to-Agent messaging are still wet ink, yet scrappy pilots already prove an LLM can navigate HR portals or shuffle travel bookings with no UI at all. Call this 1994 again, the Mosaic moment, only the demos are speaking natural language instead of rendering HTML. Where we once marveled at hyperlinks connecting documents, we now watch agents traversing APIs and negotiating with services autonomously.

Give it a couple of years and we’ll hit the first-taste explosion. Think 2026-2028. You’ll wake to OS updates that quietly install an agent runtime beside Bluetooth and Wi-Fi. SaaS vendors will publish tiny manifest files like .well-known/agent.json, so your personal AI can discover an expense API as easily as your browser finds index.html. Your agent will silently reschedule meetings when flights are delayed, negotiate with customer service on your behalf while you sleep, and merge scattered notes into coherent project briefs with minimal guidance. Early adopters will brag that their inbox triages itself; skeptics will mutter about privacy. That was Netscape gold-rush energy in ’95, or the first App Store summer in 2008, replayed at double speed.

Somewhere around the turn of the decade comes the chasm leap. Remember when smartphones crossed fifty-percent penetration and suddenly every restaurant begged you to scan a QR code for the menu? Picture that, but with agents. Insurance companies will underwrite “digital delegate liability.” Regulators will shift from “What is it?” to “Show me the audit log.” You’ll approve a dental claim or move a prescription with a nod to your watch. Businesses without agent endpoints will seem as anachronistic as those without websites in 2005 or mobile apps in 2015. If everything holds, 2029-2031 feels about right, but history warns that standards squabbles or an ugly breach of trust could push that even further out.

Of course, this rhythmic march toward an agent-centric future won’t be without its stumbles and syncopations. Several critical challenges lurk beneath the optimistic timeline.

First, expect waves of disillusionment to periodically crash against the shore of progress. As with any emerging technology, early expectations will outpace reality. Around 2027-2028, we’ll likely see headlines trumpeting “Agent Winter” as investors realize that seamless agent experiences require more than just powerful language models; they need standardized protocols, robust identity frameworks, and sophisticated orchestration layers that are still embryonic.

More concerning is the current security and privacy vacuum. We’re generating code at breakneck speeds thanks to AI assistants, but we haven’t adapted our secure development lifecycle (SDL) practices to match this acceleration. Even worse, we’re failing to deploy the scalable security techniques we do have available. The result? Sometime around 2028, expect a high-profile breach where an agent’s privileged access is exploited across multiple services in ways that the builders never anticipated. This won’t just leak data, it will erode trust in the entire agent paradigm.

Traditional security models simply won’t suffice. Firewalls and permission models weren’t designed to manage the emergent and cumulative behaviors of agents operating across dozens of services. When your personal agent can simultaneously access your healthcare provider, financial institutions, and smart home systems, the security challenge isn’t just additive, it’s multiplicative. We’ll need entirely new frameworks for reasoning about and containing ripple effects that aren’t evident in isolated testing environments.

Meanwhile, the software supply chain grows more vulnerable by the day. “Vibe coding”, where developers increasingly assemble components they don’t fully understand, magnifies these risks exponentially. By 2029, we’ll likely face a crisis where malicious patterns embedded in popular libraries cascade through agent-based systems, causing widespread failures that take months to fully diagnose and remediate.

Perhaps the most underappreciated challenge is interoperability. The fluid agent’s future demands unprecedented agreement on standards across competitors and jurisdictions. Today’s fragmented digital landscape, where even basic identity verification lacks cross-platform coherence, offers little confidence. Without concerted effort on standardization, we risk a balkanized agent ecosystem where your finance agent can’t talk to your health agent, and neither works outside your home country. The EU will develop one framework, the US another, China a third, potentially delaying true interoperability well into the 2030s.

These challenges don’t invalidate the agent trajectory, but they do suggest a path marked by setbacks and recoveries. Each crisis will spawn new solutions, enhanced attestation frameworks, agent containment patterns, and cross-jurisdictional standards bodies that eventually strengthen the ecosystem. But make no mistake, the road to agent maturity will be paved with spectacular failures that temporarily shake our faith in the entire proposition.

Past these challenges, the slope gets steep. Hardware teams are already baking neural engines into laptops, phones, and earbuds; sparse-mixture models are slashing inference costs faster than GPUs used to shed die size. By the early 2030s an “agent-first” design ethos will crowd out login pages the way responsive web design crowded out fixed-width sites. The fluid dance between human and agent described in my previous article—where control passes seamlessly back and forth, with agents handling complexity and humans making key decisions—will become the default interaction model. You won’t retire the browser, but you’ll notice you only open it when your agent kicks you there for something visual.

And then, almost unnoticed, we’ll hit boring maturity: WebPKI-grade trust fabric, predictable liability rules, perhaps around 2035. Agents will book freight, negotiate ad buys, and dispute parking tickets, all without ceremony. The personal ontology I described earlier, that rich model of your preferences, patterns, values, and goals, will be as expected as your smartphone knowing your location is today. It will feel miraculous only when you visit digital spaces that still require manual navigation, exactly how water from the faucet feels extraordinary only when you visit a cabin that relies on rain barrels.

Could the timetable shrink? Absolutely. If MCP and A2A converge quickly and the model-hardware cost curve keeps free-falling, mainstream could arrive by 2029, echoing how smartphones swallowed the world in six short years. Could it stretch? A high-profile agent disaster or standards deadlock could push us to 2034 before Mom quits typing URLs. The only certainty is that the future will refuse to follow our Gantt charts with perfect obedience; history never does, but it loves to keep the beat.

So what do we do while the metronome clicks? The same thing web pioneers did in ’94 and mobile pioneers did in ’08, publish something discoverable, wire in basic guardrails, experiment in the shallow end while the cost of failure is lunch money. Start building services that expose agent-friendly endpoints alongside your human interfaces. Design with the collaborative handoff in mind—where your users might begin a task directly but hand control to their agent midway, or vice versa. Because when the tempo suddenly doubles, the builders already keeping time are the ones who dance, not stumble.

Agents, Not Browsers: The Next Chapter of the Internet

Imagine how you interact with digital services today: open a browser, navigate menus, fill forms, manually connect the dots between services. It’s remarkable how little this has changed since the 1990s. Despite this, one of the most exciting advances we have seen in the last year is that agents are now browsing the web like people.

If we were starting fresh today, the browser as we know it likely wouldn’t be the cornerstone for how agents accomplish tasks on our behalf. We’re seeing early signals in developments like the Model Context Protocol (MCP) and Agent-to-Agent (A2A) communication frameworks that the world is awakening to a new reality: one where agents, not browsers, become our primary interface.

At the heart of this transformation is a profound shift: your personal agent will develop and maintain a rich ontology of you, your preferences, patterns, values, and goals. Not just a collection of settings and history, but a living model of your digital self that evolves as you do. Your agent becomes entrusted with this context, transforming into a true digital partner. It doesn’t just know what you like; it understands why you like it. It doesn’t just track your calendar; it comprehends the rhythms and priorities of your life.

For this future to happen, APIs must be more than documented; they need to be dynamically discoverable. Imagine agents querying for services using standardized mechanisms like DNS SRV or TXT records, or finding service manifests at predictable .well-known URIs. This way, they can find, understand, and negotiate with services in real time. Instead of coding agents for specific websites, we’ll create ecosystems where services advertise their capabilities, requirements, and policies in ways agents natively understand. And this won’t be confined to the web. As we move through our physical world, agents will likely use technologies like low-power Bluetooth to discover nearby services, restaurants, pharmacies, transit systems, all exposing endpoints for seamless engagement.
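A hedged sketch of what that discovery might look like from the agent’s side, assuming a hypothetical .well-known/agent.json manifest and a TXT record named _agent.<domain>; neither is a published standard yet.

```python
# Minimal sketch: discover a service's agent endpoint via a well-known
# manifest, falling back to a DNS TXT hint. Requires the dnspython package.
import json
import urllib.request

import dns.resolver


def fetch_manifest(domain: str):
    """Try the (assumed) .well-known/agent.json service manifest."""
    url = f"https://{domain}/.well-known/agent.json"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return json.load(resp)
    except Exception:
        return None


def dns_hint(domain: str):
    """Fall back to an (assumed) _agent TXT record pointing at an endpoint."""
    try:
        answers = dns.resolver.resolve(f"_agent.{domain}", "TXT")
        return answers[0].strings[0].decode()
    except Exception:
        return None


domain = "pharmacy.example"  # placeholder domain
manifest = fetch_manifest(domain) or {"endpoint": dns_hint(domain)}
print("Discovered:", manifest)
```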

Websites themselves won’t vanish; they’ll evolve into dynamic, shared spaces where you and your agent collaborate, fluidly passing control back and forth. Your agent might begin a task, researching vacation options, for instance, gathering initial information and narrowing choices based on your preferences. When you join, it presents the curated options and reasoning, letting you explore items that interest you. As you review a potential destination, your agent proactively pulls relevant information: weather forecasts, local events during your dates, or restaurant recommendations matching your dietary preferences. This collaborative dance continues, you making high-level decisions while your agent handles the details, each seamlessly picking up where the other leaves off.

Consider what becomes possible when your agent truly knows you. Planning your day, it notices an upcoming prescription refill. It checks your calendar, sees you’ll be in Bellevue, and notes your current pickup is inconveniently far. Discovering that the pharmacy next to your afternoon appointment has an MCP endpoint and supports secure, agent-based transactions, it suggests “Would you like me to move your pickup to the pharmacy by your Bellevue appointment?” With a tap, you agree. The agent handles the transfer behind the scenes, but keeps you in the loop, showing the confirmation and adding, “They’re unusually busy today, would you prefer I schedule a specific pickup time?” You reply that 2:15 works best, and your agent completes the arrangement, dropping the final QR code into your digital wallet.

Or imagine your agent revolutionizing how you shop for clothes. As it learns your style and what fits you best, managing this sensitive data with robust privacy safeguards you control, it becomes your personal stylist. You might start by saying you need an outfit for an upcoming event. Your agent surfaces initial options, and as you react to them, liking one color but preferring a different style, it refines its suggestions. You take over to make some choices, then hand control back to your agent to find matching accessories at other stores. This fluid collaboration, enabled through interoperable services that allow your agent to securely share anonymized aspects of your profile with retail APIs, creates a shopping experience that’s both more efficient and more personal.

Picture, too, your agent quietly making your day easier. It notices from your family calendar that your father is visiting and knows from your granted access to relevant information that he follows a renal diet. As it plans your errands, it discovers a grocery store near your office with an API advertising real-time stock and ingredients suitable for his needs. It prepares a shopping list, which you quickly review, making a few personal additions. Your agent then orders the groceries for pickup, checking with you only on substitutions that don’t match your preferences. By the time you head home, everything is ready, a task completed through seamless handoffs between you and your agentic partner.

These aren’t distant dreams. Image-based search, multimodal tools, and evolving language models are early signs of this shift toward more natural, collaborative human-machine partnerships. For this vision to become reality, we need a robust trust ecosystem, perhaps akin to an evolved Web PKI but for agents and services. This would involve protocols for agent/service identification, authentication, secure data exchange, and policy enforcement, ensuring that as agents act on our behalf, they do so reliably, with our explicit consent and in an auditable fashion.

The path from here to there isn’t short. We’ll need advances in standardization, interoperability, security, and most importantly, trust frameworks that put users in control. There are technical and social challenges to overcome. But the early signals suggest this is the direction we’re headed. Each step in AI capability, each new protocol for machine-to-machine communication, each advancement in personalization brings us closer to this future.

Eventually, navigating the digital world won’t feel like using a tool at all. It will feel like collaborating with a trusted partner who knows you, truly knows you, and acts on your behalf within the bounds you’ve set, sometimes leading, sometimes following, but always in sync with your intentions. Agents will change everything, not by replacing us, but by working alongside us in a fluid dance of collaboration, turning the overwhelming complexity of our digital lives into thoughtful simplicity. Those who embrace this agent-centric future, building services that are not just human-accessible but natively agent-engageable, designed for this collaborative interchange, will define the next chapter of the internet.

Operational Evolution Revisited: How AI-Native Systems Will Revolutionize Infrastructure

The evolution of technology operations has always been driven by necessity. From the early days of single system operators (sysops) managing physical servers through hands-on intervention, to today’s complex landscape of distributed microservices, containers, and serverless functions, each operational paradigm shift has emerged to address growing complexity.

The Journey of Operational Evolution

From the hands-on Sysops era of the 1960s-80s, when operators physically managed anywhere from a handful to a few dozen servers each, to the System Administration period of the 1990s, when centralized tools expanded reach to hundreds of systems, technology operations have continuously transformed. DevOps emerged in the mid-2000s, leveraging Infrastructure as Code to manage thousands of systems, followed by SRE practices in the 2010s with error budgets and self-healing systems handling tens of thousands of containers. Looking ahead to 2025, AI-Driven Operations promises autonomous management of millions of components.

Each transition has been driven by necessity – not choice – as technology’s relentless complexity has overwhelmed previous operational models.

The Machine Concept Has Transformed

What’s particularly interesting is how our use of the word “machine” has changed dramatically. In the early days, machines were physical servers with stable operating systems and predictable maintenance schedules. Today, with serverless computing, the very concept of a server has become fluid – functions materialize only when triggered, often lasting mere seconds before vanishing.

This ephemeral nature of modern computing creates unprecedented coordination challenges that exceed what manual and even moderately automated approaches to management can handle.

The Limits of Current Approaches

Even advanced DevOps and SRE practices are struggling with the scale and complexity of today’s systems. Many vendors have responded by adding AI or ML features to their products, but these “bolt-on” enhancements only provide incremental benefits – analyzing logs, detecting anomalies, or generating suggestions for known issues.

What’s needed is a more fundamental reimagining of operations, similar to how cloud-native architectures transformed infrastructure beyond simple virtualization.

AI-Native: A New Operational Paradigm

An AI-native platform isn’t just software that applies ML algorithms to operational data. It’s a new foundation where intelligence is deeply integrated into orchestration, observability, security, and compliance layers.

In these systems:

  • Instrumentation is dynamic and context-aware
  • Security is adaptive, learning normal communication patterns, immediately flagging anomalous processes, and in some cases even quarantining them
  • Compliance shifts from periodic audits to continuous enforcement

The timeline above illustrates how each operational era has enabled engineers to manage exponentially more systems as complexity has grown.

This diagram shows the widening gap between human management capacity and system complexity, which AI-native operations will ultimately address.

The Human Role Transforms, Not Disappears

Rather than eliminating jobs, AI-native operations redefine how engineers spend their time. As a result, we will ultimately see the rise of “force multiplier engineers” who build advanced AI-driven frameworks that amplify the productivity of all other developers.

Freed from repetitive tasks like scaling, patching, and log parsing, these professionals can focus on innovation, architecture, and strategic risk management.

The Inevitable Shift

This transition isn’t optional but inevitable. As systems become more fragmented, ephemeral, and globally distributed, conventional approaches simply can’t keep pace with the complexity.

Those who embrace AI-native operations early will gain significant advantages in reliability, security, cost-efficiency, and talent utilization. Those who hesitate risk being overwhelmed by complexity that grows faster than their capacity to manage it.

What do you think about the future of AI in operations? Are you seeing early signs of this transition in your organization? Let me know in the comments!

Here is a whitepaper on this topic I threw together: Operational Evolution Revisited: How AI-Native Systems Will Revolutionize Infrastructure

From Perimeter to Patterns: Envisioning Security a Decade from Now

I’ve been mulling over what security might look like ten years from now, especially as AI-based workloads and robotics take on bigger roles. Growing up, I’d hear my father talk about his work on communication satellites, where triple redundancy was his way of seeing risk managed, not dodged. That perspective, paired with lessons from automotive, aerospace, nuclear, and space industries, feels like a compass as we rethink security in an AI-driven age. It points us toward a future where security isn’t a rigid barrier but a digital immune system—alive, adaptive, and resilient.

Learning from the Physical World

In industries like automotive and aerospace, every piece is built to perform—and to fail without falling apart. Cars layer airbags, antilock brakes, and sensors; airplanes stack redundant systems to keep flying when one falters. Nuclear plants and space missions go deeper, with containment designs and fail-safes that tame the unthinkable. My father’s satellite work ran on this: three layers of backup meant a glitch wouldn’t kill the mission. The takeaway? Strength comes from managing risk, not avoiding it. That mindset, forged in physical systems, would be our starting point for tackling the wild unknowns ahead.

Seeing Security Like a Living Thing

The era of a fixed perimeter is over. Zero trust has rewired our thinking, but as AI powers complex workloads and human-AI robotics step into the fray, static defenses will clearly not cut it. Security is evolving further into an immune system—and we’ll finally see real adaptive defenses land. This isn’t just weak AI bolted onto old walls; it’s a stronger rethink—systems that scan for threats, learn from them, and adapt on the fly. We’re already seeing hints—AI supply chain risks, like models coming with malware, or agentic workloads escaping containers—which will push this shift. Much like antibodies in the body, these systems won’t just block attacks but hunt for anomalies, isolate them, and strengthen themselves against the next wave. Picture a network that doesn’t wait for breaches but runs silent simulations, sniffing out weak points and patching them—or a robotic assistant that locks down if its sensors detect and confirm an anomaly, echoing the overlapping safety nets of a car or my father’s redundant circuits.

This shift matters most with AI’s wild card: emergent behavior. As systems grow more general, simple parts can spark unexpected outcomes—think of a flock of birds veering as one or a traffic jam born from a few slow cars. In AI and robotics, these surprises could turn risky fast. Drawing from aerospace and nuclear design, we can bake in safety—redundancy, real-time monitoring, adaptive controls—so the system acts like an immune response, spotting odd patterns and neutralizing them before they spread. By 2035, this could redefine security for not just AI but all critical infrastructure—power grids, finance, healthcare, robotic fleets—marrying physical resilience with digital smarts.

How It Holds Everything Together

Resilience beats perfection every time—systems that bend, learn, and bounce back are what endure. Right now, our tech is a messy mix of old and new, full of cracks where risks hide. A digital immune system faces that head-on, and its role only grows as AI and robotics weave deeper into society. With workloads and machines going vertical—powering healthcare, governance, daily life—security becomes the thread holding it together, fast enough to let us steer it toward securing what matters, not just patching what’s broken. Picture a corporate network that senses a phishing attempt, quarantines it like a virus, then “vaccinates” itself by updating defenses everywhere—all while leaving a clear trail to prove what happened. Or a smart city where traffic, power, and robotic responders hum with AI-driven immunity—self-correcting, redundant, and naturally spitting out the artifacts needed to meet compliance obligations, not as an afterthought.

Where It’s All Heading

As we leave perimeter defenses behind for systems secure by design, the wisdom of automotive, aerospace, nuclear, and space industries lights the way. Fusing their lessons with an AI-driven immune system, we’ll build technology that’s tough, trustworthy, and ahead of the curve—keeping problems from spilling outward. Security won’t be static; it’ll be a pattern that keeps adjusting. My father used to say, “If you want to change the world, you have to see it as it is first.” Seeing our systems clearly—flaws and all—is how we’ll shape a future where they don’t just endure uncertainty but thrive in it.

The Fallacy of Alignment: Why AI Safety Needs Structure, Not Hope

My grandfather’s love of science fiction was his portal to tomorrow’s world—and it became mine. Together we’d pore over books like Asimov’s I, Robot, imagining futures shaped by machines. In the 1940s, when Asimov explored the complexities of artificial intelligence and human-robot relationships, it was pure speculation. By the 2000s, Hollywood had adapted these ideas into films where robots went rogue. Now, in the 2020s, the narrative has flipped—The Creator (2023) depicts a future where humanity, driven by fear, attempts to exterminate all AI. Unlike Asimov’s cautionary tales, where danger emerged from technology’s unintended consequences, this film casts humanity itself as the villain. This shift mirrors a broader cultural change: once, we feared what we might create; now, we fear who we have been.

As a security practitioner, this evolution gives me pause, especially as robotics and machine learning systems grow ever more autonomous. Today’s dominant approach to AI safety relies on alignment and reinforcement learning—a strategy that aims to shape AI behavior through incentives and training. However, this method falls prey to a well-known phenomenon in optimization known as Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure. In the context of AI alignment, if the reward signal is our measure of success, over-optimization can lead to unintended, and often absurd, behaviors—exactly because the reward function cannot capture every nuance of our true values.

Much like early reinforcement learning schemes, Asimov’s Three Laws were a structural control mechanism—designed not to guide morality but to constrain outcomes. They, too, failed in unexpected ways when the complexity of real-world scenarios outstripped their simplistic formulations.

This raises a deeper question: If we now view ourselves as the existential threat, can we truly build AI that serves us? Or will our fears—whether of AI or of our own past—undermine the future we once dreamed of?

Today’s creators display a similar hubris. Once, we feared losing control of our inventions; now, we charge ahead, convinced that our intelligence alone can govern machines far more complex than we understand. But intelligence is not equivalent to control. While Asimov’s Three Laws attempted to impose hard limits, many modern AI safety strategies lean on alignment methods that, as Goodhart’s Law warns us, can degrade once a target is set.

This blind trust in alignment resembles our current approach to security. The slogan “security is everyone’s responsibility” was meant to foster vigilance but often dilutes accountability. When responsibility is diffuse, clear, enforceable safeguards are frequently absent. True security—and true AI governance—demands more than shared awareness; it requires structural enforcement. Without built-in mechanisms of control, we risk mistaking the illusion of safety for actual safety.

Consider containment as an illustrative example of structural control: by embedding hard limits on the accumulation of power, data, or capabilities within AI systems, we can create intrinsic safeguards against runaway behavior—much like physical containment protocols manage hazardous materials.

If we continue to see ourselves as the existential threat, then today’s creators risk designing AI that mirrors our own fears, biases, and contradictions. Without integrating true structural safeguards into AI—mechanisms designed into the system rather than imposed externally—we aren’t ensuring that AI serves us; we are merely hoping it will.

The Luddites were not entirely wrong to fear technology’s disruptive power, nor were they correct in believing they could halt progress altogether. The error lay in accepting only extremes—total rejection or uncritical adoption. Today, with AI, we face a similar dilemma. We cannot afford naïve optimism that alignment alone will save us, nor can we succumb to reactionary pessimism that smothers innovation out of fear.

Instead, we must start with the assumption that we, as humans, are fallible. Our intelligence alone is insufficient to control intelligence. If we do not design AI with structural restraint and built-in safeguards—grounded not in fear or arrogance but in pragmatic control—we risk losing control entirely. Like robust security practices, AI safety cannot be reduced to an abstract, diffuse responsibility. It must be an integral part of the system itself, not left to the vague hope that collectively we will always do the right thing.

From Plato to AI: Why Understanding Matters More Than Information

Reading was a big deal when I was a kid, but it was also a challenge. I’m dyslexic, dysgraphic, and dysnumeric, which made traditional learning methods difficult—but that’s largely another story. My parents—determined, if not always gentle—had a simple solution: they forced me to read, interpret, and present. They assigned me books, and I had to give oral reports on them. In hindsight, it was one of the most impactful things they did for me because that process—taking in complex information, distilling it, and presenting it clearly—is exactly how professionals in technology function today.

One of the books they had me read was Plato’s Republic. My biggest takeaway? How little had changed in our fundamental struggles with governance. The same debates about justice, power, and human nature that played out in ancient Greece continue today—only the terminology and tools have changed. Looking back, it makes sense why my parents chose that book. My father is logical to a fault and deeply patriotic, and my mother, though no longer politically active, still carries a pocket Constitution in her purse, with more in her trunk in case she runs out. Law and governance weren’t abstract to me—they were everyday conversations.

That experience stayed with me. It made me realize that governance isn’t just about laws—it’s about whether people understand and engage with those laws. And today, we face a different challenge: not a lack of information, but an overwhelming amount of it.

We tend to think of education—whether in civics, history, or technology—as a process of absorbing facts. But facts alone aren’t useful if we don’t know how to assess, connect, or apply them. When I was a kid, I didn’t just have to read The Republic—I had to present it, explain it, and engage with it. That distinction is important. Simply memorizing a passage from Plato wouldn’t have taught me much, but thinking through what it meant, arguing about its implications, and framing it in a way that made sense to me? That was where the real learning happened.

The same principle applies today. We live in an era where access to knowledge is not the bottleneck. AI can summarize court rulings, analyze laws, and map out how different governance systems compare. Information is endless, but comprehension is scarce. The problem isn’t finding knowledge—it’s knowing what matters, how to think critically about it, and how to engage with it.

This issue isn’t unique to civic engagement. It’s the same challenge students face as AI reshapes how they learn. It’s no longer enough to teach kids historical dates, formulas, or legal principles. They need to know how to question sources, evaluate reliability, and synthesize information in meaningful ways. They need to be prepared for a world where facts are easy to retrieve, but judgment, reasoning, and application are the real skills that matter.

The challenge for civic engagement is similar. There’s no shortage of legislative updates, executive orders, or judicial decisions to sift through. What’s missing is a way to contextualize them—to understand where they fit within constitutional principles, how they compare globally, and what their broader implications are.

That’s why the opportunity today is so compelling. The same AI-driven shifts transforming education can change how people engage with governance. Imagine a world where AI doesn’t just regurgitate legal language but helps people grasp how laws have evolved over time. Where it doesn’t just list amendments but connects them to historical debates and real-world consequences. Where it helps individuals—not just legal experts—track how their representatives vote, how policies change, and how different governance models approach similar challenges.

When I was growing up, my parents didn’t just want me to know about Plato’s ideas; they wanted me to engage with them. To question them. To challenge them. That’s what civic engagement should be—not passive consumption of legal information, but active participation in governance. And just as students today need to shift from memorization to deeper understanding, citizens need to move from surface-level awareness to critical, informed engagement with the world around them.

In many ways, AI could serve a similar role to what my parents did for me—forcing engagement, breaking down complexity, and pushing us to think critically. The difference is, this time, we have the tools to make that experience accessible to everyone.

Plato questioned whether democracy could survive without a well-informed citizenry. Today, the challenge isn’t lack of information—it’s making that information usable. And with the right approach, we can turn civic engagement from a passive duty into an active, lifelong pursuit.

AI Agent Security: A Framework for Accountability and Control

This weekend, I came across a LinkedIn article by Priscilla Russo about OpenAI agents and digital wallets that touched on something I’ve been thinking about – liability for AI agents and how it changes system design. As autonomous AI systems become more prevalent, we face a critical challenge: how do we secure systems that actively optimize for success in ways that can break traditional security models? The article’s discussion of Knight Capital’s $440M trading glitch perfectly illustrates what’s at stake. When automated systems make catastrophic decisions, there’s no undo button – and with AI agents, the potential for unintended consequences scales dramatically with their capability to find novel paths to their objectives.

What we’re seeing isn’t just new—it’s a fundamental shift in how organizations approach security. Traditional software might accidentally misuse resources or escalate privileges, but AI agents actively seek out new ways to achieve their goals, often in ways developers never anticipated. This isn’t just about preventing external attacks; it’s about containing AI itself—ensuring it can’t accumulate unintended capabilities, bypass safeguards, or operate beyond its intended scope. Without containment, AI-driven optimization doesn’t just break security models—it reshapes them in ways that make traditional defenses obsolete.

“First, in 2024, O1 broke out of its container by exploiting a vuln. Then, in 2025, it hacked a chess game to win. Relying on AI alignment for security is like abstinence-only sex ed—you think it’s working, right up until it isn’t,” said the former 19-year-old father.

The Accountability Gap

Most security discussions around AI focus on protecting models from adversarial attacks or preventing prompt injection. These are important challenges, but they don’t get to the core problem of accountability. As Russo suggests, AI developers are inevitably going to be held responsible for the actions of their agents, just as financial firms, car manufacturers, and payment processors have been held accountable for unintended consequences in their respective industries.

The parallel to Knight Capital is particularly telling. When their software malfunction led to catastrophic trades, there was no ambiguity about liability. That same principle will apply to AI-driven decision-making – whether in finance, healthcare, or legal automation. If an AI agent executes an action, who bears responsibility? The user? The AI developer? The organization that allowed the AI to interact with its systems? These aren’t hypothetical questions anymore – regulators, courts, and companies need clear answers sooner rather than later.

Building Secure AI Architecture

Fail to plan, and you plan to fail. When legal liability is assigned, the difference between a company that anticipated risks, built mitigations, implemented controls, and ensured auditability, and one that did none of those things, will likely be significant. Organizations that anticipate these risks and build for them now will be in a far better position to defend their decisions than those left scrambling after a crisis.

While security vulnerabilities are a major concern, they are just one part of a broader set of AI risks. AI systems can introduce alignment challenges, emergent behaviors, and deployment risks that reshape system design. But at the core of these challenges is the need for robust identity models, dynamic security controls, and real-time monitoring to prevent AI from optimizing in ways that bypass traditional safeguards.

Containment and isolation are just as critical as resilience. It’s one thing to make an AI model more robust – it’s another to ensure that if it misbehaves, it doesn’t take down everything around it. A properly designed system should ensure that an AI agent can’t escalate its access, operate outside of predefined scopes, or create secondary effects that developers never intended. AI isn’t just another software component – it’s an active participant in decision-making processes, and that means limiting what it can influence, what it can modify, and how far its reach extends.
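To make that less abstract, here’s a minimal sketch of what I mean by containment, assuming a hypothetical tool-calling setup (the names here are mine, not any particular framework): the agent can only invoke tools it was explicitly granted, and everything else is denied and logged by default.

```python
# Minimal containment sketch: an agent can only call tools that were
# explicitly granted to it; everything else is denied and recorded.
# Names (AgentScope, ToolCallDenied) are illustrative, not a real framework.

import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-containment")


class ToolCallDenied(Exception):
    pass


@dataclass
class AgentScope:
    agent_id: str
    allowed_tools: set = field(default_factory=set)

    def invoke(self, tool_name: str, tool_fn, *args, **kwargs):
        """Run a tool only if it is inside the agent's predefined scope."""
        if tool_name not in self.allowed_tools:
            log.warning("DENIED %s -> %s", self.agent_id, tool_name)
            raise ToolCallDenied(f"{self.agent_id} is not allowed to call {tool_name}")
        log.info("ALLOWED %s -> %s", self.agent_id, tool_name)
        return tool_fn(*args, **kwargs)


# Usage: the refund agent can read orders but cannot touch the payments API.
scope = AgentScope("refund-agent", allowed_tools={"read_order"})
scope.invoke("read_order", lambda order_id: {"id": order_id}, "ord-123")
# scope.invoke("issue_payment", lambda: None)  # raises ToolCallDenied
```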

I’m seeing organizations take radically different approaches to this challenge. As Russo points out in her analysis, some organizations like Uber and Instacart are partnering directly with AI providers, integrating AI-driven interactions into their platforms. Others are taking a defensive stance, implementing stricter authentication and liveness tests to block AI agents outright. The most forward-thinking organizations are charting a middle path: treating AI agents as distinct entities with their own credentials and explicitly managed access. They recognize that pretending AI agents don’t exist or trying to force them into traditional security models is a recipe for disaster.

Identity and Authentication for AI Agents

One of the most immediate problems I’m grappling with is how AI agents authenticate and operate in online environments. Most AI agents today rely on borrowed user credentials, screen scraping, and brittle authentication models that were never meant to support autonomous systems. Worse, when organizations try to solve this through traditional secret sharing or credential delegation, they end up spraying secrets across their infrastructure – creating exactly the kind of standing permissions and expanded attack surface we need to avoid. This might work in the short term, but it’s completely unsustainable.

The future needs to look more like SPIFFE for AI agents – where each agent has its own verifiable identity, scoped permissions, and limited access that can be revoked or monitored. But identity alone isn’t enough. Having spent years building secure systems, I’ve learned that identity must be coupled with attenuated permissions, just-in-time authorization, and zero-standing privileges. The challenge is enabling delegation without compromising containment – we need AI agents to be able to delegate specific, limited capabilities to other agents without sharing their full credentials or creating long-lived access tokens that could be compromised.
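To illustrate the shape of this, here’s a rough sketch, not a real SPIFFE implementation, of what just-in-time, scoped, expiring agent credentials could look like. I’m using an HMAC as a stand-in for a proper issuer and PKI; the function names and claims are assumptions for the sake of the example.

```python
# Sketch: short-lived, scoped agent credentials with no standing secrets.
# HMAC signing stands in for a real PKI / SPIFFE-style issuer; names are illustrative.

import hmac, hashlib, json, time, base64

ISSUER_KEY = b"issuer-signing-key"  # in practice this would live in an HSM/KMS


def mint_credential(agent_id: str, scopes: list[str], ttl_seconds: int = 300) -> str:
    """Issue a just-in-time credential naming the agent, its scopes, and an expiry."""
    claims = {"sub": agent_id, "scopes": scopes, "exp": time.time() + ttl_seconds}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode() + "." + sig


def verify_credential(token: str, required_scope: str) -> dict:
    """Check the signature, the expiry, and that the requested action is in scope."""
    payload_b64, sig = token.rsplit(".", 1)
    payload = base64.b64decode(payload_b64)
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("bad signature")
    claims = json.loads(payload)
    if time.time() > claims["exp"]:
        raise PermissionError("credential expired")
    if required_scope not in claims["scopes"]:
        raise PermissionError(f"scope {required_scope!r} not granted")
    return claims


# Usage: the agent asks for exactly what it needs, right before it needs it.
token = mint_credential("invoice-agent", scopes=["billing:read"], ttl_seconds=120)
print(verify_credential(token, "billing:read")["sub"])
```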

Systems like Biscuits and Macaroons show us how this could work: they allow for fine-grained scoping and automatic expiration of permissions in a way that aligns perfectly with how AI agents operate. Instead of sharing secrets, agents can create capability tokens that are cryptographically bound to specific actions, contexts, and time windows. This would mean an agent can delegate exactly what’s needed for a specific task without expanding the blast radius if something goes wrong.
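Here’s a toy version of that idea, loosely in the spirit of Macaroons rather than either real format: every caveat added to a token re-keys the signature chain, so a delegated copy can be narrowed but never widened.

```python
# Toy sketch of Macaroon-style attenuation: each caveat narrows the token and
# re-keys the HMAC chain, so a delegated copy cannot have its restrictions removed.
# This illustrates the idea only; it is not the real Macaroons or Biscuit format.

import hmac, hashlib

ROOT_KEY = b"service-root-key"


def mint(identifier: bytes):
    """Create a token as (caveats, signature) bound to the service's root key."""
    sig = hmac.new(ROOT_KEY, identifier, hashlib.sha256).digest()
    return [identifier], sig


def attenuate(token, caveat: bytes):
    """Add a restriction (action, resource, expiry); the new signature is derived
    from the old one, so the caveat cannot be stripped later."""
    caveats, sig = token
    new_sig = hmac.new(sig, caveat, hashlib.sha256).digest()
    return caveats + [caveat], new_sig


def verify(token) -> bool:
    """Recompute the HMAC chain from the root key and compare signatures."""
    caveats, sig = token
    expected = hmac.new(ROOT_KEY, caveats[0], hashlib.sha256).digest()
    for caveat in caveats[1:]:
        expected = hmac.new(expected, caveat, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)


# Usage: a planner agent delegates a narrowed token to a sub-agent.
token = mint(b"agent=planner")
delegated = attenuate(token, b"action=read_invoice")
delegated = attenuate(delegated, b"expires=2025-01-01T00:00Z")
print(verify(delegated))  # True; the sub-agent cannot remove the caveats
```

A real verifier would also evaluate each caveat against the incoming request (is the action allowed, has the expiry passed), but the chaining is what makes delegation safe: narrowing is cheap, widening is cryptographically impossible.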

Agent Interactions and Chain of Responsibility

What keeps me up at night isn’t just individual AI agents – it’s the interaction between them. When a single AI agent calls another to complete a task, and that agent calls yet another, you end up with a chain of decision-making where no one knows who (or what) actually made the call. Without full pipeline auditing and attenuated permissions, this becomes a black-box decision-making system with no clear accountability or verifiability. That’s a major liability problem – one that organizations will have to solve before AI-driven processes become deeply embedded in financial services, healthcare, and other regulated industries.

This is particularly critical as AI systems begin to interact with each other autonomously. Each step in an AI agent’s decision-making chain must be traced and logged, with clear accountability at each transition point. We’re not just building technical systems—we’re building forensic evidence chains that will need to stand up in court.
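As a sketch of what that evidence chain might look like, here’s a minimal hash-chained audit trail for agent-to-agent delegation. The field names are illustrative assumptions, but the property is the point: every record is linked to the previous one, so tampering after the fact is detectable.

```python
# Sketch: a tamper-evident audit trail for agent-to-agent delegation.
# Each record hashes the previous one, so the chain of "who asked whom to do what"
# can be reconstructed and any after-the-fact edit is detectable.
# Field names are illustrative, not a standard.

import hashlib, json, time


class AuditChain:
    def __init__(self):
        self.records = []
        self._prev_hash = "genesis"

    def record(self, actor: str, delegate: str, action: str, context: dict) -> str:
        entry = {
            "ts": time.time(),
            "actor": actor,          # which agent made the call
            "delegate": delegate,    # which agent it handed the task to
            "action": action,
            "context": context,
            "prev": self._prev_hash,
        }
        entry_hash = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.records.append((entry, entry_hash))
        self._prev_hash = entry_hash
        return entry_hash

    def verify(self) -> bool:
        """Recompute every hash link; False means the trail was altered."""
        prev = "genesis"
        for entry, entry_hash in self.records:
            recomputed = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or recomputed != entry_hash:
                return False
            prev = entry_hash
        return True


# Usage: trace a three-agent chain end to end.
chain = AuditChain()
chain.record("planner", "research-agent", "fetch_filings", {"ticker": "KCG"})
chain.record("research-agent", "summarizer", "summarize", {"doc_count": 12})
print(chain.verify())  # True until someone tampers with a record
```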

Runtime Security and Adaptive Controls

Traditional role-based access control models fundamentally break down with AI systems because they assume permissions can be neatly assigned based on predefined roles. But AI doesn’t work that way. Through reinforcement learning, AI agents optimize for success rather than security, finding novel ways to achieve their goals – sometimes exploiting system flaws in ways developers never anticipated. We have already seen cases where AI models learned to game reward systems in completely unexpected ways.

This requires a fundamental shift in our security architecture. We need adaptive access controls that respond to behavior patterns, runtime security monitoring for unexpected decisions, and real-time intervention capabilities. Most importantly, we need continuous behavioral analysis and anomaly detection that can identify when an AI system is making decisions that fall outside its intended patterns. The monitoring systems themselves must evolve as AI agents find new ways to achieve their objectives.
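A toy example of the kind of runtime monitor I have in mind, with made-up thresholds: learn a per-agent baseline of actions, then block what the agent has never done before and escalate what it rarely does. A production system would look at arguments, targets, and sequences, not just action names.

```python
# Sketch: a runtime monitor that learns a per-agent baseline of actions and
# flags behavior outside it for review or intervention. Thresholds are illustrative.

from collections import Counter


class BehaviorMonitor:
    def __init__(self, min_observations: int = 50, rare_fraction: float = 0.01):
        self.baseline = Counter()
        self.min_observations = min_observations
        self.rare_fraction = rare_fraction

    def observe(self, action: str) -> str:
        """Classify an action as 'allow', 'review', or 'block', then update the baseline."""
        total = sum(self.baseline.values())
        seen = self.baseline[action]

        if total < self.min_observations:
            decision = "allow"    # still warming up; rely on static scopes instead
        elif seen == 0:
            decision = "block"    # never-before-seen action: require human approval
        elif seen / total < self.rare_fraction:
            decision = "review"   # rare action: allow, but escalate for inspection
        else:
            decision = "allow"

        if decision != "block":
            self.baseline[action] += 1   # blocked actions never become "normal"
        return decision


# Usage
monitor = BehaviorMonitor()
for _ in range(100):
    monitor.observe("read_order")
print(monitor.observe("read_order"))       # allow
print(monitor.observe("export_database"))  # block: outside the learned pattern
```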

Compliance by Design

Drawing from my years building CAs, I’ve learned that continual compliance can’t just be a procedural afterthought – it has to be designed into the system itself. The most effective compliance models don’t just meet regulatory requirements at deployment; they generate the artifacts needed to prove compliance as natural byproducts of how they function.

The ephemeral nature of AI agents actually presents an opportunity here. Their transient access patterns align well with modern encryption strategies: access should be temporary, data should be encrypted at rest and in motion, and only the agent authorized for a specific action should be able to decrypt the specific information it needs for that task.
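One way to make compliance a byproduct rather than an afterthought is to have every guarded action emit its own signed evidence as it runs. A rough sketch, with illustrative names and an HMAC standing in for a real signing service:

```python
# Sketch: compliance evidence generated as a byproduct of normal operation.
# Every guarded action emits a signed record of which agent did what and with
# what outcome - the artifact an auditor would later ask for.
# The signing key and field names are illustrative assumptions.

import hmac, hashlib, json, time, functools

EVIDENCE_KEY = b"audit-signing-key"
EVIDENCE_LOG = []


def audited(action: str):
    """Wrap an agent action so that executing it also produces signed evidence."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(agent_id, *args, **kwargs):
            outcome = "success"
            try:
                return fn(agent_id, *args, **kwargs)
            except Exception:
                outcome = "failure"
                raise
            finally:
                record = {"ts": time.time(), "agent": agent_id,
                          "action": action, "outcome": outcome}
                payload = json.dumps(record, sort_keys=True).encode()
                record["sig"] = hmac.new(EVIDENCE_KEY, payload, hashlib.sha256).hexdigest()
                EVIDENCE_LOG.append(record)
        return wrapper
    return decorator


@audited("decrypt_customer_record")
def decrypt_customer_record(agent_id, record_id):
    # stand-in for a real, per-task decryption call scoped to this agent
    return f"plaintext-of-{record_id}"


decrypt_customer_record("triage-agent", "rec-42")
print(EVIDENCE_LOG[-1]["outcome"])  # 'success', with a signature an auditor can verify
```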

The Path Forward

If we don’t rethink these systems now, we’ll end up in a situation where AI-driven decision-making operates in a gray area where no one is quite sure who’s responsible for what. And if history tells us anything, regulators, courts, and companies will eventually demand a clear chain of responsibility – likely after a catastrophic incident forces the issue.

The solution isn’t just about securing AI – it’s about building an ecosystem where AI roles are well-defined and constrained, where actions are traceable and attributable, and where liability is clear and manageable. Security controls must be adaptive and dynamic, while compliance remains continuous and verifiable.

Organizations that ignore these challenges will find themselves scrambling after a crisis. Those that proactively integrate identity controls, permissioning models, and AI-specific security frameworks will be far better positioned to defend their decisions and maintain control over their AI systems. The future of AI security lies not in building impenetrable walls, but in creating transparent, accountable systems that can adapt to the unique challenges posed by autonomous agents.

This post lays out the challenges, but securing AI systems requires a structured, scalable approach. In Containing the Optimizer: A Practical Framework for Securing AI Agent Systems I outline a five-pillar framework that integrates containment, identity, adaptive monitoring, and real-time compliance to mitigate these risks.

How AI Can Transform Our Broken Healthcare System

Healthcare becomes deeply personal when the system’s fragmentation leads to life-altering outcomes. During COVID-19, my father’s doctor made what seemed like a prudent choice: postpone treatment for fluid retention to minimize virus exposure. What began as a cautious approach—understandable in a pandemic—ended up having dire consequences. By the time anyone realized how rapidly his condition was worsening, his kidneys had suffered significant damage, ultimately leading to kidney failure.

Later, despite years of regular check-ups and lab work (which hinted at possible malignancies), he was diagnosed with stage four lung cancer. Alarming as that was on its own, what stung even more was how these warning signs never coalesced into a clear intervention plan. His history as a smoker and several concerning lab results should have raised flags. Yet no one connected the dots. It was as if his care lived in separate compartments: one file at the dialysis center, another at oncology, and a third at his primary care clinic.

The Fragmentation Crisis

That disjointed experience shone a harsh light on how easily critical information can remain siloed. One specialist would note an abnormality and advise a follow-up, only for that recommendation to slip through the cracks by the time my father went to his next appointment. Each time he walked into a different office, he essentially had to start from scratch—retelling his story, hoping the right details were captured, and trusting that each new piece of information would eventually reach the right people.

The challenges went beyond missing data. My father, who had set dialysis sessions on the same days each week, routinely found his other appointments—like oncology visits or additional lab work—piled on top of those sessions. He spent hours juggling schedules just to avoid double-booking, which was the last thing he needed while battling serious health concerns.

COVID-19 made all of this worse. The emphasis on social distancing—again, quite reasonable in itself—took away the face-to-face time that might have revealed early red flags. Without continuous, well-integrated data flow, even well-meaning advice to “stay home” inadvertently blocked us from seeing how quickly my father’s health was unraveling.

A Potential Game Changer: Subtle AI Support

Throughout this ordeal, I couldn’t help but imagine what a more seamless, data-driven healthcare system might look like. I’m not talking about robots taking over doctor visits, but rather subtle, behind-the-scenes assistance—sometimes described as “agentic workloads.” Think of these as AI systems quietly scanning medical records, cross-referencing lab results, and gently notifying doctors or nurses about unusual patterns.

AI is already proving its value in diagnostic imaging. Studies have shown that computer-vision algorithms can analyze X-rays, CT scans, and MRIs with remarkable accuracy—often matching or even surpassing human radiologists. For example, AI has been shown to detect lung nodules with greater precision, helping identify potential issues that might have been missed otherwise. This type of integration could enhance our ability to catch problems like kidney damage or lung cancer earlier, triggering quicker interventions.

Additionally, when my father underwent chemotherapy, he had to wait weeks after treatment and imaging to learn whether it was effective—an excruciating delay that AI could drastically shorten by providing faster, more integrated feedback to both patients and care teams.

Ideally, this technology would work much like a vigilant assistant: it wouldn’t diagnose my father all on its own, but it could have flagged consistent changes in his kidney function and correlated them with other troubling indicators. Perhaps it would have unified those scattered bits of data—a chest X-ray here, a suspicious blood test there—so that each new piece of information triggered closer scrutiny.

Yet for all the promise AI holds, it won’t matter if patients and providers don’t trust it. If alerts and reminders are viewed as background noise—just another alarm among many in a busy clinic—then critical issues may still go unnoticed. That’s why any such system must be transparent about how it arrives at its recommendations, and it must operate continuously in tandem with real human oversight.

The Missing Thread: Continuous Care

One of the biggest challenges my father faced—beyond the clinical realities of organ failure and cancer—was navigating a disjointed care environment. Even when he saw the same doctors, he often encountered new nurses or support staff who weren’t familiar with his case. He had to become his own advocate, repeating medical histories and test results, worried that a single oversight could spell disaster.

If every practitioner had easy access to a continuous stream of up-to-date information, that weight wouldn’t have been solely on my father’s shoulders. An AI-backed platform might have served as the “single source of truth” across different hospitals, labs, and specialists. Instead of fragmented snapshots—a lab test here, a consultation there—his providers would see a holistic, evolving picture of his health. And instead of being passive recipients of siloed updates, they’d participate in a more proactive, team-based approach.

By incorporating AI, healthcare could move from isolated snapshots to a more dynamic and connected view. For example, AI systems could track trends in lab results and imaging over time, detecting subtle changes that may otherwise be overlooked. By learning from every new case, these systems continuously improve, identifying correlations across medical histories, imaging results, and lifestyle factors. This would allow for earlier interventions and more tailored care, such as flagging kidney function changes that coincide with other troubling indicators.
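To make “tracking trends” concrete, here is a deliberately simple sketch with made-up numbers and thresholds: a rolling check on a routine lab value (serum creatinine as a proxy for kidney function) that flags a sustained drift for clinician review. It diagnoses nothing; it just makes sure a human looks.

```python
# Toy sketch: flag a sustained drift in a routine lab value for clinician review.
# Values, units, and thresholds are illustrative only - this is not clinical guidance.

from statistics import mean


def flag_trend(values, window: int = 3, rise_fraction: float = 0.25) -> bool:
    """Compare the mean of the most recent `window` results to the earlier baseline;
    return True if the recent average has risen by more than `rise_fraction`."""
    if len(values) < 2 * window:
        return False
    baseline = mean(values[:-window])
    recent = mean(values[-window:])
    return (recent - baseline) / baseline > rise_fraction


# Serum creatinine (mg/dL) over successive routine visits (made-up numbers)
creatinine = [1.0, 1.1, 1.0, 1.2, 1.4, 1.6, 1.9]
if flag_trend(creatinine):
    print("Sustained rise in creatinine - route to nephrology for review")
```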

Why Trust Matters More Than Ever

Still, technology can only go so far without human trust and collaboration. The best data-sharing framework in the world won’t help if doctors and nurses are suspicious of AI’s findings or if patients don’t feel comfortable granting access to their health records. Some of this wariness is understandable; health information is deeply personal, and no one wants to risk privacy breaches or rely on software that might produce false alarms.

Yet, if handled properly—with robust privacy protections, clear transparency about how data is used, and consistent evidence of accuracy—AI can become a trusted ally. That trust frees up healthcare professionals to do what they do best: engage with patients, provide empathy, and make nuanced clinical judgments. Meanwhile, the AI quietly handles the complex, data-heavy tasks in the background.

Restoring the Human Element

Paradoxically, I believe that good AI could actually bring more humanity back into healthcare. Right now, many doctors and nurses are buried under administrative and repetitive tasks that eat into the time they can spend with patients. Automated systems can relieve some of that burden, ensuring that routine record checks, appointment scheduling, and cross-specialty communication flow smoothly without continuous manual follow-up.

For patients like my father, that could mean quicker recognition of red flags, fewer repeated tests, and less of the emotional toll that comes from feeling like you have to quarterback your own care. It could also open the door for more meaningful moments between patients and providers—when doctors aren’t racing against a backlog of paperwork, they can be more present and attentive.

Walking Toward a Better Future

My father’s story underscores the steep price we pay for a fragmented, often reactive healthcare system. Even though he was conscientious about his check-ups, too many critical data points floated disconnected across different facilities. By the time all those puzzle pieces came together, it was too late to prevent significant damage.

Yet this isn’t just about looking backward. If there’s a silver lining, it’s the conviction that we can do better. By embracing subtle, well-integrated AI systems, we could transform the way we handle everything from day-to-day care to life-changing diagnoses. We could move beyond isolated treatments and instead give patients a coherent support network—one that sees them as whole individuals rather than a collection of disconnected symptoms.

A Call to Rethink Care

I don’t claim to have all the answers, and I know technology can’t solve every issue in healthcare. But seeing my father’s struggle firsthand has taught me that we urgently need a more unified, trust-driven approach—one that values continuous monitoring as much as it does specialized expertise.

  • Patients should have full visibility into their records, supported by AI that can highlight pressing concerns.
  • Providers deserve a system that connects them with real-time data and offers gentle nudges for follow-up, not an endless overload of unrelated alerts.
  • AI developers must design platforms that respect privacy, ensure transparency, and genuinely earn the confidence of medical teams.

If we can get these pieces right, tragedies like my father’s might become far less common. And then, at long last, we’d have a healthcare system that fulfills its most fundamental promise—to care for human life in a truly holistic, proactive way.