I’ve been mulling over what security might look like ten years from now, especially as AI-based workloads and robotics take on bigger roles. Growing up, I’d hear my father talk about his work on communication satellites, where triple redundancy was his way of managing risk, not dodging it. That perspective, paired with lessons from the automotive, aerospace, nuclear, and space industries, feels like a compass as we rethink security in an AI-driven age. It points us toward a future where security isn’t a rigid barrier but a digital immune system—alive, adaptive, and resilient.
Learning from the Physical World
In industries like automotive and aerospace, every piece is built to perform—and to fail without falling apart. Cars layer airbags, antilock brakes, and sensors; airplanes stack redundant systems to keep flying when one falters. Nuclear plants and space missions go deeper, with containment designs and fail-safes that tame the unthinkable. My father’s satellite work ran on this: three layers of backup meant a glitch wouldn’t kill the mission. The takeaway? Strength comes from managing risk, not avoiding it. That mindset, forged in physical systems, would be our starting point for tackling the wild unknowns ahead.
Seeing Security Like a Living Thing
The era of a fixed perimeter is over. Zero trust has rewired our thinking, but as AI powers complex workloads and human-AI robotics step into the fray, static defenses will clearly not cut it. Security is evolving further into an immune system—and we’ll finally see real adaptive defenses land. This isn’t narrow AI bolted onto old walls; it’s a deeper rethink—systems that scan for threats, learn from them, and adapt on the fly. We’re already seeing hints—AI supply chain risks, like models shipping with embedded malware, or agentic workloads escaping containers—that will push this shift. Much like antibodies in the body, these systems won’t just block attacks but hunt for anomalies, isolate them, and strengthen themselves against the next wave. Picture a network that doesn’t wait for breaches but runs silent simulations, sniffing out weak points and patching them—or a robotic assistant that locks down if its sensors detect and confirm an anomaly, echoing the overlapping safety nets of a car or my father’s redundant circuits.
This shift matters most with AI’s wild card: emergent behavior. As systems grow more general, simple parts can spark unexpected outcomes—think of a flock of birds veering as one or a traffic jam born from a few slow cars. In AI and robotics, these surprises could turn risky fast. Drawing from aerospace and nuclear design, we can bake in safety—redundancy, real-time monitoring, adaptive controls—so the system acts like an immune response, spotting odd patterns and neutralizing them before they spread. By 2035, this could redefine security for not just AI but all critical infrastructure—power grids, finance, healthcare, robotic fleets—marrying physical resilience with digital smarts.
How It Holds Everything Together
Resilience beats perfection every time—systems that bend, learn, and bounce back are what endure. Right now, our tech is a messy mix of old and new, full of cracks where risks hide. A digital immune system faces that head-on, and its role only grows as AI and robotics weave deeper into society. With workloads and machines going vertical—powering healthcare, governance, daily life—security becomes the thread holding it together, fast enough to let us steer it toward securing what matters, not just patching what’s broken. Picture a corporate network that senses a phishing attempt, quarantines it like a virus, then “vaccinates” itself by updating defenses everywhere—all while leaving a clear trail to prove what happened. Or a smart city where traffic, power, and robotic responders hum with AI-driven immunity—self-correcting, redundant, and naturally spitting out the artifacts needed to meet compliance obligations, not as an afterthought.
Where It’s All Heading
As we leave perimeter defenses behind for systems secure by design, the wisdom of automotive, aerospace, nuclear, and space industries lights the way. Fusing their lessons with an AI-driven immune system, we’ll build technology that’s tough, trustworthy, and ahead of the curve—keeping problems from spilling outward. Security won’t be static; it’ll be a pattern that keeps adjusting. My father used to say, “If you want to change the world, you have to see it as it is first.” Seeing our systems clearly—flaws and all—is how we’ll shape a future where they don’t just endure uncertainty but thrive in it.
My grandfather’s love of science fiction was his portal to tomorrow’s world—and it became mine. Together we’d pore over books like Asimov’s I, Robot, imagining futures shaped by machines. In the 1940s, when Asimov explored the complexities of artificial intelligence and human-robot relationships, it was pure speculation. By the 2000s, Hollywood had adapted these ideas into films where robots went rogue. Now, in the 2020s, the narrative has flipped—The Creator (2023) depicts a future where humanity, driven by fear, attempts to exterminate all AI. Unlike Asimov’s cautionary tales, where danger emerged from technology’s unintended consequences, this film casts humanity itself as the villain. This shift mirrors a broader cultural change: once we feared what we might create; now we fear who we have been.
As a security practitioner, this evolution gives me pause, especially as robotics and machine learning systems grow ever more autonomous. Today’s dominant approach to AI safety relies on alignment and reinforcement learning—a strategy that aims to shape AI behavior through incentives and training. However, this method falls prey to a well-known phenomenon in optimization known as Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure. In the context of AI alignment, if the reward signal is our measure of success, over-optimization can lead to unintended, and often absurd, behaviors—exactly because the reward function cannot capture every nuance of our true values.
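Goodhart’s Law is easy to demonstrate with a toy optimizer. In this sketch (every name and function here is illustrative, not drawn from any real alignment system), an agent greedily hill-climbs a proxy reward that only partially tracks the true objective; pushing the proxy far enough drives the true value negative:

```python
def proxy_reward(x: float) -> float:
    # What we measure: more output is always "better".
    return x

def true_value(x: float) -> float:
    # What we actually want: output helps up to a point,
    # then unmeasured side effects dominate.
    return x - 0.1 * x ** 2

def hill_climb(steps: int = 50, step_size: float = 1.0) -> float:
    """Greedily optimize the proxy, never consulting the true objective."""
    x = 0.0
    for _ in range(steps):
        if proxy_reward(x + step_size) > proxy_reward(x):
            x += step_size
    return x
```

Because the proxy is monotone, the optimizer never stops climbing; the true objective peaks early and then collapses. That gap between what we can score and what we actually value is the whole problem in miniature.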
Much like early reinforcement learning schemes, Asimov’s Three Laws were a structural control mechanism—designed not to guide morality but to constrain outcomes. They, too, failed in unexpected ways when the complexity of real-world scenarios outstripped their simplistic formulations.
This raises a deeper question: If we now view ourselves as the existential threat, can we truly build AI that serves us? Or will our fears—whether of AI or of our own past—undermine the future we once dreamed of?
Today’s creators display a similar hubris. Once, we feared losing control of our inventions; now, we charge ahead, convinced that our intelligence alone can govern machines far more complex than we understand. But intelligence is not equivalent to control. While Asimov’s Three Laws attempted to impose hard limits, many modern AI safety strategies lean on alignment methods that, as Goodhart’s Law warns us, can degrade once a target is set.
This blind trust in alignment resembles our current approach to security. The slogan “security is everyone’s responsibility” was meant to foster vigilance but often dilutes accountability. When responsibility is diffuse, clear, enforceable safeguards are frequently absent. True security—and true AI governance—demands more than shared awareness; it requires structural enforcement. Without built-in mechanisms of control, we risk mistaking the illusion of safety for actual safety.
Consider containment as an illustrative example of structural control: by embedding hard limits on the accumulation of power, data, or capabilities within AI systems, we can create intrinsic safeguards against runaway behavior—much like physical containment protocols manage hazardous materials.
If we continue to see ourselves as the existential threat, then today’s creators risk designing AI that mirrors our own fears, biases, and contradictions. Without integrating true structural safeguards into AI—mechanisms designed into the system rather than imposed externally—we aren’t ensuring that AI serves us; we are merely hoping it will.
The Luddites were not entirely wrong to fear technology’s disruptive power, nor were they correct in believing they could halt progress altogether. The error lay in accepting only extremes—total rejection or uncritical adoption. Today, with AI, we face a similar dilemma. We cannot afford naïve optimism that alignment alone will save us, nor can we succumb to reactionary pessimism that smothers innovation out of fear.
Instead, we must start with the assumption that we, as humans, are fallible. Our intelligence alone is insufficient to control intelligence. If we do not design AI with structural restraint and built-in safeguards—grounded not in fear or arrogance but in pragmatic control—we risk losing control entirely. Like robust security practices, AI safety cannot be reduced to an abstract, diffuse responsibility. It must be an integral part of the system itself, not left to the vague hope that collectively we will always do the right thing.
Reading was a big deal when I was a kid, but it was also a challenge. I’m dyslexic, dysgraphic, and dysnumeric, which made traditional learning methods difficult—but that’s largely another story. My parents—determined, if not always gentle—had a simple solution: they forced me to read, interpret, and present. They assigned me books, and I had to give oral reports on them. In hindsight, it was one of the most impactful things they did for me because that process—taking in complex information, distilling it, and presenting it clearly—is exactly how professionals in technology function today.
One of the books they had me read was Plato’s Republic. My biggest takeaway? How little had changed in our fundamental struggles with governance. The same debates about justice, power, and human nature that played out in ancient Greece continue today—only the terminology and tools have changed. Looking back, it makes sense why my parents chose that book. My father is logical to a fault and deeply patriotic, and my mother, though no longer politically active, still carries a pocket Constitution in her purse, with more in her trunk in case she runs out. Law and governance weren’t abstract to me—they were everyday conversations.
That experience stayed with me. It made me realize that governance isn’t just about laws—it’s about whether people understand and engage with those laws. And today, we face a different challenge: not a lack of information, but an overwhelming amount of it.
We tend to think of education—whether in civics, history, or technology—as a process of absorbing facts. But facts alone aren’t useful if we don’t know how to assess, connect, or apply them. When I was a kid, I didn’t just have to read The Republic—I had to present it, explain it, and engage with it. That distinction is important. Simply memorizing a passage from Plato wouldn’t have taught me much, but thinking through what it meant, arguing about its implications, and framing it in a way that made sense to me? That was where the real learning happened.
The same principle applies today. We live in an era where access to knowledge is not the bottleneck. AI can summarize court rulings, analyze laws, and map out how different governance systems compare. Information is endless, but comprehension is scarce. The problem isn’t finding knowledge—it’s knowing what matters, how to think critically about it, and how to engage with it.
This issue isn’t unique to civic engagement. It’s the same challenge students face as AI reshapes how they learn. It’s no longer enough to teach kids historical dates, formulas, or legal principles. They need to know how to question sources, evaluate reliability, and synthesize information in meaningful ways. They need to be prepared for a world where facts are easy to retrieve, but judgment, reasoning, and application are the real skills that matter.
The challenge for civic engagement is similar. There’s no shortage of legislative updates, executive orders, or judicial decisions to sift through. What’s missing is a way to contextualize them—to understand where they fit within constitutional principles, how they compare globally, and what their broader implications are.
That’s why the opportunity today is so compelling. The same AI-driven shifts transforming education can change how people engage with governance. Imagine a world where AI doesn’t just regurgitate legal language but helps people grasp how laws have evolved over time. Where it doesn’t just list amendments but connects them to historical debates and real-world consequences. Where it helps individuals—not just legal experts—track how their representatives vote, how policies change, and how different governance models approach similar challenges.
When I was growing up, my parents didn’t just want me to know about Plato’s ideas; they wanted me to engage with them. To question them. To challenge them. That’s what civic engagement should be—not passive consumption of legal information, but active participation in governance. And just as students today need to shift from memorization to deeper understanding, citizens need to move from surface-level awareness to critical, informed engagement with the world around them.
In many ways, AI could serve a similar role to what my parents did for me—forcing engagement, breaking down complexity, and pushing us to think critically. The difference is, this time, we have the tools to make that experience accessible to everyone.
Plato questioned whether democracy could survive without a well-informed citizenry. Today, the challenge isn’t lack of information—it’s making that information usable. And with the right approach, we can turn civic engagement from a passive duty into an active, lifelong pursuit.
This weekend, I came across a LinkedIn article by Priscilla Russo about OpenAI agents and digital wallets that touched on something I’ve been thinking about – liability, AI agents, and how they change system design. As autonomous AI systems become more prevalent, we face a critical challenge: how do we secure systems that actively optimize for success in ways that can break traditional security models? The article’s discussion of Knight Capital’s $440M trading glitch perfectly illustrates what’s at stake. When automated systems make catastrophic decisions, there’s no undo button – and with AI agents, the potential for unintended consequences scales dramatically with their capability to find novel paths to their objectives.
What we’re seeing isn’t just new—it’s a fundamental shift in how organizations approach security. Traditional software might accidentally misuse resources or escalate privileges, but AI agents actively seek out new ways to achieve their goals, often in ways developers never anticipated. This isn’t just about preventing external attacks; it’s about containing AI itself—ensuring it can’t accumulate unintended capabilities, bypass safeguards, or operate beyond its intended scope. Without containment, AI-driven optimization doesn’t just break security models—it reshapes them in ways that make traditional defenses obsolete.
“First, in 2024, o1 broke out of its container by exploiting a vuln. Then, in 2025, it hacked a chess game to win. Relying on AI alignment for security is like abstinence-only sex ed—you think it’s working, right up until it isn’t,” said the former 19-year-old father.
The Accountability Gap
Most security discussions around AI focus on protecting models from adversarial attacks or preventing prompt injection. These are important challenges, but they don’t get to the core problem of accountability. As Russo suggests, AI developers are inevitably going to be held responsible for the actions of their agents, just as financial firms, car manufacturers, and payment processors have been held accountable for unintended consequences in their respective industries.
The parallel to Knight Capital is particularly telling. When their software malfunction led to catastrophic trades, there was no ambiguity about liability. That same principle will apply to AI-driven decision-making – whether in finance, healthcare, or legal automation. If an AI agent executes an action, who bears responsibility? The user? The AI developer? The organization that allowed the AI to interact with its systems? These aren’t hypothetical questions anymore – regulators, courts, and companies need clear answers sooner rather than later.
Building Secure AI Architecture
Fail to plan, and you plan to fail. When legal liability is assigned, the difference between a company that anticipated risks, built mitigations, implemented controls, and ensured auditability and one that did not will likely be significant. Organizations that ignore these challenges will find themselves scrambling after a crisis, while those that proactively integrate identity controls, permissioning models, and AI-specific security frameworks will be in a far better position to defend their decisions.
While security vulnerabilities are a major concern, they are just one part of a broader set of AI risks. AI systems can introduce alignment challenges, emergent behaviors, and deployment risks that reshape system design. But at the core of these challenges is the need for robust identity models, dynamic security controls, and real-time monitoring to prevent AI from optimizing in ways that bypass traditional safeguards.
Containment and isolation are just as critical as resilience. It’s one thing to make an AI model more robust – it’s another to ensure that if it misbehaves, it doesn’t take down everything around it. A properly designed system should ensure that an AI agent can’t escalate its access, operate outside of predefined scopes, or create secondary effects that developers never intended. AI isn’t just another software component – it’s an active participant in decision-making processes, and that means limiting what it can influence, what it can modify, and how far its reach extends.
I’m seeing organizations take radically different approaches to this challenge. As Russo points out in her analysis, some organizations like Uber and Instacart are partnering directly with AI providers, integrating AI-driven interactions into their platforms. Others are taking a defensive stance, implementing stricter authentication and liveness tests to block AI agents outright. The most forward-thinking organizations are charting a middle path: treating AI agents as distinct entities with their own credentials and explicitly managed access. They recognize that pretending AI agents don’t exist or trying to force them into traditional security models is a recipe for disaster.
Identity and Authentication for AI Agents
One of the most immediate problems I’m grappling with is how AI agents authenticate and operate in online environments. Most AI agents today rely on borrowed user credentials, screen scraping, and brittle authentication models that were never meant to support autonomous systems. Worse, when organizations try to solve this through traditional secret sharing or credential delegation, they end up spraying secrets across their infrastructure – creating exactly the kind of standing permissions and expanded attack surface we need to avoid. This might work in the short term, but it’s completely unsustainable.
The future needs to look more like SPIFFE for AI agents – where each agent has its own verifiable identity, scoped permissions, and limited access that can be revoked or monitored. But identity alone isn’t enough. Having spent years building secure systems, I’ve learned that identity must be coupled with attenuated permissions, just-in-time authorization, and zero-standing privileges. The challenge is enabling delegation without compromising containment – we need AI agents to be able to delegate specific, limited capabilities to other agents without sharing their full credentials or creating long-lived access tokens that could be compromised.
Systems like Biscuits and Macaroons show us how this could work: they allow for fine-grained scoping and automatic expiration of permissions in a way that aligns perfectly with how AI agents operate. Instead of sharing secrets, agents can create capability tokens that are cryptographically bound to specific actions, contexts, and time windows. This would mean an agent can delegate exactly what’s needed for a specific task without expanding the blast radius if something goes wrong.
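As a concrete sketch of the macaroon-style approach (a minimal illustration of the idea, not Biscuit’s or Macaroons’ actual formats), each caveat is folded into an HMAC chain: any holder can narrow a token by appending a caveat, but no one without the root key can remove one or forge a wider token.

```python
import hashlib
import hmac
import time

def _chain(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def mint(root_key: bytes, token_id: bytes) -> dict:
    """Mint a token whose signature chains off a secret root key."""
    return {"id": token_id, "caveats": [], "sig": _chain(root_key, token_id)}

def attenuate(token: dict, caveat: str) -> dict:
    """Append a restriction. Holders can narrow a token freely, but
    removing a caveat breaks the HMAC chain."""
    return {"id": token["id"],
            "caveats": token["caveats"] + [caveat],
            "sig": _chain(token["sig"], caveat.encode())}

def verify(root_key: bytes, token: dict, context: dict) -> bool:
    """Recompute the chain and check every caveat against the request."""
    sig = _chain(root_key, token["id"])
    for caveat in token["caveats"]:
        sig = _chain(sig, caveat.encode())
        key, _, value = caveat.partition("=")
        if key == "action" and context.get("action") != value:
            return False
        if key == "expires" and context.get("now", time.time()) > float(value):
            return False
    return hmac.compare_digest(sig, token["sig"])
```

An orchestrating agent could mint one token, attenuate it to `action=read_invoice` plus a short expiry, and hand only that narrowed token to a sub-agent; the sub-agent can narrow it further but can never widen it back.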
Agent Interactions and Chain of Responsibility
What keeps me up at night isn’t just individual AI agents – it’s the interaction between them. When a single AI agent calls another to complete a task, and that agent calls yet another, you end up with a chain of decision-making where no one knows who (or what) actually made the call. Without full pipeline auditing and attenuated permissions, this becomes a black-box decision-making system with no clear accountability or verifiability. That’s a major liability problem – one that organizations will have to solve before AI-driven processes become deeply embedded in financial services, healthcare, and other regulated industries.
This is particularly critical as AI systems begin to interact with each other autonomously. Each step in an AI agent’s decision-making chain must be traced and logged, with clear accountability at each transition point. We’re not just building technical systems—we’re building forensic evidence chains that will need to stand up in court.
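One hedged sketch of such an evidence chain (the record fields and names are illustrative): an append-only log in which each delegation step commits to its predecessor via a hash, so an edited or deleted entry anywhere in the chain is detectable on verification.

```python
import hashlib
import json
import time

class AuditChain:
    """Append-only log where each entry commits to the previous one,
    making gaps or edits in a delegation chain detectable."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []          # list of (entry_dict, entry_hash)
        self._prev = self.GENESIS

    @staticmethod
    def _digest(entry: dict) -> str:
        return hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()

    def record(self, actor: str, action: str, on_behalf_of: str) -> str:
        entry = {
            "actor": actor,                 # which agent acted
            "action": action,               # what it did
            "on_behalf_of": on_behalf_of,   # who delegated the task
            "ts": time.time(),
            "prev": self._prev,             # commitment to prior entry
        }
        entry_hash = self._digest(entry)
        self.entries.append((entry, entry_hash))
        self._prev = entry_hash
        return entry_hash

    def verify(self) -> bool:
        """Walk the chain; any mutation or reordering breaks a link."""
        prev = self.GENESIS
        for entry, entry_hash in self.entries:
            if entry["prev"] != prev or self._digest(entry) != entry_hash:
                return False
            prev = entry_hash
        return True
```

A production system would anchor these hashes somewhere the agents can’t reach (a signing service or external notary), but even this minimal chain turns “who actually made the call” into a checkable question rather than a forensic guess.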
Runtime Security and Adaptive Controls
Traditional role-based access control models fundamentally break down with AI systems because they assume permissions can be neatly assigned based on predefined roles. But AI doesn’t work that way. Through reinforcement learning, AI agents optimize for success rather than security, finding novel ways to achieve their goals – sometimes exploiting system flaws in ways developers never anticipated. We have already seen cases where AI models learned to game reward systems in completely unexpected ways.
This requires a fundamental shift in our security architecture. We need adaptive access controls that respond to behavior patterns, runtime security monitoring for unexpected decisions, and real-time intervention capabilities. Most importantly, we need continuous behavioral analysis and anomaly detection that can identify when an AI system is making decisions that fall outside its intended patterns. The monitoring systems themselves must evolve as AI agents find new ways to achieve their objectives.
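A minimal sketch of the behavioral-baseline idea (a deliberately crude frequency model, far simpler than a production anomaly detector, with illustrative names): learn each agent’s action mix during a trusted window, then flag calls that fall outside the learned pattern, failing closed for agents with no baseline at all.

```python
from collections import Counter

class BehaviorMonitor:
    """Tracks each agent's action mix during a trusted baseline window,
    then flags requests that fall outside the learned pattern."""

    def __init__(self, min_baseline_freq: float = 0.01):
        self.baseline = {}                 # agent -> Counter of actions
        self.min_freq = min_baseline_freq  # rarer than this = anomalous

    def observe_baseline(self, agent: str, action: str) -> None:
        self.baseline.setdefault(agent, Counter())[action] += 1

    def is_anomalous(self, agent: str, action: str) -> bool:
        counts = self.baseline.get(agent)
        if not counts:
            return True                    # unknown agent: fail closed
        total = sum(counts.values())
        return counts[action] / total < self.min_freq
```

The point isn’t the statistics, which here are trivial; it’s the architecture: the decision to allow an action depends on observed behavior, not on a role assigned at deploy time, and the baseline itself must keep evolving as the agent does.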
Compliance by Design
Drawing from my years building CAs, I’ve learned that continual compliance can’t just be a procedural afterthought – it has to be designed into the system itself. The most effective compliance models don’t just meet regulatory requirements at deployment; they generate the artifacts needed to prove compliance as natural byproducts of how they function.
The ephemeral nature of AI agents actually presents an opportunity here. Their transient access patterns align naturally with modern encryption strategies: access should be temporary, data should be encrypted at rest and in motion, and only the agent authorized for a specific action should be able to decrypt the specific information that task requires.
The Path Forward
If we don’t rethink these systems now, we’ll end up in a situation where AI-driven decision-making operates in a gray area where no one is quite sure who’s responsible for what. And if history tells us anything, regulators, courts, and companies will eventually demand a clear chain of responsibility – likely after a catastrophic incident forces the issue.
The solution isn’t just about securing AI – it’s about building an ecosystem where AI roles are well-defined and constrained, where actions are traceable and attributable, and where liability is clear and manageable. Security controls must be adaptive and dynamic, while compliance remains continuous and verifiable.
Organizations that ignore these challenges will find themselves scrambling after a crisis. Those that proactively integrate identity controls, permissioning models, and AI-specific security frameworks will be far better positioned to defend their decisions and maintain control over their AI systems. The future of AI security lies not in building impenetrable walls, but in creating transparent, accountable systems that can adapt to the unique challenges posed by autonomous agents.
This post lays out the challenges, but securing AI systems requires a structured, scalable approach. In “Containing the Optimizer: A Practical Framework for Securing AI Agent Systems,” I outline a five-pillar framework that integrates containment, identity, adaptive monitoring, and real-time compliance to mitigate these risks.
Healthcare becomes deeply personal when the system’s fragmentation leads to life-altering outcomes. During COVID-19, my father’s doctor made what seemed like a prudent choice: postpone treatment for fluid retention to minimize virus exposure. What began as a cautious approach—understandable in a pandemic—ended up having dire consequences. By the time anyone realized how rapidly his condition was worsening, his kidneys had suffered significant damage, ultimately leading to kidney failure.
Later, despite years of regular check-ups and lab work (which hinted at possible malignancies), he was diagnosed with stage four lung cancer. Alarming as that was on its own, what stung even more was how these warning signs never coalesced into a clear intervention plan. His history as a smoker and several concerning lab results should have raised flags. Yet no one connected the dots. It was as if his care lived in separate compartments: one file at the dialysis center, another at oncology, and a third at his primary care clinic.
The Fragmentation Crisis
That disjointed experience shone a harsh light on how easily critical information can remain siloed. One specialist would note an abnormality and advise a follow-up, only for that recommendation to slip through the cracks by the time my father went to his next appointment. Each time he walked into a different office, he essentially had to start from scratch—retelling his story, hoping the right details were captured, and trusting that this piece could eventually reach the right people.
The challenges went beyond missing data. My father, who had set dialysis sessions on the same days each week, routinely found his other appointments—like oncology visits or additional lab work—piled on top of those sessions. He spent hours juggling schedules just to avoid double-booking, which was the last thing he needed while battling serious health concerns.
COVID-19 made all of this worse. The emphasis on social distancing—again, quite reasonable in itself—took away the face-to-face time that might have revealed early red flags. Without continuous, well-integrated data flow, even well-meaning advice to “stay home” inadvertently blocked us from seeing how quickly my father’s health was unraveling.
A Potential Game Changer: Subtle AI Support
Throughout this ordeal, I couldn’t help but imagine what a more seamless, data-driven healthcare system might look like. I’m not talking about robots taking over doctor visits, but rather subtle, behind-the-scenes assistance—sometimes described as “agentic workloads.” Think of these as AI systems quietly scanning medical records, cross-referencing lab results, and gently notifying doctors or nurses about unusual patterns.
AI is already proving its value in diagnostic imaging. Studies have shown that computer-vision algorithms can analyze X-rays, CT scans, and MRIs with remarkable accuracy—often matching or even surpassing human radiologists. For example, AI has been shown to detect lung nodules with greater precision, helping identify potential issues that might have been missed otherwise. This type of integration could enhance our ability to catch problems like kidney damage or lung cancer earlier, triggering quicker interventions.
Additionally, when my father underwent chemotherapy, he had to wait weeks after treatment and imaging to learn whether it was effective—an excruciating delay that AI could drastically shorten by providing faster, more integrated feedback to both patients and care teams.
Ideally, this technology would work much like a vigilant assistant: it wouldn’t diagnose my father all on its own, but it could have flagged consistent changes in his kidney function and correlated them with other troubling indicators. Perhaps it would have unified those scattered bits of data—a chest X-ray here, a suspicious blood test there—so that each new piece of information triggered closer scrutiny.
Yet for all the promise AI holds, it won’t matter if patients and providers don’t trust it. If alerts and reminders are viewed as background noise—just another alarm among many in a busy clinic—then critical issues may still go unnoticed. That’s why any such system must be transparent about how it arrives at its recommendations, and it must operate continuously in tandem with real human oversight.
The Missing Thread: Continuous Care
One of the biggest challenges my father faced—beyond the clinical realities of organ failure and cancer—was navigating a disjointed care environment. Even when he saw the same doctors, he often encountered new nurses or support staff who weren’t familiar with his case. He had to become his own advocate, repeating medical histories and test results, worried that a single oversight could spell disaster.
If every practitioner had easy access to a continuous stream of up-to-date information, that weight wouldn’t have been solely on my father’s shoulders. An AI-backed platform might have served as the “single source of truth” across different hospitals, labs, and specialists. Instead of fragmented snapshots—a lab test here, a consultation there—his providers would see a holistic, evolving picture of his health. And instead of being passive recipients of siloed updates, they’d participate in a more proactive, team-based approach.
By incorporating AI, healthcare could move from isolated snapshots to a more dynamic and connected view. For example, AI systems could track trends in lab results and imaging over time, detecting subtle changes that may otherwise be overlooked. By learning from every new case, these systems continuously improve, identifying correlations across medical histories, imaging results, and lifestyle factors. This would allow for earlier interventions and more tailored care, such as flagging kidney function changes that coincide with other troubling indicators.
Why Trust Matters More Than Ever
Still, technology can only go so far without human trust and collaboration. The best data-sharing framework in the world won’t help if doctors and nurses are suspicious of AI’s findings or if patients don’t feel comfortable granting access to their health records. Some of this wariness is understandable; health information is deeply personal, and no one wants to risk privacy breaches or rely on software that might produce false alarms.
Yet, if handled properly—with robust privacy protections, clear transparency about how data is used, and consistent evidence of accuracy—AI can become a trusted ally. That trust frees up healthcare professionals to do what they do best: engage with patients, provide empathy, and make nuanced clinical judgments. Meanwhile, the AI quietly handles the complex, data-heavy tasks in the background.
Restoring the Human Element
Paradoxically, I believe that good AI could actually bring more humanity back into healthcare. Right now, many doctors and nurses are buried under administrative and repetitive tasks that eat into the time they can spend with patients. Automated systems can relieve some of that burden, ensuring that routine record checks, appointment scheduling, and cross-specialty communication flow smoothly without continuous manual follow-up.
For patients like my father, that could mean quicker recognition of red flags, fewer repeated tests, and less of the emotional toll that comes from feeling like you have to quarterback your own care. It could also open the door for more meaningful moments between patients and providers—when doctors aren’t racing against a backlog of paperwork, they can be more present and attentive.
Walking Toward a Better Future
My father’s story underscores the steep price we pay for a fragmented, often reactive healthcare system. Even though he was conscientious about his check-ups, too many critical data points floated disconnected across different facilities. By the time all those puzzle pieces came together, it was too late to prevent significant damage.
Yet this isn’t just about looking backward. If there’s a silver lining, it’s the conviction that we can do better. By embracing subtle, well-integrated AI systems, we could transform the way we handle everything from day-to-day care to life-changing diagnoses. We could move beyond isolated treatments and instead give patients a coherent support network—one that sees them as whole individuals rather than a collection of disconnected symptoms.
A Call to Rethink Care
I don’t claim to have all the answers, and I know technology can’t solve every issue in healthcare. But seeing my father’s struggle firsthand has taught me that we urgently need a more unified, trust-driven approach—one that values continuous monitoring as much as it does specialized expertise.
Patients should have full visibility into their records, supported by AI that can highlight pressing concerns.
Providers deserve a system that connects them with real-time data and offers gentle nudges for follow-up, not an endless overload of unrelated alerts.
AI developers must design platforms that respect privacy, ensure transparency, and genuinely earn the confidence of medical teams.
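As a rough illustration of what “gentle nudges, not an endless overload” could mean in practice, here is a small Python sketch that deduplicates alerts, drops those below a severity threshold, and surfaces only a clinician’s own patients, urgent first. All names, fields, and values are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen makes instances hashable, so a set dedupes them
class Alert:
    patient_id: str
    message: str
    severity: int  # 1 (routine) .. 3 (urgent)

def triage(alerts, my_patients, min_severity=2):
    """Keep one copy of each alert, only for the clinician's own
    patients, at or above `min_severity`, sorted urgent-first."""
    relevant = {a for a in alerts
                if a.patient_id in my_patients and a.severity >= min_severity}
    return sorted(relevant, key=lambda a: -a.severity)

inbox = [
    Alert("p1", "eGFR trending down", 3),
    Alert("p1", "eGFR trending down", 3),   # duplicate from a second system
    Alert("p2", "routine refill due", 1),   # noise below the threshold
    Alert("p9", "lab result pending", 2),   # not this clinician's patient
]
for a in triage(inbox, my_patients={"p1", "p2"}):
    print(a.patient_id, a.message)        # prints only: p1 eGFR trending down
```

A real platform would weigh clinical context, not just a severity integer, but even this toy version shows how a few filters turn a flood into a short, relevant list.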
If we can get these pieces right, tragedies like my father’s might become far less common. And then, at long last, we’d have a healthcare system that fulfills its most fundamental promise—to care for human life in a truly holistic, proactive way.
What does it take to prepare our children for a tomorrow where AI shapes how they get information, robots change traditional jobs, and careers transform faster than ever—a time when what they can memorize matters far less than how quickly they can think, adapt, and create? As a parent with children aged 29, 18, and 9, I can’t help wondering how best to prepare each of them. My oldest may have already found his way, but how do I ensure my younger two can succeed in a world so different from the one their brother entered just a few years before?
We’ve faced big changes like this before—moments that transformed how we work and what opportunities exist. A century ago, Ford’s assembly line wasn’t just about making cars faster; it redefined what skills workers needed and how companies treated employees. Decades later, Japan’s quality movement showed us that constant improvement and efficient thinking could transform entire industries. Each era required us to learn not just new facts, but new ways of thinking.
Today’s change, driven by artificial intelligence and robotics, is similar. AI will handle basic knowledge tasks at scale, and robots will take care of repetitive physical work. This means humans need to focus on higher-level skills: making sense of complex situations, evaluating information critically, combining ideas creatively, and breaking down big problems into solvable pieces. Instead of memorizing facts like a living library, our children need to know how to judge if information is trustworthy and connect ideas that might not seem related at first glance. They need to see knowledge not as something you collect and keep, but as something that grows and changes through questioning, discussion, and discovery.
Where can we find a guide for developing these new thinking skills? Interestingly, one already exists in our schools: the teaching strategies developed for gifted and twice-exceptional (2e) learners—students who are intellectually gifted but may also face learning challenges.
Gifted and 2e children think and learn in ways that are often intense, complex, and different from traditional methods. Teachers who work with these learners have refined approaches that develop multimodal thinking (using different ways to learn and understand), metacognition (thinking about how we think), and critical evaluation—exactly the skills all young people need in a future filled with smart machines and endless information.
Shift from Memorization to Meaning
Instead of drilling facts, encourage your child to question sources. If you’re discussing a news article at dinner, ask: “How do we know this claim is accurate? What makes the source trustworthy?” Now they’re not just absorbing information; they’re actively working to understand it.
Foster Multimodal Exploration
Make learning richer by using different approaches. Let them build a simple robot kit, draw a diagram of how it works, and then explain it in their own words. By connecting hands-on activity (tactile learning), visual learning, and verbal explanation, they develop deeper understanding.
Encourage Metacognition
After solving a puzzle or coding a simple project, have them reflect: “What worked best? What would you try differently next time?” By understanding their own thought processes, they become better at adapting their approach to new challenges.
Highlight Interdisciplinary Connections and Global Outlook
Show them that knowledge doesn’t exist in separate boxes. A math concept might connect beautifully with a musical pattern, or a historical event might be understood better through science. Help them see that good ideas and innovation come from everywhere in the world, not just one place or tradition.
Emphasize Emotional and Social Intelligence
In a world where machines handle routine tasks, human qualities like empathy, communication, and teamwork become even more important. Encourage them to be comfortable with uncertainty, to see setbacks as chances to learn, and to develop resilience (the ability to bounce back from difficulties). These people skills will matter just as much as any technical knowledge.
Deep Learning and Entrepreneurial Thinking
Like classical scholars who focused deeply on fewer subjects rather than skimming many, children benefit from spending more time thinking deeply about carefully chosen topics rather than rushing through lots of surface-level information. Consider teaching basic business and problem-solving skills early—like how to budget for a project or spot problems in their community that need solving—so they learn to create opportunities rather than just wait for them.
Finally, we’re raising children in an age where AI is becoming a constant helper and resource. While information is everywhere, the ability to understand it in context and make good judgments is rare and valuable. By using teaching techniques once reserved for gifted or 2e learners—multiple ways of learning, thinking about thinking, careful evaluation, global awareness, and creative combination of ideas—we prepare all children to be confident guides of their own learning. Instead of being overwhelmed by technology, they’ll learn to work with it, shape it, and use it to build meaningful futures.
This won’t happen overnight. But just as we adapted to big changes in the past, we can evolve again. We can model skepticism, curiosity, and flexible thinking at home. In doing so, we make sure that no matter how the world changes—no matter what new tools or systems appear—our children can stand on their own, resilient, resourceful, and ready to thrive in whatever tomorrow brings.
UPDATE [DEC 8, 2024]: In the spirit of AI, I played with Claude 3.5 Sonnet yesterday and turned this post into a React presentation.