Why It’s Time to Rethink Machine and Workload Identity: Lessons from User Security

MFA slashed credential-based attacks. Passwordless authentication made phishing harder than ever. These breakthroughs transformed user security—so why are machines and workloads still stuck with static secrets and long-lived credentials?

While we’ve made remarkable progress in securing user identity, the same cannot always be said for machine and workload identity—servers, workloads, APIs, and applications. Machines often rely on static secrets stored in configuration files, environment variables, or files that are copied across systems. Over time, these secrets become fragmented, overly shared, and difficult to track, creating significant vulnerabilities. The good news? Machines and workloads are arguably easier to secure than humans, and applying the same principles that worked for users—like short-lived credentials, multi-factor verification, and dynamic access—can yield even greater results.

Let’s take the lessons learned from securing users and reimagine how we secure machines and workloads.

From Static Secrets to Dynamic Credentials

Machine and workload identity have long been built on the shaky foundation of static secrets—API keys, passwords, or certificates stored in configuration files, environment variables, or local files. These secrets are often copied across systems, passed between teams, and reused in multiple environments, making them not only overly shared but also hard to track. This lack of visibility means that a single forgotten or mismanaged secret can become a point of entry for attackers.

The lesson from user security is clear: static secrets must be replaced with dynamic, ephemeral credentials that are:

  • Short-lived: Credentials should expire quickly to minimize exposure.
  • Context-aware: Access should be tied to specific tasks or environments.
  • Automatically rotated: Machines and workloads should issue, validate, and retire credentials in real-time without human intervention.

This shift is about evolving from secret management to credential management, emphasizing real-time issuance and validation over static storage. Just as password managers gave way to passwordless authentication, dynamic credentialing represents the next step in securing machines and workloads.
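
To make the pattern concrete, here is a minimal sketch of issuing and validating a short-lived, scope-bound credential. The broker, key handling, and token format are simplified stand-ins; a real deployment would use something like SPIFFE/SPIRE, a cloud workload identity service, or a secrets platform that mints credentials on demand rather than a hand-rolled HMAC token.

```python
# Minimal sketch: a credential broker that mints short-lived, scope-bound tokens.
# The broker, key handling, and token format are simplified stand-ins for a real
# workload identity service.
import base64
import hashlib
import hmac
import json
import secrets
from datetime import datetime, timedelta, timezone

SIGNING_KEY = secrets.token_bytes(32)  # held only by the (hypothetical) broker


def issue_credential(workload_id: str, scope: str, ttl_minutes: int = 15) -> str:
    """Mint a credential that expires quickly and is bound to one scope."""
    claims = {
        "sub": workload_id,
        "scope": scope,  # context-aware: tied to a specific task or environment
        "exp": (datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)).timestamp(),
    }
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"


def validate_credential(token: str) -> dict:
    """Reject tampered or expired credentials; callers re-request rather than cache."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if datetime.now(timezone.utc).timestamp() > claims["exp"]:
        raise ValueError("credential expired")
    return claims


if __name__ == "__main__":
    token = issue_credential("billing-service", scope="read:invoices")
    print(validate_credential(token))
```

The point of the sketch is the lifecycle, not the format: credentials are requested at the moment of use, carry their own expiry, and are never written to a config file.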

Attestation: The MFA for Machines and Workloads

For users, MFA became critical in verifying identity by requiring multiple factors: something you know, have, or are. Machines and workloads need an equivalent, and attestation fills that role.

Attestation acts as the MFA for machines and workloads by providing:

  1. Proof of identity: Verifying that a machine or workload is legitimate.
  2. Proof of context: Ensuring the workload’s environment and posture align with security policies.
  3. Proof of trustworthiness: Validating the workload operates within secure boundaries, such as hardware-backed enclaves or trusted runtimes.

Just as MFA reduced the impact of compromised passwords, attestation prevents compromised machines or workloads from gaining unauthorized access. It’s a dynamic, context-aware layer of security that aligns perfectly with Zero Trust principles.
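
As a rough illustration, here is a minimal sketch of an attestation check gating credential issuance. The evidence fields and allow-list values are hypothetical; real systems would rely on TPM quotes, cloud instance identity documents, or SPIFFE node and workload attestors rather than simple string comparisons.

```python
# Minimal sketch of an attestation check gating credential issuance. The evidence
# fields and allow-list values are hypothetical stand-ins for real attestation data.
from dataclasses import dataclass


@dataclass
class AttestationEvidence:
    workload_id: str    # proof of identity (e.g., a signed image digest)
    image_digest: str
    environment: str    # proof of context (cluster, region, runtime posture)
    secure_boot: bool   # proof of trustworthiness (hardware-backed measurement)


ALLOWED_DIGESTS = {"sha256:exampledigest"}   # hypothetical allow-list
ALLOWED_ENVIRONMENTS = {"prod-us-east"}


def attest(evidence: AttestationEvidence) -> bool:
    """Grant access only when identity, context, and trustworthiness all check out."""
    return (
        evidence.image_digest in ALLOWED_DIGESTS
        and evidence.environment in ALLOWED_ENVIRONMENTS
        and evidence.secure_boot
    )


evidence = AttestationEvidence(
    workload_id="payments-api",
    image_digest="sha256:exampledigest",
    environment="prod-us-east",
    secure_boot=True,
)
print(attest(evidence))  # True only when all three proofs pass
```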

Zero Trust: Reclaiming the Original Vision

When Zero Trust was introduced, it was a design principle: “Never trust, always verify.” It challenged the idea of implicit trust and called for dynamic, contextual verification for every access request.

But somewhere along the way, marketers reduced Zero Trust to a buzzword, often pitching point solutions like VPN replacements or network segmentation tools as if they were Zero Trust itself.

To reclaim Zero Trust, we need to:

  1. Treat all access as privileged access: Every request—whether from a user, machine, or workload—should be verified and granted the least privilege necessary.
  2. Apply dynamic credentialing: Replace static secrets with short-lived credentials tied to real-time context.
  3. Extend MFA principles to machines and workloads: Use attestation to continuously verify identity, context, and trustworthiness.

Preparing for the Future: Agentic AI and the Need for Robust Machine and Workload Identity

As organizations increasingly adopt agentic AI systems—autonomous systems that execute tasks and make decisions on behalf of users—the need for robust machine and workload identity management becomes even more pressing. These systems often require delegated access to resources, APIs, and other identities. Without proper safeguards, they introduce new attack surfaces, including:

  • Over-permissioned access: Delegated tasks may unintentionally expose sensitive resources.
  • Static secrets misuse: Secrets stored in configuration files or environment variables can become high-value targets for attackers, especially when copied across systems.
  • Fragmented visibility: Secrets that are spread across teams or environments are nearly impossible to track, making it hard to detect misuse.

To securely deploy agentic AI, organizations must:

  1. Implement dynamic credentials: Ensure AI systems use short-lived, context-aware credentials that expire after each task, reducing the risk of abuse.
  2. Require attestation: Validate the AI’s environment, behavior, and identity before granting access, just as you would verify a trusted workload.
  3. Continuously monitor and revoke access: Apply zero standing privileges, ensuring access is granted only for specific tasks and revoked immediately afterward.
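
Steps 1 and 3 above can be as simple as tying credential lifetime to task lifetime. Below is a minimal, hedged sketch in which an in-memory grant table stands in for a real token broker; the scope names are made up.

```python
# Minimal sketch: zero standing privileges for an agentic task. A credential is
# minted for one task, scoped to one resource, and revoked the moment the task
# finishes. The in-memory grant table is a stand-in for a real token broker.
import secrets
from contextlib import contextmanager

ACTIVE_GRANTS: dict[str, str] = {}  # token -> scope


def grant(scope: str) -> str:
    token = secrets.token_urlsafe(16)
    ACTIVE_GRANTS[token] = scope
    return token


def revoke(token: str) -> None:
    ACTIVE_GRANTS.pop(token, None)


@contextmanager
def task_credential(scope: str):
    """Issue a credential for the duration of one task, then revoke it."""
    token = grant(scope)
    try:
        yield token
    finally:
        revoke(token)  # nothing is left standing after the task completes


with task_credential("read:crm-contacts") as token:
    assert token in ACTIVE_GRANTS  # the agent uses the token only inside this block
print(ACTIVE_GRANTS)  # {} -> no residual access
```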

Building strong foundations in machine and workload identity management today ensures you’re prepared for the growing complexity of AI-driven systems tomorrow.

A Call to Action for Security Practitioners

For years, we’ve made meaningful progress in securing users, from deploying MFA to replacing passwords with strong authenticators. These changes worked because they addressed fundamental flaws in how identity and access were managed.

Now, it’s time to ask: Where else can we apply these lessons?

Look for parallels:

  • If replacing passwords reduced breaches for users, then replacing static secrets with dynamic credentials for machines and workloads can deliver similar results.
  • If MFA improved user authentication, then attestation can add the same level of assurance to machine and workload identity.
  • E2E encryption for personal communications vs. process-to-process security: End-to-end encryption has drastically improved the privacy of our personal communications, ensuring messages are secure from sender to recipient. Similarly, robust authentication and encryption between processes—ensuring that only trusted workloads communicate—can bring the same level of assurance to machine-to-machine communications, protecting sensitive data and operations.

By identifying these parallels, we can break down silos, extend the impact of past successes, and create a truly secure-by-default environment.

Final Thought

Security practitioners should always ask: Where have we already made meaningful progress, and where can we replicate that success?

If replacing passwords and adding MFA helped reduce user-related breaches, then replacing static secrets and adopting attestation for machines and workloads is a natural next step—one that is arguably quicker and easier to implement, given that machines and workloads don’t resist change.

Zero Trust was never meant to be a buzzword. It’s a call to rethink security from the ground up, applying proven principles to every layer of identity, human or machine. By embracing this approach, we can build systems that are not only resilient but truly secure by design.

What Makes a QR Code Verifiable?

QR codes are everywhere—tickets, ID cards, product packaging, menus, and even Wi-Fi setups. They’ve become a cornerstone of convenience, and most of us scan them without hesitation. But here’s the thing: most QR codes aren’t cryptographically signed. In practice, this means we’re trusting their contents without any way to confirm they’re authentic or haven’t been tampered with.

One reason QR codes are so useful is their data density. They can store much more information than simpler formats like barcodes, making them ideal for embedding cryptographic metadata, references, or signatures while remaining scannable. However, QR codes have size limits, which means the cryptographic overhead for signing needs to be carefully managed to maintain usability.

While unauthenticated QR codes are fine for low-stakes uses like menus, relying on them for sensitive applications introduces risk. Verifiable QR codes use cryptographic signatures to add trust and security, ensuring authenticity and integrity—even in a post-quantum future.

How Are Verifiable QR Codes Different?

The key difference lies in cryptographic signatures. Verifiable QR codes use them to achieve two things:

  • Authentication: They prove the QR code was generated by a specific, identifiable source.
  • Integrity: They ensure the data in the QR code hasn’t been altered after its creation.

This makes verifiable QR codes especially useful in scenarios where trust is critical. For instance, an ID card might contain a QR code with a cryptographic signature over its MRZ (Machine Readable Zone). If someone tampers with the MRZ, the signature becomes invalid, making forgery far more difficult.
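
As a rough illustration of that MRZ example, here is a minimal sketch using an Ed25519 signature from the Python `cryptography` package. The MRZ line and JSON payload layout are made up; a real scheme would also encode the issuer, key identifier, and algorithm, and would likely use a compact binary encoding rather than JSON.

```python
# Minimal sketch: signing an ID document's MRZ so a QR code on the card can be
# verified. Uses Ed25519 from the `cryptography` package; the MRZ line and the
# payload layout are illustrative only.
import base64
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()  # held by the issuing authority
mrz = "P<UTOEXAMPLE<<JANE<<<<<<<<<<<<<<<<<<<<<<<<<<"  # hypothetical MRZ line

# Contents that would be encoded into the QR code (e.g., with the `qrcode` package).
qr_contents = json.dumps({
    "mrz": mrz,
    "sig": base64.b64encode(issuer_key.sign(mrz.encode())).decode(),
})


def verify(contents: str, issuer_public_key) -> bool:
    """Verifier side: check the signature against the MRZ printed on the card."""
    data = json.loads(contents)
    try:
        issuer_public_key.verify(base64.b64decode(data["sig"]), data["mrz"].encode())
        return True
    except InvalidSignature:
        return False


print(verify(qr_contents, issuer_key.public_key()))  # True; fails if the MRZ is altered
```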

Why Think About Post-Quantum Security Now?

Many systems already use signed QR codes for ticketing, identity verification, or supply chain tracking. However, these systems often rely on classical cryptographic algorithms like RSA or ECDSA, which are vulnerable to quantum attacks. Once quantum computers become practical, they could break these signatures, leaving QR codes open to forgery.

That’s where post-quantum cryptography (PQC) comes in. PQC algorithms are designed to resist quantum attacks, ensuring the systems we rely on today remain secure in the future. For QR codes, where size constraints matter, algorithms like UOV and SQIsign are especially promising. While most standardized PQC algorithms (like CRYSTALS-Dilithium or Falcon) produce relatively large signatures, UOV and SQIsign aim to reduce signature sizes significantly. This makes them better suited for QR codes, which have limited space to accommodate cryptographic overhead.
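
To put rough numbers on that size constraint: a version 40 QR code at the lowest error-correction level holds about 2,953 bytes in byte mode, and published signature sizes vary by an order of magnitude across schemes. The figures in the sketch below are approximate and worth re-checking against the current specifications; the 300-byte payload is a hypothetical reference-plus-metadata blob.

```python
# Rough size check: will a signature plus payload fit in a QR code? A version 40
# code at error-correction level L holds about 2,953 bytes in byte mode. The
# signature sizes below are approximate figures from published parameter sets and
# should be re-verified; the 300-byte payload is hypothetical.
QR_V40_L_CAPACITY = 2953  # bytes
PAYLOAD_SIZE = 300        # hypothetical reference + identifier + metadata

SIGNATURE_SIZES = {
    "ECDSA P-256 (classical)": 72,
    "Ed25519 (classical)": 64,
    "ML-DSA-44 / Dilithium2": 2420,
    "Falcon-512": 666,
    "UOV (small-signature variant)": 100,  # approximate
    "SQIsign (NIST level 1)": 180,         # approximate
}

for name, sig_bytes in SIGNATURE_SIZES.items():
    total = PAYLOAD_SIZE + sig_bytes
    verdict = "fits" if total <= QR_V40_L_CAPACITY else "does NOT fit"
    print(f"{name}: {total} bytes -> {verdict} in a version-40 QR code")
```

Even when a large signature technically fits, it forces a much denser code or a lower error-correction level, which hurts scannability in practice.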

By adopting post-quantum signatures, verifiable QR codes can address today’s security needs while ensuring long-term resilience in a post-quantum world.

What’s Practical in Implementation?

For verifiable QR codes to work at scale, standard formats and easy-to-use verifiers are essential. Ideally, your smartphone’s default camera should handle verification without requiring extra apps, potentially deep-linking into installed applications. This kind of seamless integration is crucial for widespread adoption.

Verifiable QR codes don’t need to include all the data they validate. Instead, they can store a reference, an identifier, and a cryptographic signature. This approach stays within QR code size limits, accommodating cryptographic overhead while keeping the codes lightweight and usable.
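
One way to picture that split: the QR code carries only a small envelope, and the verifier fetches or already holds the full record. The field names below are hypothetical, and the hash-based reference is just one option (a URL or database key works too); the signature step is omitted here and would follow the same pattern as the earlier Ed25519 example.

```python
# Minimal sketch: a QR payload that carries a reference to the full record rather
# than the record itself. Field names are hypothetical; the signature over this
# payload is omitted for brevity.
import hashlib
import json

full_record = b"full ticket record held by the issuer or fetched by the verifier"

qr_payload = {
    "iss": "ticketing.example",                       # who issued this code
    "ref": "order/8F3K2",                             # where to find the full record
    "hash": hashlib.sha256(full_record).hexdigest(),  # binds the code to that record
}
print(json.dumps(qr_payload))  # small enough to fit comfortably in a QR code
```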

Think of verifiable QR codes as digital certificates. They tie the QR code’s contents back to an issuer within a specific ecosystem, whether it’s a ticketing platform, a supply chain, or an identity system. To build transparency and trust, these signatures could even be logged in a transparency log (tlog), much like Certificate Transparency for web certificates. This would make the issuance of QR codes auditable, ensuring not only the validity of the signature but also when and by whom it was issued.

What About Purely Digital Use Cases?

Even without a physical object like a driver’s license, verifiable QR codes offer significant value. For instance, an online ticket or access pass can prove its issuer and verify its contents with contactless reading. Key benefits include:

  • Confirming the QR code came from a legitimate issuer (e.g., a trusted ticketing platform).
  • Ensuring the content hasn’t been altered, reducing phishing or tampering risks.

This assurance is especially critical in digital-only contexts where physical cross-checking isn’t an option, or additional information is needed to verify the object.

Where Verifiable QR Codes Shine

  • URL-Based QR Codes: Phishing is a growing problem, and QR codes are often used as bait. A verifiable QR code could cryptographically confirm a URL matches its intended domain, letting users know it’s safe before they click—a game-changer for consumers and enterprises.
  • Identity and Credentials: Driver’s licenses or passports could include QR codes cryptographically tied to their data. Any tampering, digital or physical, would break the signature, making counterfeits easier to detect.
  • Event Tickets: Ticket fraud costs billions annually. Verifiable QR codes could tie tickets to their issuing authority, allowing limited offline validation while confirming authenticity.
  • Supply Chain Security: Counterfeiting plagues industries like pharmaceuticals and luxury goods. Signed QR codes on packaging could instantly verify product authenticity without needing centralized databases.
  • Digital Proof of Vaccination: During the COVID-19 pandemic, QR codes became a common way to share vaccination records. A verifiable QR code would tie the data to an official source, simplifying verification while reducing counterfeit risks at borders, workplaces, or events.

Enhancing Trust in Everyday Interactions

Verifiable QR codes bridge the gap between convenience and trust. By incorporating cryptographic signatures—especially post-quantum ones—they add a necessary layer of security in an increasingly digital world.

While they won’t solve every problem, verifiable QR codes offer a practical way to improve the reliability of systems we already depend on. From verifying tickets and vaccination records to securing supply chains, they provide a scalable and effective solution for building trust into everyday interactions. As verification tools integrate further into devices and platforms, verifiable QR codes could become a cornerstone of authenticity in both physical and digital spaces.

How AI Can Transform Our Broken Healthcare System

Healthcare becomes deeply personal when the system’s fragmentation leads to life-altering outcomes. During COVID-19, my father’s doctor made what seemed like a prudent choice: postpone treatment for fluid retention to minimize virus exposure. What began as a cautious approach—understandable in a pandemic—ended up having dire consequences. By the time anyone realized how rapidly his condition was worsening, his kidneys had suffered significant damage, ultimately leading to kidney failure.

Later, despite years of regular check-ups and lab work (which hinted at possible malignancies), he was diagnosed with stage four lung cancer. Alarming as that was on its own, what stung even more was how these warning signs never coalesced into a clear intervention plan. His history as a smoker and several concerning lab results should have raised flags. Yet no one connected the dots. It was as if his care lived in separate compartments: one file at the dialysis center, another at oncology, and a third at his primary care clinic.

The Fragmentation Crisis

That disjointed experience shone a harsh light on how easily critical information can remain siloed. One specialist would note an abnormality and advise a follow-up, only for that recommendation to slip through the cracks by the time my father went to his next appointment. Each time he walked into a different office, he essentially had to start from scratch—retelling his story, hoping the right details were captured, and trusting that this piece could eventually reach the right people.

The challenges went beyond missing data. My father, who had set dialysis sessions on the same days each week, routinely found his other appointments—like oncology visits or additional lab work—piled on top of those sessions. He spent hours juggling schedules just to avoid double-booking, which was the last thing he needed while battling serious health concerns.

COVID-19 made all of this worse. The emphasis on social distancing—again, quite reasonable in itself—took away the face-to-face time that might have revealed early red flags. Without continuous, well-integrated data flow, even well-meaning advice to “stay home” inadvertently blocked us from seeing how quickly my father’s health was unraveling.

A Potential Game Changer: Subtle AI Support

Throughout this ordeal, I couldn’t help but imagine what a more seamless, data-driven healthcare system might look like. I’m not talking about robots taking over doctor visits, but rather subtle, behind-the-scenes assistance—sometimes described as “agentic workloads.” Think of these as AI systems quietly scanning medical records, cross-referencing lab results, and gently notifying doctors or nurses about unusual patterns.

AI is already proving its value in diagnostic imaging. Studies have shown that computer-vision algorithms can analyze X-rays, CT scans, and MRIs with remarkable accuracy—often matching or even surpassing human radiologists. For example, AI has been shown to detect lung nodules with greater precision, helping identify potential issues that might have been missed otherwise. This type of integration could enhance our ability to catch problems like kidney damage or lung cancer earlier, triggering quicker interventions.

Additionally, when my father underwent chemotherapy, he had to wait weeks after treatment and imaging to learn whether it was effective—an excruciating delay that AI could drastically shorten by providing faster, more integrated feedback to both patients and care teams.

Ideally, this technology would work much like a vigilant assistant: it wouldn’t diagnose my father all on its own, but it could have flagged consistent changes in his kidney function and correlated them with other troubling indicators. Perhaps it would have unified those scattered bits of data—a chest X-ray here, a suspicious blood test there—so that each new piece of information triggered closer scrutiny.

Yet for all the promise AI holds, it won’t matter if patients and providers don’t trust it. If alerts and reminders are viewed as background noise—just another alarm among many in a busy clinic—then critical issues may still go unnoticed. That’s why any such system must be transparent about how it arrives at its recommendations, and it must operate continuously in tandem with real human oversight.

The Missing Thread: Continuous Care

One of the biggest challenges my father faced—beyond the clinical realities of organ failure and cancer—was navigating a disjointed care environment. Even when he saw the same doctors, he often encountered new nurses or support staff who weren’t familiar with his case. He had to become his own advocate, repeating medical histories and test results, worried that a single oversight could spell disaster.

If every practitioner had easy access to a continuous stream of up-to-date information, that weight wouldn’t have been solely on my father’s shoulders. An AI-backed platform might have served as the “single source of truth” across different hospitals, labs, and specialists. Instead of fragmented snapshots—a lab test here, a consultation there—his providers would see a holistic, evolving picture of his health. And instead of being passive recipients of siloed updates, they’d participate in a more proactive, team-based approach.

By incorporating AI, healthcare could move from isolated snapshots to a more dynamic and connected view. For example, AI systems could track trends in lab results and imaging over time, detecting subtle changes that may otherwise be overlooked. By learning from every new case, these systems continuously improve, identifying correlations across medical histories, imaging results, and lifestyle factors. This would allow for earlier interventions and more tailored care, such as flagging kidney function changes that coincide with other troubling indicators.

Why Trust Matters More Than Ever

Still, technology can only go so far without human trust and collaboration. The best data-sharing framework in the world won’t help if doctors and nurses are suspicious of AI’s findings or if patients don’t feel comfortable granting access to their health records. Some of this wariness is understandable; health information is deeply personal, and no one wants to risk privacy breaches or rely on software that might produce false alarms.

Yet, if handled properly—with robust privacy protections, clear transparency about how data is used, and consistent evidence of accuracy—AI can become a trusted ally. That trust frees up healthcare professionals to do what they do best: engage with patients, provide empathy, and make nuanced clinical judgments. Meanwhile, the AI quietly handles the complex, data-heavy tasks in the background.

Restoring the Human Element

Paradoxically, I believe that good AI could actually bring more humanity back into healthcare. Right now, many doctors and nurses are buried under administrative and repetitive tasks that eat into the time they can spend with patients. Automated systems can relieve some of that burden, ensuring that routine record checks, appointment scheduling, and cross-specialty communication flow smoothly without continuous manual follow-up.

For patients like my father, that could mean quicker recognition of red flags, fewer repeated tests, and less of the emotional toll that comes from feeling like you have to quarterback your own care. It could also open the door for more meaningful moments between patients and providers—when doctors aren’t racing against a backlog of paperwork, they can be more present and attentive.

Walking Toward a Better Future

My father’s story underscores the steep price we pay for a fragmented, often reactive healthcare system. Even though he was conscientious about his check-ups, too many critical data points floated disconnected across different facilities. By the time all those puzzle pieces came together, it was too late to prevent significant damage.

Yet this isn’t just about looking backward. If there’s a silver lining, it’s the conviction that we can do better. By embracing subtle, well-integrated AI systems, we could transform the way we handle everything from day-to-day care to life-changing diagnoses. We could move beyond isolated treatments and instead give patients a coherent support network—one that sees them as whole individuals rather than a collection of disconnected symptoms.

A Call to Rethink Care

I don’t claim to have all the answers, and I know technology can’t solve every issue in healthcare. But seeing my father’s struggle firsthand has taught me that we urgently need a more unified, trust-driven approach—one that values continuous monitoring as much as it does specialized expertise.

  • Patients should have full visibility into their records, supported by AI that can highlight pressing concerns.
  • Providers deserve a system that connects them with real-time data and offers gentle nudges for follow-up, not an endless overload of unrelated alerts.
  • AI developers must design platforms that respect privacy, ensure transparency, and genuinely earn the confidence of medical teams.

If we can get these pieces right, tragedies like my father’s might become far less common. And then, at long last, we’d have a healthcare system that fulfills its most fundamental promise—to care for human life in a truly holistic, proactive way.

The Laws of Stupidity and the Gaps in Your Security Posture

Carlo M. Cipolla, in his essay The Basic Laws of Human Stupidity, laid out a set of principles that are both hilarious and uncomfortably accurate when applied to everyday life. If you’ve ever watched a perfectly preventable security breach unfold and thought, “How did no one see this coming?” Cipolla has an explanation: stupidity—the kind that causes harm without benefiting anyone.

In security, stupidity isn’t just a human problem. It’s systemic. Your security posture is the sum of every decision you make—large or small, deliberate or “temporary.” Vulnerabilities don’t just happen; they’re created at the intersections of components and processes where decisions are made in isolation. And as Cipolla’s laws remind us, these decisions often externalize harm without yielding any real benefit to the decision-makers.

Cipolla’s Third Law states: “A stupid person is one who causes losses to another person or group of persons while deriving no gain and even possibly incurring losses themselves.” Unfortunately, this describes many decisions in security architecture. Consider a product team that ships a feature with hard-coded credentials because “it’s just for testing,” or an infrastructure team that approves open SSH access from anywhere because “we’ll lock it down later.” These decisions aren’t malicious, but they create cascading vulnerabilities that attackers are happy to exploit.

As Cipolla reminds us, the most dangerous kind of stupidity comes from ignoring the bigger picture. A classic example is teams measuring “success” by the number of CVEs closed or bugs fixed while ignoring metrics that actually reflect resilience, like lateral movement resistance or detection speed. It’s like polishing the hood of your car while leaving the gas tank open.

For a fun analogy, let’s turn to Star Wars. When the droids took over a ship’s trash system to gain access to more critical systems, they exploited what seemed like an insignificant component. As Adam Shostack highlights in his book Threats: What Every Engineer Should Learn from Star Wars, the trash system is a classic example of how attackers exploit overlooked parts of a system to achieve much bigger objectives. Security isn’t about protecting what seems important—it’s about understanding that any overlooked vulnerability can become critical. Whether it’s an unpatched library in your supply chain or a misconfigured process, attackers are happy to exploit your blind spots. If your trash system can sink your flagship, you’ve got bigger problems.

How do you avoid these mistakes? It starts by measuring the right things. Vanity metrics like “bugs closed” or “CVE counts” are security theater. They make you feel good but don’t tell you whether your system is truly secure. Engineers love optimizing for metrics—it’s in their blood. But optimizing for the wrong ones creates a false sense of security.

Instead, focus on metrics that reflect real resilience:

  • Lateral movement resistance: How hard is it for an attacker to move from one compromised system to another?
  • Detection speed: How quickly can you identify a breach? (And no, “when the customer calls” doesn’t count.)
  • Response effectiveness: Once detected, how quickly can you contain and neutralize the threat?
  • Minimized attack surfaces: How lean are your deployment images? Are you running unnecessary packages or services?
  • Key management hygiene: Are credentials rotated frequently? Are static secrets eliminated in favor of short-lived credentials?

These metrics focus on outcomes, not activity. While no single metric is sufficient, together they provide a clearer picture of how well security is embedded into the fabric of your organization.
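
As one concrete example of measuring outcomes rather than activity, here is a minimal sketch that computes mean time-to-detect from incident records. The timestamps are made up; a real pipeline would pull them from your incident tracker or SIEM.

```python
# Minimal sketch: mean time-to-detect (MTTD) as an outcome-focused metric. The
# incident timestamps below are made up for illustration.
from datetime import datetime
from statistics import mean

incidents = [
    {"compromised": "2024-03-01T02:00", "detected": "2024-03-01T06:30"},
    {"compromised": "2024-05-12T14:00", "detected": "2024-05-13T09:15"},
]


def hours_to_detect(incident: dict) -> float:
    start = datetime.fromisoformat(incident["compromised"])
    found = datetime.fromisoformat(incident["detected"])
    return (found - start).total_seconds() / 3600


print(f"MTTD: {mean(hours_to_detect(i) for i in incidents):.1f} hours")
```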

Microsoft’s recent push to create division-wide Chief Security Officers is a good step toward addressing security silos. By embedding security leadership at the division level, they’re recognizing that vulnerabilities often arise between components, not just within them. But this alone isn’t enough. Security needs to be designed into the architecture itself, not just layered on as a management structure. It’s about ensuring every decision—from how APIs handle garbage inputs to how your CI/CD pipelines handle third-party code—is made with security in mind.

This is where proactive humility comes in: acknowledging that mistakes will happen, blind spots will exist, and systems must be designed to fail gracefully. Defense in depth isn’t just a buzzword—it’s an acknowledgment that your trash system will be attacked, and you’d better be ready for it.

Cipolla’s framework highlights a critical distinction:

  • Intelligent decisions benefit everyone—users, developers, and security teams—without externalizing harm. Think of secure defaults, automated safeguards, and least-privilege architectures.
  • Stupid decisions, on the other hand, create risk for everyone while providing no real gain. Hard-coded credentials, unnecessary privileges, or ignoring supply chain risks fall squarely into this category.

The challenge is to make intelligent decisions easier than stupid ones. This requires strong governance, effective tooling, and metrics that reward resilience over vanity. It’s not about avoiding mistakes altogether—that’s impossible—it’s about making it harder to make the big ones.

Cipolla’s laws might seem like a humorous take on human behavior, but they offer a sobering reminder of the gaps in security posture. Whether it’s overlooking the trash system in Star Wars or counting CVEs while ignoring systemic risks, stupidity in security is often the result of narrow thinking and poor measurement. The solution? Embed security into the fabric of your organization, focus on meaningful metrics, and foster a culture of proactive humility. By designing systems that make intelligent decisions easier than stupid ones, you can stop polishing the hood and start closing the gas tank.

Safeguarding Internet Trust: From Reactive to Continuous 

The internet rests on a foundation of core infrastructure components that make global communication possible. Among these load-bearing elements are DNS, DNSSEC, BGP, BGPsec, WebPKI, RPKI, transparency logs, IXPs, Autonomous Systems, and various registries. The ecosystem also depends on critical governance bodies like ICANN and IANA, and standards bodies like the CA/Browser Forum. These systems don’t just enable the internet – they are the internet, forming the critical backbone that allows us to establish secure connections, route traffic reliably, and maintain operational trust across the global network.

The PKI and transparency mechanisms that support these systems, particularly WebPKI, RPKI, and Certificate Transparency, are especially critical load-bearing elements essential to delivering the internet’s net value. When these foundational elements fail, they don’t just impact individual services – they can undermine the security and reliability of the entire internet infrastructure and erode the fundamental trust that billions of users and organizations place in the internet. This trust, once damaged, is difficult to rebuild and can have lasting consequences for how people interact with and rely upon digital services.

This foundational role makes the governance of root programs, which oversee these trust systems, absolutely critical. Yet recent incidents and historical patterns suggest we need to improve how we approach their governance. While no root program is perfect, and some have made significant strides in adopting product-focused practices and proactive risk management, there remains substantial room for improvement across the ecosystem. This framework isn’t meant to dismiss current efforts, but rather to synthesize best practices and push the conversation forward about how we can collectively strengthen these critical trust anchors.

To transform root programs from reactive administrative functions into proactive product-driven systems, we need a clear framework for change. This starts with defining our core mission, establishing a vision for the future state we want to achieve, and outlining the strategic pillars that will get us there.

Mission

To safeguard global trust in internet infrastructure by managing systemic risks, driving technical innovation, fostering transparent governance, and building durable systems that serve generations to come.

Vision

A resilient and adaptive trust ecosystem where:

  • Governance is proactive and risk-aware, balancing technical rigor with user-centric principles
  • Infrastructure and processes are continuously validated, transparent, and simplified
  • Collaboration fosters innovation to address emerging challenges and deliver long-term security

Strategy

1. Transparency and Accountability

  • Establish robust public verifiability for all CA operations, leveraging tools like transparency logs and continuous compliance monitoring
  • Communicate decisions on inclusion, removal, reentry, and policy changes openly, ensuring stakeholder trust
  • Build mechanisms for regular stakeholder feedback and confidence measurement, ensuring the ecosystem remains responsive to both technical and user needs

2. Integrated Risk Management

  • Apply blast radius management to minimize the impact of failures, for example by segmenting trust dependencies and ensuring risks remain contained
  • Use real-time monitoring and automated enforcement to detect and mitigate systemic risks
  • Implement standardized processes for risk assessment and mitigation

3. Proactive Governance

  • Shift from reactive to anticipatory governance by identifying potential risks and implementing early countermeasures
  • Leverage automated monitoring and enforcement to prevent and catch issues before they become incidents
  • Maintain clear lifecycle management processes for all ecosystem participants

4. Modernization and Simplification

  • Establish WebPKI governance primacy for included roots and minimize cross-ecosystem trust relationships
  • Limit what each CA is trusted for, reducing complexity and narrowing the scope of potential failures
  • Employ these measures as part of broader blast radius management strategies

5. Collaborative Ecosystem Building

  • Support and fund foundational open-source projects and critical infrastructure that the ecosystem depends on
  • Implement shared accountability mechanisms, ensuring all ecosystem participants bear responsibility for maintaining trust and integrity
  • Encourage CAs to align their policies not only with their own standards but also with aggregated internet governance policies and best practices, especially for global use cases like TLS
  • Partner with browsers, CAs, and researchers to co-develop solutions for current and emerging threats
  • Foster an environment of mutual respect and constructive partnership

6. Commitment to Continuous Improvement

  • Drive decisions through data collection, measurement, and empirical analysis
  • Evolve policies based on quantitative feedback, incident analyses, and advancements in technology
  • Regularly reassess and refine program criteria to remain relevant and effective
  • Maintain clear processes for managing organizational transitions

The Stakes Are Higher Than Ever

The history of CA failures tells a sobering story – major CA distrust events occur on average every 1.23 years, each one threatening the foundation of trust that enables secure internet communication. These aren’t isolated incidents but rather represent recurring patterns of systemic failures in CA operations and governance.

Consider the range of critical failures we’ve seen: from DigiNotar’s complete infrastructure compromise in 2011, which led to rogue Google certificates, to TURKTRUST’s “accidental” intermediate certificates in 2013, to government-affiliated CAs repeatedly undermining trust through deliberate actions or “accidents.” Take, for example, the ICP-Brasil case, where a root that had announced the end of SSL issuance continued to issue certificates months later – demonstrating how root programs’ decisions (or lack thereof) to maintain trust in roots that should no longer be part of the WebPKI can create unnecessary risks.

These incidents follow disturbingly consistent patterns:

  • Security breaches and infrastructure compromises that enable unauthorized certificate issuance
  • Systematic misissuance of certificates that undermine the entire trust model
  • Poor incident response handling that compounds initial failures
  • Non-compliance with industry standards despite clear requirements
  • Operational vulnerabilities that go unaddressed until it’s too late
  • Deceptive actions that breach the fundamental trust of the ecosystem

The Economic Reality

The current ecosystem suffers from fundamentally misaligned incentives. Root programs are typically run by browser vendors as a necessary cost of doing business, often competing with commercial priorities for resources and attention. Meanwhile, CAs face strong pressure to maintain their trusted status but weak incentives to uphold rigorous security practices. When security failures occur, users bear the cost while CAs often face minimal consequences. This economic reality is compounded by an ineffective auditing system where CAs select and pay their own auditors – reminiscent of the dynamics that enabled financial scandals like Wirecard and Enron.

The Long Tail Problem

A particularly concerning aspect of the current system is the “long tail” of rarely-used CAs. Many root certificates in browser trust stores belong to CAs that issue only dozens to hundreds of certificates annually, yet they maintain the same broad trust as major CAs issuing millions. These low-volume CAs pose risks that far outweigh their utility, creating unnecessary attack surfaces in our trust infrastructure. Regular assessment of each CA’s ongoing value to the ecosystem, balanced against their inherent risks, should inform continued inclusion in trust stores. This approach ensures the ecosystem maintains an appropriate balance between accessibility and security.

The Product-Centric Approach

To address these challenges, root programs must evolve from administrative oversight roles to become proactive, risk-managed entities. Here’s how a product-centric framework can transform root program governance:

1. Transparency and Accountability

  • Implement robust public verifiability for all CA operations
  • Leverage transparency logs and continuous compliance monitoring
  • Ensure open communication about inclusion, removal, and policy changes
  • Require automated reporting of security incidents and operational compliance

2. Blast Radius Management

  • Segment trust dependencies to contain potential failures
  • Implement dedicated hierarchies for specific use cases
  • Limit CA trust scope to reduce complexity and narrow failure impacts
  • Deploy real-time monitoring and automated enforcement

3. Risk-Based Governance

  • Move from reactive to anticipatory governance
  • Apply different levels of scrutiny based on CA context and risk profile
  • Regularly assess whether each CA’s utility justifies its risks
  • Implement meaningful technical restrictions on certificate issuance

4. Modernization and Simplification

  • Establish and maintain WebPKI governance primacy
  • Implement dedicated hierarchies for specific use cases
  • Limit CA trust scope to reduce complexity and narrow failure impacts
  • Deploy real-time monitoring and automated enforcement

5. Shared Accountability

  • Support and fund critical infrastructure and monitoring
  • Foster collaboration between browsers, CAs, and researchers
  • Establish clear responsibilities across all ecosystem participants
  • Create incentives that align with security goals
  • Balance rigorous oversight with constructive partnership
  • Develop clear processes for managing CA transitions and lifecycle events

Measuring Success

Like any product, root programs need clear metrics for success:

1. Risk Reduction

  • Track mis-issuance rates and time-to-remediate
  • Measure decrease in systemic vulnerabilities
  • Monitor adoption of proactive security measures
  • Track stakeholder confidence through regular surveys

2. Ecosystem Resilience

  • Assess recovery capabilities from disruptions
  • Track implementation and effectiveness of blast radius containment measures
  • Monitor CA inclusion, removal, and reentry success rates

3. Operational Excellence

  • Monitor CA inclusion and removal process efficiency
  • Track adoption of modern security and governance practices
  • Measure response times to security incidents and evaluate the thoroughness of incident handling
  • Evaluate lifecycle management process consistency and post-incident improvements

Lifecycle Excellence

The sustainability of root programs depends on having clear, repeatable processes for managing the complete lifecycle of CAs – from inclusion to potential removal. This includes:

  • Standardized onboarding and transition procedures
  • Regular assessment checkpoints
  • Clear criteria for maintaining trusted status
  • Efficient processes for handling CA turnover
  • Proactive planning for ecosystem evolution

The Trust Paradox

One of the most challenging aspects of root program governance is the inherent contradiction between trust and security. As we’ve seen with government-affiliated CAs and others, institutional incentives often directly conflict with security goals. A product-centric approach helps address this by:

  • Implementing consistent risk evaluation frameworks that account for different institutional incentives and constraints
  • Requiring proactive enforcement rather than post-incident reactions
  • Creating clear, measurable criteria for ongoing trust
  • Establishing automated compliance checks and monitoring
  • Establishing feedback loops between governance bodies, CAs, and end-users to maintain alignment

The Path Forward

Root programs must continue evolving beyond reactive governance and inconsistent enforcement. By adopting a product mindset that emphasizes continuous improvement, measurable outcomes, and proactive risk management, we can build an even more resilient trust ecosystem.

Immediate actions should include:

  • Implementing automated compliance monitoring (a minimal sketch follows this list)
  • Establishing clear criteria for CA risk assessment
  • Developing robust blast radius management strategies
  • Creating transparent processes for trust decisions
  • Supporting proper funding for monitoring infrastructure
  • Implementing standardized CA lifecycle management processes
  • Building collaborative frameworks that balance accountability with mutual trust
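
Here is the sketch referenced above: a single automated check that flags leaf certificates whose validity period exceeds the 398-day limit in the CA/Browser Forum Baseline Requirements. The certificate records are made-up stand-ins for data pulled from CT logs or CA disclosures, and a real monitor would check many more requirements.

```python
# Minimal sketch of one automated compliance check: flag leaf certificates whose
# validity period exceeds the 398-day limit in the TLS Baseline Requirements.
# The records below are made-up stand-ins for data pulled from CT logs.
from datetime import datetime

MAX_VALIDITY_DAYS = 398

certificates = [
    {"serial": "01", "not_before": "2024-01-01", "not_after": "2025-01-30"},
    {"serial": "02", "not_before": "2024-01-01", "not_after": "2026-01-01"},
]

for cert in certificates:
    issued = datetime.fromisoformat(cert["not_before"])
    expires = datetime.fromisoformat(cert["not_after"])
    validity_days = (expires - issued).days
    if validity_days > MAX_VALIDITY_DAYS:
        print(f"Non-compliant: serial {cert['serial']} is valid for {validity_days} days")
```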

Conclusion

The security of the internet depends on root programs functioning effectively. By treating them as products rather than administrative functions, we can build a more secure, transparent, and reliable trust ecosystem. This transformation won’t be easy, but the cost of maintaining the status quo – as evidenced by the long history of failures – is simply too high.

The question isn’t whether we need root programs – we absolutely do. The question is how we can continue evolving them to meet the security challenges of today’s internet. A product-centric approach, focused on proactive risk management and measurable outcomes, offers our best path forward.

Beyond Memorization: Preparing Kids to Thrive in a World of Endless Information

What does it take to prepare our children for a tomorrow where AI shapes how they get information, robots change traditional jobs, and careers transform faster than ever—a time when what they can memorize matters far less than how quickly they can think, adapt, and create? As a parent with children aged 29, 18, and 9, I can’t help wondering how to best prepare each of them. My oldest may have already found his way, but how do I ensure my younger two can succeed in a world so different from the one their brother entered just a few years before?

We’ve faced big changes like this before—moments that completely changed how we work and what opportunities exist. A century ago, Ford’s assembly line wasn’t just about making cars faster; it changed what skills workers needed and how companies treated employees. Decades later, Japan’s quality movement showed us that constant improvement and efficient thinking could transform entire industries. Each era required us to learn not just new facts, but new ways of thinking.

Today’s change, driven by artificial intelligence and robotics, is similar. AI will handle basic knowledge tasks at scale, and robots will take care of repetitive physical work. This means humans need to focus on higher-level skills: making sense of complex situations, evaluating information critically, combining ideas creatively, and breaking down big problems into solvable pieces. Instead of memorizing facts like a living library, our children need to know how to judge if information is trustworthy and connect ideas that might not seem related at first glance. They need to see knowledge not as something you collect and keep, but as something that grows and changes through questioning, discussion, and discovery.

Where can we find a guide for developing these new thinking skills? Interestingly, one already exists in our schools: the teaching strategies developed for gifted and twice-exceptional (2e) learners—students who are intellectually gifted but may also face learning challenges.

Gifted and 2e children think and learn in ways that are often intense, complex, and different from traditional methods. Teachers who work with these learners have refined approaches that develop multimodal thinking (using different ways to learn and understand), metacognition (thinking about how we think), and critical evaluation—exactly the skills all young people need in a future filled with smart machines and endless information.

Shift from Memorization to Meaning: Instead of drilling facts, encourage your child to question sources. If you’re discussing a news article at dinner, ask: “How do we know this claim is accurate? What makes the source trustworthy?” Now they’re not just absorbing information; they’re actively working to understand it.

Foster Multimodal Exploration: Make learning richer by using different approaches. Let them build a simple robot kit, draw a diagram of how it works, and then explain it in their own words. By connecting hands-on activity (tactile learning), visual learning, and verbal explanation, they develop deeper understanding.

Encourage Metacognition: After solving a puzzle or coding a simple project, have them reflect: “What worked best? What would you try differently next time?” By understanding their own thought processes, they become better at adapting their approach to new challenges.

Highlight Interdisciplinary Connections and Global Outlook: Show them that knowledge doesn’t exist in separate boxes. A math concept might connect beautifully with a musical pattern, or a historical event might be understood better through science. Help them see that good ideas and innovation come from everywhere in the world, not just one place or tradition.

Emphasize Emotional and Social Intelligence: In a world where machines handle routine tasks, human qualities like empathy, communication, and teamwork become even more important. Encourage them to be comfortable with uncertainty, to see setbacks as chances to learn, and to develop resilience (the ability to bounce back from difficulties). These people skills will matter just as much as any technical knowledge.

Deep Learning and Entrepreneurial Thinking: Like classical scholars who focused deeply on fewer subjects rather than skimming many, children benefit from spending more time thinking deeply about carefully chosen topics rather than rushing through lots of surface-level information. Consider teaching basic business and problem-solving skills early—like how to budget for a project or spot problems in their community that need solving—so they learn to create opportunities rather than just wait for them.

Finally, we’re raising children in an age where AI is becoming a constant helper and resource. While information is everywhere, the ability to understand it in context and make good judgments is rare and valuable. By using teaching techniques once reserved for gifted or 2e learners—multiple ways of learning, thinking about thinking, careful evaluation, global awareness, and creative combination of ideas—we prepare all children to be confident guides of their own learning. Instead of being overwhelmed by technology, they’ll learn to work with it, shape it, and use it to build meaningful futures.

This won’t happen overnight. But just as we adapted to big changes in the past, we can evolve again. We can model skepticism, curiosity, and flexible thinking at home. In doing so, we make sure that no matter how the world changes—no matter what new tools or systems appear—our children can stand on their own, resilient, resourceful, and ready to thrive in whatever tomorrow brings.

UPDATE [Dec 8, 2024]: In the spirit of AI, I played with Claude 3.5 Sonnet yesterday and turned this post into a React presentation.

Technology bias—we all have it—but it often gets in the way.

An old saying goes, “When you’re a plumber, you fix everything with a wrench.” It highlights a truth: we naturally gravitate toward the tools, people, and methods we know and trust most. This tendency stems from cognitive biases like anchoring—our reliance on initial information—and confirmation bias, which pushes us to favor ideas that align with our existing beliefs. While these biases help us make quick decisions, they can also blind us to better alternatives.

Another saying, “To know thyself is to be true,” resonates here. Even with my deep experience in PKI, I consciously revisit first principles whenever I consider applying it to a new problem. Is this really the best solution? PKI, like many technologies, carries hidden baggage that isn’t always visible, and over-reliance on familiarity can obscure better approaches.

The danger of sticking to the familiar becomes evident in the adoption of Infrastructure as Code (IaC). When tools like Terraform and CloudFormation emerged, many teams resisted, clinging to manual infrastructure management because it felt familiar and unnecessary. Yet manual approaches introduced inconsistency, inefficiency, and even security risks. Teams that embraced IaC unlocked scalable, repeatable workflows that transformed operations. IaC not only streamlined processes but also embedded elements of compliance and best practices directly into code. What outdated practices might we be holding onto today that prevent us from unlocking similar benefits?

I recently encountered a similar situation during a meeting with the leader of a large IT organization. They were eager to adopt a technology developed by someone they trusted personally. However, when I asked fundamental questions like, “How much time do you have to deliver this project?” and “What other systems need to interoperate for this to be considered a success?” it became clear that the technology wasn’t the right fit—at least not yet. By breaking the problem down to its fundamentals, we uncovered insights that their initial bias had obscured.

Practicing first-principles thinking can help sidestep these pitfalls. Start by identifying the core problem: what is the actual goal? What constraints are truly fixed, and which are merely assumptions? From there, challenge each assumption. Is there an alternative approach that better addresses the need? This process not only reduces the influence of bias but also fosters creativity and more effective solutions.

Biases aren’t inherently bad—they help us move quickly—but as the example of IaC demonstrates, unchecked bias can limit us. By anchoring decisions in first principles, we can do more than solve problems; we open the door to better solutions. Asking, “Is this truly the best approach?” ensures we don’t just repeat old patterns but discover new opportunities to improve and thrive.

Government CAs and the WebPKI: Trust is Often the Opposite of Security

Following my recent post about another CA failing the “Turing test” with a likely MITM certificate issuance, let’s examine a troubling pattern: the role of government-run and government-affiliated CAs in the WebPKI ecosystem. This incident brings renewed attention to Microsoft’s root program, and it makes one thing clear: a fundamental contradiction persists. We’re trusting entities whose institutional incentives often directly conflict with the security goals of the WebPKI.

The Value Proposition

Let me be clear—CAs and root programs serve critical functions in the WebPKI. As I discussed in my article about Trust On First Use, attempting to build trust without them leads to even worse security outcomes. The issue isn’t whether we need CAs—we absolutely do. The question is whether our current trust model, which treats all CAs as equally trustworthy regardless of their incentives and constraints, actually serves our security goals.

The Core Contradiction

History has repeatedly shown that the temptation to abuse these capabilities is simply too great. Whether it’s decision-makers acting in their perceived national interest or CAs that fail to understand—or choose to ignore—the consequences of their actions, we keep seeing the same patterns play out.


Consider that a CA under government oversight faces fundamentally different pressures than one operating purely as a business. While both might fail, the failure modes and their implications for users differ dramatically. Yet our root programs largely pretend these differences don’t exist.

The DarkMatter Paradox

The removal of DarkMatter as a CA due to its affiliation with the UAE government, despite its clean record in this context, starkly contrasts with the continued trust granted to other government-affiliated CAs with documented failures. This inconsistency highlights a deeper flaw in root programs: Rules are often applied reactively, addressing incidents after they occur, rather than through proactive, continuous, and consistent enforcement.

A History of Predictable Failures

If you read yesterday’s post, you may recall my 2011 post on the number of government-run or affiliated CAs. The intervening years have given us a clear pattern of failures. Whether through compromise, willful action, or “accidents” (take that as you will), here are just the incidents I can recall off the top of my head—I’m sure there are more:

The Economics of (In)Security

The fundamental problem isn’t just technical—it’s economic. While some root programs genuinely prioritize security, inconsistencies across the ecosystem remain a critical challenge. The broader issue is not simply about convenience but about conflicting incentives—balancing compatibility, regulatory pressures, and market demands often at the expense of doing what is best for end users.


CAs face strong incentives to maintain their trusted status but relatively weak incentives to uphold the rigorous security practices users expect. The cost of their security failure is largely borne by users, while the benefits of looser practices accrue directly to the CA. Audits, much like those in financial scandals such as Wirecard or Enron, often serve as window dressing. With CAs selecting and paying their auditors, incentives rarely align with rigorous enforcement.


The long tail of rarely-discussed CAs is particularly concerning. Many root certificates in browser trust stores belong to CAs that issue only dozens to hundreds of certificates annually, not the thousands or millions that major CAs produce. Some haven’t issued a certificate in ages but retain the capability to do so—and with it, the ability to compromise security for months or longer. It wouldn’t be unreasonable to say these low-volume CAs pose risks far outweighing their utility.

Certificate Transparency: Necessary but Not Sufficient

While Certificate Transparency has been invaluable in helping identify incidents (including the latest ICP-Brasil case), it’s not a complete solution. Its limitations include:

  • Reactive nature: Violations are identified only after they occur.
  • Monitoring challenges: Effective oversight is resource-intensive and depends on a small community of volunteers.
  • Incomplete coverage: Not all certificates are logged, leaving gaps in visibility.
  • Poorly funded: We have too few logs and monitors to have confidence about the long-term survivability of the ecosystem.

The Limits of Technical Controls

Some browsers have implemented technical guardrails in their certificate validation logic, such as basic linting rules that reject certificates failing simple checks, but nothing more granular. There have been discussions about imposing additional restrictions on CAs based on their relationship to government oversight or regulatory jurisdictions. However, these proposals face significant pushback, partly due to the political consequences for browser vendors and partly due to concerns about basing trust decisions on “future crime” scenarios. As a result, the WebPKI remains stuck with a one-size-fits-all approach to CA trust.

The Monitoring Gap

The challenges extend beyond malicious behavior to include operational oversight. For instance, in August 2024, ICP-Brasil formally announced they would cease issuing publicly trusted SSL/TLS certificates. Yet by November, they issued a rogue certificate for google.com. This outcome was predictable—public CT logs in 2020 revealed their consistent inability to handle basic operational and issuance requirements, including issuing certificates with invalid DNS names and malformed URLs. Despite these red flags, they remained trusted.


How many other CAs operate outside their stated parameters without detection? Patterns of technical incompetence frequently precede security incidents, but warnings are often ignored.

Required Reforms

To address these systemic issues, root programs must adopt the following measures:

  1. Consistent Standards: Apply appropriate scrutiny to CAs based on their operational and institutional context.
  2. Swift Response Times: Minimize delays between discovery and action.
  3. Proactive Enforcement: Treat red flags as early warnings, not just post-incident justifications.
  4. Technical Controls: Implement meaningful restrictions to limit the scope of certificate issuance (see the sketch after this list).
  5. Automated Compliance: Require CAs to report security incidents and their operational status, and continuously monitor their ongoing compliance through automated checks.
  6. Value Assessment: Regularly evaluate whether each CA’s utility justifies its risks and remove those that do not.
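
On the technical-controls point, one long-standing mechanism for limiting issuance scope is the X.509 name constraints extension. Below is a minimal sketch, using the Python cryptography library, of an intermediate CA certificate constrained to a single DNS namespace; the keys, names, and lifetime are placeholders rather than a production CA profile.

    # Sketch: an intermediate CA certificate technically constrained to one DNS
    # namespace via a critical NameConstraints extension. Keys, names, and
    # lifetimes are placeholders, not a production profile.
    from datetime import datetime, timedelta, timezone
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    root_key = ec.generate_private_key(ec.SECP256R1())  # stands in for the root's key
    sub_key = ec.generate_private_key(ec.SECP256R1())   # the constrained sub-CA's key
    now = datetime.now(timezone.utc)

    sub_ca_cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Constrained Sub-CA")]))
        .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Root CA")]))
        .public_key(sub_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + timedelta(days=3 * 365))
        .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
        # The key line: this sub-CA may only issue for example.com and its subdomains.
        .add_extension(
            x509.NameConstraints(
                permitted_subtrees=[x509.DNSName("example.com")],
                excluded_subtrees=None,
            ),
            critical=True,
        )
        .sign(root_key, hashes.SHA256())
    )

    print(sub_ca_cert.extensions.get_extension_for_class(x509.NameConstraints).value)

Clients that enforce a critical name constraints extension will reject anything such a sub-CA issues outside its permitted namespace, which is exactly the kind of blast-radius limit item 4 calls for.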

Protecting Yourself

Until the ecosystem adopts consistent and enforceable security measures:

  • Windows users should monitor Microsoft’s root program decisions.
  • Enterprises should use the Microsoft distrust store and group policies.
  • Everyone should stay informed about CA incidents and their handling.

When Will We Learn?

The “Turing Test” reference in my previous post was somewhat tongue-in-cheek, but it points to serious questions: How many more failures will it take before we fundamentally reform the WebPKI? Even if we know what’s needed, can we realistically create a system that treats government-affiliated CAs differently – or even reliably identify such affiliations – given the complex web of international relations, corporate structures and potential diplomatic fallout?

With regulatory frameworks like eIDAS 2.0 potentially constraining security measures browsers can take, vigilance from the security community is more critical than ever. Stay vigilant, and keep watching those CT logs. Someone has to.

Another CA Fails the Turing Test?

In a concerning development, yet another Certificate Authority (CA) has issued what is likely a man-in-the-middle (MITM) certificate—something strictly prohibited by all root programs. This particular case is unique because the CA is trusted only by Microsoft, making the situation both frustratingly familiar and uniquely problematic. Details are emerging in this Bugzilla thread.

A Familiar Pattern

Back in 2011, I wrote about Microsoft’s trust in government-run CAs and the inherent risks (read here). More than a decade later, it’s clear little has changed. Browser distrust events happen with disappointing regularity—roughly every 1.25 years, according to my analysis (source). While MITM certificate issuance is far rarer, it’s far more serious, and a disturbing trend is evident: Many of the CAs responsible are government-run or affiliated.

Why This Matters to You

For Windows users, this is particularly relevant. Windows browsers like Edge (and others) rely on the Microsoft Root Program, which has, unfortunately, historically been overly permissive and slow to respond to incidents. You can learn more about the program and its requirements here. In the recent past, I can’t recall a CA that willfully issued an MITM certificate surviving, but the timeline for Microsoft’s response is unclear. That said, when Microsoft does act, their AutoRoot Update feature—which I was the product manager for in the early 2000s—allows them to respond swiftly.

In the meantime, you can protect yourself by identifying and distrusting the offending certificate. Enterprises, in particular, can take a proactive stance by using the Microsoft distrust store. Through group policy, IT administrators can preemptively distrust the problematic CA across their organization, mitigating the risk before Microsoft formally acts.
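
As a rough illustration of the per-machine version of that advice (group policy is the right tool at enterprise scale), the sketch below shells out to Windows’ built-in certutil to place an offending CA certificate into the local machine’s Disallowed (“Untrusted Certificates”) store. It assumes the certificate has already been exported to a file, the script runs with administrative rights, and certutil is on the PATH.

    # Sketch: explicitly distrust a CA certificate on a single Windows machine
    # by adding it to the "Disallowed" (Untrusted Certificates) store. Assumes
    # an elevated shell, certutil on PATH, and the offending certificate saved
    # as bad-ca.cer. At enterprise scale, push the same decision via group
    # policy rather than per-host scripts.
    import subprocess
    import sys

    CERT_FILE = "bad-ca.cer"  # placeholder path to the certificate to distrust

    def distrust(cert_path: str) -> None:
        result = subprocess.run(
            ["certutil", "-addstore", "Disallowed", cert_path],
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            print(result.stderr or result.stdout, file=sys.stderr)
            raise SystemExit(f"certutil failed with exit code {result.returncode}")
        print(f"Added {cert_path} to the Disallowed store.")

    if __name__ == "__main__":
        distrust(CERT_FILE)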

The Lack of Technical Controls

It’s worth noting there are no technical controls that inherently prevent CAs from issuing MITM certificates (though some browsers do have technical controls for some classes of misissuance). Instead, the WebPKI ecosystem relies on Certificate Transparency (CT) logs and a dedicated community of people closely monitoring CA issuance for violations of requirements. In a way, this incident serves as a smoke test for the system—but when it comes to MITM certificates, it’s an awfully expensive test, carrying significant risk for users of the web, reputational risk for the root programs, and broader questions about the trustworthiness of the WebPKI in general.

Predictable Chaos

If you’re following this story, keep an eye on the Bugzilla thread. Based on past experience, I’d wager the CA in question will bungle its incident response. MITM certificate issuance often reflects systemic issues, and such organizations typically don’t have the maturity to handle these crises well.

If this topic interests you, here’s some further reading:

For a deeper dive, here’s a class I run on the topic of WebPKI incident response and how incidents are (mis)handled.

Lessons Unlearned

While it’s comforting to know mechanisms like Certificate Transparency exist to catch these incidents, the recurring nature of these failures raises the question: Are we doing enough to hold CAs accountable?

Trust in the web depends on the reliability of its foundational systems. It’s time we demand higher standards from the organizations entrusted with securing our online world. Until then, stay informed, protect yourself, and let’s hope the next CA at least manages to pass the “Turing Test.”

Proactive Security: Engineering Resilience from the Ground Up

Picture discovering your house has been robbed. Like many homeowners in this situation, your first instinct might be to invest in the latest security system with cameras and motion sensors. But what if the thief simply walked through an unlocked door, exploiting the most basic failure of security? No amount of surveillance would have prevented such a fundamental oversight.

This scenario mirrors how many organizations approach security today. Companies invest heavily in sophisticated detection and response tools and a patchwork of workarounds for basic design flaws, while neglecting fundamental security practices, creating a false sense of security built on a shaky foundation. According to Gartner, global cybersecurity spending reached $188.3 billion in 2023, yet breaches continue to rise because we’re treating symptoms while ignoring their root causes.

The Real Cost of Reactive Security

Detection and monitoring tools can provide valuable insights but cannot compensate for fundamental security weaknesses. Many organizations invest heavily in sophisticated detection capabilities while leaving basic architectural vulnerabilities unaddressed—much like a house with state-of-the-art cameras but unlocked doors.

The U.S. Government Accountability Office recently highlighted this problem in stark terms: ten critical federal IT systems, ranging from 8 to 51 years old, cost taxpayers $337 million annually to maintain. Many rely on obsolete technologies like COBOL, where maintenance costs continue to rise due to scarce expertise. We’ve learned a great deal about building secure systems in the last 51 years; as a result, these legacy systems stand little chance against a moderately skilled attacker. While government systems make headlines, similar issues affect private enterprises, where legacy systems persist due to the perceived cost and risk of modernization.

The persistence of basic security flaws isn’t just a technical failure; it often represents a systemic underinvestment in foundational security architecture. Consider weaknesses such as:

  • Outdated Architectures
    Decades-old systems that cannot meet modern security demands.
  • Minimal Security Hygiene
    Poor patching practices, weak service-to-service authentication, and a lack of hardened or unikernel images.
  • Weak Design Principles
    Core concepts like zero trust and least privilege cannot be bolted on later, leaving systems exposed.
  • Lack of Lifecycle Planning
    Without clear modernization plans, organizations face costly and disruptive migrations when problems inevitably arise.

These issues are not just hypothetical. For example, the Salt Typhoon espionage campaign exploited foundational weaknesses to compromise major U.S. telecom firms, including Verizon, AT&T, and T-Mobile. Such systemic flaws make even the most advanced detection systems insufficient.

Building Security from the Ground Up

For years, the cybersecurity industry has embraced the mantra, “security is everyone’s problem.” While this has broadened awareness, it often leads to unintended consequences. When responsibility is shared by everyone, it can end up being truly owned by no one. This diffusion of accountability results in underinvestment in specialized security expertise, leaving critical vulnerabilities unaddressed. The Microsoft Storm-0558 incident serves as a prime example of the risks posed by this approach.

True security requires a fundamental shift from reactive to proactive approaches. Organizations must design systems assuming they will eventually be compromised. This means embedding zero trust principles, implementing proper system segmentation, and treating least privilege as foundational.

In practice, proactive measures include short-lived credentials, mutual TLS authentication, and granular access controls from the outset. For example, while a reactive approach might detect suspicious service-to-service communication, a proactive approach prevents such movement entirely through robust authentication.
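
As a small illustration of what “prevents such movement entirely” can mean in practice, here is a sketch of a service that enforces mutual TLS using Python’s standard ssl module. The file paths, port, and the use of a private internal CA are placeholders; real deployments typically automate issuance and rotation of these certificates rather than shipping static files.

    # Sketch: a service that refuses any client that cannot present a valid
    # certificate issued by the internal CA (mutual TLS). Paths are placeholders.
    import socket
    import ssl

    def build_server_context() -> ssl.SSLContext:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(certfile="service.crt", keyfile="service.key")
        ctx.load_verify_locations(cafile="internal-ca.pem")
        ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients without a client cert
        return ctx

    def serve(host: str = "0.0.0.0", port: int = 8443) -> None:
        ctx = build_server_context()
        with socket.create_server((host, port)) as sock:
            with ctx.wrap_socket(sock, server_side=True) as tls_sock:
                conn, addr = tls_sock.accept()  # handshake fails for unauthenticated peers
                print(f"Authenticated connection from {addr}: {conn.getpeercert().get('subject')}")
                conn.close()

    if __name__ == "__main__":
        serve()

The corresponding client would build a PROTOCOL_TLS_CLIENT context and load its own certificate and key the same way; a workload without a certificate from the internal CA simply cannot complete the handshake.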

Security in the Development Process

The development process itself should prioritize security through specific, measurable practices. Best-in-class organizations typically implement:

  • Infrastructure as code with built-in security policies (a minimal sketch follows below).
  • Hardened containers or unikernel images to reduce attack surfaces.
  • Automated patch management integrated into deployment pipelines.
  • Continuous compliance monitoring and reporting for real-time security assurance.

These aren’t just best practices—they’re competitive advantages. Organizations that adopt them often see reduced incident costs and faster recovery times, transforming security from a cost center into an enabler of resilience and growth.
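
To make the first item in that list concrete, here is a minimal sketch of a policy-as-code gate that could run in a deployment pipeline. The plan structure is hypothetical (a simplified, already-parsed infrastructure plan); real pipelines typically evaluate the provider’s own plan format with a dedicated policy engine rather than an ad-hoc script.

    # Sketch: a pipeline step that fails the build when an infrastructure plan
    # contains obviously risky settings. The plan structure below is
    # hypothetical; real pipelines usually evaluate the provider's own plan
    # format with a dedicated policy engine.
    import json
    import sys

    RISKY_PORTS = {22, 3389}  # admin ports that should never face the internet

    def violations(plan: dict) -> list:
        found = []
        for rule in plan.get("firewall_rules", []):
            if rule.get("source") == "0.0.0.0/0" and rule.get("port") in RISKY_PORTS:
                found.append(f"{rule.get('name')}: admin port {rule['port']} open to the internet")
        for bucket in plan.get("storage_buckets", []):
            if bucket.get("public_access", False):
                found.append(f"{bucket.get('name')}: public access enabled")
        return found

    if __name__ == "__main__":
        with open("plan.json") as f:
            plan = json.load(f)
        problems = violations(plan)
        for p in problems:
            print("POLICY VIOLATION:", p)
        sys.exit(1 if problems else 0)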

Regulatory Progress and Its Limitations

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) introduced its Secure by Design pledge to encourage security-first practices. While this initiative represents progress, it lacks critical components:

  • No Accountability
    There are no enforcement mechanisms to ensure organizations uphold their commitments.
  • No Tracking
    Without benchmarks or reporting requirements, evaluating progress is impossible.
  • No Timeline
    The absence of deadlines allows organizations to deprioritize these efforts indefinitely.

Without these elements, the pledge risks becoming aspirational rather than transformative. As seen with other voluntary efforts, real change often depends on market pressure. For instance, if cloud providers demanded stronger security controls from vendors, or if enterprises baked security requirements into procurement, the market would likely respond more effectively than through regulation alone.

A Balanced Security Strategy

Organizations must balance strong foundations with effective monitoring through clear, measurable steps:

  1. Thoroughly Evaluate Legacy Systems
    Identify critical systems, document dependencies, and create modernization plans with timelines.
  2. Embed Security Into Development
    Use security champions programs, establish clear ownership for each system, and incentivize proactive measures.
  3. Leverage Proactive Security Measures
    Implement short-lived credentials, granular privileges, and zero trust principles during design and operation (see the sketch after this list).
  4. Strategically Deploy Reactive Tools
    Use detection and response systems to validate security assumptions and provide early warning of issues, not to compensate for poor design.
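
For the third item, the difference between a static secret and a short-lived credential is easy to see in code. The sketch below mints a token that expires after ten minutes using the PyJWT library; the signing key, claims, and lifetime are placeholders, and a production system would bind the token to an attested workload identity and rotate signing keys automatically.

    # Sketch: issue and verify a short-lived credential instead of a static
    # secret. The signing key, claims, and 10-minute lifetime are placeholders.
    from datetime import datetime, timedelta, timezone
    import jwt  # PyJWT

    SIGNING_KEY = "replace-with-a-managed-key"  # placeholder; use a managed, rotated key

    def issue_token(workload_id: str, lifetime: timedelta = timedelta(minutes=10)) -> str:
        now = datetime.now(timezone.utc)
        claims = {
            "sub": workload_id,     # which workload this credential identifies
            "iat": now,             # issued-at
            "exp": now + lifetime,  # hard expiry: the credential dies on its own
        }
        return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

    def verify_token(token: str) -> dict:
        # Raises jwt.ExpiredSignatureError once the lifetime has passed.
        return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])

    if __name__ == "__main__":
        token = issue_token("payments-service")
        print(verify_token(token)["sub"])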

Proactive and reactive measures are complementary, not competing priorities. Installing advanced monitoring on a fundamentally weak system offers organizations only a false sense of security. By contrast, strong proactive foundations reduce the need for reactive interventions, cutting costs and lowering risks.

Conclusion: The Cost of Inaction

The choice between proactive and reactive security isn’t theoretical—it’s an urgent and practical decision. Systems designed with security in mind experience fewer breaches and cost less to maintain. The CISA Secure by Design pledge is a step in the right direction, but without accountability and market-driven enforcement, its impact will remain limited.

Organizations face a clear path forward: invest in proactive security measures to reduce systemic risks while leveraging reactive tools as a safety net. As cyber threats continue evolving, the question is not whether proactive security is necessary, but how soon organizations will act to implement it. Don’t wait until it’s too late—fix the house before adding stronger deadbolts.