Bundling and Unbundling in the NHI Market: Opportunities in Identity, Governance, and Cryptography

Jim Barksdale famously said “All money is made through bundling and unbundling,” and this dynamic is evident in the Non-Human Identity (NHI) market. Cryptography management, privileged access management, and certificate lifecycle solutions are being redefined under a higher-level taxonomy. These functions, once viewed as isolated, are increasingly integrated into broader frameworks addressing identity, governance, and security holistically, reflecting the market’s shift toward unified and specialized solutions.

Cloud providers dominate in offering integrated solutions across categories, but these are often limited and focus on cost-recovery pricing to encourage adoption of their real money-makers like compute, storage, network, databases, and these days AI. They frequently provide just enough to facilitate a single project’s adoption, leaving opportunities for other vendors. For instance, Microsoft’s push to migrate enterprises from on-premises Active Directory to its cloud offering presents an opportunity to unbundle within the NHI IAM space. By focusing narrowly on migrating existing infrastructures rather than reimagining solutions from first principles to meet modern usage patterns, Microsoft has created gaps that smaller, more agile providers can exploit. Similarly, regulatory pressures and the rise of AI-driven, agentic workloads are driving demand for advanced workload authentication, creating further opportunities for specialized providers to deliver tailored solutions. Meanwhile, established players like CyberArk and Keyfactor have pursued acquisitions, such as Keyfactor’s merger with PrimeKey, to bundle new capabilities and remain competitive. However, the integration complexity of these acquisitions often leaves room for focused providers to address modern, cloud-native demands more effectively.

At the same time, traditional cryptography management companies have been so focused on their existing Key Management System (KMS) and Hardware Security Module (HSM) offerings that they have largely ignored broader unmet needs in the market, prioritizing feature expansion and acquisitions aimed at chasing smaller competitors. This narrow focus has left significant gaps in visibility, particularly around cryptographic assets and risks, creating fertile ground for new solutions focused on cryptography discovery, automated inventory management, and preparation for post-quantum cryptography.

Capital allocation, on the other hand, highlights category focus and growth potential. Seed and Series A investments underscore the dynamic opportunities created by unbundling, as well as the constraints faced by larger vendors burdened with legacy products that make it harder to truly innovate due to existing commercial obligations in the same space. In contrast, private equity activity targets larger bundling opportunities, enabling less agile and more mature market leaders to remain relevant by scaling established solutions or consolidating fragmented players. These stages illustrate the market’s balance between early-stage innovation and late-stage consolidation, driven by the growing demand for unified, cloud-native identity and governance solutions.

These patterns of bundling and unbundling are organic and continual, offering just one lens on the evolving dynamics of this market. While the NHI market appears new, it is, in fact, a natural evolution of existing identity governance patterns, and new entrants and established players alike will keep navigating the opportunities these shifts create.

The Myth of Non-Technical Product Management

A common theme in conversations about product managers is the notion that they don’t need to be technical; they just need to bridge the gap between technical and non-technical teams. In my experience, particularly with enterprise and security products, this is a complete fallacy. Part of why this argument persists is the misconception that all product management is the same.

If you’re working on a 10-year-old product based on 20-year-old deployment patterns—and this might be hard to hear—chances are you’re not innovating. Instead, you’re managing customer requests and operating within the constraints of the bureaucracy you’re part of. Your roadmap likely consists of a mix of customer demands and features cloned from smaller competitors.

Another reason this perspective persists is that many organizations divide product managers into two categories: inbound and outbound. Outbound product managers are this decade’s version of product MBAs. They often have a limited understanding of their customers and their needs, instead focusing on systematizing a go-to-market strategy based on abstractions.

In the problem domain of enterprise and security—especially in small to medium-sized companies, where innovation tends to happen—there is no substitute for being an expert in what you’re building and selling. One of the most important things to understand is your customer: their pains, their constraints, and the schedules they operate within. The thing is, your customer isn’t just one person in an enterprise sale. As I’ve written before, at a minimum, you’re dealing with an economic buyer and a champion in any sale. If you’re lucky, you have many champions. And if you think strategically, you can even identify your champions’ champions within the sale.

This requires you to understand everyone’s job and perspective. If you don’t understand the technology or problem domain natively, you will always struggle—and likely fail—especially in smaller, early-stage companies.

Don’t get me wrong: once a company finds product-market fit and has a reproducible recipe for selling into organizations, or as the market evolves and expectations for a product in a given segment become standardized, that depth of expertise becomes less necessary. But even then, bringing it to the table remains a powerful force multiplier that enables organizations lucky enough to have these resources to vastly outperform much larger and better-funded competitors.

Since I spend most of my time these days with smaller companies or very large companies looking to become more competitive again, all I can say is this: without the right product leaders, the best you can hope for is growing at the pace of your overall market and maintaining the status quo.

Navigating Public Reporting Obligations in WebPKI and Beyond

Incident response is notoriously challenging, and with the rise in public reporting obligations, the stakes have never been higher. In the WebPKI world, mishandling incidents can severely damage a company’s reputation and revenue, and sometimes even end a business. The Cyber Incident Reporting for Critical Infrastructure Act of 2022 has intensified this pressure, requiring some companies to report significant breaches to CISA within 72 hours. This isn’t just about meeting deadlines. Consider the recent actions of the Cyber Safety Review Board (CSRB), which investigates major cyber incidents much as plane crashes are investigated. The recent case of Entrust’s cascade of incidents in the WebPKI ecosystem, and the scrutiny the company has come under as a result, shows how critical it is to respond professionally, humbly, swiftly, and transparently. The takeaway? If you don’t respond adequately to an incident, someone else may tell the story for you, and even if they don’t, mishandling a response can cause things to spiral out of control.

The Complexity of Public Reporting

Public reports attract attention from all sides—customers, investors, regulators, the media, and more. This means your incident response team must be thorough and meticulous, leaving no stone unturned. Balancing transparency with protecting your organization’s image is critical. A well-managed incident can build trust, while a poorly handled one can cause long-term damage.

Public disclosures can also carry legal ramifications. Everything must be vetted to ensure compliance and mitigate potential liabilities. With tight timelines like the CISA 72-hour reporting requirement, there’s little room for error. Gathering and verifying information quickly is challenging, especially when the situation is still unfolding. Moreover, public reporting requires seamless coordination between IT, legal, PR, and executive teams. Miscommunication can lead to inconsistencies and errors in the public narrative.

The Role of Blameless Post-Mortems

Blameless post-mortems are invaluable. When there’s no fear of blame, team members are more likely to share all relevant details, leading to a clearer understanding of the incident. These post-mortems focus on systemic issues rather than pointing fingers, which helps prevent similar problems in the future. By fostering a learning culture, teams can improve continuously without worrying about punitive actions.

It’s essential to identify the root causes of incidents and ensure they are fixed durably across the entire system. When the same issues happen repeatedly, it indicates that the true root causes were not addressed. Implementing automation and tooling wherever possible is crucial so that you always have the information needed to respond quickly. Incidents that close quickly have minimal impact, whereas those that linger can severely damage a business.

Knowing they won’t be blamed, team members can contribute more calmly and effectively, improving the quality of the response. This approach also encourages thorough documentation, creating valuable resources for future incidents.

Evolving Public Reporting Obligations

New regulations demand greater transparency and accountability, pushing organizations to improve their security practices. With detailed and timely information, organizations can better assess and manage their risks. The added legal and regulatory pressure leads to faster and more comprehensive responses, reducing the time vulnerabilities are left unaddressed. However, these strict timelines and detailed disclosures increase stress on incident response teams, necessitating better support and processes. Additionally, when there are systemic failures in an organization, one incident can lead to others, overwhelming stakeholders and making it challenging to prioritize critical issues.

Importance of a Strong Communication Strategy

Maintaining trust and credibility through transparent and timely communication is essential. Clear messaging prevents misinformation and reduces panic, ensuring stakeholders understand the situation and response efforts. Effective communication can mitigate negative perceptions and protect your brand, even in the face of serious incidents. Proper communication also helps ensure compliance with legal and regulatory requirements, avoiding fines and legal issues. Keeping stakeholders informed supports overall recovery efforts by maintaining engagement and trust.

Implementing Effective Communication Strategies

Preparation is key. Develop a crisis communication plan that outlines roles, responsibilities, and procedures. Scenario planning helps anticipate and prepare for different types of incidents. Speed and accuracy are critical. Provide regular updates as the situation evolves to keep stakeholders informed.

Consistency in messaging is vital. Ensure all communications are aligned across all channels and avoid jargon. Transparency and honesty are crucial—acknowledge the incident and its impact, and explain the steps being taken to address it. Showing empathy for those affected and offering support and resources demonstrates that your organization cares. Keep employees informed about the incident and the organization’s response through regular internal briefings to ensure all teams are aligned and prepared to handle inquiries.

Handling Open Public Dialogues

Involving skilled communicators who understand both the technical and broader implications of incidents is crucial. Coordination between legal and PR teams ensures that messaging is clear and accurate. Implement robust systems to track all public obligations, deadlines, and commitments, with regular audits to ensure compliance and documentation. Prepare for potential delays or issues with contingency plans and pre-drafted communications, and proactively communicate if commitments cannot be met on time.

  • Communication with Major Customers: It often becomes necessary to keep major customers in the loop, providing them with timely updates and reassurances about the steps being taken. Build plans for how to proactively do this successfully.
  • Clear Objectives and Measurable Criteria: Define clear and measurable criteria for what good public responses look like and manage to this. This helps ensure that all communications are effective and meet the required standards.
  • External Expert Review: Retain external experts to review your incidents with a critical eye whenever possible. This helps catch misframing and gaps before you step into a tar pit.
  • Clarity for External Parties: Remember that external parties won’t understand your organizational structure and team dynamics. It’s your responsibility to provide them with the information needed to interpret the report the way you intended.
  • Sign-Off Process: Have a sign-off process for stakeholders, including technical, business, and legal teams, to ensure the report provides the right level of information needed by its readers.
  • Track Commitments and Public Obligations: Track all your commitments and public obligations and respond by any committed dates. If you can’t meet a deadline, let the public know ahead of time. A minimal sketch of what this tracking might look like follows this list.
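
To make that last point concrete, below is a minimal sketch, in C, of the kind of obligation tracking this implies. The commitments, dates, and seven-day warning window are all hypothetical; in practice this would live in whatever ticketing or GRC system your team already uses.

```c
#include <stdio.h>
#include <time.h>

/* One publicly committed obligation: what was promised, and by when. */
typedef struct {
    const char *commitment;
    struct tm due;
} Obligation;

int main(void) {
    /* Hypothetical commitments from an ongoing incident. */
    Obligation items[] = {
        {"Publish full incident report",
         {.tm_year = 2024 - 1900, .tm_mon = 6, .tm_mday = 15}},
        {"Complete revocation of affected certificates",
         {.tm_year = 2024 - 1900, .tm_mon = 6, .tm_mday = 20}},
    };
    time_t now = time(NULL);

    for (size_t i = 0; i < sizeof(items) / sizeof(items[0]); i++) {
        double days = difftime(mktime(&items[i].due), now) / 86400.0;
        if (days < 0)
            printf("OVERDUE : %s (%.0f days past the committed date)\n",
                   items[i].commitment, -days);
        else if (days <= 7)
            printf("DUE SOON: %s (%.0f days left)\n",
                   items[i].commitment, days);
    }
    return 0;
}
```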

In the end, humility, transparency, and accountability are what make a successful public report.

Case Study: WoSign’s Non-Recoverable Loss of Trust

Incident: WoSign was caught lying about several aspects of its certificate issuance practices, including backdating certificates to dodge browser deadlines for retiring SHA-1 and failing to disclose its acquisition of StartCom.

Outcome: Major browsers completely and permanently distrusted WoSign, removing it from their trusted root stores.

Impact: This example underscores the importance of transparency and honesty in public reporting, as once trust is lost, it may never be regained.

Case Study: Symantec and the Erosion of Trust

Incident: Symantec, one of the largest Certificate Authorities (CAs), improperly issued numerous certificates, including test certificates for domains not owned by Symantec and certificates for Google domains without proper authorization. Their non-transparent, combative behavior and unwillingness to publicly identify the true root cause led to their ultimate distrust.

Outcome: This resulted in a significant loss of trust in Symantec’s CA operations. Both Google Chrome and Mozilla Firefox announced plans to distrust Symantec certificates, forcing the company to transition its CA business to DigiCert.

Impact: The incident severely damaged Symantec’s reputation in the WebPKI community and resulted in operational and financial setbacks, leading to the sale of their CA business.

Conclusion

Navigating public reporting obligations in WebPKI and other sectors is undeniably complex and challenging. However, by prioritizing clear, honest communication and involving the right professionals, organizations can effectively manage these complexities. Rigorous tracking of obligations, proactive and transparent communication, and a robust incident response plan are critical. Case studies like those of WoSign and Symantec underscore the importance of transparency and honesty—once trust is lost, it may never be regained.

To maintain trust and protect your brand, develop a crisis communication plan that prioritizes speed, accuracy, and empathy. Consistent, transparent messaging across all channels is vital, and preparing for potential incidents with scenario planning can make all the difference. Remember, how you handle an incident can build or break trust. By learning from past mistakes and focusing on continuous improvement, organizations can navigate public reporting obligations more effectively, ensuring they emerge stronger and more resilient.

Rethinking How We Assess Risk in the Software We Rely On

Despite today’s widespread use of open-source software, most software is still delivered in binary form. This includes everything from the foundational firmware of our computers to the applications we use for work, extending all the way to the containers running our server software in the cloud.

A significant challenge is that even when the source code of the software is available, reproducing the exact binary from it is often impossible. Consequently, companies and users are essentially operating on blind faith regarding any qualitative or quantitative assurances received from software suppliers. This stark reality played a critical role in the rapid and broad spread of the SolarWinds incident across the industry.
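
To illustrate what reproducibility buys you when it does exist, here is a minimal sketch in C of the core check: rebuild the software yourself from the published source and compare the result byte-for-byte against the vendor’s artifact. The file names are hypothetical, and real reproducible-builds verification also has to pin toolchains, timestamps, and build paths so the comparison is meaningful.

```c
#include <stdio.h>

/* Returns 1 if the two files are byte-for-byte identical, 0 if they
   differ, and -1 on I/O error. */
static int files_identical(const char *a, const char *b) {
    FILE *fa = fopen(a, "rb");
    FILE *fb = fopen(b, "rb");
    int result = (fa && fb) ? 1 : -1;
    while (result == 1) {
        int ca = fgetc(fa), cb = fgetc(fb);
        if (ca != cb) result = 0;   /* differ, or one file ended early */
        else if (ca == EOF) break;  /* both ended together: identical */
    }
    if (fa) fclose(fa);
    if (fb) fclose(fb);
    return result;
}

int main(void) {
    /* Hypothetical artifacts: the vendor's release, and the binary you
       rebuilt yourself from the published sources. */
    switch (files_identical("vendor-release.bin", "local-rebuild.bin")) {
    case 1:  puts("binaries match: the release is reproducible"); break;
    case 0:  puts("binaries differ: the release cannot be verified from source"); break;
    default: puts("error opening files");
    }
    return 0;
}
```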

The SolarWinds Wake-Up Call

The SolarWinds attack underscored the risks inherent in placing our trust in software systems. In this incident, attackers infiltrated build systems, embedding malware into the legitimate SolarWinds software. Customers updating to the latest software version unwittingly became victims in this attack chain. It’s crucial to acknowledge that targeting a software supply chain for widespread distribution is not a new tactic. Ken Thompson, in his 1984 Turing Award Lecture, famously stated, “No amount of source-level verification or scrutiny will protect you from using untrusted code.” Regrettably, our approaches to this challenge haven’t significantly evolved since then.

Progress in the domain of supply chain security was initially slow. In 1996, Microsoft began promoting the concept of code signing with its Authenticode support, allowing customers to verify that their software hadn’t been altered post-distribution. Subsequently, the open-source movement gained traction, particularly following the release of Netscape Navigator’s source code. Over the next two decades, the adoption of open source, and to a lesser extent, code signing increased. The use of interpreted languages aided in understanding software operations, but as software grew in size and complexity, the demand for software engineers began to outstrip the supply. The adage “Given enough eyeballs, all bugs are shallow” suggests that greater openness can enhance security, yet the industry has struggled to develop a talent pool and incentive models robust enough to leverage source code availability effectively.
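
For a sense of what that verification looks like in practice, here is a sketch in C of checking an Authenticode signature on Windows using the WinVerifyTrust API. It is deliberately bare-bones; production code would examine the specific error codes and the identity of the signer, not just whether some trusted publisher signed the file.

```c
#include <windows.h>
#include <wintrust.h>
#include <softpub.h>
#pragma comment(lib, "wintrust")

/* Returns ERROR_SUCCESS (0) if the file carries a valid Authenticode
   signature chaining to a trusted root; any other value indicates an
   unsigned, untrusted, or tampered file. */
LONG verify_authenticode(LPCWSTR path) {
    WINTRUST_FILE_INFO file_info = {0};
    file_info.cbStruct = sizeof(file_info);
    file_info.pcwszFilePath = path;

    GUID action = WINTRUST_ACTION_GENERIC_VERIFY_V2;

    WINTRUST_DATA wtd = {0};
    wtd.cbStruct = sizeof(wtd);
    wtd.dwUIChoice = WTD_UI_NONE;                    /* no UI prompts */
    wtd.fdwRevocationChecks = WTD_REVOKE_WHOLECHAIN; /* check revocation */
    wtd.dwUnionChoice = WTD_CHOICE_FILE;
    wtd.pFile = &file_info;
    wtd.dwStateAction = WTD_STATEACTION_VERIFY;

    LONG status = WinVerifyTrust(NULL, &action, &wtd);

    /* Release any state the verification allocated. */
    wtd.dwStateAction = WTD_STATEACTION_CLOSE;
    WinVerifyTrust(NULL, &action, &wtd);

    return status;
}
```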

Before the SolarWinds incident, the industry, apart from some security engineers advocating for practices like reproducible builds, memory-safe languages, and interpreted languages, largely overlooked the topic of supply chain security. Notable initiatives like Google’s work on Binary Transparency, which predates SolarWinds, began to create an environment for broader adoption of code signing-like technologies with efforts like Go’s SumDB, Sigstore, and Android’s Binary Transparency (each of which I had the opportunity to contribute to). However, even these solutions don’t fully address the challenge of understanding the issues within a binary, a problem that remains at the forefront of security.

The industry’s response to SolarWinds also included embracing the concept of the Software Bill of Materials (SBOM). These artifacts, envisioned to be produced by the build system, document the (often third-party) components used in software. However, this approach faces challenges, such as the possibility of attackers manipulating SBOMs if they compromise the build system.

The complexity of compiled software adds another layer of difficulty. Each compiled dependency has its own dependencies, not all of which are publicly declared, as is the case with static dependencies. When software is compiled, only portions of the dependencies that are used get included, potentially incorporating multiple versions of a single dependency into the final binary. This complexity makes simple statements about software components, like “I use OpenSSL 1.0,” inaccurate for even moderately complex code. Moreover, the information derived from SBOMs is often not actionable. Without access to all sources or the ability to build binaries independently, users are left with CVE lists that provide more noise than actionable insight.

To make matters worse, compilers, through build optimizations, can even remove security fixes that developers carefully put in place to mitigate known issues, for example, code that zeroes memory so cryptographic keys and passwords can’t be recovered or paged to disk.
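
A minimal sketch in C of that exact failure mode, known as dead-store elimination, along with one common mitigation; the use_key helper here is a hypothetical stand-in for real work:

```c
#include <stdio.h>
#include <string.h>

/* Stand-in for code that derives and uses a secret. */
static void use_key(char *key, size_t len) {
    memset(key, 'K', len);
    printf("using %zu-byte key\n", len);
}

/* BROKEN: key is never read after the memset, so an optimizing
   compiler may treat the zeroing as a dead store and delete it,
   leaving the secret in memory. */
void handle_secret(void) {
    char key[32];
    use_key(key, sizeof(key));
    memset(key, 0, sizeof(key)); /* may be silently removed at -O2 */
}

/* One common mitigation: call memset through a volatile function
   pointer. The compiler must perform the volatile read and therefore
   cannot prove the call away. C11's memset_s and platform helpers
   like explicit_bzero address the same problem. */
static void *(*const volatile secure_memset)(void *, int, size_t) = memset;

void handle_secret_fixed(void) {
    char key[32];
    use_key(key, sizeof(key));
    secure_memset(key, 0, sizeof(key)); /* survives optimization */
}

int main(void) {
    handle_secret();
    handle_secret_fixed();
    return 0;
}
```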

The Critical Role of Binary Analysis

If all we have is a binary, the only way to understand the risks it represents is to analyze it in the same way an attacker would. However, doing this at scale and making the analysis actionable is challenging. Recent advancements in machine learning and language development are key to addressing this challenge.

Currently, tools that operate on binaries alone fall into two categories. The first are solutions akin to 1990s antivirus programs – matching binaries to known issues. The second category helps skilled professionals reverse engineer the binary’s contents more quickly.
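
As an illustration of that first category, here is a minimal sketch in C of 1990s-style signature scanning: read the binary and search it for a known-bad byte pattern. The signature bytes and file name are made up; the limitation to notice is that this approach can only ever find exact patterns someone has already catalogued.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical byte signature of a known-bad payload. */
static const unsigned char KNOWN_BAD_SIG[] = {0xDE, 0xAD, 0xBE, 0xEF};

/* Returns 1 if the pattern occurs in the file, 0 if not, -1 on error. */
static int scan_for_signature(const char *path) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);
    if (size <= 0) { fclose(f); return size == 0 ? 0 : -1; }

    unsigned char *data = malloc((size_t)size);
    if (!data || fread(data, 1, (size_t)size, f) != (size_t)size) {
        free(data);
        fclose(f);
        return -1;
    }
    fclose(f);

    int found = 0;
    for (size_t i = 0; i + sizeof(KNOWN_BAD_SIG) <= (size_t)size; i++) {
        if (memcmp(data + i, KNOWN_BAD_SIG, sizeof(KNOWN_BAD_SIG)) == 0) {
            found = 1;
            break;
        }
    }
    free(data);
    return found;
}

int main(void) {
    int r = scan_for_signature("suspect.bin"); /* hypothetical path */
    puts(r == 1 ? "signature found"
       : r == 0 ? "clean (for this one signature)"
                : "error reading file");
    return 0;
}
```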

Both categories have struggled to keep pace with the rapid changes in software over the past few decades. A new category of tools is emerging, led by companies like Binarly, which I advise. Binarly’s approach to automated binary analysis began with key goals such as achieving processor architecture independence and language independence. This enables the analysis of binaries across different architectures without duplicating threat intelligence, and it makes it possible to identify insecure patterns stemming from ported code or from common insecure Stack Overflow examples. Identifying static dependencies and which parts of them are used in a binary is both challenging and crucial for understanding the security issues that lie beneath the surface.

Their approach is remarkable in its ability to detect “known unknowns,” enabling the identification of classes of security vulnerabilities within a binary alone. Furthermore, through symbolic execution, they can perform reachability analysis, ensuring that flagged issues are not just theoretical but can potentially be exploited by attackers.

Though their approaches are not firmware-specific, firmware is a great example of the problems that come from binary-only distributions and customers’ reliance on blind faith that their vendors are making the right security investments. It is this unique approach to binary analysis that has enabled them to file and report more CVEs in the last two years than had ever been reported before.

Binary analysis of this kind is crucial as it scrutinizes software in its final, executable form—the form in which attackers interact with it.

Conclusion

The lesson from the SolarWinds attack is clear: no build system-based approach to articulate dependencies is entirely secure. Ken Thompson’s 1984 assertion about the limitations of trusting any code you didn’t produce yourself remains relevant. In a world where software vulnerabilities have extensive and far-reaching impacts, binary analysis is indispensable. Binarly’s approach represents a paradigm shift in how we secure software, offering a more robust and comprehensive solution in our increasingly connected world.

Farm boy sensibilities and the importance of contracts

I like to say that I was raised to have “farm boy sensibilities”. For me this is a positive statement that speaks to how my father and grandfather stressed axioms like “a man is only as good as his word”, “treat others the way you want to be treated”, and, no matter what, “when you say you will do something, come hell or high water, you better do it”.

As a security practitioner this creates a bit of a tension, in that living by the above exposes you to risk when you assume others live by the same rules you do. That’s why I like the phrase “trust but verify”, as I think it more accurately captures what the modern farm boy’s mantra should be.

I bring this up because I was just reminded through a personal experience that not everyone approaches their lives in the same way. This is why (amongst other reasons) it is so important in business to have contracts, or at a minimum memorandums of understanding, that accurately capture not only the mutual understanding but also how issues will be handled in the event of a dispute.

It is easy to find yourself in a situation where you feel both parties will respect each other’s position and “do what is right”, and to conclude it’s not necessary to spend the time to do these documents justice, or to create them at all. In practice, though, this only works if both parties play by the same rules, which unfortunately is not always the case.

Though there is often no substitute for proper legal counsel, thankfully there are a few resources available online that can make things a little easier when creating agreements.

These can provide good templates for you to work from. When drafting any document you will use yourself, though, you want to make sure you think about all of the things that could go wrong. This is a lot like what a security practitioner does when they ask themselves where the weak links are in the design of a system they are reviewing.

In any event, it’s important to keep in mind that not everyone plays by the same rules, and contracts play an important part in ensuring you don’t end up on the wrong end of a good deal.