By James Kimble
Fri | May 24, 2024 | 12:04 PM PDT

During World War II, statistician Abraham Wald provided a counterintuitive recommendation for reducing bomber losses. He suggested reinforcing sections of aircraft that showed no damage after missions. His rationale was clear: damage on returning bombers indicated where an aircraft could sustain hits and still survive. Thus, undamaged areas represented critical vulnerabilities; bombers hit in these sections likely never returned.

I know: what does this have to do with cybersecurity strategy?

Much like the bombers that Wald sought to protect, the enterprise takes hits (threats) every day. Some of these hits are obvious, logged, and monitored, much like the damage history of the returning bombers. But every enterprise also has sections that are untested, possibly under-protected, and producing no alerts. These sections are like the bombers that never returned to base: the damage they receive may be devastating and never seen.

Survivorship bias in cybersecurity: a multi-faceted challenge

Survivorship bias manifests in significant ways throughout cybersecurity. This article concentrates on four of them: tool selection, development, placement, and validation.

First, let's talk about tool selection. Cybersecurity leaders may default to time-tested tools, overlooking innovative technologies better tailored to their specific needs. Leaders and practitioners need a process for regularly evaluating the effectiveness of their current tools and the potential of new technologies. This process should include two elements: Tool Rationalization and a Fit-for-Use Scorecard.

  • Tool Rationalization is the strategic process of evaluating and optimizing the tools and technologies in use against an organization's security objectives.

  • The Fit-for-Use Scorecard is a systematic evaluation framework for assessing how well a cybersecurity tool or technology suits an organization's specific needs. The scorecard should benchmark not only the capabilities a tool provides but also its performance in real-world scenarios.
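
A scorecard like this can be as simple as a weighted rating sheet. The sketch below is a minimal illustration in Python; the criteria, weights, and 0-5 ratings are assumptions made for the example, not a prescribed framework.

```python
# Minimal fit-for-use scorecard sketch. The criteria and weights below
# are illustrative assumptions; tune them to your own security objectives.
CRITERIA = {
    "capability_coverage": 0.30,     # does the tool do what we need?
    "real_world_performance": 0.30,  # results from Red/Purple Team exercises
    "integration_effort": 0.20,      # fit with the existing architecture
    "operational_cost": 0.20,        # licensing, staffing, maintenance
}

def fit_for_use_score(ratings):
    """Weighted average of 0-5 ratings; higher means a better fit."""
    return sum(CRITERIA[c] * ratings[c] for c in CRITERIA)

# Hypothetical candidate tool rated by the evaluation team:
candidate = {
    "capability_coverage": 4,
    "real_world_performance": 3,
    "integration_effort": 5,
    "operational_cost": 2,
}
print(round(fit_for_use_score(candidate), 2))  # -> 3.5
```

Scoring "real-world performance" separately from "capability coverage" is the point of the exercise: a tool can check every feature box yet still underperform when a Red Team actually attacks it.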

Second, tools developed internally may be designed primarily to defend against the known, frequent threats seen in your enterprise, overlooking obscure yet potentially harmful vulnerabilities. This mirrors the flawed practice of reinforcing the damaged sections of a bomber while ignoring the undamaged but equally critical ones. To combat this bias, developers need to incorporate diverse perspectives from teams across the enterprise and regularly update or expand test scenarios to reflect emerging threats and obscure vulnerabilities that may not yet have manifested.

Third, tool placement. Just like the bombers of the past, the enterprise needs standard protection across its entire surface; however, it may also need reinforcements in specific areas. Leaders and practitioners should avoid assuming "no news is good news" when certain areas of the enterprise are silent (taking no visible hits). Are tools deployed there? Are they the right tools? Are they working?
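
The first of those questions can be answered mechanically: compare the asset inventory against what is actually sending telemetry. The hypothetical sketch below uses made-up asset names; the point is that a silent asset is a coverage gap to investigate, not evidence of safety.

```python
# Hypothetical coverage-gap check: an asset that appears in the inventory
# but sends no telemetry is "silent" -- the absence of alerts there
# proves nothing about its safety (survivorship bias in action).
inventory = {"web-01", "db-01", "hr-app", "legacy-erp"}  # all known assets
reporting = {"web-01", "db-01"}                          # assets sending telemetry

silent = inventory - reporting  # set difference: unmonitored assets
print(sorted(silent))  # -> ['hr-app', 'legacy-erp']
```

Each asset on that list needs a deployment or health review: is a tool missing there, misconfigured, or simply broken?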

Finally, tool validation. I believe that true cybersecurity tool validation requires Red and Purple Teams.

The critical role of Red and Purple Teams

The Red Team is tasked with simulating real-world attacks to probe the effectiveness of current tools and strategies, especially in less visible sections of the enterprise. The Purple Team complements this by merging the offensive tactics of the Red Team with the defensive strategies of the Blue Team, fostering a continuous cycle of refinement for both tools and tactics.

Let's look at an example: during their attack, the Red Team exploits a gap in an under-protected part of the enterprise. The Purple Team can dissect the breach, tweak some settings, and, if needed, collaborate with Security Architecture to provide alternative solutions that better protect these areas. These real-world test cases are essential for uncovering insights that conventional selection processes might miss.

Structuring effective feedback loops among the Red, Purple, and Blue Teams is well worth the effort. By fostering collaboration, knowledge sharing, and continuous improvement, organizations can greatly enhance their threat detection and response capabilities. Best practices such as regular debriefings, automated tooling, joint training, and clear communication channels keep these feedback loops efficient and effective, leading to a more resilient security posture.

Integrating Red and Purple Teams into tool selection

Incorporating these teams into the tool selection process requires strategic foresight and executive support. It involves recognizing that cybersecurity extends beyond simple tool procurement—it requires the rationalization, testing, validation, and strategic integration of these tools within the broader security architecture framework.

Here's a practical approach:

  • Early involvement
    Engage Red and Purple Teams early in the Proof of Value/Concept phase to assess tools under realistic conditions, particularly in areas of low visibility.

  • Continuous feedback
    Maintain ongoing feedback loops between the Red and Purple Teams and Security Architecture to ensure tools are continually evaluated against new and emerging threats.

  • Budget considerations
    If direct budget allocations for realistic testing and validation are hard to secure, partnering with tool vendors for the necessary licensing during the evaluation phase is a viable alternative.

Advocating for a shift in organizational culture

Leaders must champion a cultural shift that values thorough validation over simple compliance. This change is crucial for ensuring that cybersecurity defenses are not only robust and comprehensive, but also agile and responsive to the dynamic threat landscape.

The resulting culture should also advocate for continuous improvements, where security best practices are regularly reviewed and updated. This culture should also foster collaboration across all departments, ensuring that everyone understands their role in maintaining a secure enterprise. Finally, the culture should allow for open and transparent communications and free exchange of ideas and concerns related to cybersecurity.

Conclusion

Abraham Wald's insights on bomber damage during World War II provide a powerful framework for understanding and overcoming survivorship bias in cybersecurity. Just as Wald recommended reinforcing the undamaged areas of returning bombers to account for unseen vulnerabilities, cybersecurity leaders must adopt a holistic approach that protects all areas of their enterprise, not just those showing obvious signs of threat.

  • Regularly evaluate tools and technologies
  • Incorporate diverse perspectives in development
  • Strategically place and validate tools
  • Foster a culture of continuous improvement

By integrating these strategies, cybersecurity leaders can ensure that their defenses are comprehensive and resilient. Just as the unexamined areas of bombers represented critical vulnerabilities, so too do the silent areas of an enterprise's cybersecurity landscape. By recognizing and addressing these hidden weaknesses, an organization can build a more robust and adaptive defense system.

Engage in dialogue

I am interested in hearing your insights. Has survivorship bias influenced your tool selection process? How does your organization handle tool selection and validation? Are Red and Purple Teams integral to your strategy? Share your experiences and thoughts on strengthening cybersecurity measures.

This post appeared originally on James Kimble's LinkedIn page.
