In cybersecurity, as in many fields, decision-making often hinges on data—or more precisely, on the reduction of uncertainty through measurement. Yet, a common misconception persists: that measurement must be perfect, exhaustive, or universally accepted to be useful. This is a fallacy that leads organizations to avoid measuring at all, leaving them exposed to preventable risks.
A more practical approach, as I have long argued, is to view measurement as observations that quantitatively reduce uncertainty. Every metric, no matter how seemingly imperfect, helps refine our understanding and improve our ability to act.
Why Measurement Matters in Cybersecurity
Consider an organization deciding whether to invest in a new endpoint detection and response (EDR) solution. Many cybersecurity teams hesitate because they believe they need an exact calculation of breach probability before justifying the cost. But in reality, even a rough estimate—derived from historical breach data, attack surface analysis, or industry benchmarks—can significantly reduce uncertainty about the potential impact of an investment. The goal is not to achieve omniscience but to make better decisions with the information available.
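Even back-of-the-envelope arithmetic makes this concrete. The sketch below compares expected annual loss with and without the EDR tool; every figure in it (breach probabilities, breach cost, tool cost) is a hypothetical assumption standing in for the rough estimates a team might pull from historical data or industry reports.

```python
# A minimal sketch of a rough-estimate investment decision.
# All figures are hypothetical assumptions, not real benchmarks.

def expected_annual_loss(breach_probability: float, breach_cost: float) -> float:
    """Expected loss = probability of a breach times its estimated cost."""
    return breach_probability * breach_cost

# Rough estimates (assumed for illustration):
p_breach_without_edr = 0.12   # 12% chance of a material breach this year
p_breach_with_edr = 0.05      # 5% with EDR deployed
breach_cost = 2_000_000       # estimated cost of a breach, in dollars
edr_annual_cost = 80_000      # annual cost of the EDR solution

# Uncertainty reduced is value gained: how much expected loss does EDR avoid?
risk_reduction = (expected_annual_loss(p_breach_without_edr, breach_cost)
                  - expected_annual_loss(p_breach_with_edr, breach_cost))

print(f"Expected loss avoided per year: ${risk_reduction:,.0f}")
print(f"EDR annual cost:                ${edr_annual_cost:,.0f}")
print("Invest" if risk_reduction > edr_annual_cost else "Hold off")
```

The point is not that these numbers are right, but that even crude inputs turn "should we buy this?" into a comparison of two dollar figures.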
This principle also applies to risk assessments. Many businesses struggle with qualitative risk matrices (e.g., “High,” “Medium,” “Low” ratings) that offer the illusion of rigor without actually reducing uncertainty. A quantitative approach, even if built from incomplete data, often yields more actionable insights. For example, shifting from “High risk” to “There is a 15% chance of a breach costing $2 million this year” allows executives to compare cybersecurity risks against other business risks in financial terms.
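Putting risks on a common financial scale makes that comparison mechanical. The toy example below ranks a mix of cyber and non-cyber risks by expected annual loss; every probability and impact figure is invented for illustration.

```python
# A small illustration of ranking unlike risks on one financial scale.
# All probabilities and dollar impacts below are hypothetical.

risks = [
    ("Ransomware breach",      0.15, 2_000_000),
    ("Cloud misconfiguration", 0.30,   400_000),
    ("Key supplier outage",    0.05, 5_000_000),  # a non-cyber business risk
]

# Expected annual loss = probability x impact; sort highest first.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, p, impact in ranked:
    print(f"{name:<24} {p:>4.0%} x ${impact:>9,} = ${p * impact:>9,.0f}")
```

A board that would struggle to weigh “High” against “Medium” can weigh $300,000 against $120,000 without any security expertise.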
Tackling the “We Can’t Measure That” Fallacy
Cybersecurity professionals often argue that certain risks are unmeasurable. Insider threats, supply chain vulnerabilities, and nation-state attacks are frequently cited as examples because they involve unpredictable human behavior. However, the absence of perfect data does not mean the absence of useful data.
By leveraging Bayesian inference, Monte Carlo simulations, or even structured expert judgment, organizations can refine their probability estimates over time. If an initial estimate says, “We believe there’s a 30% chance of an insider threat causing major damage within the next three years,” further observations—such as the frequency of access violations or results from employee security training—can adjust that probability up or down.
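One standard way to formalize this is a Beta-Bernoulli update, sketched below. The prior Beta(3, 7) is a hypothetical encoding of the “roughly 30% chance” belief, and the audit results are invented observations; the mechanics, though, are exactly the kind of revision described above.

```python
# A hedged sketch of Bayesian updating, not a complete threat model.
# The prior and the observations below are assumptions for illustration.

def beta_update(alpha: float, beta: float,
                incidents: int, clean_periods: int) -> tuple[float, float]:
    """Update a Beta(alpha, beta) prior with new Bernoulli-style evidence."""
    return alpha + incidents, beta + clean_periods

# Prior belief: ~30% chance of a damaging insider incident. Beta(3, 7)
# has mean 0.30 and encodes only modest confidence in that estimate.
alpha, beta = 3.0, 7.0

# New evidence (assumed): across 12 quarterly reviews, 1 serious access
# violation was found and 11 reviews were clean.
alpha, beta = beta_update(alpha, beta, incidents=1, clean_periods=11)

posterior_mean = alpha / (alpha + beta)
print(f"Updated probability estimate: {posterior_mean:.0%}")
```

Here the clean audit history pulls the estimate down from 30% toward roughly 18%; a run of violations would push it the other way. Either direction, the organization knows more than it did before.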
Measurement as a Continuous Process
Cybersecurity threats evolve, and so should measurement strategies. Security teams must embrace measurement as an iterative process rather than a one-time event. Regularly updating risk assessments based on new observations—failed phishing attempts, penetration testing results, or emerging threat intelligence—ensures that decisions remain grounded in the best available data.
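In practice, this iterative loop often takes the form of re-running a Monte Carlo simulation whenever the inputs change. The sketch below samples an annual loss distribution from an assumed breach probability and an assumed lognormal cost model; the idea is that each new observation (a pentest finding, a phishing-test result) revises those inputs and the simulation is simply re-run.

```python
import random

# A minimal Monte Carlo sketch. The breach probability and lognormal
# cost parameters are assumptions; refresh them as new data arrives.

def simulate_annual_loss(p_breach: float, cost_mu: float, cost_sigma: float,
                         trials: int = 100_000, seed: int = 42) -> list[float]:
    """Sample annual loss: a breach occurs with p_breach; if it does,
    its cost is drawn from a lognormal distribution."""
    rng = random.Random(seed)  # fixed seed for reproducible runs
    losses = []
    for _ in range(trials):
        if rng.random() < p_breach:
            losses.append(rng.lognormvariate(cost_mu, cost_sigma))
        else:
            losses.append(0.0)
    return losses

# Current estimates (assumed): 15% breach probability, median breach
# cost around $1.6M (exp(14.3)), with substantial spread.
losses = simulate_annual_loss(p_breach=0.15, cost_mu=14.3, cost_sigma=0.6)

expected_loss = sum(losses) / len(losses)
p_exceed_5m = sum(1 for x in losses if x > 5_000_000) / len(losses)
print(f"Expected annual loss:    ${expected_loss:,.0f}")
print(f"P(loss > $5M) this year: {p_exceed_5m:.2%}")
```

The output is not the point; the discipline of revisiting the inputs on every new observation is.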
Final Thought: Measuring for Action, Not Perfection
In cybersecurity, as in any field, decisions must be made under uncertainty. The key is to reduce that uncertainty enough to act rationally, rather than waiting for absolute certainty that will never come. The best security teams recognize that any measurement—even an imperfect one—is better than a gut feeling disguised as strategy.
The question isn’t whether we can measure cybersecurity risks, but whether we can afford not to.