Many organizations assume that all necessary security measures will eventually be baked into everyone's job. In our experience, however, top cyber security experts express trepidation about relying too heavily on metrics, or about their efficacy at all. "Metrics are hard," an oft-repeated sentiment in interviews, is a view that stems from the overwhelming number of metric options available today, a mixed record of proven efficacy, and confusion over how to express the value of measurements. We aim to identify some ground truths about metrics development in order to have a constructive conversation on the topic.
Firms in the US spend millions on cyber security portfolios each year. A portfolio may include, for example, cyber insurance policies, cyber security training during orientation for new workers, or an annual company-wide phishing test to ensure compliance. However, the efficacy of such expenditures can be monitored only if objectives are clearly set and appropriate data are collected on a continuous basis. Resilience metrics for cyberattacks are therefore important: they can show whether expensive IT controls are functioning as intended, providing "return on controls" data.
Some security metrics experts argue "the risk management message of identifying and fixing things has the danger of becoming a zero-sum game when it misses the important parts of quantifying and triaging based on value" (Jaquith 2007). Therefore, a statistical orientation to cyber risk management should account for the costs of potential attack scenarios. For example, a good numeric metric for the value of a program could take the form of identified issues (numerator) over the assets those issues apply to (denominator).
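The numerator-over-denominator idea above can be sketched in a few lines of Python. The function name and inputs are illustrative, not from any standard library or framework:

```python
def coverage_ratio(identified_issues: int, applicable_assets: int) -> float:
    """Issues found per asset in scope -- the numerator/denominator
    value metric described above. Names are hypothetical."""
    if applicable_assets == 0:
        raise ValueError("metric is undefined with no assets in scope")
    return identified_issues / applicable_assets

# e.g. 42 critical findings across 600 workstations
print(coverage_ratio(42, 600))  # 0.07
```

Normalizing by assets in scope keeps the number comparable as the environment grows, which a raw count of findings is not.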
Best practices in metric development
Andrew Jaquith, a security metrics expert, argues for a strong focus on quantifying assets. He stresses the cost factor of the various controls in a security portfolio, and advocates creating metrics that allow managers to see how risk will change should investment priorities in the portfolio shift. Isolating the value of the individual information assets that reside on workstations, servers, and mobile devices, as well as viewing them in aggregate, gives a better sense of what the firm is prioritizing in its budget. If an organization does not currently have the resources to implement such a program itself, these guidelines can be adopted by the security vendor it uses.
General guidelines to follow in metrics development:
- Incorporate measures of time or money (a language that the public, investors, and the C-suite understand)
- Ensure the numbers are widely shared and understood throughout the company, and measured consistently across the industry, so they lend themselves to benchmarking
- Structure metrics so that they can be calculated in an automated/mechanical way
Additionally, metrics should be contextually specific and clear enough to be relevant to decision makers, so that they can take concrete actions; if there are no pre-set objectives, it is hardly worth collecting performance data. For example, an average number of attacks across an entire organization probably won't help anyone do their job better.
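The guidelines above can be combined in a short sketch: a metric expressed in time, scoped to a specific business unit, and computable mechanically from tool exports. The record layout and field names here are assumptions for illustration, not any real patch-management tool's schema:

```python
from datetime import date

# Hypothetical records in the shape a patch-management tool might export.
patch_log = [
    {"unit": "finance", "published": date(2024, 3, 1), "applied": date(2024, 3, 9)},
    {"unit": "finance", "published": date(2024, 3, 1), "applied": date(2024, 3, 5)},
    {"unit": "retail",  "published": date(2024, 3, 1), "applied": date(2024, 3, 21)},
]

def mean_days_to_patch(records, unit):
    """Average days from patch release to deployment for one business unit:
    specific enough to act on, and computed mechanically from tool data."""
    lags = [(r["applied"] - r["published"]).days
            for r in records if r["unit"] == unit]
    return sum(lags) / len(lags)

print(mean_days_to_patch(patch_log, "finance"))  # 6.0
```

Because the number is per unit and in days, a lead for that unit knows exactly what to improve, unlike an organization-wide average of attacks.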
Good metrics should be inexpensive to gather, expressed as numbers or percentages, and use at least one unit of measurement, but preferably more, in order to make benchmarking easier. In his book Security Metrics: Replacing Fear, Uncertainty, and Doubt, Andrew Jaquith points out that a "single unit of measure for the 'number of application security defects' metric makes it hard to compare dissimilar applications on an apples-to-apples basis. But if one unit of measure is good, two are better. For example, a better metric might be 'number of application security defects per 1,000 lines of code.' This provides two units of measurement. Incorporating a second dimension (dividing by 1,000 lines of code) generates a metric that can be used for benchmarking" (Jaquith 2007).
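Jaquith's defects-per-KLOC example is easy to make concrete. A minimal sketch, with the function name and sample figures chosen for illustration:

```python
def defects_per_kloc(defect_count: int, lines_of_code: int) -> float:
    """Normalize defect counts by size (per 1,000 lines of code) so that
    dissimilar applications can be compared apples-to-apples."""
    return defect_count / (lines_of_code / 1000)

# A small app with 15 defects in 12,000 LOC vs. a large one with 40 in 80,000:
print(defects_per_kloc(15, 12_000))  # 1.25
print(defects_per_kloc(40, 80_000))  # 0.5
```

Note that the raw counts (15 vs. 40) would rank the large application as worse, while the normalized metric shows it is actually in better shape per line of code, which is exactly the benchmarking benefit of the second dimension.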
Visualization of metrics across a public dashboard can be useful in establishing a level of visibility. It can spark a competitive spirit among those being measured, especially in IT management. It is wise, though, to keep metrics from becoming too punitive. If tying pay to performance delays rewards or exposes managers and employees to undue risk, then the metric used could have serious unintended implications (Hauser-Katz 1998).
Who should define metrics?
Cybersecurity staff can use indicators to assess their organization's performance while driving improvement. These metrics can also be used for internal comparisons across departments, or for external benchmarking against others in the same industry. Most information security teams have access to raw data that can provide a baseline against which improvements can be measured. This might include scraping data from tools that are likely running in the background 24/7, including anti-malware, firewalls, patch management, application security scanners, network access control, and the like. Deciding what to do with these data, however, should not rest with the information security team alone. It is important to answer questions from a high-level business operations perspective, such as: what do we want to measure, and why? What is our intent in collecting these data?
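Establishing that raw baseline can start with a simple tally of what the always-on tools are already emitting. The event records below are hypothetical, as is their ("tool", "severity") shape; the point is the technique, not a real tool's log format:

```python
from collections import Counter

# Hypothetical events as security tools might emit them.
events = [
    {"tool": "firewall",     "severity": "low"},
    {"tool": "firewall",     "severity": "high"},
    {"tool": "anti-malware", "severity": "high"},
    {"tool": "firewall",     "severity": "low"},
]

def baseline_counts(records):
    """Tally events by (tool, severity) -- a raw baseline against which
    later periods can be compared once leadership decides what matters."""
    return Counter((r["tool"], r["severity"]) for r in records)

counts = baseline_counts(events)
print(counts[("firewall", "low")])  # 2
```

The tally itself answers no business question; it only becomes a metric once the "what do we want to measure and why" conversation has picked the slices that matter.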
Looking at the number of malicious pings deterred over a period of time is not useful without a pre-defined "so what" attached; many large utilities log four billion "security events" a day. Identifying strategic objectives should be the product of a conversation involving C-suite teams and non-IT vertical leads, along with Information Security team leadership.
The next step is to develop quantifiable metrics that organizational leadership can use to move toward these goals. "More cyber incidents reported," for example, is not necessarily a useful metric if the organization is driving toward greater effectiveness rather than an increased security force or a larger budget. How can metrics be used to help an organization work smarter, not necessarily harder?
We know ‘good’ metrics are usually quantitative, involve one or more indicators, can be normalized, and can be readily replicated. Critical infrastructure operators need to invest in security technology and security awareness campaigns to protect their systems, but it is difficult to assess such investments and drive performance improvement without the ability to measure progress. Clearly structured metrics can be an invaluable resource in driving an organization toward becoming more resilient to cyberattacks.
Black, P., et al. Cyber Security Metrics and Measures. NIST. https://www.nist.gov/publications/cyber-security-metrics-and-measures
Chickowski, E. 10 Ways to Measure IT Security Program Effectiveness. Dark Reading, March 16, 2015. https://www.darkreading.com/analytics/10-ways-to-measure-it-security-program-effectiveness/d/d-id/1319494
Hauser, J., and Katz, G. Metrics: You Are What You Measure! 1998. http://web.mit.edu/~hauser/www/Papers/Hauser-Katz%20Measure%2004-98.pdf
Jaquith, A. Security Metrics: Replacing Fear, Uncertainty, and Doubt. 2007. Print.
Ramsinghani, M. Measuring What Matters in Cybersecurity. CSO Online, November 29, 2016. https://www.csoonline.com/article/3144461/leadership-management/measuring-what-matters-in-cybersecurity.html
Rathbun, R. Gathering Security Metrics and Reaping the Rewards. SANS Institute, 2009. https://www.sans.org/reading-room/whitepapers/leadership/gathering-security-metrics-reaping-rewards-33234
UK Cabinet Office. National Cyber Security Strategy 2016 to 2021.