
Risk-based AppSec, But How?

Written by Brook S.E. Schoenfield | Apr 21, 2022 6:45:00 PM

Risk: everyone in AppSec or software security talks about it. Pundits advise that we base our decisions on it. AppSec manuals demand that our programs be "risk-based". NIST's Risk Management Framework spells out in detail how to build an information security risk program. But I defy you to find a method for actually calculating risk in that document.

 

Do We Really Know How to Calculate Risk?

We throw around the term "risk" casually, but what do we actually mean? Many of us who've done the least bit of study of digital, cyber, or information security have most certainly seen the following risk equation: Risk = Probability × Annualized Loss
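
To make the arithmetic concrete, here is a trivial illustration of that equation. Both input figures are invented for demonstration; only the multiplication itself comes from the formula.

```python
# Illustration of Risk = Probability x Annualized Loss.
# Both inputs are invented numbers, purely for demonstration.
probability = 0.05          # assumed: 5% chance the event occurs in a year
annualized_loss = 200_000   # assumed: dollar loss if the event occurs

risk = probability * annualized_loss
print(f"Risk (expected annualized loss): ${risk:,.0f}")  # -> $10,000
```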

 

Risk Can Be Negative or Positive

Digging a little deeper reveals that risk involves the uncertainty of an event. "Event" can have either positive or negative consequences. To say something is "risky" is to say that there is uncertainty as to the event’s occurrence. Risk, in theory, is neutral.

In digital security, we typically concern ourselves with negative events. We leave positive risk to business strategists. Our focus is primarily on bad things that might happen and how we should go about preventing or mitigating occurrences while limiting negative impacts.

 

What’s Included in Risk of Loss?

It's generally fairly easy to calculate a Loss term for the standard risk equation. Many organizations, especially commercial ones, have a fair understanding of their costs to purchase, to own, and to operate.

Depending upon the context, Loss might be expanded from financial to include things like customer goodwill, or employee trust and safety. One can build a scale (1–10, 1–100, etc.) to rate impacts.
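
As a sketch of what such a scale might look like in practice (the categories, the ratings, and the worst-case aggregation below are all illustrative assumptions, not a prescribed method):

```python
# A hypothetical 1-10 impact scale that extends Loss beyond direct financials.
# Categories and ratings are invented for illustration.
impact_ratings = {
    "direct financial loss": 7,
    "customer goodwill": 5,
    "employee trust and safety": 3,
}

# One simple (assumed) aggregation: rate overall impact by the worst category.
overall_impact = max(impact_ratings.values())
print(f"Overall impact rating: {overall_impact}/10")
```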

 

We Lack the Data to Accurately Assess AppSec Risk

Unfortunately, we still have a major problem. At our current state of the art, the Probability term is at best tricky, with inherent complexities. At worst, arriving at a reasonable probability is nearly impossible. We don't have actuarial tables, the standard statistical basis for calculating probability. We're operating on anecdotal data, akin to 16th-century European risk estimates for disease and mortality.

In 1606, an Englishman started collecting data on the total number of deaths from each disease, and through the centuries we've been refining mortality actuarial tables while increasing our understanding of the causes of death. The same cannot be said for data about digital compromises.

 

Walled Gardens Lead to Poor Data

The majority of compromises or successful attacks do not get reported. There is no comprehensive archive of compromises that includes the associated damages. We hear only about major breaches with wide-ranging impacts: anecdotes in the grand scheme.

Vendors that produce network and endpoint protections gather enormous amounts of data about which exploits are attempted, and a fair amount of data about which ones get through their protections. Access to these data would give us some basis for calculating probability. But the vast majority of the data are proprietary, typically used to feed machine learning algorithms intended to improve the efficacy of the vendors' own products. Very little of those data are available for independent research.

 

What is the Common Vulnerability Scoring System (CVSS)?

Many throw up their hands and substitute the Common Vulnerability Scoring System (CVSS), which is a potential-severity rating, not a risk method. There is a large body of research indicating that CVSS is a poor substitute for risk.

So what are we left with if we haven't got actuarial tables and CVSS is demonstrably ineffective?

 

Is it Fair to use FAIR?

I believe that Factor Analysis of Information Risk (FAIR) uses solid methodology. FAIR uses casino math (Monte Carlo simulation) instead of actuarial tables, and it was standardized by the Open Group. I encourage readers to explore FAIR, both as a method and as an education in information risk.

However, my experience using FAIR is that it takes significant effort to identify and quantify the variables required for a FAIR probability derivation. The quality of the results depends upon the number of simulations; 50,000 is probably a minimum; more is better.
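
For readers who want a feel for the "casino math", here is a minimal Monte Carlo sketch in the spirit of FAIR-like methods. The distributions and every parameter below are illustrative assumptions, not FAIR's actual factor taxonomy.

```python
import random

SIMULATIONS = 50_000  # roughly the minimum suggested above; more is better

def simulate_annual_loss() -> float:
    """One simulated year. Both distributions below are assumptions."""
    # Assumed: up to 10 threat events per year, each with a 3% chance of
    # becoming a loss event (a crude stand-in for loss event frequency).
    loss_events = sum(1 for _ in range(10) if random.random() < 0.03)
    # Assumed: per-event loss magnitude drawn from a lognormal distribution.
    return sum(random.lognormvariate(11.0, 1.2) for _ in range(loss_events))

losses = sorted(simulate_annual_loss() for _ in range(SIMULATIONS))
mean_loss = sum(losses) / SIMULATIONS
p95_loss = losses[int(0.95 * SIMULATIONS)]
print(f"Mean annualized loss: ${mean_loss:,.0f}")
print(f"95th percentile loss: ${p95_loss:,.0f}")
```

Even this toy version hints at the setup cost: every distribution and parameter must be identified, justified, and quantified before the simulation's output means anything.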

For many applications, especially in software security, there may not be sufficient time to set up a FAIR probability derivation. The often considerable effort may not be worth the rewards.

 

What About Risk Rating?

Consider threat modeling: every attack scenario identified during the threat model will need to be risk rated. There will be multiple scenarios, sometimes upwards of a dozen or more. Development teams will be anxiously waiting for decisions on what to build now, what can wait, and which scenarios don't need mitigation yet. There is considerable pressure to rate risk quickly. Too often, accuracy suffers as a result.

 

How to Get Just Good Enough

I keep improving Just Good Enough Risk Rating (JGERR), which Vinay Bansal and I invented together at Cisco around 2008. JGERR is based on FAIR. The latest printed version is found in Building In Security at Agile Speed (Auerbach, 2021). We use a refined version of JGERR at True Positives.

JGERR includes the same factors that CVSS Base scores include, while adding ratings for:

  • the amount of utility or leverage an attacker gains from exploitation and
  • the technical challenges that must be surmounted before exploitation can occur (not the defenses, which are covered by CVSS).

Adding these two dimensions compensates for a faulty CVSS assumption: that every vulnerability is equally useful to attackers, which is demonstrably not true.

Readers need not use JGERR, though it may be useful to understand the latest version and the reasoning behind it. Two of my past software security programs successfully enhanced CVSS with the two scales described above. That experience led to the current revision of JGERR, an attempt to create a simple method for arriving at a reasonable and rapid substitute for the Probability term.
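
To show the shape of the idea, here is a hypothetical sketch of combining a CVSS Base score with the two added dimensions. The scale ranges, the inversion, and the unweighted average are all assumptions for illustration; JGERR's actual factors and math are in Building In Security at Agile Speed.

```python
# Hypothetical sketch of a JGERR-style rating. All scales and the
# averaging scheme are illustrative assumptions, not JGERR's actual math.

def jgerr_like_rating(cvss_base: float,         # 0.0-10.0, from CVSS
                      attacker_utility: int,     # 1-10: leverage gained by exploiting
                      technical_difficulty: int  # 1-10: hurdles before exploitation
                      ) -> float:
    difficulty_term = 11 - technical_difficulty  # invert: hard to exploit scores low
    # Simple unweighted average onto a 0-10 scale (an assumption).
    return round((cvss_base + attacker_utility + difficulty_term) / 3, 1)

# A "critical" CVSS issue that is hard to exploit and yields little attacker
# leverage rates well below what CVSS alone would suggest.
print(jgerr_like_rating(cvss_base=9.8, attacker_utility=3, technical_difficulty=9))  # -> 4.9
```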

 

Exploring the National Vulnerability Database

The MITRE Corporation's Common Vulnerabilities and Exposures (CVE™) data feed the NIST National Vulnerability Database (NVD), and both can provide useful insight into what types of issues have been reported against a particular technology or product. Bear in mind that NVD archives only reported issues, not exploitations. As far as we know, at least 75% of the issues recorded in NVD never get used by real-world attackers.

Nonetheless, we can at least see the kinds of issues that show up in a particular technology, and how those issues are distributed over time. While admittedly imperfect, I think it's better than trying to read tea leaves or waving a crystal wand.

 

A Practical Example of NVD Use

As an example, I had a client whose system had been built with .NET. I wanted to get a sense of what kinds of issues might appear in the future, based upon the history of issue reports.

An NVD search revealed that remote code execution (RCE) vulnerabilities have regularly been found in .NET; RCE typically demands our attention. Once an attacker can execute code of their own choosing, it can be assumed that compromise is not far behind.
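
A search like that one can be reproduced against NVD's public CVE API (version 2.0 at the time of writing). The endpoint and response fields below follow NVD's published documentation, but verify them against the current docs; the keyword query is just one way to slice the data.

```python
import json
import urllib.request
from collections import Counter

# Tally weakness types (CWEs) reported against a technology using the
# NVD CVE API 2.0. Endpoint and response layout follow NVD's public
# documentation at the time of writing; verify before relying on them.
URL = ("https://services.nvd.nist.gov/rest/json/cves/2.0"
       "?keywordSearch=.NET%20remote%20code%20execution&resultsPerPage=50")

with urllib.request.urlopen(URL) as response:
    data = json.load(response)

cwe_counts = Counter()
for item in data.get("vulnerabilities", []):
    for weakness in item["cve"].get("weaknesses", []):
        for description in weakness.get("description", []):
            cwe_counts[description["value"]] += 1  # e.g. "CWE-94"

for cwe, count in cwe_counts.most_common(10):
    print(f"{cwe}: {count}")
```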

The regular recurrence of RCE in the NVD data indicated that a .NET RCE has a non-zero chance of being discovered during the life of the software I was analyzing. Based upon an understanding that RCE has a chance of recurring in .NET, we could then plan defensive strategies: how to prevent a future .NET RCE from having significant impact.

 

Bringing It Together for Better Threat Modeling

Using NVD's data shouldn't be mistaken for the actuarial data we need in order to calculate probabilities. Were you to adopt part or all of JGERR, that also shouldn't be mistaken for actuarial, or even casino-math-derived, probabilities. Still, after thousands of field-tested JGERR ratings, we have some confidence that methods like JGERR are most certainly better than a practitioner's gut feeling or, dare I say it, CVSS.

Whatever you choose as a risk rating method, I advise you to make sure you're standing on a solid foundation. Understand what risk is, what it isn't, and the methods that produce workable results.