
False Positives vs. False Negatives in Application Security


DAST vs. SAST: Is It Better to Know Too Much or Too Little? 

“In our new application security program, do we implement static analysis security testing (SAST) or dynamic analysis security testing (DAST) first?” This question comes up in many application security job interviews, informational sessions, and “how do we build our program” consulting engagements.


Historically, DAST gets selected for a variety of reasons. The most common one is: “SAST is too slow and results in too much noise.” That is, the effort required to sift through static analysis findings makes SAST seem not worthwhile.

This raises the question: is it better to have too many findings or too few?



What Counts as a False Positive in AppSec Scans?

A false positive is a finding that, upon review, turns out not to be a real issue. There can be some nuance to the term – for example, the risk may be accepted due to other factors such as performance or business reasoning. Still, for simplicity’s sake, a false positive is simply a finding that should not have been reported. It’s essentially the result of an abundance of caution.


What Counts as a False Negative in AppSec Scans?

A false negative is a vulnerability that should have been reported but was not. An example: the AppSec scanning tool never reached the specific part of the application where a cross-site scripting vulnerability existed. As a result, the test could not report the issue, creating the false impression that the application was safe.
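To make the two definitions concrete, here is a minimal sketch that compares a scanner’s reported findings against a known ground truth. The finding identifiers and the `triage` helper are hypothetical illustrations, not output from any particular tool:

```python
def triage(reported, actual):
    """Compare a scanner's reported findings with the real vulnerabilities.

    `reported` and `actual` are sets of finding identifiers.
    Returns (true_positives, false_positives, false_negatives).
    """
    true_positives = reported & actual    # real issues the tool caught
    false_positives = reported - actual   # noise: flagged but not real
    false_negatives = actual - reported   # missed: real but never flagged
    return true_positives, false_positives, false_negatives


# Hypothetical example: a DAST scan that never reached the page with the XSS bug
reported = {"sqli-login", "weak-tls-config"}     # what the tool flagged
actual = {"sqli-login", "xss-search-page"}       # what is really exploitable

tp, fp, fn = triage(reported, actual)
# tp == {"sqli-login"}           -> correctly found
# fp == {"weak-tls-config"}      -> a false positive (noise to sift through)
# fn == {"xss-search-page"}      -> a false negative (silent gap in coverage)
```

The asymmetry is visible here: the false positive costs reviewer time, while the false negative costs nothing up front and only surfaces when someone exploits it.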



What is the Preference Among Software Security Practitioners?

To answer this, we have to make a few assumptions.

  1. That someone would only pick one scan type. In a real secure development life cycle, you would use some combination of SAST and DAST, among other tools.
  2. That we can ignore the nuance around business risk, training either or both tools to do a better job, the skill level of developers, an organization’s interest in remediation, and more.

For this purpose, it is a simple thought experiment: would respondents prefer more false positives or more false negatives? The poll was run on LinkedIn by one of True Positives’ contributors.


The poll is hardly quantitative – it is one random LinkedIn survey with few respondents – but it still provides some interesting results to think about.


How to Balance Knowledge and Action?

One might conclude from the survey that SAST would be the best starting point, but that's misleading.

Given a choice between knowing and not knowing what vulnerabilities exist, most people would choose knowledge. After all, who wants to fly blind? However, that simple answer ignores the more significant complexities of developing a software security program.


Building a Hybrid Program for Your Situation

There will always be some give and take. We have to live with both false positives and false negatives. Application security professionals must bear in mind the inherent tension between reducing false positives at the expense of false negatives, and the inverse. Figuring out the right balance depends on your organization’s context.