
Get the Easy Wins. Stop Hiding from the Hard Ones.


The price of remediating security findings ranges from free to gutting a product’s capabilities. That may sound hyperbolic, but it isn’t. The cost isn’t always money or time; it can be performance, code complexity, usability, maintainability, or opportunity cost.

Historically, remediation effort has been ignored when prioritizing security findings. That has been a mistake. Remediation effort must be a factor, though not necessarily the primary one: business impact still sits above security controls. Mitigations are also an option, but they must be handled with care.
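One way to make remediation effort a first-class input is to fold it into the priority score alongside impact and likelihood. The function and weights below are purely illustrative, not a standard scoring formula:

```python
# Illustrative only: impact and likelihood raise priority,
# remediation effort tempers it. All inputs on a 1-5 scale.
def priority_score(impact: int, likelihood: int, effort: int) -> float:
    return (impact * likelihood) / effort

# A severe but costly fix vs. a moderate easy win:
hard_fix = priority_score(impact=5, likelihood=4, effort=5)  # 4.0
easy_win = priority_score(impact=4, likelihood=3, effort=1)  # 12.0
```

Under a scheme like this, the easy win outranks the harder fix even though its raw impact is lower, which matches the argument of this post: grab the cheap remediations while the expensive ones are planned.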


Decisions...Decisions

Let’s say a software package has a command injection vulnerability. To exploit it, however, a user must first authenticate through a multi-factor process. The process runs as a user with limited privileges, and the system is heavily audited. Typically, “command injection” would signal a “high” finding.

Factoring in the controls, it would seem reasonable to say “we should drop this to a low.” It’s hard to exploit, and it would be easy to tie any activity back to the authenticated user. Still, the impact could be severe, so it would probably land as a medium.

Following through this thought experiment, it is then determined that only a limited set of commands needs to run, and the parameter structure of those commands is well defined. Now it makes sense to leave the finding a high, let the team patch it, and take the easy win.
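When the command set and parameter shapes are this constrained, the fix is usually an allowlist that never hands user input to a shell. A minimal sketch, with hypothetical command names and validators:

```python
import shlex
import subprocess

# Hypothetical allowlist: command name -> validator for its argument list.
ALLOWED_COMMANDS = {
    "uptime": lambda args: len(args) == 0,
    "ping": lambda args: len(args) == 1
        and args[0].replace(".", "").isdigit(),  # bare IPv4 only
}

def parse_and_validate(raw: str) -> list[str]:
    """Turn user input into an argv list, rejecting anything off-list."""
    parts = shlex.split(raw)
    if not parts:
        raise ValueError("empty command")
    name, args = parts[0], parts[1:]
    validator = ALLOWED_COMMANDS.get(name)
    if validator is None or not validator(args):
        raise ValueError(f"command not permitted: {raw!r}")
    return [name, *args]

def run_user_command(raw: str) -> subprocess.CompletedProcess:
    # An argv list with shell=False means shell metacharacters in
    # user input are never interpreted.
    return subprocess.run(parse_and_validate(raw), shell=False,
                          capture_output=True)
```

An input like `ping 10.0.0.1; cat /etc/passwd` fails validation before anything executes, which is exactly the cheap, contained patch that makes this an easy win.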

And then it turns out there are limited developers, and the system has a legacy-style deployment where downtime for the release has to be planned.

These conversations aren’t unusual. They happen frequently. And the decisions made are effectively judgment calls. And relatively speaking, these are the easy conversations to have.


The Other Extreme

A possible data exposure is identified where any user may access another user’s data by supplying random IDs. While it might not be possible to get a specific user’s data, it is reasonably possible to get some user’s data.

Invariably these land as high (or critical) findings in reports. Privileges, the type of data, and auditing are rarely weighed; another user’s data can be accessed, full stop. The only meaningful conversation is how quickly this can be fixed.
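The core of the fix is that every record fetch must be authorized against the requesting user rather than trusting the supplied ID. A minimal sketch, with a hypothetical in-memory store standing in for a real database:

```python
# Hypothetical in-memory store standing in for a real database.
RECORDS = {
    101: {"owner_id": "alice", "data": "alice's invoice"},
    102: {"owner_id": "bob", "data": "bob's invoice"},
}

class AuthorizationError(Exception):
    pass

def get_record(record_id: int, requesting_user: str) -> dict:
    record = RECORDS.get(record_id)
    # Return the same error for "missing" and "not yours" so an
    # attacker probing random ids can't enumerate which ones exist.
    if record is None or record["owner_id"] != requesting_user:
        raise AuthorizationError("record not found")
    return record
```

In a real multi-tenant service the ownership check typically lives in the data-access layer (e.g., every query scoped by tenant), which is why the fix so often turns into the architectural effort described below.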

The challenge is that there are almost always architectural considerations to take into account (especially in a Software as a Service product that supports multi-tenancy, i.e., multiple clients). Fixing these takes far longer than most security programs would like. We’ll save the details of those considerations for a later post.

In this case, it doesn’t matter how many developers there are. The fix is typically major and carries drastic opportunity costs in other bug fixes, new features, and deliverables. That means a high or critical finding can, in practice, have a long runway for remediation.

Even knowing the impact is high, it is impossible to have a conversation about priority without understanding what the fix entails. The security team must be empathetic to the product team, understand the mitigating controls, and factor both into the discussion.

It is still important to track the findings, make as much progress as possible, and implement mitigating controls to protect against exploitation (potentially short-term firewall rules, different deployment practices, or something else). The teams simply cannot avoid the complexity and potential secondary impacts of some architectural changes.
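A short-term mitigating control can be as simple as restricting the vulnerable endpoint to trusted networks until the architectural fix ships. The paths and network ranges below are hypothetical:

```python
import ipaddress

# Hypothetical stop-gap: while the real fix is in flight, only trusted
# internal networks may reach the endpoints known to be vulnerable.
TRUSTED_NETWORKS = [ipaddress.ip_network("10.0.0.0/8")]
MITIGATED_PATHS = {"/api/export"}

def allow_request(path: str, client_ip: str) -> bool:
    if path not in MITIGATED_PATHS:
        return True  # unaffected endpoints stay open
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in TRUSTED_NETWORKS)
```

Whether this lives in a firewall rule, a load balancer, or application middleware, the point is the same: it buys time without pretending the finding is resolved, so the remediation still needs to be tracked to completion.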


Is it a Zero-Sum Game?

Remediating findings is not a zero-sum game; the product team and the security team share a goal. Both need to see the value in addressing easy-to-fix findings while recognizing that the more complex ones need time. It is just another example of application security being both art and science.

When both teams are willing to make concessions and realize the benefits of working with each other, vulnerabilities will be patched more rapidly and, more importantly, effectively.

Ultimately, that teamwork will benefit not just the product, but the culture of the teams.