When it comes to software security, one of the biggest challenges facing developers today is information overload. Thanks in part to the widespread proliferation of open-source code (a Red Hat study found that 36% of software in use at surveyed organizations was open source), as well as the increasing complexity of the average application, a given project can now be expected to have a massive number of dependencies. In turn, each of these dependencies represents a potential opportunity for a vulnerability to arise if not properly secured.
Owing to this state of affairs, developers face a new challenge. Automated vulnerability reports generated by scanning tools return hundreds, if not thousands, of vulnerabilities, and with many organizations reporting a shortage of skilled cybersecurity professionals, teams are already stretched too thin to fix each one. Quickly remediating every single vulnerability identified by a scan is simply unfeasible.
In an effort to resolve this, developers and security professionals have traditionally relied on vulnerability scoring systems to prioritize the most critical flaws and streamline remediation efforts. While prioritization is a good way to get software out the door faster with fewer vulnerabilities, severity scoring alone is too simplistic. Exploitability is a far more important benchmark for triage.
Why legacy scoring systems are no longer sufficient
The large number of vulnerabilities returned by automated scans is not a new problem; in fact, developers commonly cite it as an obstacle to security. To filter through these large data sets, developers conduct vulnerability triage, ranking the detected flaws by the risk they pose to an application’s security or functionality. They can then fix the most pressing vulnerabilities first and get software out the door faster.
Currently, many developers rely on the Common Vulnerability Scoring System (CVSS), a basic standard for assessing the severity of a vulnerability. Scores range from 0 to 10, with 10 indicating the highest severity. Developers will often assign CVSS scores to the vulnerabilities they detect, order them from highest to lowest, and focus their efforts on those with the highest scores.
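This severity-first workflow can be sketched in a few lines of Python. The findings and CVE identifiers below are purely illustrative, not real scan output:

```python
# A minimal sketch of severity-based triage: sort scan findings by CVSS
# base score, highest first. Finding records here are hypothetical.
from dataclasses import dataclass


@dataclass
class Finding:
    cve_id: str
    cvss: float  # CVSS base score, 0.0-10.0


def triage_by_severity(findings):
    """Order findings from highest to lowest CVSS base score."""
    return sorted(findings, key=lambda f: f.cvss, reverse=True)


findings = [
    Finding("CVE-A", 5.3),
    Finding("CVE-B", 9.8),
    Finding("CVE-C", 7.5),
]

for f in triage_by_severity(findings):
    print(f.cve_id, f.cvss)  # CVE-B first, CVE-A last
```

Notice what this workflow ignores: nothing in the sort key asks whether the vulnerable code is ever actually reached.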
Unfortunately, this method is suboptimal, ultimately resulting in oversights and less “safe” code.
Don’t be a foo() when it comes to vulnerability remediation
A large part of getting the most out of security scanning tools comes down to a developer’s approach to triaging the vulnerabilities those scans detect. Rather than focusing on severity as determined by CVSS scoring, developers should prioritize vulnerabilities by the potential path they offer for exploitation.
So, what exactly does it mean for a vulnerability to be exploitable? There are two core factors:
- The vulnerable method in the library must be called directly or indirectly from a user’s code
- An attacker needs a carefully crafted input to reach the method to trigger the vulnerability
To illustrate this point, imagine a scenario where a scan flags a vulnerability in a library’s foo() method, but examination shows that the application’s code never calls foo() at all, directly or indirectly. A vulnerability like this might seem severe on its surface, yet in context it offers no real path for exploitation. With this knowledge, a developer can skip it in favor of vulnerabilities that are actually exploitable.
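The first condition above, that the vulnerable method must be reachable from the user’s code, is a graph problem. A minimal sketch of the check, assuming a call graph like one a static analyzer might produce (all function names here are hypothetical):

```python
# Reachability over a call graph: a vulnerable library method only matters
# if some function in the application's own code can reach it, directly or
# indirectly. Graph and names below are illustrative.
from collections import deque

# Adjacency list: caller -> callees
call_graph = {
    "app.main": ["app.parse_input", "lib.bar"],
    "app.parse_input": ["lib.baz"],
    "lib.bar": ["lib.qux"],
    "lib.baz": [],
    "lib.qux": [],
    "lib.foo": [],  # the flagged vulnerable method; nothing above calls it
}


def is_reachable(graph, entry, target):
    """Breadth-first search: can `target` be reached from `entry`?"""
    seen, queue = {entry}, deque([entry])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for callee in graph.get(node, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False


print(is_reachable(call_graph, "app.main", "lib.foo"))  # False: no exploit path
print(is_reachable(call_graph, "app.main", "lib.qux"))  # True: reachable
```

Real tools must also account for dynamic dispatch, reflection, and data flow (the second condition), so this static check is a necessary but not sufficient test for exploitability.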
Taking a holistic approach to evaluating the probability of exploitation
The modern developer often pulls in an entire library to use a single API method out of dozens. Those libraries, in turn, depend on other third-party libraries of which they use only a fraction of the available APIs. This means that when a vulnerability is detected in an application’s dependencies, the probability of it actually being exploitable can be below five percent. This has a number of important implications for developers attempting to secure software while maintaining efficiency:
- Current approaches to prioritization are focused on the wrong areas, and valuable time is being spent fixing vulnerabilities that may not even be exploitable
- Vulnerabilities that cannot be exploited are, in essence, false positives. If a given code flow cannot be reached by an attacker, it can safely be ignored for other, more pressing issues
- Despite scans returning what appear to be insurmountable lists of vulnerabilities in software, the true number of vulnerabilities that need to be remediated after a scan is significantly lower than is commonly understood today
By leveraging automated scanning tools that analyze the project’s source code, as well as the source code of every package it uses, to examine call graphs and data flows, exploitability and risk can be truly evaluated.
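Putting the two ideas together, exploitable-path triage first filters findings to those whose vulnerable method is actually reachable, and only then orders by severity. A sketch under the same illustrative assumptions as before, with the reachable set standing in for the output of a call-graph analysis:

```python
# Exploitable-path triage: drop findings whose vulnerable method is
# unreachable from the application, then sort the rest by CVSS score.
# All CVE identifiers, method names, and scores are hypothetical.

findings = [
    {"cve": "CVE-X", "method": "lib.foo", "cvss": 9.8},
    {"cve": "CVE-Y", "method": "lib.qux", "cvss": 6.5},
    {"cve": "CVE-Z", "method": "lib.baz", "cvss": 8.1},
]

# Methods reachable from the app's entry points, as a call-graph
# analysis might report them.
reachable = {"lib.bar", "lib.baz", "lib.qux"}


def exploitable_path_triage(findings, reachable):
    """Keep only reachable (potentially exploitable) findings, worst first."""
    exploitable = [f for f in findings if f["method"] in reachable]
    return sorted(exploitable, key=lambda f: f["cvss"], reverse=True)


for f in exploitable_path_triage(findings, reachable):
    print(f["cve"], f["cvss"])
# CVE-X is dropped despite its 9.8 score: lib.foo is unreachable,
# so the work queue shrinks to CVE-Z (8.1) and CVE-Y (6.5).
```

The point of the sketch is the ordering of operations: reachability acts as a gate before severity ever enters the picture, which is what shrinks the remediation list.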
Vulnerability triage: Security at speed via exploitable path
Developers face a dual demand: quicker software delivery and better security in the finished code. In response, we’ve seen the growth and adoption of open-source code and automated vulnerability scanning technologies. But while these tools serve a valuable purpose, vulnerability triage is equally essential, because it is where agility in software development can be gained or lost.
To date, the more common triage strategy has been to leverage CVSS scoring, but the exploitable-path philosophy offers an alternative that can achieve greater efficiency.
By focusing efforts on the vulnerabilities that represent a real threat to a piece of software’s security, developers can vastly cut down the number of bugs they need to address. This approach leads not only to faster delivery but also to greater confidence, because developers know that the vulnerabilities with actual paths to exploitation have been fixed.