In this Help Net Security interview, Henrik Plate, CISSP, security researcher at Endor Labs, discusses the complexities AppSec teams face in identifying vulnerabilities within software dependencies.
Plate also discusses the limitations of traditional software composition analysis (SCA) solutions and the need for robust vulnerability databases to ensure effective security management.
What are AppSec teams’ key challenges when identifying vulnerabilities in software dependencies?
The challenge that seems simple but is actually quite complex is accurately detecting the presence of vulnerable code within dependencies, and then figuring out which findings are exploitable in the context of their applications. In other words, figuring out which vulnerabilities (in which functions and in which dependencies) should actually be prioritized for remediation.
There are many reasons why this is difficult, but prominent ones are:
In regard to the detection of vulnerable code: many projects produce multiple packages – some affected, others not; code is also rebundled or repackaged from one open source project to another; projects get renamed and forked; and open source project maintainers often do not check whether a given vulnerability affects older, unmaintained releases. Because of those difficulties, it is not surprising that manually curated vulnerability databases contain inaccurate information.
In regard to exploitability: it is difficult to manually determine whether confirmed vulnerable code can be exploited in the context of a given downstream application, and assessing its application-specific impact is another key challenge.
Being able to perform this research depends on context information that is not readily available in a structured, machine-readable form — especially at development or build time, which is when many SCA solutions run. Some of this information, like the nature of information processed and stored by the application, or application-specific safeguards, is captured in requirement and design documents, for example, or exists only in the minds of developers. Other context information, like the deployment environment or configuration, is also unavailable to SCA solutions.
Because determining exploitability is difficult, code-centric SCA solutions aim to determine the reachability of known-vulnerable functions, which can be done on the basis of the code available at development and build time. But it is important to understand that reachability is just a prerequisite for exploitability – not every reachable function can actually be exploited in a given runtime environment.
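To make that distinction concrete, here is a minimal, hypothetical Python sketch. The "vulnerable" function is defined inline as a stand-in for code that would normally live inside a third-party dependency; the names and the scenario are invented for illustration. The function is reachable from application code, so a call-graph analysis would flag it, yet an application-specific safeguard may prevent the crafted input needed to exploit it from ever arriving.

```python
import re

# Stand-in for a known-vulnerable function that, in a real application, would
# live inside a third-party dependency rather than in your own code.
def parse_config(pairs: dict) -> dict:
    # Imagine this function mishandles specially crafted keys or values.
    return {key: str(value) for key, value in pairs.items()}

ALLOWED_KEY = re.compile(r"^[a-z_]{1,32}$")

def load_user_settings(raw: str) -> dict:
    """Parse user-supplied 'key=value' lines into a settings dict."""
    pairs = {}
    for line in raw.splitlines():
        key, _, value = line.partition("=")
        if ALLOWED_KEY.match(key):          # application-specific safeguard
            pairs[key] = value
    # parse_config() is *reachable* from application code, so call-graph-based
    # analysis would flag it; but the strict input filtering above may make the
    # vulnerability non-exploitable in this particular application's context.
    return parse_config(pairs)

print(load_user_settings("theme=dark\nDROP TABLE=oops"))
```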
Can you explain what phantom dependencies are and why they pose a risk?
A “phantom dependency” refers to a package used in your code that isn’t declared in the manifest. This concept is not unique to any one language (it’s common in JavaScript, NodeJS, and Python). This is problematic because you can’t secure what you can’t see.
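A minimal, hypothetical Python illustration (package names are examples only): the application imports a package that never appears in its manifest, typically because it arrived transitively or was installed by hand.

```python
# app.py -- package names are examples only.
#
# requirements.txt (the manifest) declares just one direct dependency:
#     requests==2.31.0
#
# 'urllib3' is only present in the environment because it happens to be
# installed as a transitive dependency of 'requests'. Importing it directly
# makes it a phantom dependency: used by the code, but invisible to a
# manifest-based SCA scan of this project.
import requests
import urllib3   # phantom dependency: used in code, absent from the manifest

def fetch(url: str) -> str:
    urllib3.disable_warnings()   # direct use of the undeclared package
    return requests.get(url, timeout=10).text
```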
Traditional SCA solutions focus on manifest files to identify all application dependencies, but those can be either under- or over-representative of the dependencies actually used by the application.
They can be under-representative if the analysis starts from a manifest file that only contains a subset of dependencies, e.g., when additional dependencies are installed in a manual, scripted or dynamic fashion. This can happen in Python ML/AI applications, for example, where the choice of packages and versions often depends on operating systems or hardware architectures, which cannot be fully expressed by dependency constraints in manifest files.
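As a hedged illustration of such a scripted, hardware-dependent install (package names, version pins and the selection logic are invented for this sketch), a setup script might decide at install time which variant to pull in, so the real dependency set never shows up in requirements.txt:

```python
# install_extras.py -- run from a setup or CI script; not reflected in
# requirements.txt. Package names and version pins are illustrative only.
import platform
import shutil
import subprocess
import sys

def install_accelerated_stack() -> None:
    """Pick a package variant at install time based on the local machine."""
    if shutil.which("nvidia-smi"):                      # GPU machine detected
        pkgs = ["torch==2.3.0"]                         # GPU build (example pin)
    elif platform.machine() in ("arm64", "aarch64"):    # Apple Silicon / ARM
        pkgs = ["torch==2.3.0", "tensorflow-macos==2.16.1"]
    else:                                               # plain CPU machine
        pkgs = ["torch==2.3.0"]
    # Because this decision happens at install time, a manifest-only SCA scan
    # never learns which of these packages the application actually depends on.
    subprocess.check_call([sys.executable, "-m", "pip", "install", *pkgs])

if __name__ == "__main__":
    install_accelerated_stack()
```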
And they are over-representative if they contain dependencies not actually used. This happens, for example, if you dump the names of all the components contained in a bloated runtime environment into a manifest file (think “pip freeze”). A given Python environment, for example, may contain packages not actually used, e.g., when multiple services are developed in a Python monorepo, and both manifest files and environments contain the superset of all the services’ dependencies. This may be against best practices, but is the unfortunate reality of real-world environments.
To overcome those problems, organizations should implement technology that enables them to look into the code of both the first party application and all its dependencies to track import statements so they can find all the packages that really matter in the context of a given application.
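A minimal sketch of that idea in Python, using only the standard library: walk the source tree, collect top-level import names from the AST, and diff them against what the manifest declares. This is deliberately simplified; real tools must also map import names to distribution names (e.g., PyYAML imports as yaml), resolve transitive dependencies, and handle other manifest formats.

```python
"""Rough sketch: find packages imported in code but missing from the manifest."""
import ast
import pathlib
import sys   # sys.stdlib_module_names requires Python 3.10+

def imported_top_level_names(src_dir: str) -> set[str]:
    """Collect the top-level module names imported anywhere under src_dir."""
    names: set[str] = set()
    for py_file in pathlib.Path(src_dir).rglob("*.py"):
        tree = ast.parse(py_file.read_text(encoding="utf-8"), filename=str(py_file))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
                names.add(node.module.split(".")[0])
    return names

def declared_in_manifest(requirements_file: str) -> set[str]:
    """Naively read distribution names from a requirements.txt-style manifest."""
    declared: set[str] = set()
    for line in pathlib.Path(requirements_file).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            # Strip version specifiers such as '==1.2.3' or '>=2,<3' (naive).
            name = line.split(";")[0].split("==")[0].split(">=")[0].split("<")[0]
            declared.add(name.strip().lower())
    return declared

if __name__ == "__main__":
    used = {n.lower() for n in imported_top_level_names("src")}
    declared = declared_in_manifest("requirements.txt")
    stdlib = {m.lower() for m in sys.stdlib_module_names}   # ignore the stdlib
    print("Possible phantom dependencies:", sorted(used - declared - stdlib))
```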
Public advisories often lack code-level information. What implications does this have for organizations managing their software dependencies?
Code-centric SCA solutions require code-level information about vulnerable functions or fix commits. The lack of such information in public databases obliges corresponding providers to maintain a complementary database to enrich the public advisories, which is a costly exercise.
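As a purely illustrative sketch of what such enrichment can look like (the field names below are invented and do not follow any particular database's schema), a code-centric record adds vulnerable functions and fix commits on top of the version ranges that public advisories typically already provide:

```python
# Illustrative only: the field names are invented, not a real advisory schema.
enriched_advisory = {
    "id": "CVE-XXXX-YYYY",                      # placeholder identifier
    "package": "example-lib",                   # hypothetical package name
    "affected_versions": "<2.4.1",              # typically in public advisories
    # Code-level enrichment that public advisories often lack, but that
    # code-centric SCA solutions need for reachability analysis:
    "vulnerable_functions": [
        "example_lib.parser.load",
        "example_lib.parser._read_header",
    ],
    "fix_commits": [
        "https://github.com/example/example-lib/commit/abcdef1234567890",
    ],
}
```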
The quality of code-centric SCA solutions depends on the quality of their respective vulnerability databases. Users of SCA solutions should ask vendors what their specific approach is to maintaining a vulnerability database and ensuring its quality.
From the perspective of the open source community, the lack of code-level vulnerability databases (or their limited coverage) makes it difficult to develop competitive open source solutions.
Most version upgrades contain at least one breaking change. How should organizations approach these upgrades to minimize disruption while maintaining security?
It is true that many version upgrades do contain breaking changes, even in minor or patch versions, which should be backwards-compatible according to the Semver versioning convention.
However, relative to the overall code base of those components, breaking changes are often sparse, i.e., clients will often not actually run into them.
Still, even if only a few APIs underwent breaking changes, developers often cannot take the risk of just upgrading and hoping for the best. And upgrading in a trial-and-error fashion is costly, ineffective and frustrating for developers.
Developers should implement tooling that allows them to find all the breaking changes in a library and check whether any of them matter for their specific application. They should leverage the same type of call graph-based reachability analysis that also supports the assessment of known vulnerable components.
Technology can identify whether there are any calls into those APIs, and flag such updates as “high risk” so that developers can seek alternative upgrade paths.
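A minimal Python sketch of that check (the list of changed APIs would in practice come from diffing the library's current and target releases; the names here are invented): scan the application's code for references to any API the upgrade changes, and flag the upgrade as high risk only if such references exist. Note that simple name matching is only a coarse proxy for the call-graph reachability analysis described above, which resolves calls much more precisely.

```python
"""Rough sketch: flag an upgrade as high risk only if the app uses a changed API."""
import ast
import pathlib

# In practice this set would be produced by diffing the public API of the
# current and the target library versions; these entries are invented examples.
BREAKING_APIS = {
    ("example_lib.client", "Client.connect"),   # signature changed in the new version
    ("example_lib.util", "deep_merge"),         # removed in the new version
}

def referenced_names(src_dir: str) -> set[str]:
    """Collect attribute and name references anywhere under src_dir."""
    refs: set[str] = set()
    for py_file in pathlib.Path(src_dir).rglob("*.py"):
        tree = ast.parse(py_file.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, ast.Attribute):
                refs.add(node.attr)
            elif isinstance(node, ast.Name):
                refs.add(node.id)
    return refs

def upgrade_risk(src_dir: str) -> str:
    """Very coarse, name-based proxy for reachability of breaking changes."""
    refs = referenced_names(src_dir)
    hits = {api for _module, api in BREAKING_APIS if api.split(".")[-1] in refs}
    return f"high risk ({sorted(hits)})" if hits else "low risk"

if __name__ == "__main__":
    print("Upgrade assessment:", upgrade_risk("src"))
```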
How can organizations mitigate the risks associated with AI and ML software project dependencies?
Endor Labs found significant discrepancies between the number of vulnerabilities reported for ML/AI components like TensorFlow across different vulnerability databases. But these discrepancies are not limited to ML/AI components, which brings us back to the problem of high-quality vulnerability databases.
Users of SCA solutions must ask their vendors about their sources of vulnerability information, and their approach and processes to maintain the quality of complementary information (if any).