The hidden risks inside open-source code


Open-source software is everywhere. It runs the browsers we use, the apps we rely on, and the infrastructure that keeps businesses connected. For many security leaders, it is simply part of the environment, not something they think about every day. That is where trouble can start.

James Cusick, a researcher at Ritsumeikan University, recently set out to answer a question: how secure is the code we depend on? His study looked at both open-source and proprietary software, scanning millions of lines of code to see where vulnerabilities hide and how serious they are. What he found shows why static code scanning should be a key part of every security strategy.

Comparing open-source and proprietary code

Two open-source projects were analyzed. Chromium, the foundation for browsers like Chrome, Edge, and Opera, represents a large, well-known project. Genann, a much smaller neural network library, provided a sharp contrast. The study also reviewed several proprietary SaaS applications built and maintained within one company, offering a direct comparison to open-source code.

The results showed striking differences. The Chromium scan turned up 1,460 potential issues spread across almost six million lines of code, with only a handful rated as critical or high severity. Genann told a very different story: six potential issues in just 682 lines of code, or roughly one problem for every 114 lines.

The proprietary software fell somewhere in the middle. Across nearly three million lines of code, roughly 5,000 issues were found, most at medium or low severity. Even within this group, the level of risk varied widely between individual applications.

Supply chain implications

For CISOs, these findings underscore a growing supply chain challenge. Open-source components are often integrated into systems without careful review. Even highly visible projects like Chromium, which has a large and active contributor base, can contain hidden vulnerabilities.

Cusick said the lesson for security leaders is simple: never assume open-source software is safe without checking it yourself. “I would not trust any open-source code or product which I did not personally review or scan,” he said. “Integrating code into your product without knowing its state of quality or exposure to vulnerabilities is dangerous to say the least. That is like jumping in a car but not knowing if the brakes work. Finally, using the method I describe in the paper it is possible to scan a million lines of code in a matter of minutes. Why wouldn’t you take the time to assess the risk exposure? You might decide to accept some of the vulnerabilities but ‘fore-warned is fore-armed’ as they say.”

When organizations add open-source libraries without scanning them, they bring unknown weaknesses into their environments. Once deployed, these components become harder to track and update. The risk grows as teams rely more on microservices and cloud-native architectures that depend heavily on open-source code.

CISOs should ensure that every open-source component is scanned before it is deployed and re-scanned regularly as new versions are released. Just as importantly, teams need a process to prioritize and remediate findings so that the most serious issues are addressed quickly.
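
To make that prioritization concrete, the sketch below groups scan findings by severity and applies a simple deploy gate. It is a minimal illustration only: the finding structure, severity labels, and blocking policy are assumptions, not details taken from the study.

```python
# Minimal triage sketch: group hypothetical scan findings by severity so the
# most serious issues are addressed first. The finding format and severity
# labels are illustrative assumptions, not the format used in the study.
from collections import defaultdict

SEVERITY_ORDER = ["critical", "high", "medium", "low"]

def triage(findings):
    """Group findings by severity and return them in fix-first order."""
    buckets = defaultdict(list)
    for finding in findings:
        buckets[finding.get("severity", "low")].append(finding)
    return {level: buckets[level] for level in SEVERITY_ORDER}

def release_gate(findings, blockers=("critical", "high")):
    """Return True if the component is clear to deploy under this policy."""
    grouped = triage(findings)
    return all(not grouped[level] for level in blockers)

if __name__ == "__main__":
    sample = [  # hypothetical scanner output
        {"id": "CWE-120", "file": "parser.c", "severity": "high"},
        {"id": "CWE-327", "file": "crypto.c", "severity": "medium"},
    ]
    grouped = triage(sample)
    for level in SEVERITY_ORDER:
        print(f"{level}: {len(grouped[level])} finding(s)")
    print("clear to deploy:", release_gate(sample))
```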

Building a secure development process

The study provides a step-by-step guide, drawn from more than ten years of industry practice, for making static scanning part of a secure development lifecycle. It covers tool selection, acquiring code from repositories, running scans, and working with developers to review and fix findings.
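
As a rough sketch of that acquire-scan-review loop, the example below clones a repository, runs flawfinder (an open-source static analyzer for C/C++ code), and saves the findings for developers to review. The article does not name the tooling the study used, so the scanner choice, repository URL, and minimum-risk setting here are illustrative assumptions.

```python
# Sketch of the acquire-scan-review loop: fetch code from a repository, run a
# static scanner, and save the findings for developer review. flawfinder is
# used here only as an example C/C++ analyzer; the study's actual tooling is
# not specified in this article.
import subprocess
from pathlib import Path

REPO_URL = "https://github.com/codeplea/genann.git"  # assumed upstream location of Genann
WORKDIR = Path("scan-workdir")
REPORT = Path("genann-findings.txt")

def acquire(repo_url: str, dest: Path) -> Path:
    """Clone the repository if it is not already present locally."""
    if not dest.exists():
        dest.parent.mkdir(parents=True, exist_ok=True)
        subprocess.run(["git", "clone", "--depth", "1", repo_url, str(dest)], check=True)
    return dest

def scan(source_dir: Path, report: Path) -> None:
    """Run the scanner and capture its output for the review step."""
    # --minlevel=3 asks flawfinder to report only medium-or-higher risk hits
    # (an assumed policy choice, not one taken from the paper).
    result = subprocess.run(
        ["flawfinder", "--minlevel=3", str(source_dir)],
        capture_output=True, text=True, check=False,
    )
    report.write_text(result.stdout)
    print(f"Scan complete; findings written to {report} for developer review.")

if __name__ == "__main__":
    scan(acquire(REPO_URL, WORKDIR / "genann"), REPORT)
```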

One key lesson is that scanning should be continuous. Every update, new feature, or code change has the potential to introduce vulnerabilities. Scanning tools can connect directly to development pipelines, helping teams catch problems earlier and at greater scale.
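
One way to wire scanning into a pipeline, sketched below, is to compare each new scan against a stored baseline and fail the build only when a change introduces findings that were not there before. The JSON finding format, file names, and the key used to identify a finding are assumptions made for illustration; the study does not prescribe this particular mechanism.

```python
# Sketch of a pipeline gate for continuous scanning: compare the current
# scan's findings against a stored baseline and fail the build when a code
# change introduces new ones. The finding format and file names are assumed.
import json
import sys
from pathlib import Path

BASELINE = Path("scan-baseline.json")   # findings accepted on a previous release
CURRENT = Path("scan-current.json")     # findings from the scan of this change

def finding_key(f: dict) -> tuple:
    """Identify a finding by rule, file, and line (an assumed convention)."""
    return (f.get("rule"), f.get("file"), f.get("line"))

def new_findings(baseline_path: Path, current_path: Path) -> list:
    baseline = {finding_key(f) for f in json.loads(baseline_path.read_text())}
    current = json.loads(current_path.read_text())
    return [f for f in current if finding_key(f) not in baseline]

if __name__ == "__main__":
    introduced = new_findings(BASELINE, CURRENT)
    if introduced:
        print(f"{len(introduced)} new finding(s) introduced by this change:")
        for f in introduced:
            print(f"  {f.get('rule')} in {f.get('file')}:{f.get('line')}")
        sys.exit(1)  # non-zero exit fails the pipeline stage
    print("No new findings; pipeline stage passes.")
```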

Cusick believes AI will play a bigger role in vulnerability detection, but he cautioned against viewing it as a silver bullet. “Regarding AI tools for scanning, they are certainly ready for prime time now,” he said. “However, they do not provide 100% detection (nor do most other tools). There is also no such thing as an automatic scan and fix toolchain. This is still an iterative approach and requires judgment calls, code creation or generation, retesting, and prioritization as not every vulnerability can be addressed if resources are limited and release schedules are to be met which is almost always the case.”

He added that while AI tools have promise, they will need to evolve before they can replace human expertise. “I think the Gestalt of competing factors will be a challenge for most AI tools for some time to come. Working in combination, however, they may be able to optimize for parts of the puzzle like vulnerability scanning first, then code remediation separately, etc., providing a net benefit compared to handcrafting.”

Open-source software will always be essential to business. This work shows that it should never be treated as risk-free. By building scanning into development and procurement, CISOs can gain visibility into their software supply chain and reduce the chance of hidden vulnerabilities causing serious harm.



