In this Help Net Security interview, Daniel Stenberg, lead developer of cURL, discusses how the widely used tool remains secure across billions of devices, from cloud services to IoT. He shares insights into cURL’s decades-long journey of testing, reviewing, and refining its code to minimize risks.
Stenberg also explains the team’s approach to handling vulnerabilities, ensuring transparency, and maintaining trust in the open-source ecosystem.
cURL is embedded in billions of devices, from cloud services to IoT. What keeps you up at night when you think about the security of such a widely used tool?
cURL’s journey and global adoption have been gradual. As cURL has become more widely used, it has also become more tested, reviewed, and scanned for problems by users and fellow developers. For decades we have worked on quality, continuously adding new tests and tweaking the architecture to reduce the risk of introducing problems. We keep iterating, we keep tightening bolts all over, and that work never ends.
That background, combined with our test suite and CI setup with all its tooling, is what makes me feel fairly confident that cURL is okay. I feel fine with having our code in billions of devices! I think customers and users feel safe and secure running cURL in all these devices, services, and tools because we have a decades-long track record of taking these subjects seriously and of shipping rock-solid products.
Can you walk us through how your team handles a newly reported vulnerability, from discovery and verification to communicating fixes to downstream users?
Every security report starts out with someone submitting an issue to us. The reporter tells us what they suspect the problem is. This report is kept private, visible only to the cURL security team and the reporter. In recent months we have received 3-4 security reports per week.
The cURL security team currently consists of seven longtime, experienced cURL maintainers. We immediately start to analyze and assess the received issue and its claims. Most reports do not identify security problems and are quickly dismissed and closed. Some identify bugs that are not security issues, and we then move the discussion over to the public bug tracker instead. If we think the issue might have merit, we ask follow-up questions, test code that reproduces the problem, and discuss it with the reporter.
A small fraction of the reports turn out to be actual security vulnerabilities. We work together with the reporter to reach a good understanding of exactly what is required for the bug to trigger and what the flaw can lead to. Together we set a severity for the problem (low, medium, high, or critical) and we work out a first patch, which also helps make sure we understand the issue. Unless the problem is deemed serious, we tend to sync the publication of the new vulnerability with the pending next release. Our normal release cycle is eight weeks, so we are never more than 56 days away from the next release.
For security issues we deem to be of low or medium severity, we create a pull request for the fix in the public repository, but we do not mention the security angle in the public communication around it. This way, the fix also gets test exposure and time to get polished before the pending release. Over the last five or so years, only two of roughly eighty confirmed security vulnerabilities have been rated higher than medium severity. Fixes for vulnerabilities we consider high or critical severity are instead merged into the git repository when there are approximately 48 hours left before the pending release, to limit the exposure time before the problem is announced properly. We need to merge the fix into the public repository before the release because our entire test infrastructure and verification system is based on public source code.
Next, we write up a detailed security advisory that explains the problem: exactly what the mistake is and how it can lead to something bad, including all the relevant details we can think of. This includes version ranges for affected cURL versions, the exact git commits that introduced the problem, which commit fixed it, credits to the reporter and the patch author, and so on. Our ambition is to provide the best security advisories you can find in the industry. (We also provide them in JSON format on the site for the rare few users who care about that.) We of course want the original reporter involved as well, so that we cover all angles of the problem accurately.
As we are a CNA (CVE Numbering Authority), we reserve and manage CVE IDs for our own issues ourselves.
About a week before the pending release, when we will also publish the CVE, we inform the distros@openwall mailing list about the issue, including the fix and when it is going to be released. This gives open source operating systems a little time to prepare their releases and adjust for the CVE we will publish.
On release day we publish the CVE details and ship the release. We then also close the HackerOne report and disclose it to the world; we disclose all HackerOne reports once closed, for maximum transparency and openness. We also inform all the cURL mailing lists and the oss-security mailing list about the new CVE. Sometimes we of course publish more than one CVE for the same release.
The vulnerability reporter can then claim a bug bounty from the Internet Bug Bounty which pays the researcher a certain amount of money based on the severity level of the cURL vulnerability.
Given how integral cURL is to other software, what steps are you taking to ensure the integrity of builds, libraries, and updates across such a massive ecosystem?
With some exceptions, we only ship source code. We then rely on operating system makers and everyone else to build, package, and ship the binary versions of cURL and libcurl as they see fit. This makes us distro-agnostic, and also unaware of and unconcerned with the state and checks of dependencies, libraries, build systems, and so on.
We work with signed commits and signed tags, and we upload only signed release tarballs. The signatures can easily be verified.
I generate the release tarballs on my local machine. To reduce the risk that this introduces something bad into the tarballs (if I go rogue, or if someone has maliciously planted malware on my release machine), I build them fully reproducibly using a fixed and documented container image. Everyone is encouraged to verify the release tarballs, and we provide an easy-to-use script for doing so. The verification makes sure a tarball was generated with the proper tools and contains only contents from the git repository.
We have also recently worked on reducing the amount of binary and base64-encoded data in tests and example code, to further decrease the risk that someone can inject something encoded into the git repository, xz-style.
As a side note, I can perhaps mention that we only run read-only CI jobs, so a breached cloud instance cannot infect or spread to our source repository.
Are there common mistakes you see developers make when using cURL or libcurl that lead to security issues? What best practices do you wish more people followed?
Every time we have a confirmed security vulnerability, we try to learn from it: what detail or habit made us make this specific mistake? It is usually hard to see patterns among them; mostly they look like a series of seemingly random mistakes.
We have, however, identified a few things that contributed to previous vulnerabilities, and we nowadays try to steer the code in ways that ideally help us avoid repeating them. For example, we try hard to avoid direct memory function calls and instead offer a set of helper functions, we limit the places in the code where we do realloc(), we restrict the length of input strings to 8MB, and we have a whole range of functions that are banned from use (gets, sprintf, sscanf, strncpy, strtol, etc.).
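To make the pattern concrete, here is a minimal, hypothetical sketch of the kind of helper this approach favors over scattered raw memory calls. The function name, the constant, and the exact limit handling are illustrative only and are not cURL’s actual internal API:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical cap on input string length, echoing the 8MB limit
   mentioned above. */
#define MAX_INPUT_LENGTH (8 * 1024 * 1024)

/* Illustrative helper: one audited place that allocates and copies a
   string, with an enforced length cap, instead of ad hoc malloc/strcpy
   calls spread throughout the code. */
static char *checked_strdup(const char *src)
{
  size_t len = strlen(src);
  if(len >= MAX_INPUT_LENGTH)
    return NULL;                /* reject oversized input up front */
  char *copy = malloc(len + 1);
  if(copy)
    memcpy(copy, src, len + 1); /* bounded copy, terminator included */
  return copy;
}
```

Centralizing allocation and length checks like this gives reviewers and static analyzers a single choke point to audit, which is the point of banning the raw functions in the first place.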
It is hard to be sure whether our methods and precautions help. The average age of reported security vulnerabilities (the time they have existed in the source code since the mistake was first merged) is still about eight years!
The primary method to maintain security is simply to do all the engineering steps we know we should: sensible and enforced code style, reviews, lots of tests, lots of CI, proper documentation so users know what to expect and how to behave, picky compiler warnings, warnings-as-errors, applying several source code analyzers and fixing all nits, and throwing fuzzers at every API. If all projects did this, we would be in a pretty good place.
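As a rough illustration of the “throw fuzzers at every API” point, the sketch below shows what a libFuzzer-style harness for libcurl’s public URL parsing API (the CURLU family) could look like. This is an assumption-laden sketch for illustration, not cURL’s actual fuzzing code, which is more elaborate and maintained separately:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <curl/curl.h>

/* libFuzzer entry point: the fuzzer calls this repeatedly with mutated
   byte buffers. Here we feed each buffer to libcurl's URL parser. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
  /* The URL API expects a NUL-terminated C string. */
  char *input = malloc(size + 1);
  if(!input)
    return 0;
  memcpy(input, data, size);
  input[size] = '\0';

  CURLU *url = curl_url();
  if(url) {
    /* Parse errors are expected and fine; crashes, hangs, and memory
       bugs (caught by the sanitizers) are what we are hunting. */
    curl_url_set(url, CURLUPART_URL, input, CURLU_NON_SUPPORT_SCHEME);
    curl_url_cleanup(url);
  }
  free(input);
  return 0;
}
```

Built with something like clang’s -fsanitize=fuzzer,address and linked against libcurl, a harness in this shape lets the fuzzer hammer the parser with millions of mutated inputs and surface memory errors long before a user ever hits them.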
cURL is a volunteer-driven open-source project at its core. How do you foster a security-aware contributor community, and what advice do you have for other open-source maintainers facing similar challenges?
Primarily, I try to lead by example so that if and when contributors copy or mimic our style, it should be decent. I try to spread and share sensible concepts for how to think about and write code to make it as good as possible, often with review comments and sometimes by contributing additional commits to guide contributors. I also work to make sure our internal APIs and frameworks are comfortable, easy to use, and hard to get wrong.
Our source code rules are checked, and fundamental quality is enforced to a decent extent with automatic tools, so people are gently nudged into doing the right thing to get their submissions turned green.