Europe’s cyber security policy has an open source problem. Compared with the US, the UK and Europe have been playing catch-up on national security strategy for making open source software supply chains resilient against malicious actors. Open source powers our critical software infrastructure, and it can also be turned into a threat against it – Microsoft recently found vulnerable open source components being exploited to hack energy grids in India. In 2021, the Log4shell vulnerability – the most widespread security vulnerability in recent history – laid bare the risks of unmanaged software supply chains.
Because this is a global concern, governments are acting. Last year, the UK government issued a Proposal for Legislation to ‘Improve the UK’s Cyber Resilience,’ highlighting the immense impact even small security risks in the supply chain can have. Meanwhile, Germany issued the Information Security Act 2.0 (IT-SiG), and more recently, the European Union (EU) has proposed its Cyber Resilience Act (we’ll come back to that).
To put into perspective why this is a big deal, open source comprises between 80% and 90% of the code in modern applications. At least a quarter of identified hacks originating at the application layer can be attributed to a vulnerability in an open source component used to build the application. Unfortunately, many commercial consumers of open source are not managing their software supply chain in any centralised fashion. Of the known-vulnerable open source components being downloaded, 96% of the time a fixed, non-vulnerable version was already available.
Even Log4j, the component that made applications vulnerable to Log4shell, was subject to the same behaviour. Vulnerable versions of Log4j made up an average of 38% of all Log4j downloads across 2022. That means that 38% of the time, someone was downloading, and building into their software, a version containing the most widely publicised and exploited vulnerability we’ve ever seen.
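None of this requires exotic tooling. As a minimal sketch, a consumer could ask a free public vulnerability database such as OSV (osv.dev) whether the exact version they are about to build in has known issues – the package coordinates below are illustrative:

# A minimal sketch: ask the public OSV database (osv.dev) whether a specific
# component version has known vulnerabilities before building it in.
# The package coordinates below are illustrative.
import json
import urllib.request

def known_vulnerabilities(name: str, ecosystem: str, version: str) -> list[str]:
    """Return the IDs of vulnerabilities recorded against this exact version."""
    query = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    request = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=query,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return [vuln["id"] for vuln in result.get("vulns", [])]

# Log4j 2.14.1 is one of the versions affected by Log4shell (CVE-2021-44228).
print(known_vulnerabilities("org.apache.logging.log4j:log4j-core", "Maven", "2.14.1"))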
The problem stems from a lack of incentive for corporations to act. Open source is a powerful tool that enables our modern economy, but leaving it unmanaged exposes software development teams to technical debt and serious security risk.
The UK problem with open source software security
We recently saw the UK’s National Audit Office highlight the dangers. It found that nearly a third of the software used by the UK’s Department for Environment, Food and Rural Affairs (Defra) is past its end of life, and Defra has no plans to update it. That leaves the UK’s public sector vulnerable to cyber attacks, as the thousands of open source components used to build that software grow stale and newly discovered vulnerabilities go unpatched.
While the UK government has tried to recognise the importance of digital supply chain security, current policy doesn’t treat open source as part of that supply chain. Instead, existing regulation and proposed policies focus only on third-party software vendors in the traditional sense, and fail to recognise the building blocks of all software today and the supply chain behind them.
To hammer the point home, the UK’s 11,000+ word National Cyber Security Strategy does not include a single reference to open source. GCHQ guidance, meanwhile, remains limited, with little detailed direction beyond ‘pull together a list of your software’s open source components or ask your suppliers.’
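To show how low that bar is, here is a minimal sketch of the ‘list your components’ step for a Python application, using only the standard library. A real SBOM format such as CycloneDX or SPDX records far more metadata, but the starting point is simply knowing what you ship:

# A minimal sketch of 'pull together a list of your software's open source
# components' for a Python application, using only the standard library.
from importlib.metadata import distributions

def component_inventory() -> list[tuple[str, str]]:
    """Return (name, version) pairs for every installed distribution."""
    return sorted(
        (dist.metadata["Name"], dist.version) for dist in distributions()
    )

for name, version in component_inventory():
    print(f"{name}=={version}")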
Until significant emphasis is put on improving open source practices on a national level, the government is unlikely to deliver on its objectives to improve cyber resilience.
Is the EU handling open source software security any better?
In this sense, the EU has certainly been listening. The recently released Cyber Resilience Act (CRA) is its proposed regulation to combat threats affecting products with digital elements and ‘bolster cyber security rules to ensure more secure hardware and software products’.
First, the encouraging bits: the CRA doesn’t just call for vendors and producers of software to maintain (among other things) a Software Bill of Materials (SBoM) – it demands that companies have the ability to recall components. This crucial element has been missing from policy for a long time, because it forces companies to build awareness of what goes into the software they distribute.
Commercial consumers of open source should be required to have the ability to do a targeted recall of known bad bits, just as we expect physical goods manufacturers such as the auto industry to do. Imagine if car manufacturers just had to print out the list of parts in a car’s service manual, leave it in the glove box, and make the car owner fix the vehicle. That would not fly with anyone, and we should not treat software dealing with critical infrastructure or data any differently.
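In software terms, a targeted recall implies keeping an index from component versions to the shipped products that contain them. A minimal sketch of that idea – the product names and SBOM contents below are purely illustrative:

# A minimal sketch of what a 'targeted recall' capability implies in software:
# an index from component versions to the shipped products containing them.
# The product names and SBOM contents below are purely illustrative.
from collections import defaultdict

# Each product's SBOM reduced to (component, version) pairs.
sboms = {
    "billing-service 4.2": [("org.apache.logging.log4j:log4j-core", "2.14.1")],
    "reporting-ui 1.9": [("org.apache.logging.log4j:log4j-core", "2.17.1")],
}

def products_containing(component: str, version: str) -> list[str]:
    """Return every shipped product whose SBOM lists the given component version."""
    index = defaultdict(list)
    for product, components in sboms.items():
        for name, ver in components:
            index[(name, ver)].append(product)
    return index[(component, version)]

# Which products need patching or recalling for the Log4shell-affected version?
print(products_containing("org.apache.logging.log4j:log4j-core", "2.14.1"))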
The CRA even attempts to exempt open source software from these regulations. That’s great – OSS projects and their maintainers should be exempt from liability provisions, to avoid quashing innovation and the sharing of ideas. But there’s some problematic language in how the CRA draws the line between commercial and non-commercial use of OSS, which could hurt the future of open source.
Currently, the text (Page 15, Paragraph 10; Page 43, Paragraph 4) implies that a developer or supplier deriving commercial benefit from OSS becomes subject to the CRA. With regard to the distribution of software, it even implies that open source producers or developers might be held liable if their open source projects are used commercially.
What this means for open source software is very unclear. How would this apply to public repositories like PyPI, npm or Maven Central? They are the de facto sources from which the world consumes copies of open source components for Python, JavaScript and Java respectively. Would organisations providing OSS and deriving commercial benefit suddenly want to shoulder potentially unlimited liability for the content?
Since a vulnerability rarely applies to every possible use of a component, it’s impossible for a repository like Maven Central or npm to assess the impact of each vulnerability on every downstream consumer. Policy that effectively forces upstreams to remove a component to solve one user’s issue could cause irreparable damage to another user who is using it perfectly safely.
Open source guidance must be clear, and exist
Cyber security policy across European legislation must acknowledge OSS, but generic guidance without specific, actionable guidelines won’t meaningfully increase cyber resilience.
If the CRA becomes EU law without clarification, it could have devastating unintended consequences for OSS, including a balkanisation of open source that makes collaboration hard or impossible. It could make public repositories inaccessible from the EU – a disaster for the software ecosystem.
If liability against publishers stands, it’s not inconceivable that specific projects may block contributors from affected countries, preventing talented developers from contributing to OSS projects. Projects could start changing licences specifically to exclude usage inside products shipped to the EU.
Still, this legislation comes from a constructive desire to improve the cyber security posture of digital products in a more advanced way than many of its counterparts. Current policy in the US, including the recent Securing Open Source Software Act of 2022 and NSA/CISA guidance, is a step in the right direction but remains too high-level. Any further regulations and guidance should follow the GDPR’s prescriptiveness, enforcing compliance with more specific standards for software supply chain management alongside certification requirements for all vendors’ software supply chains.
Introducing policy where none exists will take significant effort, but if it means avoiding a catastrophe for open source and the European software industry, it’s worth it.
Ilkka Turunen is field CTO at Sonatype