ETSI Security Conference 2025 – Securing Open Source with Æva Black

At the ETSI Security Conference 2025, we spoke with Æva Black, cybersecurity expert and open source advocate, about the evolving landscape of open source security. Black shared insights on software supply chain traceability, the challenges of maintaining secure open source software, the role of SBOMs and intrinsic identifiers, and the importance of global collaboration in responding to vulnerabilities.

Can you tell us a bit about your background—where you’ve worked so far, and your path in cybersecurity?

It’s a long story, a bit of a weird one. Most recently, I was leading open source software security in the Azure CTO’s office at Microsoft for a couple of years. Then the U.S. government recruited me to work on open source security for CISA.

Currently, I’m part of a small consortium called Bow Shock Systems Consulting—it’s a group of us working together. My company is called Node Point Studio. We provide cybersecurity and open source consulting services.

Open source is built on trust, but it has also introduced several vulnerabilities—like Log4Shell, for example. What lessons should we take from these cases, and how can we rebuild and maintain that trust?

I’d actually say it’s because of the trustworthiness of open source communities—and their transparency—that we were able to defend against those vulnerabilities. That’s only possible through global collaboration and the openness of the community and its tools. We do need to keep evolving, though—especially when it comes to funding models right now.

We’ve seen more discussions lately about maintainer burnout. Since many of these are volunteer-led projects, how can the ecosystem better support the individual developers who build and maintain them in their free time?

Open source has quite a spectrum of contributors—from small projects to very large consortiums with thousands of developers working together around the world. Too often, attention is placed only on the large projects. Funding from companies goes to them easily because they’re visible and often have large marketing teams or foundations behind them.

It’s much harder to support a small volunteer team that doesn’t have a legal entity to receive donations, the right kind of tax status—or maybe they simply don’t want it. Maybe they just work on it in their spare time, or they’re still students. Yet some companies still make the choice to use their software.

That’s where some of the tension around transparency lies. The Cyber Resilience Act (CRA) shifts the requirements and makes it clear that manufacturers must take responsibility for all the software they include in their products. They can be held accountable, which means they need to do their due diligence—follow good, secure open-source usage practices, and contribute back.

It’s not just about looking at the top layer of their dependencies or their commercial vendors—it’s about understanding the full depth of their supply chain and all the open-source components they use, and figuring out how best to sustain them.

In your presentation, you mentioned the concepts of being secure by design and secure by demand. That suggests a shared responsibility between producers and consumers. How can that balance realistically be achieved?

On the consumer side, it’s only possible through greater transparency—through product labelling like the CE mark that the CRA introduces. That allows consumers of digital goods to demand better safety in their products. Of course, that’s not possible if people can’t look at the product and understand to what degree it was built with safety in mind. I’m really hoping the CRA will enable more of that on the “secure by demand” side.

When it comes to the manufacturing process, the same principle applies. Manufacturers can demand security from their commercial suppliers, but when they depend on open source, the onus is on them—not to make demands of the community, but to show up and help.

So when we talk about secure by design, it means that a company making a product should start with secure design from the outset. That includes thinking about how to sustain any open-source projects they choose to use—something that benefits so many products but also needs long-term support.

You also showed that operating systems have millions of lines of code and contributions from many different developers. Given this complexity, what practical strategies can organizations adopt to identify vulnerabilities efficiently?

In my talk, I used the example of the Linux kernel, which has around 20 million lines of code, and the Debian operating system, which has about 2 billion lines.

We already have standards bodies like ETSI and CEN with long histories of managing supply chain security in other industries. We often use the analogy of a modern car, which has about 40,000 parts—there are standards for managing that kind of supply chain. But now we’re dealing with a sudden, several-orders-of-magnitude increase in supply chain complexity for digital products.

The same techniques we’ve used in traditional manufacturing simply don’t work here. We need to move quickly toward new standards for identifying digital components throughout the supply chain. Without that traceability, we can’t reasonably do a “product recall” for software.

If your brake pads are defective, you can trace and recall them. If a can of soup is contaminated, there’s a food safety recall process. But right now, we don’t have that kind of traceability for digital components—and that’s something I’ve been advocating for several years.

Does that include SBOMs, or anything else?

SBOMs are good for that, but using just an SBOM isn’t enough. You can think of it like the ingredient list on a can of soup—it might tell you if you’re allergic to something, like peanuts, so you can make an informed choice. That’s useful, but it doesn’t provide enough information for full traceability—from the farm, through the manufacturers, to the store, to the consumer. That’s where something combining intrinsic identifiers, like OmniBOR, NICs, or software hash IDs (SWHIDs), can help.
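To make the intrinsic-identifier idea concrete, here is a minimal Python sketch of an OmniBOR-style gitoid. The identifier is derived purely from an artifact’s bytes (git’s blob-object hashing, here with SHA-256), so anyone holding the same file computes the same ID without consulting any registry. This is an illustrative sketch, not any project’s official tooling.

```python
# Minimal sketch: deriving an intrinsic identifier from file contents alone.
# OmniBOR's "gitoid" applies git's blob-object hashing to the raw bytes,
# so the identifier needs no central naming authority.
import hashlib

def gitoid_sha256(content: bytes) -> str:
    """Return an OmniBOR-style gitoid URI for the given artifact bytes."""
    # Git prefixes blob contents with a header: b"blob <size>\0"
    header = b"blob %d\0" % len(content)
    digest = hashlib.sha256(header + content).hexdigest()
    return f"gitoid:blob:sha256:{digest}"

if __name__ == "__main__":
    artifact = b"example library contents"
    print(gitoid_sha256(artifact))
    # Recomputed anywhere from the same bytes, the identifier matches,
    # which is what makes supply chain traceability possible.
```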

What kind of policy or certification framework would you support to ensure open source security without stifling innovation?

The CRA went through several dialogues over the past few years, and when it was approved at the end of last year, it explicitly exempted open source and aimed to preserve the innovation potential and flexibility for small and medium enterprises to contribute to and use open source without heavy-handed regulation. That’s very clear in the legal text, and it’s reinforced by the Commission’s public statements.

The CRA affects the use of open source in products and also tries to establish mechanisms to incentivize sustaining open source. I see a lot of initiatives across Europe, like the Sovereign Tech Agency and, recently, a proposal for a Europe-wide Sovereign Tech Fund to help secure and sustain open source infrastructure. There’s a lot of movement, but of course, there’s still work to be done.

If you could change one thing about how the global community handles open source software vulnerabilities—culturally, technically, or institutionally—what would it be?

I would actually change two things—they’re interconnected, so it’s kind of one overarching change, but each part depends on the other.

With my “magic wand,” I’d add intrinsic identifiers to compilers and build tools in open source in a way that’s optional but defaults to on. Over time, all open source software would become traceable. People could turn it off if they want, but this would enable coordinated vulnerability response to be far more rapid and precise than it is today.

For example, if a component is buried deep in the supply chain—so deep it doesn’t show up in your SBOM—and a developer realizes there’s a severe vulnerability like Log4Shell, they could attach that identifier for their component to the global CVE record.
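As a rough illustration of what “optional but defaults to on” could look like in a build tool, here is a hypothetical Python wrapper that records an intrinsic identifier for every input file before delegating to the real compiler. The environment variable and output file names are invented for this sketch; OmniBOR defines a richer artifact-dependency format for real use.

```python
# Hypothetical build wrapper: record a gitoid for each input file, then
# invoke the real compiler unchanged. RECORD_IDS and build.input-ids are
# illustrative names, not any real tool's interface.
import hashlib
import os
import subprocess
import sys

def gitoid_sha256(path: str) -> str:
    data = open(path, "rb").read()
    header = b"blob %d\0" % len(data)
    return "gitoid:blob:sha256:" + hashlib.sha256(header + data).hexdigest()

def main() -> int:
    if len(sys.argv) < 2:
        print("usage: wrap.py <compiler> [args...]", file=sys.stderr)
        return 2
    compiler, args = sys.argv[1], sys.argv[2:]
    # Opt-out rather than opt-in: identifiers are recorded unless the
    # builder explicitly disables it.
    if os.environ.get("RECORD_IDS", "1") != "0":
        inputs = [a for a in args if os.path.isfile(a)]  # crude input detection
        with open("build.input-ids", "w") as manifest:
            for path in inputs:
                manifest.write(f"{gitoid_sha256(path)}  {path}\n")
    # Delegate to the real compiler with the original arguments.
    return subprocess.call([compiler] + args)

if __name__ == "__main__":
    sys.exit(main())
```

Invoked as, say, `python wrap.py cc -c foo.c`, it leaves the build’s behavior untouched while producing a manifest that later tooling could attach to a vulnerability record.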

The other change is to make the CVE program more globally collaborative, with more participants. The CVE program has been expanding for years and now includes several hundred CVE Numbering Authorities. That expansion needs to continue as well.

With a specific identifier in a traceable tree structure, the final product would have a complete graph of all the smaller components that went into it. As a consumer, I could look at the hash identifier and cross-reference it with public information to see whether it’s safe—or whether it might be affected because a vulnerable component is buried somewhere deep in the stack of my product. Even if I don’t know for certain it’s vulnerable, I’d at least know there’s potential risk and can take action. I think that’s incredibly useful information that nobody has right now, because our current tools don’t enable it.
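Here is a small sketch of that cross-referencing step, under the assumption that every component carries an intrinsic identifier and products ship a graph of those identifiers. The graph and the advisory set below are made-up data; in practice the advisories would come from public sources such as CVE records.

```python
# Walk a product's full component graph and flag any node whose intrinsic
# identifier appears in a set of known-vulnerable identifiers.
from collections import deque

# product/component -> direct component identifiers (illustrative data)
GRAPH = {
    "id:product": ["id:libA", "id:libB"],
    "id:libA": ["id:libC"],
    "id:libB": [],
    "id:libC": [],  # buried two levels deep, invisible to a shallow SBOM
}

KNOWN_VULNERABLE = {"id:libC"}  # stand-in for public advisory data

def affected_components(root: str) -> set[str]:
    """Breadth-first walk of the whole graph; return vulnerable nodes."""
    seen, hits, queue = set(), set(), deque([root])
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        if node in KNOWN_VULNERABLE:
            hits.add(node)
        queue.extend(GRAPH.get(node, []))
    return hits

print(affected_components("id:product"))  # -> {'id:libC'}
```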

Going back to ETSI, what role do you see for organizations like ETSI in shaping more secure open source software for the world?

There’s been a tension for as long as I’ve been a software developer between the pace of traditional Standards Development Organizations (SDOs), like ISO, and the pace of open source software communities and standards, like IETF, Apache, or any of the other open source projects. Open source moves much faster than traditional SDOs, and that tension has now come to a crossroads.

ISO, a few years ago, created a fast track for more rapid standardization when open source needs it, and I think more of that—enabling open source projects to achieve formal standardization—will be very beneficial.

Over the years, SDOs have responded by creating software development groups of their own, but the real challenge is that just because you can create software within ETSI doesn’t mean it can support the broader open source communities outside of ETSI.
