In this Help Net Security interview, Alec Summers, MITRE CVE/CWE Project Lead, discusses how CWE is moving from a background reference into active use in vulnerability disclosure. More CVE records now include CWE mappings from CNAs, which tends to produce more precise root-cause data.
Automation tools help analysts map weaknesses faster, but can reinforce bad patterns if trained on poor examples. Summers argues that fixing weakness patterns reduces recurring work for security teams, even those operating on tight budgets. The core problem is framing: the industry defaults to vulnerability language, while CWE asks teams to focus on what made the bad outcome possible in the first place.
CWE has long existed as a reference taxonomy that many practitioners acknowledged but few actively used. For engineers who have been filing CVEs and ignoring CWE IDs for years, what has changed to make the taxonomy feel relevant to daily work?
CWE has long been embedded in how many organizations approach software development via its prevalence in static analysis tools, secure coding guidance, and internal review and training processes to identify and prevent vulnerabilities (including many vulnerabilities in bespoke, non-published software that never receives a CVE ID). What’s changed is that CWE is now becoming a more integral part of vulnerability disclosure itself, as the value of transparent root-cause mapping is more widely appreciated. As the volume of CVEs has grown, simply knowing that a vulnerability exists isn’t enough; teams need to understand why it exists in order to prioritize, remediate, and prevent recurrence.
At the same time, we’re seeing measurable improvement in how CWE is applied. A growing majority of CVE Records now include CNA-provided CWE mappings, which are generally more valuable as they come from those with direct knowledge of the vulnerability and access to the product for context. That proximity can lead to more accurate and precise mappings rather than broad or inferred classifications, making the data far more actionable for engineering teams.
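For readers who have not looked at the record format itself, here is a minimal sketch of where a CNA-provided CWE mapping lives in a CVE Record (JSON 5.x format) and how a script might pull it out. The record content below is invented for illustration; only the field names follow the published record format.

```python
# Illustrative CVE Record fragment (JSON 5.x shape); the CVE ID,
# CWE ID, and description here are hypothetical examples.
record = {
    "containers": {
        "cna": {
            "problemTypes": [
                {
                    "descriptions": [
                        {
                            "lang": "en",
                            "type": "CWE",
                            "cweId": "CWE-79",
                            "description": "Improper Neutralization of Input "
                                           "During Web Page Generation",
                        }
                    ]
                }
            ]
        }
    }
}

def cna_cwe_ids(rec):
    """Collect CWE IDs from the CNA container of a CVE Record dict."""
    ids = []
    for pt in rec.get("containers", {}).get("cna", {}).get("problemTypes", []):
        for desc in pt.get("descriptions", []):
            if desc.get("type") == "CWE" and "cweId" in desc:
                ids.append(desc["cweId"])
    return ids

print(cna_cwe_ids(record))  # ['CWE-79']
```

Because the mapping sits in the CNA's own container, tooling can distinguish it from mappings added later by other parties, which is what makes the proximity argument above measurable.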
Finally, there’s a broader shift toward “secure by design” and systemic risk reduction. CWE provides the common language to connect individual vulnerabilities to underlying development issues. CWE helps teams move from patching symptoms to fixing patterns.
CNAs and vendors are now expected to assign CWE IDs when disclosing vulnerabilities. What does the data tell you about the quality of those assignments? Are analysts mapping root causes, or are people picking the closest-sounding entry to check a box?
The data shows real progress, but also that we’re still in transition. The recent 2025 CWE Top 25 data analysis points to a positive trend: fewer mappings to overly abstract CWEs and a measurable shift toward more actionable, lower-level entries. We’re seeing increased use of Base- and Variant-level CWEs for root-cause mapping – the more precise entries that reflect an actionable root cause and better support understanding and remediation.
There’s still some variability in quality, particularly when mappings are inferred without full context. One of the biggest challenges we see when working with new community members is distinguishing between a vulnerability and its underlying weakness. Sometimes mappings are driven by the outcome or impact of a vulnerability rather than the condition that caused it, which can lead to broader or less precise classifications – especially when analysts and reporters don’t yet have deep experience framing vulnerabilities in terms of CWE root cause.
Overall, the trajectory is encouraging: more precise mappings, better alignment with guidance, and stronger participation. Continued emphasis on root-cause mapping will further strengthen the value of CWE data over time, at both the individual- and cross-organization level.
Where has automation helped analysts map to CWE more accurately, and where has it made the problem worse by laundering bad mappings at scale?
Automation and tooling are essential to advancing CWE adoption and improving the accuracy and precision of mappings at scale. In many cases, these capabilities have outpaced education and guidance – helping analysts identify likely weakness patterns, normalize mappings, and apply CWE more consistently than manual processes alone. Used well, they raise the baseline and make CWE more accessible in day-to-day workflows.
LLMs have shown an excellent ability to sift through lots of data and pick out critical details that would have been difficult or time-consuming for human analysts. That said, these systems are only as good as the data and assumptions behind them. If an LLM is trained on abstract or incorrect mappings, it tends to reproduce those same issues – just faster and at scale. In that sense, automation can reinforce imprecision if it’s not grounded in strong examples of accurate root-cause mapping. An LLM might also be more likely to use seemingly equivalent terms that have distinct meanings within CWE itself, or – like inexperienced human mappers – work backward from the outcome or impact of a vulnerability to guess at what weakness could have produced it.
The most effective approach is pairing tooling with human judgment – especially from those closest to the product. Automation can accelerate and guide, but accurate CWE mapping still depends on context and expertise.
Weakness-framing requires organizations to invest in finding and fixing conditions before exploitation occurs. That is a hard sell when security teams are already underwater responding to active threats. How do you make the economic and operational case to someone managing a SOC on a constrained budget?
It’s a real challenge. Most teams are operating in a reactive mode because that’s where the immediate risk is. But weakness-framing isn’t about adding more work; it’s about reducing recurring work. When the same underlying issue leads to multiple vulnerabilities over time, continuing to treat each one as a discrete event becomes more expensive than addressing the root cause.
Importantly, this isn’t something a SOC can absorb on its own. It requires support from decision-makers and the business. Investing in root-cause remediation means prioritizing engineering time, aligning incentives, and recognizing that prevention reduces long-term operational cost and risk.
It has been widely demonstrated that addressing issues earlier in the development lifecycle is more efficient and cost-effective than doing so later. From an economic standpoint, it’s about shifting from repeated incident response to more durable risk reduction. Fixing a class of weaknesses can eliminate entire categories of future vulnerabilities, which directly reduces alert volume, patching cycles, and operational churn in the SOC. Even small improvements at the root-cause level can have outsized downstream impact.
Operationally, this doesn’t have to be a wholesale shift. Many organizations start by identifying high-frequency or high-impact weakness patterns in their environment and targeting those first. CWE provides the structure to do that in a measurable way – connecting what the SOC is seeing today to what engineering teams can fix to prevent it tomorrow.
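As a toy illustration of that measurable approach (the CVE IDs and CWE assignments below are invented), a team could start by simply tallying which weakness classes recur most often across its own findings, then pointing engineering at the top of the list:

```python
from collections import Counter

# Hypothetical mappings of an organization's recent findings to CWE IDs.
findings = [
    ("CVE-2025-0001", "CWE-79"),   # cross-site scripting
    ("CVE-2025-0002", "CWE-89"),   # SQL injection
    ("CVE-2025-0003", "CWE-79"),
    ("CVE-2025-0004", "CWE-787"),  # out-of-bounds write
    ("CVE-2025-0005", "CWE-79"),
]

# Rank weakness patterns by frequency so the most recurrent root
# causes surface first.
pattern_counts = Counter(cwe for _, cwe in findings)
for cwe_id, count in pattern_counts.most_common():
    print(f"{cwe_id}: {count} finding(s)")
```

Even this crude frequency count connects what the SOC is seeing today to a ranked, fixable list for engineering; real programs would also weight by impact, not just count.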
Shared language is only useful if both sides of a conversation are using it the same way. Where do you still see the biggest semantic gaps in how CWE is understood across researchers, vendors, and defenders?
The biggest gap is in how we frame the problem. For decades, the industry has centered its language on vulnerabilities and attacks: what was exploited, how it was triggered, and how to respond. CWE pushes us to think about why the bad outcome was possible in the first place. The shift from describing outcomes to understanding underlying weaknesses is where semantic misalignment still shows up across researchers, vendors, and defenders.
Variations in terminology also play an important role. For example, there are no universally accepted definitions of “authentication” and “authorization,” and people often confuse or conflate the two conceptually. Phrases such as “memory leak” have multiple distinct definitions. Using CWEs in vulnerability reporting helps remove this vagueness, but how best to achieve full alignment across so many different participants and use cases remains an open question. This problem appears to be prevalent across many disciplines, not just cybersecurity.
Language matters because it drives behavior. If we continue to frame cybersecurity purely in terms of vulnerabilities to fix and attacks to defend against, we stay locked in a reactive cycle. Framing issues as weaknesses to identify and eliminate earlier in the lifecycle changes the conversation. It prioritizes prevention, informs better design decisions, and ultimately reduces the volume of vulnerabilities we have to manage downstream.