OWASP Launches AI Vulnerability Scoring Framework

A new vulnerability scoring system for AI has been announced. The initiative, called the AI Vulnerability Scoring System (AIVSS), aims to fill the gaps left by traditional models such as the Common Vulnerability Scoring System (CVSS), which was not designed to handle the complex, non-deterministic nature of modern AI technologies.

AI security expert, author, and adjunct professor Ken Huang introduced the AIVSS framework, emphasizing that while CVSS has long been a cornerstone for assessing software vulnerabilities, it fails to capture the unique threat landscape presented by agentic and autonomous AI systems. 

“The CVSS and other regular software vulnerability frameworks are not enough,” Huang explained. “These assume traditional deterministic coding. We need to deal with the non-deterministic nature of Agentic AI.” 

Huang serves as co-leader of the AIVSS project working group alongside several prominent figures in cybersecurity and academia, including Zenity Co-Founder and CTO Michael Bargury, Amazon Web Services Application Security Engineer Vineeth Sai Narajala, and Stanford University Information Security Officer Bhavya Gupta.  

Together, the group has collaborated under the Open Worldwide Application Security Project (OWASP) to develop a framework that provides a structured and measurable approach to assessing AI-related security threats. 

According to Huang, Agentic AI introduces unique challenges because of its partial autonomy. “Autonomy is not itself a vulnerability, but it does elevate risk,” he noted. The AIVSS is designed specifically to quantify those additional risk factors that emerge when AI systems make independent decisions, interact dynamically with tools, or adapt their behavior in ways that traditional software cannot. 

A New Approach to AI Vulnerability Scoring 

The AI Vulnerability Scoring System builds upon the CVSS model, introducing new parameters tailored to the dynamic nature of AI systems. The AIVSS score begins with a base CVSS score and then incorporates an agentic capabilities assessment. This additional layer accounts for autonomy, non-determinism, and tool use, factors that can amplify risk in AI-driven systems. The combined score is then divided by two and multiplied by an environmental context factor to produce a final vulnerability score. 
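The calculation described above can be sketched in a few lines of Python. This is an illustrative reconstruction from the article's description only, not the official AIVSS tool; the function and parameter names, the 0–10 scales, and the rounding are assumptions.

```python
def aivss_score(cvss_base: float,
                agentic_capabilities: float,
                environmental_factor: float) -> float:
    """Illustrative AIVSS calculation (names and scales are assumptions).

    Per the description: combine the base CVSS score with the agentic
    capabilities assessment, divide by two, then multiply by an
    environmental context factor.
    """
    combined = (cvss_base + agentic_capabilities) / 2
    return combined * environmental_factor

# Hypothetical example: a high-severity base vulnerability on a
# highly autonomous agent in a neutral environment (factor of 1.0)
print(aivss_score(8.0, 9.0, 1.0))  # → 8.5
```

In this sketch, the agentic capabilities assessment would aggregate the autonomy, non-determinism, and tool-use factors the article mentions; how AIVSS weights those sub-factors is documented on the project portal rather than assumed here.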

A dedicated portal, available at aivss.owasp.org, provides documentation, structured guides for AI risk assessment, and a scoring tool for practitioners to calculate their own AI vulnerability scores. 

Huang highlighted a critical difference between AI systems and traditional software: the fluidity of AI identities. “We cannot assume the identities used at deployment time,” he said. “With agentic AI, you need the identity to be ephemeral and dynamically assigned. If you really want to have autonomy, you have to give it the privileges it needs to finish the task.”  

Top Risks in Agentic AI Systems 

The AIVSS project has also identified the ten most severe core security risks for Agentic AI, though the team has refrained from calling it an official “Top 10” list. The current risks include: 

  • Agentic AI Tool Misuse 
  • Agent Access Control Violation 
  • Agent Cascading Failures 
  • Agent Orchestration and Multi-Agent Exploitation 
  • Agent Identity Impersonation 
  • Agent Memory and Context Manipulation 
  • Insecure Agent Critical Systems Interaction 
  • Agent Supply Chain and Dependency Attacks 
  • Agent Untraceability 
  • Agent Goal and Instruction Manipulation 

Each of these risks reflects the interconnected and compositional nature of AI systems. As the draft AIVSS document notes, “Some repetition across entries is intentional. Agentic systems are compositional and interconnected by design. To date, the most common risks such as Tool Misuse, Goal Manipulation, or Access Control Violations, often overlap or reinforce each other in cascading ways.” 

Huang provided an example of how this manifests in practice: “For tool misuse, there shouldn’t be a risk in selecting a tool. But in MCP systems, there is tool impersonation, and also insecure tool usage.” 
