Enhancing OWASP Noir with AI


Enhancing Security Testing with AI Integration (LLM)

Noir v0.19.0 introduces an exciting feature: Large Language Models (LLMs) are now integrated into its security testing toolkit. By connecting to a local LLM runtime such as Ollama, Noir can perform deeper, more nuanced analysis and extract endpoints from codebases that its built-in framework detectors cannot recognize.

Setting Up AI Integration

Install and Configure Ollama

Before you can utilize the AI features in Noir, you need to install and configure Ollama:


# Download and install Ollama
# https://ollama.com/download

# Install an LLM model
ollama pull llama3

# Start the model
ollama run llama3

Ensure the Ollama server is accessible at http://localhost:11434:

http http://localhost:11434

# HTTP/1.1 200 OK
# Content-Length: 17
# Content-Type: text/plain; charset=utf-8
# Date: Fri, 31 Jan 2025 14:41:30 GMT

# Ollama is running
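You can also confirm that the llama3 model is available by listing local models through Ollama's /api/tags endpoint (the jq filter below is optional, purely for readability):

# List locally available models via the Ollama API
curl -s http://localhost:11434/api/tags | jq '.models[].name'

# "llama3:latest"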

Running Noir with AI

To integrate AI into your Noir analysis, run the following command:

noir -b . --ollama http://localhost:11434 --ollama-model llama3


Even though Noir could not recognize the framework, it was still able to analyze the codebase using the power of AI.

This command activates AI-powered analysis, significantly enhancing Noir’s capabilities, especially when dealing with unknown or unsupported frameworks.
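The AI options compose with the rest of Noir's CLI. As a rough sketch (the base-path, format, and URL flags below are Noir's standard options; the target directory and URL are placeholders), you could emit the AI-assisted results as JSON against a staging host:

# Hypothetical workflow: AI-assisted analysis with JSON output
noir -b ./unknown-framework-app \
     --ollama http://localhost:11434 \
     --ollama-model llama3 \
     -f json -u https://staging.example.com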

How Does It Work?

We have added an LLM analyzer layer, Analyzer::AI, from which individual analyzers inherit; the first concrete implementation is Analyzer::AI::Ollama. Here’s how the process unfolds:

  1. File Selection: From the complete file set, the LLM selects the files that require analysis.
  2. Analysis: The LLM then analyzes the selected files in detail to identify endpoints and related attack surface.
  3. Integration: The results are normalized into a format compatible with Noir’s existing analysis framework, merging seamlessly with the output of the other analyzers.


To keep processing efficient, the file-selection step is applied only when there are more than 10 files to analyze.
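Noir’s internal prompts aren’t shown here, but conceptually each analysis step boils down to a request against Ollama’s /api/generate endpoint. A minimal sketch (the prompt is illustrative, not Noir’s actual prompt):

# Send a single non-streaming generation request to Ollama
# (the prompt is illustrative, not Noir's internal prompt)
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "List the HTTP endpoints and parameters defined in this source file: ...",
  "stream": false
}' | jq -r '.response'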

What’s Next

Our roadmap includes expanding support to other local LLM interfaces such as LM Studio, llama.cpp, and vLLM, as well as integrating with online LLM services such as ChatGPT, Gemini, and Grok. We’re actively discussing these enhancements in issue #522.

Conclusion

The performance on my MacBook Air M1, running an 8B-parameter Llama 3 model, isn’t setting any speed records, but it’s still quite acceptable. However, as a Korean, I tend to expect things to be lightning-fast, so the wait might feel a bit frustrating. We’re committed to speeding things up through model enhancements, better hardware support, and slicker LLM interfaces.

A big thank you goes to KSG for their invaluable contributions to this project.
