Meta Launches LlamaFirewall Framework to Stop AI Jailbreaks, Injections, and Insecure Code

Apr 30, 2025Ravie LakshmananSecure Coding / Vulnerability

Meta on Tuesday announced LlamaFirewall, an open-source framework designed to secure artificial intelligence (AI) systems against emerging cyber risks such as prompt injection, jailbreaks, and insecure code generation.

The framework, the company said, incorporates three guardrails: PromptGuard 2, Agent Alignment Checks, and CodeShield.

PromptGuard 2 is designed to detect direct jailbreak and prompt injection attempts in real time, while Agent Alignment Checks inspects agent reasoning for possible goal hijacking and indirect prompt injection scenarios.

CodeShield is an online static analysis engine that seeks to prevent AI agents from generating insecure or dangerous code.

“LlamaFirewall is built to serve as a flexible, real-time guardrail framework for securing LLM-powered applications,” the company said in a GitHub description of the project.

“Its architecture is modular, enabling security teams and developers to compose layered defenses that span from raw input ingestion to final output actions – across simple chat models and complex autonomous agents.”
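To make the layered, modular design concrete, here is a minimal conceptual sketch in Python. This is not LlamaFirewall's actual API: the scanner functions, detection patterns, and `run_pipeline` helper are hypothetical stand-ins showing how independent per-layer scanners (an injection detector in the spirit of PromptGuard 2, a static code check in the spirit of CodeShield) might compose into a single guardrail pipeline.

```python
import re
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ScanResult:
    scanner: str
    blocked: bool
    reason: str = ""

# Each scanner inspects a piece of text and returns a block/allow verdict.
Scanner = Callable[[str], ScanResult]

def prompt_injection_scanner(text: str) -> ScanResult:
    # Toy stand-in for a jailbreak/injection detector like PromptGuard 2.
    patterns = [r"ignore (all )?previous instructions", r"you are now in developer mode"]
    for p in patterns:
        if re.search(p, text, re.IGNORECASE):
            return ScanResult("prompt_guard", True, f"matched pattern: {p}")
    return ScanResult("prompt_guard", False)

def insecure_code_scanner(text: str) -> ScanResult:
    # Toy stand-in for a static-analysis pass like CodeShield.
    risky_calls = ["eval(", "os.system(", "pickle.loads("]
    for token in risky_calls:
        if token in text:
            return ScanResult("code_shield", True, f"risky call: {token}")
    return ScanResult("code_shield", False)

def run_pipeline(text: str, scanners: List[Scanner]) -> List[ScanResult]:
    # Layered defense: scanners run in order; the first block verdict
    # halts processing before the text reaches the next stage.
    results: List[ScanResult] = []
    for scan in scanners:
        result = scan(text)
        results.append(result)
        if result.blocked:
            break
    return results
```

In a real deployment, each layer would be a far more capable model or analyzer, but the composition pattern is the same: inputs and outputs pass through a configurable stack of independent checks, any of which can stop the flow.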

Alongside LlamaFirewall, Meta has made available updated versions of LlamaGuard and CyberSecEval to better detect various common types of violating content and measure the defensive cybersecurity capabilities of AI systems, respectively.

CyberSecEval 4 also includes a new benchmark called AutoPatchBench, which is engineered to evaluate the ability of a large language model (LLM) agent to automatically repair a wide range of C/C++ vulnerabilities identified through fuzzing, an approach known as AI-powered patching.

“AutoPatchBench provides a standardized evaluation framework for assessing the effectiveness of AI-assisted vulnerability repair tools,” the company said. “This benchmark aims to facilitate a comprehensive understanding of the capabilities and limitations of various AI-driven approaches to repairing fuzzing-found bugs.”

Lastly, Meta has launched a new program dubbed Llama for Defenders to help partner organizations and AI developers access open, early-access, and closed AI solutions to address specific security challenges, such as detecting AI-generated content used in scams, fraud, and phishing attacks.

The announcements come as WhatsApp previewed a new technology called Private Processing that allows users to harness AI features without compromising their privacy, by offloading AI requests to a secure, confidential computing environment.

“We’re working with the security community to audit and improve our architecture and will continue to build and strengthen Private Processing in the open, in collaboration with researchers, before we launch it in product,” Meta said.
