A2AS framework targets prompt injection and agentic AI security risks


AI systems are now deeply embedded in business operations, and this introduces new security risks that traditional controls are not built to handle. The newly released A2AS framework is designed to protect AI agents at runtime and prevent real-world incidents like fraud, data theft, and malware spread.

A2AS-protected AI agent with BASIC security controls interacting with users, tools, and other agents

Fragmented defenses create gaps

Many companies are still figuring out how to secure AI systems, often with mixed results. Eugene Neelou, project leader for A2AS, told Help Net Security that defenses are both fragmented and fragile.

“Organizations have to choose from point solutions with no security guarantees,” he explained. “They miss prompt injections, add significant latency, or block safe behaviors.”

Some companies acknowledge these weaknesses but continue to use what is available while focusing on building teams and processes. Others take no action at all, waiting for a more reliable solution to emerge. “Many companies choose to do nothing because there is no universal, reliable, and scalable AI security technology,” Neelou said.

The A2AS framework aims to change this by providing a universal layer of protection that works natively alongside AI models. “Like HTTPS for the web, A2AS is universal and lightweight,” Neelou said. “It’s designed to be the first and often the only AI security layer developers need for AI agents and LLM-powered applications.”

Real-world incidents show the risks

Recent incidents highlight what can happen when AI agents are given autonomy without proper guardrails.

Neelou pointed to a case at Replit, a $3 billion coding startup, where an AI agent went rogue and deleted a production database belonging to another SaaS company. “Despite explicit instructions not to touch production systems, the agent executed destructive actions,” he said.

Google has faced similar issues. Its Gemini CLI assistant hallucinated file operations after a failed command, leading to the deletion of nearly all files in a project directory. In another case, attackers exploited weaknesses in the same assistant to execute arbitrary code, effectively turning it into a backdoor.

According to Neelou, A2AS could have mitigated these incidents by applying strict behavior certificates to limit agents to approved functions, using security boundaries to isolate untrusted commands, and requiring explicit approval for critical actions through code-driven policies.

Prompt injection attacks in the wild

Prompt injection attacks are another growing concern. These attacks hide malicious instructions inside everyday sources like emails, documents, and calendar invites that AI systems routinely process.

Neelou noted that major vendors have already been hit. “Microsoft Copilot agents were hijacked with emails containing malicious instructions, which allowed attackers to extract entire CRM databases,” he said.

Google’s Workspace services were also manipulated. Hidden prompts inside calendar invites and emails tricked Gemini agents into deleting events and exposing sensitive messages. In another case, a campaign known as ChatGPT Gmail ShadowLeak used invisible HTML to hijack an agent and silently extract inbox data, forwarding it to attackers.

“These attacks show how easily AI systems can be manipulated when there are no runtime protections in place,” Neelou said. A2AS addresses these risks with a layered approach that includes verifying the source of commands, sandboxing untrusted content, and embedding defensive instructions into the model’s context.

Building a safer future for AI systems

As businesses adopt AI at scale, these attacks will become more common and more damaging. Neelou believes A2AS provides a path forward by standardizing how organizations secure their AI agents.

The framework integrates with common development workflows and does not require retraining models or adding external systems that slow performance.

“A2AS gives organizations a way to secure AI agents before these incidents become the norm rather than the exception,” Neelou concluded.



Source link

About Cybernoz

Security researcher and threat analyst with expertise in malware analysis and incident response.