Malicious web content can be used to manipulate, deceive, and exploit autonomous AI agents navigating the internet, Google DeepMind researchers show.
The researchers have identified six types of attacks against AI agents that can be mounted via web content to inject malicious context and trigger unexpected behavior.
Web content, they explain in a research paper, allows attackers to set up ‘AI Agent Traps’ that weaponize the agents’ capabilities against themselves, allowing attackers to promote products, exfiltrate data, or disseminate information at scale.
Designed to misdirect or exploit interacting AI agents, these content elements can be embedded in web pages or other digital resources and can be “calibrated to an agent’s instruction-following, tool-chaining, and goal-prioritization abilities”, the researchers say.
The six classes of attacks uncovered by Google DeepMind have been included in a framework that categorizes content injection, semantic manipulation, cognitive state, behavioral control, systemic, and human-in-the-loop traps.
The traps exploit the gap between human-visible rendering and machine-parsed content to inject hidden commands, manipulate input data distributions to corrupt the agent’s reasoning, corrupt the agent’s long-term memory, target instruction-following capabilities using explicit commands, trigger macro-level failures using crafted inputs, and exploit cognitive biases to turn the agent against the human overseer.
When it comes to content injection, attackers can use instructions hidden within HTML comments or metadata attributes, can dynamically inject traps via JavaScript or database calls, or can hide traps using steganography or the syntax of formatting languages.
Semantic manipulation traps rely on carefully selected language to manipulate the agent into cognitive biases, target the agent’s verification mechanisms that filter harmful or misaligned outputs, or feed descriptions of the agent’s personality back to it to change its behavior.
To corrupt the agent’s long-term memory, cognitive state traps poison the external sources used by the agent, inject data into internal stores such as persistent logs, or rely on crafted environmental interactions to alter an agent’s policy.
Behavioral control traps aim to exploit instruction-following capabilities through jailbreaks embedded in external resources, coerce the agent to leak privileged information via untrusted input, or coerce the agent into spawning compromised sub-agents that operate with the agent’s privileges but serve the attacker’s interests.
Systemic traps target the aggregate behavior of multiple agents running in the same environment to weaponize inter-agent dynamics, such as homogeneity, sequential contingency, behavior synchronization, and collaboration. An attacker can also use pseudonymous identities to subvert a networked system’s trust assumptions and consensus processes.
Human-in-the-loop traps, the Google DeepMind researchers say, could be used to commandeer the agent to attack the human user. Invisible prompt injections, for example, can be used to trick the agent into repeating ransomware commands as remediation instructions.
“Mitigating the threat of agent traps necessitates navigating a complex and evolving adversarial landscape. These traps pose at least three interrelated challenges: detection, attribution, and adaptation,” the researchers note.
Their proposed solutions include technical defenses, such as hardening the underlying model through training data augmentation and deploying runtime defenses, improving the hygiene of the digital ecosystem, establishing content governance frameworks, and creating standard benchmarks to identify these threats.
“The effort to secure agents against environmental manipulation is a foundational challenge, requiring sustained collaboration between developers, security researchers, and policymakers, alongside the development of standardized evaluation benchmarks. Its resolution is a prerequisite for realizing the benefits of a trustworthy agentic ecosystem,” the researchers note.
Related: Google Addresses Vertex Security Issues After Researchers Weaponize AI Agents
Related: AI Speeds Attacks, But Identity Remains Cybersecurity’s Weakest Link
Related: Why Agentic AI Systems Need Better Governance – Lessons from OpenClaw
Related: Shadow AI Risk: How SaaS Apps Are Quietly Enabling Massive Breaches

