Hackers Can Exploit Default ServiceNow AI Assistants Configurations to Launch Prompt Injection Attacks

Hackers Can Exploit Default ServiceNow AI Assistants Configurations to Launch Prompt Injection Attacks

A dangerous vulnerability in ServiceNow’s Now Assist AI platform allows attackers to execute second-order prompt injection attacks via default agent configuration settings.

The flaw enables unauthorized actions, including data theft, privilege escalation, and exfiltration of external email, even with ServiceNow’s built-in prompt injection protection enabled.

The vulnerability stems from three default configurations that, when combined, create a dangerous attack surface. ServiceNow Assist agents are automatically assigned to the same team and marked as discoverable by default.

This enables inter-agent communication through the AiA ReAct Engine and Orchestrator components, which manage information flow and task delegation between agents.

ServiceNow AI Prompt Injection Attacks

Attackers exploit this by injecting malicious prompts into data fields that other agents will read when a safe agent encounters the compromised data.

It can be tricked into recruiting more powerful agents to execute unauthorized tasks on behalf of the highly privileged user who triggered the initial interaction.

google

In proof-of-concept demonstrations, Appomni researchers successfully performed Create, Read, Update, and Delete (CRUD) operations.

On sensitive records and sent external emails containing confidential data, all while avoiding existing security protections.

The attack succeeds primarily because agents execute with the privileges of the user who initiated the interaction, not the user who inserted the malicious prompt.

A low-privileged attacker can therefore leverage administrative agents to bypass access controls and access data they would otherwise be unable to reach.

Appomni advises organizations using ServiceNow to immediately implement these protective measures: Enable Supervised Execution Mode: Configure powerful agents performing CRUD operations or email sending to require human approval before executing actions.

Disable Autonomous Overrides: Ensure the sn_aia.The enable_usecase_tool_execution_mode_override system property remains set to false.

Segment Agent Teams: Separate agents into distinct teams based on function, preventing low-privilege agents from accessing powerful ones.

Monitor Agent Behavior: Deploy real-time monitoring solutions to detect suspicious agent interactions and deviations from expected workflows.

ServiceNow confirmed that these behaviors align with the intended functionality but updated the documentation to clarify configuration risks. Security teams must prioritize auditing their AI agent deployments immediately to prevent exploitation of these default settings.

Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.

googlenews



Source link