CISOOnline

Security agencies draw red lines around agentic AI deployments

CISA and its international partners also recommended integrating human control and oversight into agentic AI workflows, and approving agents only for non-sensitive, low-risk tasks. To that end, the agencies suggested live monitoring during task execution, human approval for decision-making steps, and auditing after tasks complete.
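The pattern the agencies describe, monitoring, a human-approval gate on decision steps, and a post-execution audit trail, can be sketched roughly as follows. This is a minimal illustration, not code from any agent framework; the names (`Action`, `requires_approval`, `run_agent`) are assumptions made for the example.

```python
# Hedged sketch of a human-in-the-loop gate for an agent workflow.
# All names here are illustrative, not from a specific framework.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    risk: str  # "low" or "high"

def requires_approval(action: Action) -> bool:
    # High-risk, decision-making steps get a human in the loop,
    # as the advisory recommends.
    return action.risk == "high"

def run_agent(actions, approve):
    """Run actions, pausing for human approval on high-risk steps,
    and keep an audit log for review after execution."""
    audit_log = []
    for action in actions:
        if requires_approval(action) and not approve(action):
            audit_log.append((action.name, "blocked"))
            continue
        # ... the action itself would execute here ...
        audit_log.append((action.name, "executed"))
    return audit_log
```

Here `approve` stands in for whatever interactive prompt or ticketing step an organization uses; the audit log supports the after-the-fact review the advisory calls for.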

Experts agree that visibility is critical. “Security teams need continuous visibility into how agents behave, what systems they touch, and when their actions deviate from expected patterns,” said Nick Tausek, Lead Security Automation Architect at Swimlane. “Building human approval into high-risk workflows and automating containment is paramount for taking action when agent behavior crosses a line.”

Putting it all together, the advisory detailed core risk areas, from prompt injection and data exposure to tool misuse and privilege creep, urging organizations to lock down privileged access, validate inputs and outputs, monitor agent behavior, and tightly control how these systems interact with data, tools, and other services.
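Two of those controls, restricting which tools an agent may invoke and screening its inputs, can be illustrated with a short sketch. This is a simplified assumption-laden example (the allowlist contents, the banned-phrase screen, and all function names are made up for illustration); real input validation against prompt injection is a much harder problem.

```python
# Hedged sketch: least-privilege tool allowlisting plus a naive
# input screen, illustrating the advisory's "tool misuse" and
# "privilege creep" risk areas. All names are illustrative.
ALLOWED_TOOLS = {"search_docs", "read_ticket"}  # least-privilege allowlist

def validate_input(prompt: str) -> bool:
    # Naive prompt-injection screen; production systems need
    # far more than phrase matching.
    banned = ("ignore previous instructions", "disregard your rules")
    return not any(phrase in prompt.lower() for phrase in banned)

def invoke_tool(tool: str, prompt: str) -> str:
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool!r} is not on the allowlist")
    if not validate_input(prompt):
        raise ValueError("input failed injection screen")
    return f"{tool} invoked"
```

The point of the allowlist is the advisory's privilege guidance: an agent should only be able to reach the tools and data its approved task actually requires.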
