Proofpoint CEO Sumit Dhawan on Applying Human Insider Risk Safeguards to AI Agents
Artificial intelligence agents behave like humans and carry the same risk profile. They operate non-deterministically and can be manipulated through prompt engineering, so they require a purpose-built integrity framework to govern their behavior, said Sumit Dhawan, CEO at Proofpoint.
Traditional security controls were designed for Boolean, pattern-based logic, but AI agents don't follow predictable paths. That makes behavioral drift detection the operative defense model. Dhawan compared this directly to enterprise insider risk programs: when a person's behavior deviates from the expected pattern, controls escalate. AI agents demand the same mechanism.
“With AI, there is no code of conduct. There’s no form of integrity, per se – and it’s something that has to be coded up into a technology layer, which is an AI behavior safeguard layer,” he said.
In this video interview with Information Security Media Group at RSAC Conference 2026, Dhawan also discussed:
- Why CISOs are bifurcating into proactive and wait-and-see camps on AI safeguard implementation;
- How AI-driven threats have forced cybersecurity vendors to move from traditional ML to language model-based detection;
- How Proofpoint’s AI security platform extends its human insider risk model to AI agents.
Dhawan leads Proofpoint’s human-centric security strategy and growth, focusing on protecting people and data from evolving threats. He brings deep operating experience from VMware, Instart and Citrix, with a track record of scaling enterprise software businesses, go-to-market execution and transformation.