
How AI Agents Are Redefining the Insider Risk Threat Model


Access Management, Agentic AI, Artificial Intelligence & Machine Learning

Proofpoint CEO Sumit Dhawan on Applying Human Insider Risk Safeguards to AI Agents

Sumit Dhawan, CEO, Proofpoint

Artificial intelligence agents behave like humans and carry the same risk profile. They operate non-deterministically and can be manipulated through prompt engineering, so they require a purpose-built integrity framework to govern their behavior, said Sumit Dhawan, CEO at Proofpoint.


Traditional security controls were designed for Boolean, pattern-based logic, but AI agents don’t follow predictable paths. That makes behavioral drift detection the operative defense model. Dhawan compared this directly to enterprise insider risk programs: When a human’s behavior deviates from the expected pattern, controls escalate. AI agents demand the same mechanism.
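The drift-based model Dhawan describes can be sketched in a few lines. Everything below is an illustrative assumption, not a description of Proofpoint's or any vendor's product: the action names, the use of KL divergence as the distance measure, and the escalation threshold are all hypothetical choices made for the sake of the example.

```python
from collections import Counter
import math

# Toy behavioral-drift detector for an AI agent (illustrative only).
# The idea mirrors insider-risk programs: build a baseline of normal
# behavior, then escalate when observed behavior deviates from it.

def action_distribution(actions):
    """Normalize a list of agent actions into a probability distribution."""
    counts = Counter(actions)
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}

def drift_score(baseline, observed, eps=1e-9):
    """KL divergence of observed behavior from the baseline distribution."""
    score = 0.0
    for action, p in observed.items():
        q = baseline.get(action, eps)  # unseen actions get a tiny floor
        score += p * math.log(p / q)
    return score

def escalate_if_drifting(baseline_actions, recent_actions, threshold=1.0):
    """Return True when recent behavior drifts past the threshold."""
    baseline = action_distribution(baseline_actions)
    observed = action_distribution(recent_actions)
    return drift_score(baseline, observed) > threshold

# Example: an agent that normally reads and summarizes documents
# suddenly starts mass-exporting data.
baseline = ["read_doc"] * 80 + ["summarize"] * 19 + ["export"] * 1
recent = ["export"] * 10 + ["read_doc"] * 2
print(escalate_if_drifting(baseline, recent))  # prints True
```

A production safeguard layer would of course track far richer signals (resources touched, time of day, prompt provenance), but the escalation logic follows the same pattern: a statistical baseline, a distance measure, and a policy trigger.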

“With AI, there is no code of conduct. There’s no form of integrity, per se – and it’s something that has to be coded up into a technology layer, which is an AI behavior safeguard layer,” he said.

In this video interview with Information Security Media Group at RSAC Conference 2026, Dhawan discussed these and related issues.

Dhawan leads Proofpoint’s human-centric security strategy and growth, focusing on protecting people and data from evolving threats. He brings deep operating experience from VMware, Instart and Citrix, with a track record in scaling enterprise software businesses, go-to-market execution and transformation.




