4 ways to prepare your SOC for agentic AI

When acting on an AI tool’s recommendation, analysts must understand what questions the agent asked, which data sources it queried, and what evidence informed its decision, according to Dov Yoran, co-founder and CEO of Command Zero. From there, they need to be able to pivot to additional data sources, pursue new artifacts, and extend the investigative timeline as needed. “Junior analysts who might not know how to start an investigation from scratch can become effective by learning how to extend and refine what the agent produced,” Yoran says. “It’s a different skill set from traditional SOC work, and in many ways, a more accessible one.”
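To make that concrete, here is a minimal, hypothetical sketch in Python of what an agent's investigation trail might look like and how an analyst could extend it with a new pivot. The class and field names (AgentStep, Investigation, pivot) are illustrative assumptions, not taken from any specific product.

```python
# Hypothetical sketch of an agent-led investigation record an analyst can review and extend.
from dataclasses import dataclass, field

@dataclass
class AgentStep:
    question: str            # what the agent asked
    data_source: str         # which source it queried (EDR, IdP, proxy logs, ...)
    evidence: list[str]      # artifacts or findings returned
    performed_by: str = "agent"

@dataclass
class Investigation:
    alert_id: str
    steps: list[AgentStep] = field(default_factory=list)

    def summarize(self) -> None:
        """Print the trail of questions, sources, and evidence for analyst review."""
        for i, step in enumerate(self.steps, 1):
            print(f"{i}. [{step.performed_by}] {step.question} -> {step.data_source}")
            for item in step.evidence:
                print(f"     evidence: {item}")

    def pivot(self, question: str, data_source: str, evidence: list[str]) -> None:
        """Analyst extends the agent's work with a new line of inquiry."""
        self.steps.append(AgentStep(question, data_source, evidence, performed_by="analyst"))

# Example: review what the agent did, then pivot to an additional data source.
inv = Investigation(alert_id="ALRT-1042")
inv.steps.append(AgentStep(
    question="Has this host contacted the flagged domain before?",
    data_source="proxy_logs",
    evidence=["3 connections in the last 24h"],
))
inv.pivot(
    question="Did the same user authenticate from a new location?",
    data_source="identity_provider_logs",
    evidence=["MFA prompt approved from unfamiliar ASN"],
)
inv.summarize()
```

The point of a structure like this is that the junior analyst never starts from a blank page: the agent's questions, sources, and evidence are visible, and the analyst's contribution is the pivot.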

In the SOC of the future, analysts must also act as adversarial reviewers of AI-driven conclusions. AI systems can hallucinate, carry training-data bias, and introduce other weaknesses, and they are also susceptible to adversarial manipulation. Analysts need to recognize these risks to keep decisions grounded and defensible, says Ensar Seker, CISO at SOCRadar. “Analysts need to be trained less as button-pushers and more as adversarial reviewers of AI output. That means understanding how models reason, where they fail, how bias and data gaps surface, and how to interrogate confidence levels and assumptions. The goal isn’t to ‘trust AI faster,’ but to develop the instinct to ask: What would make this conclusion wrong?” Seker says.
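One way to operationalize that instinct is simple gating logic that forces a challenge before an AI verdict is accepted. The sketch below is a hypothetical example; the thresholds, field names, and Verdict structure are assumptions for illustration, not a vendor API.

```python
# Hypothetical sketch: gate an AI agent's verdict behind adversarial-review checks.
from dataclasses import dataclass

@dataclass
class Verdict:
    conclusion: str
    confidence: float                  # model-reported confidence, 0.0-1.0
    evidence_count: int                # distinct supporting artifacts
    unverified_assumptions: list[str]  # claims the agent did not confirm

def needs_adversarial_review(v: Verdict) -> list[str]:
    """Return reasons the conclusion could be wrong and should be challenged."""
    reasons = []
    if v.confidence < 0.8:
        reasons.append("low model confidence")
    if v.evidence_count < 2:
        reasons.append("conclusion rests on a single artifact")
    if v.unverified_assumptions:
        reasons.append("unverified assumptions: " + ", ".join(v.unverified_assumptions))
    return reasons

verdict = Verdict(
    conclusion="Benign: scheduled admin activity",
    confidence=0.62,
    evidence_count=1,
    unverified_assumptions=["host is in the admin jump-box group"],
)
for reason in needs_adversarial_review(verdict):
    print("challenge:", reason)
```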

Analysts will also play a critical role in feeding organization-specific context into AI-driven workflows. Without that context, agents risk missing threats, amplifying noise, or triggering risky actions based on incomplete information. SOC leaders need to remember that “AI agents are only as smart as the context they have access to,” Yoran says. Analysts must learn to annotate identities, maintain watch lists, document recurring false-positive patterns, and build enrichment layers that strengthen future investigations, he says. “This is knowledge work, not data work.”
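A minimal sketch of what such a context layer might look like, assuming a simple in-memory store; the names (ContextStore, annotate, enrich) and the example entities are illustrative, and a real deployment would back this with a shared, versioned database the agent can query.

```python
# Hypothetical sketch: an organization-specific context layer an agent could consult before acting.
from dataclasses import dataclass, field

@dataclass
class ContextStore:
    identity_notes: dict[str, str] = field(default_factory=dict)  # analyst annotations
    watch_list: set[str] = field(default_factory=set)             # high-risk entities
    known_false_positives: set[str] = field(default_factory=set)  # recurring benign patterns

    def annotate(self, identity: str, note: str) -> None:
        self.identity_notes[identity] = note

    def enrich(self, entity: str, pattern: str) -> dict:
        """Attach organizational context to an alert entity for the agent to use."""
        return {
            "entity": entity,
            "note": self.identity_notes.get(entity, "no analyst notes"),
            "on_watch_list": entity in self.watch_list,
            "known_false_positive": pattern in self.known_false_positives,
        }

ctx = ContextStore()
ctx.annotate("svc-backup", "service account; runs nightly file copies to DR site")
ctx.known_false_positives.add("bulk SMB reads from svc-backup at 02:00")
ctx.watch_list.add("jdoe-contractor")

print(ctx.enrich("svc-backup", "bulk SMB reads from svc-backup at 02:00"))
print(ctx.enrich("jdoe-contractor", "login from new device"))
```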
