One in three security teams trust AI to act autonomously
While AI adoption is widespread, its impact on productivity, trust, and team structure varies sharply by role and region, according to Exabeam.
The findings confirm a critical divide: 71% of executives believe AI has significantly improved productivity across their security teams, yet only 22% of analysts — those closest to the tools — agree. This perception gap reveals more than a difference in opinion; it underscores a deeper issue with operational effectiveness and trust.
Executives are optimistic about AI
Executives often focus on AI’s potential to reduce costs, streamline operations, and enhance strategy. But analysts on the front lines report a very different experience — one shaped by false positives, increased alert fatigue, and the ongoing need for human oversight.
For many, AI hasn’t eliminated manual work; it’s simply reshaped it, often without reducing the burden. This suggests that some organizations may be overestimating the maturity and reliability of AI tools and underestimating the complexity of real-world implementation.
“There’s no shortage of AI hype in cybersecurity — but ask the people actually using the tools, and the story falls apart,” said Steve Wilson, Chief AI and Product Officer at Exabeam. “Analysts are stuck managing tools that promise autonomy but constantly need tuning and supervision. Agentic AI flips that script — it doesn’t wait for instructions, it takes action, cuts through the noise, and moves investigations forward without dragging teams down.”
AI impact in core security operations
While the findings reveal a difference in perception, they also demonstrate AI’s positive impact, most consistently in threat detection, investigation, and response (TDIR).
56% of security teams report that AI has improved productivity in these areas by offloading repetitive analysis, reducing alert fatigue, and improving time to insight. AI-driven solutions are strengthening security operations with enhanced anomaly detection, faster mean time to detect (MTTD), and more effective user behavior analytics.
Still, trust in AI autonomy remains low: 29% of teams trust AI to act on its own, and among analysts that figure drops to 10%. By contrast, 38% of executives are willing to let AI act independently in cyber defense.
The industry is aligned on one thing: performance precedes trust. In security operations, organizations aren't looking to hand over the reins; they expect AI to prove it can operate accurately at a scale and speed human analysts cannot.
By consistently delivering accurate outcomes and automating tedious workflows, AI can become a force multiplier for analysts, enabling faster, smarter threat detection and response.
Security teams are restructuring
AI adoption is driving structural shifts in the security workforce. More than half of surveyed organizations have restructured their teams due to AI implementation. While 37% report workforce reductions tied to automation, 18% are expanding hiring for roles focused on AI governance, automation oversight, and data protection.
These changes reflect a new operational model for modern security operations centers (SOCs), one where agentic AI supports faster decisions, deeper investigations, and higher-value human work.
Organizations in India, the Middle East, Turkey, and Africa (IMETA) report the highest productivity gains (81%), followed by the United Kingdom, Ireland, and Europe (UKIE) at 60% and Asia Pacific and Japan (APJ) at 46%. In contrast, only 44% of North American organizations report similar improvements.
As AI continues to reshape the cybersecurity landscape, organizations must reconcile leadership ambition with operational execution. Those looking to close the gap between vision and reality can consider adopting agentic AI for its proactive, action-based capabilities. Successful strategies will be defined by their ability to align AI capabilities with front-line needs, involve analysts in deployment decisions, and prioritize outcomes over hype.