How AI and automation are reshaping security leadership


The contemporary SOC is transforming as it begins to realize the benefits of GenAI and adopt autonomous agentic AI, according to Tines.

Additionally, the promise of security automation is coming to fruition. In theory and in practice, security automation should reduce the time SOCs spend investigating and mitigating alerts. However, the tried-and-true saying still applies: cybersecurity relies on the combination of people, processes, and technology. For some time, AI and security automation have delivered gains, but there have also been occasional setbacks.

The IDC White Paper, Voice of Security 2025, surveyed over 900 security decision-makers across the United States, Europe, and Australia, finding that 60% of security teams are small, with fewer than 10 members. Despite their size, 72% report taking on more work over the past year, and an impressive 88% are meeting or exceeding their goals.

Cybersecurity is still establishing its strategies for using GenAI and agentic AI. That said, security copilots and widespread business use of LLMs have been common for a little over a year.

AI’s impact on security jobs

According to the research, security leaders are bullish on AI: 98% are embracing it, and a mere 5% believe AI will replace their jobs outright. The data also highlights security leaders’ perspectives on the value of using AI and automation to eliminate business silos, with nearly all leaders seeing the potential to connect these tools across security, IT (98%), and DevOps (97%) functions.

Security managers, who hold the least senior of the surveyed job titles, are the most concerned about AI; 14% say AI could entirely subsume their job function. Only 0.6% of executive vice presidents and senior vice presidents see AI eliminating their job function. Those in management functions are the most likely to think AI will change their jobs, though, in fairness, respondents across all job titles expect at least minor changes to their roles.

However, this enthusiasm coexists with notable concerns and frustrations: 33% of respondents are worried about the time required to train their teams on AI capabilities, while 27% cite compliance as a key blocker. Other hurdles include AI hallucinations (26%), secure AI adoption (25%), and slower-than-expected implementation (20%).


“Challenges in the cybersecurity industry are ever present and ever changing,” said Matt Muller, field CISO, Tines. “Security professionals are met with the daunting task to integrate AI across their workflows. Our research shows that security teams are stepping up. However, organizations must take a flexible approach to automation and AI to ensure it remains secure and effective.”

One-third of respondents are satisfied with their team’s tools, but many see potential for improvement. 55% of security teams typically manage 20 to 49 tools, while 23% use fewer than 20, and 22% use 50 to 99.

Regardless of the number of tools, 24% of respondents struggle with poor integration, while 35% feel their stack lacks key functionality. The challenge lies not just in having the right tools, but in ensuring they work in harmony to reduce complexity and boost performance.

“Siloed automation across departments complicates managing security programs and creates vulnerabilities, especially as less technical employees adopt these technologies,” said Christopher Kissel, research VP, Security & Trust Products, IDC Research. “The security leaders we surveyed are strongly in favor of embracing shared automation between security and closely-knit business units like IT and DevOps to improve collaboration, strengthen security posture, streamline operations, and reduce complexity.”


Security leaders on AI and automation

If security leaders gained time through automation or AI, 43% would use it to focus more on security policy development, 42% on training and development, and 38% on incident response planning.

83% of security leaders report having a healthy work-life balance, but only 72% can perform their jobs without working extended hours, suggesting that such sacrifices have become an accepted part of the role for many.

Larger organizations lead in extensive AI adoption across multiple areas, while smaller and mid-sized organizations are still focused on implementation and exploring use cases. This reflects a trend where AI maturity aligns with organizational size and resources. The costs associated with GenAI are part of the conundrum: companies are unsure whether to acquire GenAI capabilities through pay-as-you-go tokens or as a larger GenAI suite.

Returns from real-world implementations have dampened the initial unbridled enthusiasm for AI. Return on investment has been hard to prove, as implementing AI for business use cases is not always an intuitive process. For cybersecurity, none of this is particularly new: cybersecurity leaders have gone through similar cycles with machine learning and user behavior analytics.

While AI can help draw meaningful insights from the huge amount of available data, considerable human intervention is required to realize AI’s benefits. In IT and its close cousin cybersecurity, new technology is often met with regulatory concerns, training challenges, and worries about exposure. These dynamics remain for GenAI and agentic AI.

DOWNLOAD: Voice of Security 2025


