AI in SecOps: How AI is Impacting Red and Blue Team Operations


Integrating AI into SOCs

The integration of AI into security operations centers (SOCs) and its impact on the workforce are pivotal to successful AI adoption and trust building. According to the survey data, AI is significantly influencing security operations and reshaping roles within organizations: approximately 66% of applicable respondents indicated they are using AI in their SOCs, underscoring the technology's rapid growth in this area of security.

AI’s effectiveness in the SOC is further demonstrated by its ability to automate tasks that might otherwise consume an inordinate amount of analyst time. A whopping 82% of respondents found AI useful for improving threat detection, an expected result given how readily AI can assist in analyzing adversary tactics, techniques, and procedures (TTPs) and crafting associated detections.
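To make the TTP-to-detection workflow concrete, here is a minimal sketch of how a detection engineer might template a request to an LLM to draft a rule for a given ATT&CK technique. The function name, technique, and log source are illustrative assumptions, not details from the survey:

```python
# Illustrative sketch only: a prompt template an analyst might use to ask
# an LLM to draft a detection rule for a given ATT&CK technique.
# build_detection_prompt and its arguments are hypothetical examples.

def build_detection_prompt(technique_id: str, technique_name: str, log_source: str) -> str:
    """Assemble an LLM prompt asking for a draft Sigma-style detection rule."""
    return (
        f"You are a detection engineer. Draft a Sigma rule that detects "
        f"ATT&CK technique {technique_id} ({technique_name}) using "
        f"{log_source} telemetry. Include the relevant event fields "
        f"and a note on likely false positives."
    )

# Example: drafting a detection for PowerShell abuse from Sysmon logs
prompt = build_detection_prompt("T1059.001", "PowerShell", "Windows Sysmon")
print(prompt)
```

In practice the resulting prompt would be sent to a model and the draft rule reviewed by a human before deployment; the value is in accelerating the first draft, not replacing the detection engineer.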

Approximately 62% of organizations are using AI to automate incident prioritization and response, reducing the tedious, repetitive work better suited to automated systems. Another strong use of the technology, cited by 56% of respondents, is supporting faster investigations through improved data correlation across multiple sources.
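As a toy illustration of automated incident prioritization, the sketch below scores incidents by severity and asset value, boosted when multiple data sources corroborate the alert. The fields, weights, and scoring formula are hypothetical assumptions for demonstration, not anything prescribed by the survey:

```python
# Illustrative sketch only: a toy rule-based scorer of the kind an
# AI-assisted SOC pipeline might use to rank incidents before analyst
# review. All fields and weights here are hypothetical.

from dataclasses import dataclass

@dataclass
class Incident:
    severity: int            # 1 (low) .. 5 (critical), from the detection rule
    asset_criticality: int   # 1 .. 5, from the asset inventory
    correlated_sources: int  # number of data sources corroborating the alert

def priority_score(incident: Incident) -> float:
    """Weight severity and asset value; multi-source correlation raises confidence."""
    base = 0.6 * incident.severity + 0.4 * incident.asset_criticality
    return base * (1 + 0.1 * incident.correlated_sources)

# A low-severity alert on a critical asset, corroborated by four sources,
# can outrank a critical alert on a low-value asset seen in only one.
queue = [
    Incident(severity=5, asset_criticality=2, correlated_sources=1),
    Incident(severity=3, asset_criticality=5, correlated_sources=4),
]
queue.sort(key=priority_score, reverse=True)
```

Real systems would learn such weightings from analyst feedback rather than hard-code them; the point is that correlation across sources, not raw severity alone, drives the ranking.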

The Security Researcher Perspective

“As an engineer doing AI development for my company, AppOmni, MLSecOps and AISecOps are 100% happening. It’s pretty difficult to turn them into production, and I do think they’re going to blow up. People should dig in and learn it because it’s going to be highly applicable to every company. In three or five years’ time, every good engineer is going to have to know how to use and implement LLM technology and other generative AI technology.”

Joseph Thacker aka @rez0_
Security Researcher specializing in AI

AI for Red and Blue Team Operations

How does the use of AI in red teaming enhance collaboration and knowledge sharing with blue teams?

Our survey found that AI is making significant inroads in both red and blue team operations. Of the 30% of respondents who use AI in their red team activities, 74% are leveraging it to simulate more sophisticated cyberattacks in training.

Approximately 62% of our respondents indicated that AI is used to create more realistic attack simulations, better preparing blue teams for emerging threats. A little over 57% found that cross-training exercises using AI tools provided better skills and learning opportunities across red and blue team activities.

Other notable areas include a deeper understanding of threats and vulnerabilities (52%) and automated sharing of attack insights with blue teams for faster feedback (50%). We cannot overstate this: Red teams exist to make blue teams stronger. AI-positive integrations between red and blue team activities strengthen the organization’s overall security posture and encourage adoption of AI technologies. However, as we noted earlier, respondents remain concerned about the complex ethical issues of using AI in offensive security operations. Furthermore, approximately 36% of respondents indicated that red teams might struggle to keep up with rapidly evolving AI defenses deployed by blue teams.

Want to learn more about how AI is impacting cybersecurity and prepare for the future of AI in SecOps? Check out the full survey results and analysis in the report: SANS 2024 AI Survey.


