There are a few universal rules that apply in the ongoing cybersecurity arms race between attackers and security companies.
The first, and most important, rule is that technological evolution that boosts the ability of threat actors to execute attacks also enhances the effectiveness of cybersecurity tools. The last three years of AI and machine learning development prove the point: as attackers leverage AI to create new ways to infiltrate and exploit their targets, those same models are making threat detection and prevention more efficient than ever.
Agentic AI, or autonomous AI, is widely considered the next frontier in cybersecurity because it enables tools that adapt, learn and execute on their own, without human input or intervention. But despite its promise, critical challenges must be overcome before agentic AI can perform large-scale, fully autonomous cyberattacks or serve as the basis of a fully autonomous Security Operations Center.
Here’s why. Virtually every business, whether it has 10 or 10,000 employees, requires a certain level of cybersecurity resilience in 2025, because cyber risk is synonymous with business risk. That said, each organization’s IT infrastructure is vastly different, from the size of its attack surface to the specific tools, configurations and security controls it uses. For an adversary to leverage agentic AI to successfully attack one of these security environments, the AI would need intimate knowledge of that environment, a tall task for threat actors to accomplish with their models.
Right now, AI models are not sophisticated enough to carry out precision-targeted attacks at scale without human oversight. Agentic AI could one day enable attackers to launch ransomware-as-a-service attacks without the need for a service provider, but for now, most AI-driven attacks are unsophisticated, relying on the “spray and pray” approach: launching broad, untargeted campaigns in hopes of finding vulnerabilities. These kinds of attacks aren’t going anywhere soon, and in the near future threat actors will likely only leverage agentic AI to automate social engineering campaigns or scan networks for vulnerabilities. It’s also not far-fetched to imagine agentic AI enhancing image- or voice-cloning attacks meant to fool the target into believing they’re talking to a real organization or individual.
The good news is that agentic AI is poised to follow the aforementioned rule of technological advancement in cybersecurity: what’s good for attackers is also good for defenders. Agentic AI will be used to enhance threat hunting, augmenting security analysts by allowing them to focus on triaging only the most pressing threats. We’re likely to see security vendors apply agentic AI in relatively narrow scenarios, such as unearthing bank fraud or government fraud. That’s because the most effective implementation of agentic AI for security professionals is in using it to find Indicators of Compromise (IOCs) quickly and efficiently.
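To make the IOC-hunting use case concrete, here is a minimal Python sketch of the kind of repetitive indicator-matching work such a tool would automate for analysts. The indicator values and event fields below are illustrative placeholders invented for this example, not real threat intelligence or any vendor’s actual detection logic.

```python
# Placeholder indicators of compromise, keyed by field type.
# Real deployments would ingest these from a threat-intel feed.
KNOWN_IOCS = {
    "ip": {"203.0.113.7", "198.51.100.23"},        # documentation-range IPs
    "hash": {"d41d8cd98f00b204e9800998ecf8427e"},  # placeholder file hash
    "domain": {"malicious.example"},               # placeholder domain
}

def find_iocs(events):
    """Return (event, field, value) tuples for every field that
    matches a known indicator."""
    hits = []
    for event in events:
        for field, value in event.items():
            if value in KNOWN_IOCS.get(field, set()):
                hits.append((event, field, value))
    return hits

# Two hypothetical log events: one benign, one matching two indicators.
events = [
    {"ip": "192.0.2.1", "domain": "example.com"},
    {"ip": "203.0.113.7", "domain": "malicious.example"},
]

for event, field, value in find_iocs(events):
    print(f"IOC hit: {field}={value}")
```

The lookup itself is trivial; the value an agentic system would add is in gathering the events, enriching the hits and deciding which ones merit an analyst’s attention.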
We are far from being able to automate entire Security Operations Centers (SOCs) with agentic AI, however, primarily because of the sheer amount of high-quality data needed to train an AI model to run an entire SOC by itself. Humans will still be needed “in the loop” to triage security incidents and bring creativity to cyber defense for some time, because poor training data could produce an overwhelming number of false positives that undermines the effectiveness of a fully autonomous SOC.
Ultimately, while the hype surrounding agentic AI is understandable, the reality is that we are still in the early stages of its deployment. AI-driven cyberattacks are likely to remain unsophisticated for the time being, but as the technology matures, the stakes will rise. Organizations must stay vigilant, invest in both AI-driven defense tools and human expertise, and prepare for a future where the barriers to large-scale cyberattacks continue to fall.
About the Author
Dan Schiappa is President, Technology & Services at Arctic Wolf. In this role, Dan is responsible for driving innovation across product, engineering, security services, alliances, and business development teams to help meet demand for security operations through Arctic Wolf’s growing customer base. Before joining Arctic Wolf, Dan Schiappa was CPO with Sophos. Previously, Dan served as Senior Vice President and General Manager of the Identity and Data Protection Group at RSA, the Security Division of EMC. He has also held several GM positions at Microsoft Corporation, including Windows security, Microsoft Passport/Live ID, and Mobile Services. Prior to Microsoft, Dan was the CEO of Vingage Corporation.
Dan can be reached at https://www.linkedin.com/in/daniel-schiappa-bbb1062/