AI will drive purchases this year, but not without questions

AI is moving into security operations, but CISOs are approaching it with a mix of optimism and realism. A new report from Arctic Wolf shows that most organizations are exploring or adopting AI-driven tools, yet many still see risks that need management.

Adoption trends

The report found that 73 percent of organizations have already introduced some form of AI into their cybersecurity programs. Financial services leads adoption, with more than 80 percent using AI, while utilities remain hesitant. Nearly all respondents, 99 percent, said AI will influence at least some of their cybersecurity purchasing decisions over the next year. On average, about 39 percent of security technology purchases now hinge on AI capabilities being part of the solution.

Adam Marrè, CISO at Arctic Wolf, noted that many leaders want vendors to take the lead in integrating AI. “Many security leaders understand that they are already limited in cybersecurity talent and expertise. Therefore, directly implementing AI into their security programs themselves seems like a difficult and time-consuming task. That is why so many are looking to their trusted vendors to lead the way by introducing AI into existing offerings.”

Breach response and automation

Improving breach response remains a top priority. The report shows that 97 percent of organizations are actively looking for ways to strengthen their threat response readiness. About half are already exploring AI-informed technology that could speed containment and improve outcomes.

Interest in using AI for automation is also strong. Nearly three-quarters of organizations said they plan to use AI to help deliver 24×7 security operations coverage. Many expect AI to supplement smaller teams by handling tier 1 tasks such as detection and initial triage. Other priorities include improving threat prediction and prevention, reducing alert fatigue, and cutting back on the repetitive work that contributes to staff burnout.

Trust and concerns

Most security leaders report some level of trust in AI. Only a small percentage expressed little or no trust at all. Two-thirds believe AI will positively impact their security programs within the next year, and almost 80 percent think it will improve their ability to detect new or elusive threats.

But optimism is tempered by concerns. A third of organizations worry about data privacy when using AI, especially with generative models that could mishandle sensitive information. Cost is another barrier, with 30 percent of leaders struggling to justify the investment. A lack of policies around safe use and a shortage of skills to manage AI tools were also noted as challenges.

Dean Teffer, VP of Artificial Intelligence at Arctic Wolf, pointed out that the risks need to be part of any adoption plan. “AI’s potential for change is one that can’t be understated; but we can’t forget that despite all of its promises, at its core AI is still a technology. We have to be cautious any time we introduce new technology into an environment, since it will always carry a certain level of risk. In this case, that risk is associated with potential privacy concerns and data leakage when AI is implemented without necessary acceptable use policies and an effective governance strategy.”

Human and AI collaboration

More than two-thirds of respondents said AI tools will still require substantial human input and oversight. Analysts are expected to shift from repetitive work to validating alerts, threat hunting, and higher-tier investigations. Upskilling staff is seen as an important part of the transition.
