How AI agents could revolutionize the SOC — with human help

NATIONAL HARBOR, Md. — Artificial intelligence is poised to transform the work of security operations centers, but experts say humans will always need to be involved in managing companies’ responses to cybersecurity incidents — as well as policing the autonomous systems that increasingly assist them.

AI agents can automate many repetitive and complex SOC tasks, but for the foreseeable future, they will have significant limitations, including an inability to replicate unique human knowledge or understand bespoke network configurations, according to experts who presented here at the Gartner Security and Risk Management Summit.

The promise of AI dominated this year’s Gartner conference, where experts shared how the technology could make cyber defenders’ jobs much easier, even if it has a long way to go before it can replace experienced humans in a SOC.

“As the speed, the sophistication, [and] the scale of the attacks [go] up, we can use agentic AI to help us tackle those challenges,” Hammad Rajjoub, director of technical product marketing at Microsoft, said during his presentation. “What’s better to defend at machine speed than AI itself?”

A silent partner

AI can already help SOC staffers with several important tasks, according to security experts who presented here. Pete Shoard, a vice president analyst at Gartner, said AI can help people locate information by automating complex search queries, write code without “having to learn the language” and summarize incident reports for non-technical executives.

But automating these activities carries risks if it’s mishandled, Shoard said. SOCs should review AI-written code with the same “robust testing processes” applied to human-written code, he said, and employees must review AI summaries so they don’t “end up sending nonsense up the chain” to “somebody who’s going to make a decision” based on it.

In the future, AI might even be able to automate the investigation and remediation of intrusions.

Most AI SOC startups currently focus on using AI to analyze alerts “and reduce the cognitive burden on humans,” said Anton Chuvakin, a senior staff security consultant in the Office of the CISO at Google Cloud. “This is very worthwhile,” he added, but “it’s also a very narrow take on the problem.” In the far future, he said, “I still want the machines to remediate, resolve certain issues.”

Some IT professionals might “freak out” about the prospect of letting AI loose on their painstakingly customized computer systems, Chuvakin said, but they should prepare for a future that looks something like that.

“Imagine a future where you have an agent that’s working on your behalf, and it’s able to protect and defend even before an attack becomes possible in your environment,” Microsoft’s Rajjoub said during his agentic AI presentation.

Rajjoub predicted that within six months, AI agents will be able to reason on their own and automatically deploy various tools on a network to achieve their human operators’ specified goals. Within a year and a half, he said, these agents will be able to improve and modify themselves in pursuit of those goals. And within two years, he predicted, agents will be able to modify the specific instructions they’ve been given in order to achieve the broader goals they’ve been assigned.

“It’s not two, three, four, five, six years from now,” he said. “We’re literally talking about weeks and months.”

Limitations and risks

But as AI agents take on more tasks, monitoring them will become more complicated.

“Do we really think our employees can keep up with the pace of how agents are being built?” said Dennis Xu, a research vice president at Gartner. “It’s likely that we are never going to be able to catch up.”

He proposed a bold solution: “We need to use agents to monitor agents. But that’s further out on the time horizon.”

Many analysts urged caution in deploying AI in the SOC. Chuvakin described several categories of tasks, some “plausible but risky” and others that he would “flat-out refuse” to believe AI could accomplish in the near- to medium-term future.

