Security Think Tank: Stop buying AI, start buying outcomes

‘AI-powered’ security tools are already everywhere, and 2026 will only make that more apparent. The question I hear from CISOs is never “Should we be using artificial intelligence?” so much as “How do I tell the genuine accelerator from the expensive toy?”

The hard truth is that AI has become a serious force multiplier for both sides. In my own work, I have been writing about the shift from human-operated intrusion sets to AI-orchestrated campaigns, where LLMs are effectively the primary operator and the human becomes simply a prompter and supervisor. Ignoring AI is, of course, a choice, but it is not a neutral one. It means falling behind in a race where the other team is not planning on giving you a break.

At the same time, the market response has been predictable, following the same path as cloud, XDR [Extended Detection and Response], or any number of technologies before it. Everything now carries an AI badge; it even features in the names of many of the companies. Features that used to be called analytics or correlation have been repackaged as if they were brand new, but if you buy on buzzwords you will end up paying for things you already had.

The reality is that we never see the boasts ‘computer-controlled’, ‘digital’ or ‘electronic’ anymore, and I fully expect the marketing buzz around ‘AI-powered’ to go the same way. Over the next 12 to 24 months, AI will become a baseline expectation, the silent engine that quietly rewires how technology operates without needing to be explicitly named. Any organisation that goes all-in on marketing itself as ‘AI’ is massively missing the point. The value is not derived from the technology itself, but from the utility it provides.

So what should buyers actually look for?

For me, the first filter is simple. Start with the work, not the model. Ask your own teams where they are drowning. In most organisations, that will be some mix of alert triage, investigation donkey work, vulnerability noise, and reporting. The AI that is worth paying for is the AI that gives you time and clarity back in those workflows.

There are three main categories where I see real value currently.

The first is summarisation and explanation. Generative models are very good at turning piles of technical context into something humans can consume. In my own world, we deliberately didn’t start with another chatbot bolted onto the side of the product. We started with the user. That means using generative models behind the scenes to do things like summarise a complex asset risk picture, compress a noisy incident into something an analyst can grasp quickly, or generate executive-ready reporting that a non-specialist can actually understand. No one became an analyst to churn out PowerPoint decks for the C-suite. If AI can take that burden away, that is a genuine win.
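To make that concrete, here is a minimal sketch of what “generative models behind the scenes” can look like. The call_llm helper, the prompts and the incident fields are assumptions for the example, standing in for whatever model endpoint and data model you actually use; this is not any particular product’s implementation.

```python
# Minimal sketch: compressing a noisy incident into analyst- or exec-ready text.
# `call_llm` is a placeholder for your own model endpoint; no vendor API assumed.
import json

def call_llm(prompt: str) -> str:
    # Wire this to your chosen LLM provider; stubbed here for illustration.
    return "<model output would appear here>"

def summarise_incident(incident: dict, audience: str = "analyst") -> str:
    style = {
        "analyst": "Summarise key findings, affected assets and next steps in five bullets.",
        "executive": "Explain business impact and current status in plain language, under 150 words.",
    }[audience]
    prompt = f"{style}\n\nIncident data:\n{json.dumps(incident, indent=2)}"
    return call_llm(prompt)

if __name__ == "__main__":
    incident = {
        "alerts": 42,
        "affected_assets": ["hr-file-server", "vpn-gateway-02"],
        "detections": ["credential stuffing", "lateral movement attempt"],
    }
    print(summarise_incident(incident, audience="executive"))
```

The useful part is the audience switch: the same underlying incident data can be rendered once for the analyst and once for the board, without anyone hand-crafting a deck.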

The second is navigation. Modern environments generate an absurd amount of telemetry. You have logs, alerts, indicators and assets with thousands of attributes across millions of devices. Historically, actually using that data has required learning a query language or depending on a specialist who has. Large language models are well suited to sit between the user and that data as a translation layer. You should be able to say “show me all Windows Server 2022 systems that are not running EDR”, “show me devices that became high risk this week”, or “only show me devices in the US with a risk score above 8.5 in the past month”, and get a sensible answer without learning yet another syntax. You should be able to add context in normal words, such as “include the switch IP and switch port for each of these devices”, instead of rewriting the whole query.

This is a way of unlocking the value of the asset intelligence you already collect, especially if your visibility truly spans all device types from IT to OT, IoT and medical. A natural way to interact with your data is one of the most practical uses of AI in security today.
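As a rough illustration of that translation layer, the sketch below has a model propose a structured filter which is then validated against an allowed schema before it ever touches the data. The schema, field names and call_llm stub are assumptions for the example, not a real query language or product API.

```python
# Minimal sketch of a natural-language "translation layer" over asset data.
import json

ALLOWED_FIELDS = {"os", "edr_installed", "risk_score", "country", "last_seen"}

def call_llm(prompt: str) -> str:
    # Replace with a real model call; stubbed so the sketch stays self-contained.
    return json.dumps({"os": "Windows Server 2022", "edr_installed": False})

def natural_language_to_filter(question: str) -> dict:
    prompt = (
        "Translate this question into a JSON filter using only these fields: "
        f"{sorted(ALLOWED_FIELDS)}.\nQuestion: {question}"
    )
    candidate = json.loads(call_llm(prompt))
    # Validate before execution: the model proposes, the platform enforces.
    unknown = set(candidate) - ALLOWED_FIELDS
    if unknown:
        raise ValueError(f"Model proposed fields outside the schema: {unknown}")
    return candidate

if __name__ == "__main__":
    print(natural_language_to_filter(
        "show me all Windows Server 2022 systems that are not running EDR"
    ))
```

The validation step is the important design choice: a bad translation fails safely rather than running an arbitrary query against your asset data.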

The third is prioritisation. This is another area where what our customers ask for and what AI can do line up almost perfectly. When an analyst sits down in front of a console full of alerts, the real question is “Where do I start?” When a vulnerability team is staring at a list of critical CVEs, the real question is “Which ones matter here?” Language models and other AI techniques can look across historical analyst behaviour, peer patterns and the live state of your environment to say “Here are the alerts you should look at first” or “Here are the vulnerabilities that are most likely to hurt you given your infrastructure”. That is what we are hearing directly from security practitioners. They talk about saving half an hour of firefighting just by getting a sensible starting point instead of a flat list.
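A deliberately simple way to picture this is a weighted ranking over a handful of signals. Real systems use richer models than this, so treat the weights, signals and field names below purely as illustrative assumptions.

```python
# Minimal sketch of alert prioritisation: blending live environment context with
# historical analyst behaviour to produce a ranked starting point, not a verdict.
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    asset_criticality: float     # 0-1, e.g. from asset intelligence
    internet_exposed: bool
    past_escalation_rate: float  # 0-1, share of similar alerts analysts escalated

def priority_score(alert: Alert) -> float:
    score = 0.5 * alert.asset_criticality + 0.3 * alert.past_escalation_rate
    if alert.internet_exposed:
        score += 0.2
    return round(score, 2)

if __name__ == "__main__":
    queue = [
        Alert("Suspicious PowerShell", 0.9, False, 0.7),
        Alert("Port scan", 0.3, True, 0.1),
        Alert("EDR tamper attempt", 0.8, True, 0.9),
    ]
    # Rank the queue, but leave the decision to the human analyst.
    for a in sorted(queue, key=priority_score, reverse=True):
        print(f"{priority_score(a):.2f}  {a.name}")
```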

Done right, this kind of AI doesn’t take control away from the human. It suggests. It highlights. It nudges you towards the needles in the haystacks. The user still decides whether to shut down a plant or isolate a business-critical system. That balance matters, especially in regulated and safety-critical industries where a bad decision has real-world consequences.

On the flip side, there are areas where I would advise caution.

The first is fully autonomous response. There is currently a lot of justified interest in agentic AI, where systems don’t just answer questions but take actions towards a goal. Used well, these agents can take drudgery away from humans. Used badly, they become an overconfident intern with root access. I’m not saying never let AI take actions. I’m saying you need clear guardrails, least privilege, and human accountability for those actions. Treat an AI agent like a new team member who never sleeps and never gets bored but also never truly understands your business. You don’t give that person the keys to production on day one.
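In code terms, those guardrails can be as blunt as an allow-list plus an approval gate. The action names and approval mechanism below are assumptions made for the sketch, but the shape of the control, least privilege by default with a human sign-off on anything destructive, is the point.

```python
# Minimal sketch of guardrails for an AI agent: an allow-list of actions, least
# privilege by default, and a human approval gate for anything destructive.

LOW_RISK_ACTIONS = {"enrich_indicator", "tag_asset", "open_ticket"}
HIGH_RISK_ACTIONS = {"isolate_host", "disable_account", "block_subnet"}

def require_human_approval(action: str, target: str) -> bool:
    # In practice this would page an on-call analyst; here we just prompt.
    answer = input(f"Approve '{action}' on '{target}'? [y/N] ")
    return answer.strip().lower() == "y"

def execute_agent_action(action: str, target: str) -> str:
    if action in LOW_RISK_ACTIONS:
        return f"Executed {action} on {target}"
    if action in HIGH_RISK_ACTIONS:
        if require_human_approval(action, target):
            return f"Executed {action} on {target} with human sign-off"
        return f"Blocked {action} on {target}: approval denied"
    # Anything not explicitly allowed is refused: least privilege by default.
    return f"Refused unknown action '{action}'"

if __name__ == "__main__":
    print(execute_agent_action("enrich_indicator", "198.51.100.7"))
    print(execute_agent_action("isolate_host", "plant-hmi-03"))
```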

The second red flag is magic thinking. If a pitch sounds like “buy our AI and you can replace your SOC”, walk away. Any realistic deployment of AI in security over the next few years is going to look like augmentation. Better triage, better correlation, better reporting, better use of scarce expertise. Not a sentient box that does security for you while you focus on the business.

The third is opacity around data and risk. When you are evaluating AI-backed tools, spend at least as much time on the boring questions as on the demo. Where does the data live? What is used for training? How is access controlled? How do you defend the AI component itself against prompt injection, model abuse or poisoning? There is no point buying AI to defend your environment if you have no idea how that AI is itself being defended and governed.

So are AI-backed tools worth it? The answer is yes, for the right problems, with the right questions.

My advice to buyers in 2026 would be to keep it grounded. Start from one or two painful workflows where you know your team is burning hours. Look for vendors who can show, with your data, that they can give you time back in those use cases. Ask for clear explanations of how the AI is used, what decisions it influences and how you stay in control. Insist on visibility that spans your whole environment, not just a thin slice of IT, because AI is only as good as the data it works from.

We built our security programmes for a world where the attacker was always human. That has already changed. Using AI on defence is no longer optional, but you do get to choose whether you buy into marketing stories about autonomous cyber, or invest in tools that genuinely help your people see more, understand more and act with precision.

The former is hype; the latter is where AI really starts to earn its place on the budget line.


