AI use: 3 essential questions every CISO must ask


In July, Wall Street experienced its worst day since 2022, with the tech-focused Nasdaq falling by 3.6%. The downturn was largely attributed by commentators to underwhelming earnings from some major tech companies. What’s notable is that the companies hit hardest by this decline were those most heavily invested in AI.

While AI has driven significant investment and optimism, there is growing concern that its capabilities may have been overhyped. This dip in tech stocks underscores the mounting pressure on decision-makers to demonstrate that AI can truly live up to its expectations.

For CISOs, this pressure is particularly acute. They are now tasked with ensuring that their AI-driven initiatives not only bolster cybersecurity but also deliver measurable results that can be communicated to the C-suite and board members.

Cybersecurity, in particular, stands to gain a great deal from AI’s capabilities. Machine-learning algorithms can help detect anomalies in user behaviour—an essential feature in today’s swiftly evolving threat landscape. In fact, a recent study found that 78% of CISOs are already utilising AI in some capacity to support their security teams.

However, like any evolving technology, AI should be approached with a healthy dose of scepticism. To ensure that investments in AI deliver tangible results, CISOs must ask themselves three critical questions before integrating AI into their cybersecurity strategies.

1. Where does it make the most sense to use AI?

Before implementing AI, it’s essential to determine where it can have the greatest impact.

While many practitioners are looking to integrate AI into threat detection and response, it’s important to understand the limitations. Large language models (LLMs) can be valuable for analysing the logs attributed to detections and providing high-level guidance for response. However, the dynamic nature of the threat landscape presents a challenge: threat actors are also using AI, and the rapid pace at which they evolve often outpaces threat identification systems.

To keep pace with threat actors, one area where AI can have a significant and immediate impact is in automating the repetitive tasks that currently consume much of security teams’ time and headspace. For example, AI-powered insights and guidance can help SOC analysts triage alerts, reducing their workload and allowing them to focus on more complex threats. By leveraging AI to augment analysts in the SOC, CISOs can free up their teams to concentrate on high-priority issues, improving overall efficiency and response times.
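The triage idea can be illustrated with a minimal sketch. The alert fields here ("severity", "asset_criticality", "seen_before") and the scoring weights are hypothetical, chosen only to show how simple signals can be combined to rank an alert queue; real SOC tooling uses vendor-specific schemas and far richer models.

```python
# A minimal sketch of automated alert triage, not a production SOC tool.
# All field names and weights below are hypothetical illustrations.

def triage_score(alert):
    """Combine simple signals into a 0-100 priority score so analysts
    see the riskiest alerts first."""
    score = alert["severity"] * 10            # base severity on a 1-5 scale
    score += alert["asset_criticality"] * 8   # importance of the affected asset, 1-5
    if not alert["seen_before"]:              # novel alerts get a boost
        score += 10
    return min(score, 100)

alerts = [
    {"id": "a1", "severity": 2, "asset_criticality": 1, "seen_before": True},
    {"id": "a2", "severity": 5, "asset_criticality": 5, "seen_before": False},
]

# Sort so the highest-priority alert surfaces first.
queue = sorted(alerts, key=triage_score, reverse=True)
print([a["id"] for a in queue])  # -> ['a2', 'a1']
```

Even a crude ranking like this moves the analyst’s attention to the most consequential alerts first, which is where the efficiency gain described above comes from.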

2. Is there proof of AI delivering in my use case?

Not all use cases deliver equally, and it’s safer to rely on tried-and-tested applications before experimenting with more novel approaches.

For instance, security information and event management (SIEM) systems have long used AI and machine learning for behavioural analytics. Machine-learning-driven user and entity behaviour analytics (UEBA) systems excel at detecting abnormal activity that may indicate security threats, such as insider attacks, compromised accounts, or unauthorised access.

These systems work by analysing vast amounts of historical data to establish behavioural baselines for users and entities, then continuously monitoring real-time activity for deviations from the norm.
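The baseline-and-deviation principle can be sketched in a few lines. This is a toy z-score check over hypothetical login-hour data, not how any particular UEBA product works; production systems model many behavioural dimensions over months of telemetry.

```python
from statistics import mean, stdev

# Hypothetical login-hour history per user; a real UEBA baseline is
# built from months of telemetry across many behavioural features.
history = {
    "alice": [9, 9, 10, 8, 9, 10, 9, 8, 9, 10],
    "bob":   [14, 15, 13, 14, 15, 14, 13, 15, 14, 14],
}

def is_anomalous(user, login_hour, threshold=3.0):
    """Flag a login whose hour deviates from the user's baseline
    by more than `threshold` standard deviations (z-score)."""
    baseline = history[user]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return login_hour != mu
    return abs(login_hour - mu) / sigma > threshold

print(is_anomalous("alice", 9))  # -> False: typical working hour
print(is_anomalous("alice", 3))  # -> True: a 3 a.m. login, far outside baseline
```

The value of the approach is that "normal" is learned per user rather than hard-coded, so the same 3 a.m. login that is routine for a night-shift analyst can still be flagged for an employee who only ever works daytime hours.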

By focusing on well-established AI applications like UEBA, CISOs can ensure their AI investments provide value while reducing risk.

3. What is the quality of the data provided to the AI models?

One of the most crucial factors for AI’s success is the quality of the data provided to the model and the prompt. AI models are only as good as the data they consume, and without access to accurate, complete, and enriched data, AI systems can produce flawed results.

In cybersecurity, where threats are continually evolving, it’s critical to provide AI systems with a diverse data set that encompasses attack-surface context, detailed logs, alerts, and anomalous activity.

However, emerging attack surfaces—such as APIs—pose a unique challenge. APIs are an attractive target for hackers because they often transmit sensitive information. While traditional web application firewalls (WAFs) may have sufficed to protect APIs in the past, today’s threat actors have developed more sophisticated techniques to breach perimeter defences. Unfortunately, because API security is a relatively new area, this attack surface is rarely monitored and, worse, often not included in AI-driven threat analysis.

With success hinging on the availability of high-quality data, AI may not yet be the best solution for immature or emerging attack surfaces like APIs, where fundamental security practices may still be evolving. In these cases, CISOs must recognise that even the most advanced AI algorithms cannot compensate for a lack of foundational security measures and reliable data.

Conclusion

AI holds enormous potential to transform cybersecurity, but it is not a magic bullet. By asking critical questions about where AI can deliver the most value, relying on proven use cases, and ensuring access to high-quality data, CISOs can make informed decisions about how and when to integrate AI into their cybersecurity strategies. In a landscape where both opportunities and threats are evolving rapidly, a strategic approach to AI implementation will be key to success.


