As we approach 2025, the cybersecurity landscape continues to evolve in response to an ever-changing array of threats and technologies. Cloud-native frameworks and artificial intelligence (AI) mean firms must adapt to increasingly complex challenges. With that in mind, we spoke to industry experts to get their perspectives on the year ahead. They talk tactics, new threats, and the direction of security.
With the threat of cyber-attacks likely to grow even larger in 2025, businesses need to rethink their approach to cyber defence. Darren Thomson, Field CTO EMEAI at Commvault, highlights: “In 2025, we need to be prepared to ride a new wave of existing challenges. Take phishing for example: the 2024 UK Government’s Cyber Security Breaches Survey identifies it as the most predominant attack vector, affecting 84% of those breached. But while phishing itself is not new, cyber-attacks like this have only grown in complexity as attackers exploit six ‘mega trends’ in technology: artificial intelligence (AI), cloud computing, social media, software supply chains, the emergence of homeworking, and the Internet of Things (IoT).”
He argues for “a clear pivot towards ‘right of bang’ thinking, shifting focus to what happens after an inevitable breach (the ‘bang’), aiming to build resilience in the centre of business operations. This shift acknowledges that cyber threats are not solely issues for IT departments but for entire businesses. The goal is to become cyber mature – defined by a robust recovery plan, awareness at all levels of the organisation, and with a strategic emphasis on resilience.”
AI has shown itself to be both a blessing and a curse in cybersecurity, according to Geoff Barlow, Product and Strategy Director at Node4. He explains: “While AI increases the speed, volume, and sophistication of cyber-attacks, making it easier for cybercriminals, it also offers powerful tools for defence, which can help analysts anticipate and respond to threats.”
Barlow also highlights that, according to Node4’s research, “30% of mid-market IT decision-makers said AI represents a top cyber security threat, [while] 28% believe it could expose their organisation to new cyber security risks.”
He goes on to stress the importance of addressing the AI skills gap through education and third-party support. “Bringing employees with you on the AI journey will put organisations in good stead to tackle whatever AI developments and threats come our way in 2025.”
With the continued rise of cloud-native environments, new vulnerabilities are emerging. Rani Osnat, SVP Strategy at Aqua Security, warns that choices made by some businesses “in the past 12-18 months were too aggressive towards consolidation, leaving large gaps,” and that, “the expectation that you can replace five-to-six tools with a single tool is unrealistic.”
Osnat expects that “we will see a shift back to the best-of-breed point solutions in an effort to maintain effective cloud security programs.”
Similarly, Moshe Weis, CISO at Aqua Security, underscores the critical role of adaptive, data-centric security. “With data spread across diverse cloud-native architectures, adaptive, data-centric security is essential,” he explains. “Cloud-native security solutions leverage GenAI to automate threat detection and response across distributed environments, enabling real-time analysis and predictive defense.”
Matt Hillary, CISO at Drata, anticipates that “in 2025, security, privacy, and compliance will become increasingly intertwined, necessitating a more integrated, collaborative, and comprehensive approach to GRC [governance, risk and compliance] across these sometimes-divided practices and domains. Organisations will be compelled to integrate previously siloed functions and think about them holistically.”
He continues: “The driving factors for this integrated approach are increasing cyber threats, stricter regulations, and a heightened public awareness of privacy issues and vulnerabilities, especially those associated with the rise of AI.”
This integration must balance AI’s capabilities with ethical considerations. As AI becomes more ingrained in day-to-day operations, Hillary stresses the importance of “ensuring AI systems are unbiased, transparent, and accountable” and of “maintaining privacy standards”.
Dane Sherrets, Staff Innovations Architect at HackerOne, believes 2025 “will see greater adoption of AI security and safety standards with a focus on benchmarks that improve AI transparency.”
“One emerging example of this is an increased focus on AI model cards,” he explains. “Model cards, much like nutrition labels on packaged goods, provide a summary to inform potential users about how the models are intended to be used, details on performance evaluation procedures, metadata about the datasets behind the model, and more.”
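To make the “nutrition label” analogy concrete, the kind of information a model card captures can be sketched as a simple data structure. This is an illustrative sketch only, not any standard format: every field name and value below is hypothetical, loosely modelled on the sections Sherrets describes (intended use, evaluation procedure, and dataset metadata).

```python
# Hypothetical model card sketch: all names and figures are invented
# for illustration, not taken from any real model or standard schema.
model_card = {
    "model_name": "example-classifier",
    "intended_use": "Spam detection on short English-language emails",
    "out_of_scope_uses": ["medical or legal decision-making"],
    "training_data": {                     # metadata about the dataset
        "source": "internal email corpus (hypothetical)",
        "size": 250_000,
    },
    "evaluation": {                        # performance evaluation details
        "procedure": "held-out test set, stratified by sender domain",
        "metrics": {"accuracy": 0.94, "false_positive_rate": 0.02},
    },
    "limitations": "Performance degrades on non-English text.",
}

def summarise(card: dict) -> str:
    """Return a one-line, label-style summary for potential users."""
    acc = card["evaluation"]["metrics"]["accuracy"]
    return f"{card['model_name']}: {card['intended_use']} (accuracy {acc:.0%})"

print(summarise(model_card))
```

The point is not the code itself but the shape of the information: a reader can see at a glance what the model is for, how it was tested, and where it should not be used.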
Sherrets also predicts that we’ll see organisations “become more concerned with responsible AI adoption and use adversarial testing methods, like AI red teaming, to identify safety and security challenges in GenAI. Every industry and organisation has different definitions for how they want a model to behave and what they define as harmful outputs, so engagements like AI red teaming will be essential if teams want to minimize risk and continuously ensure that models cannot be used in ways that a company would consider harmful.”
Cybersecurity in 2025 will demand a shift in thinking, with businesses being advised to assume a multifaceted and proactive approach. Organisations must embrace evolving technology responsibly and be ready to defend against sophisticated threats.
As businesses across the industry grapple with challenges surrounding AI, cloud security, and GRC, one thing is clear: collaboration and adaptability will be key to crafting successful cybersecurity strategies.