The generative AI revolution is creating turmoil in the cybersecurity world. Security has always been a technology race. On one hand, it’s a race to keep up with and secure the innovations needed to drive business. On the other, it’s a race to keep up with the new AI-driven tactics and techniques leveraged by cybercriminals.
To effectively ramp up security in the age of AI, companies need to fight fire with fire. In general, that means regularly evaluating their cyber defenses and making the necessary adjustments to their security stack. More importantly, it means deploying more sophisticated AI-powered tools capable of detecting and mitigating the new AI-generated threats that they are bound to encounter.
According to the recent State of AI and Security Survey Report by the Cloud Security Alliance and Google Cloud, more than half of respondents (55%) plan to adopt security solutions and tools with generative AI this year alone. Even more organizations (67%) revealed they have already tested AI specifically for security purposes.
While the intended use cases for AI in cybersecurity are wide-ranging, one area where AI can improve security measures is network protection. There are a variety of advantages to using generative AI to evolve network protection and stay ahead of AI threats. For example, AI can be used to:
Automate complex security tasks: AI can be used to automate complicated network security tasks that are typically difficult for security teams to handle because of the sheer scale and complexity of today’s deployments. For instance, as the digital world moves to APIs for nearly everything and the shift continues from network infrastructure management to infrastructure as code, organizations now have APIs that can automate monitoring and configuration tasks across infrastructure layers. AI can use these APIs, making it a natural fit for orchestrating comprehensive security measures and reducing the workload of security teams that are already stretched too thin.
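As a minimal illustration of the kind of repetitive audit such tooling can orchestrate, the sketch below sweeps firewall rules as they might be returned by a configuration API and flags overly permissive entries. The rule schema is invented for this example, not a real vendor API:

```python
# Hypothetical sketch: auditing firewall rules pulled from an
# infrastructure-as-code / automation API. The rule schema below is
# invented for illustration; real provider APIs expose similar structures.

def find_permissive_rules(rules):
    """Flag rules that allow any source on sensitive ports."""
    SENSITIVE_PORTS = {22, 3389, 3306, 5432}  # SSH, RDP, MySQL, PostgreSQL
    findings = []
    for rule in rules:
        if rule["source"] == "0.0.0.0/0" and rule["port"] in SENSITIVE_PORTS:
            findings.append(f"Rule {rule['id']}: port {rule['port']} open to the world")
    return findings

# Example rules, shaped like a configuration API response
rules = [
    {"id": "fw-1", "source": "0.0.0.0/0", "port": 443},    # public HTTPS: fine
    {"id": "fw-2", "source": "0.0.0.0/0", "port": 22},     # SSH open to all: flag
    {"id": "fw-3", "source": "10.0.0.0/8", "port": 5432},  # internal DB: fine
]
for finding in find_permissive_rules(rules):
    print(finding)
```

In practice, an AI-driven system would run checks like this continuously across every layer the automation APIs expose, rather than as a one-off script.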
Enable a more proactive approach to security: AI’s ability to analyze vast amounts of data in real time makes it an ideal tool to help organizations adopt a more proactive security approach. Security professionals can use AI to run red-team exercises and conduct regular security assessments to map their threat surface. By combining these efforts with up-to-date threat intelligence, organizations can maintain a proper security posture.
Keep up with the surge in AI-driven attacks: AI can also be used to fight AI-driven attacks. Malicious actors can use AI to exploit vulnerabilities within minutes instead of days. As a result, the time to respond to those threats and implement security measures is shrinking considerably to the point that a human operator is unable to keep up. Security defenses powered by AI are essential to match this pace.
While generative AI still suffers from hallucinations, customized AI solutions tailored to the security task at hand can work effectively as an automated management system for all things infrastructure, including security. As infrastructures become more complex and attackers more sophisticated, leveraging automation for security is no longer optional. Rather, it’s an imperative for organizations to keep up with automated and AI-driven threats.
Automate a defense against potential exploits: Defenders can use agentic generative AI to quickly interpret new Common Vulnerabilities and Exposures (CVEs) that could be exploited and automate the development of exploit tools based on those vulnerabilities. This practice enables security teams to understand potential exploits that might emerge from a newly disclosed vulnerability and draw upon those findings to improve their security posture and resistance against automated threats.
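To make the idea concrete, here is an illustrative sketch of the triage step that would sit in front of such an agentic pipeline: newly disclosed CVEs are matched against an asset inventory and prioritized before being handed to a model for analysis. The CVE records and the `analyze_with_llm` stub are placeholders; a real pipeline would pull from a vulnerability feed such as the NVD and call an actual model endpoint:

```python
# Illustrative sketch only: triaging newly disclosed CVEs against an
# asset inventory. The records and analyze_with_llm() are placeholders,
# not a real feed or model API.

def triage_cves(cves, inventory):
    """Return CVEs affecting software we actually run, worst first."""
    relevant = [c for c in cves if c["product"] in inventory]
    return sorted(relevant, key=lambda c: c["cvss"], reverse=True)

def analyze_with_llm(cve):
    # Placeholder for the agentic step: ask a model to reason about
    # likely exploitation paths and suggest compensating controls.
    return f"Analysis requested for {cve['id']}"

inventory = {"nginx", "openssl"}
cves = [
    {"id": "CVE-0000-0001", "product": "nginx",   "cvss": 7.5},
    {"id": "CVE-0000-0002", "product": "postfix", "cvss": 9.8},  # not deployed
    {"id": "CVE-0000-0003", "product": "openssl", "cvss": 9.1},
]
for cve in triage_cves(cves, inventory):
    print(analyze_with_llm(cve))
```

The filtering step matters: the high-severity CVE for software the organization does not run is dropped, so the model’s attention (and the team’s) goes only to vulnerabilities that are actually exploitable in the environment.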
Generative AI is a disruptive technology because it allows pre-trained models to be deployed inside organizations. Organizations do not require huge clouds to train and retrain their models. Rather, they can leverage pre-trained models and customize them using retrieval augmented generation (RAG).
Like many systems, generative AI is not perfect. It depends on pre-trained models and leverages RAG to improve its performance for the use case in which it is deployed. The AI system’s performance relies heavily on the quality of the data, and on how much of it, and in what way, is retrieved to augment the model’s context. The more information the system has access to, the higher the data quality, and the better the search algorithm, the more effective it will be at accurately distinguishing between malicious and legitimate traffic and successfully mitigating threats.
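A toy sketch of the retrieval step illustrates why this matters. Keyword-overlap scoring stands in here for a real embedding search over a vector index, but the principle is the same: whatever the retriever surfaces is all the extra context the model gets:

```python
# Toy sketch of the retrieval step in RAG. A production system would use
# embeddings and a vector index; simple word overlap stands in here to
# show how retrieval quality determines the context the model sees.

def retrieve(query, documents, top_k=2):
    """Rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = [(len(q_words & set(doc.lower().split())), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

documents = [
    "Block traffic from IP ranges flagged by threat intelligence feeds",
    "Quarterly budget review for the marketing department",
    "Rate-limit login attempts to mitigate credential stuffing traffic",
]
context = retrieve("mitigate malicious login traffic", documents)
# The relevant security documents (not the budget memo) are what get
# prepended to the model's prompt.
print(context)
```

If the document store is stale or the scoring function is poor, the model is prompted with the wrong context, and the hallucination and false-positive risks described below follow directly.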
Without the proper data or retrieval algorithm, it can hallucinate or generate connections where none exist. For example, in network security, generative AI might create new policies based on incorrect assumptions, which can lead to false positives that block legitimate users. Even worse, the AI could underperform and generate policies that are too generic, resulting in false negatives: malicious traffic categorized as legitimate. In the latter case, these events would not even surface at the SOC level for analysis.
It’s also important to note that the AI system itself must be secured against attacks. Self-learning AI models are vulnerable to data poisoning, where attackers introduce false data to manipulate the AI’s learning process. A different set of hardening techniques applies to AI systems, and industry experience in that area is still in its early stages.
While generative AI is a disruptive technology, I do not believe it will disrupt the balance between defenders and attackers. Defense security has always been a matter of staying ahead of the threats and the bad actors. The technology leveraged to stay ahead has evolved over time, but the concept remains very much the same. Organizations now have to use AI to fight AI.