Security teams vs cybercriminals: Who’s most likely to benefit from AI?


Forget Game of Thrones and House of the Dragon; the real battle for power is between cybercriminals and security teams. After all, a well-planned and executed cyber-attack can cause as much chaos as (if not more than) a couple of fire-breathing monsters.

In many ways, cybercriminals on the attack have a slightly easier job: they have time on their side and often only need to find and exploit a single weakness. They can sometimes even succeed with good timing and a few lucky breaks. Security teams, on the other hand, must be continuously alert, investigate early and often, protect all possible entry points, and be prepared to defend across multiple attack surfaces. With both sides claiming sporadic victories, each is looking to AI to get the upper hand. While both sides can ultimately benefit from its use, it's important to understand how it is being used today, with an eye on its potential future use. Let's take a look at how the odds stack up:

AI attack scenario 1: Exploit development

Developing exploit code can often be a long and laborious task that requires a fair bit of practice and patience. Where AI really comes into its own is in lowering the barrier to entry to this often complex area of development. It's not so much a case of letting AI fumble its way to writing an entire exploit from scratch, but rather of using it to dramatically speed up the common steps in exploit development: binary analysis to identify common weaknesses, automated payload generation, smarter approaches to fuzzing, and even adapting existing exploits to bypass security mitigations such as ASLR (Address Space Layout Randomisation) and DEP (Data Execution Prevention).

AI attack scenario 2: Let’s go phishing  

Generative AI is perhaps the most obvious and accessible way in which AI can be utilised to create legitimate-looking content. Currently, this often manifests in the form of more complex and realistic phishing templates. Moreover, generative AI significantly accelerates the development process, enabling the rapid creation of convincing content that would otherwise require considerable time and effort.

The production of legitimate-looking content such as articles, videos, audio, and social media posts, all mimicking a genuine author's speech patterns and writing style, is also a growing concern. This capability allows for highly convincing forgeries that can easily deceive audiences, increasing the risks of misinformation, identity theft, and reputational damage. Verifying authenticity becomes ever more critical as AI-generated content blurs the line between genuine and fake material.

These scenarios may not inspire much optimism, so it's worth pointing out that not every AI-related function or feature exclusively helps cybercriminals. Here are a couple of examples of where the technology is assisting security teams:

AI defence scenario 1: Super-fast log parsing

Servers, laptops, smartphones, applications: pretty much all hardware and software generates logs of some form, often containing operational and performance-related information that can be parsed and used to identify performance issues or failures. Security teams and incident responders can also use these same logs to trace the roots of a cyber-attack against a given system. Analysing these log files often means looking for complex patterns; even though the task is frequently semi-automated, piecing together an attack can still be time-consuming manual work, and critical information often arrives only after the event.
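
As a simple illustration of the kind of pattern matching involved, here is a minimal sketch in Python using an invented auth-log format and a crude failed-login counter; real log formats, field names, and thresholds will differ from system to system.

```python
import re
from collections import Counter

# Hypothetical auth-log lines; real formats vary by system and application.
sample_lines = [
    "2024-05-01T10:02:11 sshd[812]: Failed password for root from 203.0.113.7 port 52311",
    "2024-05-01T10:02:14 sshd[812]: Failed password for admin from 203.0.113.7 port 52313",
    "2024-05-01T10:05:09 sshd[812]: Accepted password for alice from 198.51.100.20 port 40122",
]

# Pattern for failed logins; adjust to whatever your logs actually contain.
failed = re.compile(r"Failed password for (?P<user>\S+) from (?P<ip>\S+)")

attempts = Counter()
for line in sample_lines:
    match = failed.search(line)
    if match:
        attempts[match.group("ip")] += 1

# Flag source addresses with repeated failures: a crude brute-force indicator.
for ip, count in attempts.items():
    if count >= 2:
        print(f"possible brute-force activity from {ip}: {count} failed logins")
```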

However, AI and machine learning have helped to scale and speed up this process, enabling security teams to identify known (and sometimes new) malicious attack patterns in almost real time. AI-powered log parsing also feeds detection algorithms and other security tooling that can spot the early warning signs of a cyber-attack. These technologies can analyse vast amounts of data quickly and accurately, detecting anomalies and potential threats that might be missed by traditional methods. By continuously learning and adapting to new attack vectors, AI enhances the effectiveness of security measures and allows for proactive defence strategies. This rapid identification and response capability significantly reduces the window of opportunity for attackers, strengthening the overall cyber security posture.
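
Machine learning builds on this by flagging behaviour that deviates from a learned baseline. The sketch below is a minimal, assumption-heavy illustration using scikit-learn's IsolationForest over invented log-derived features (requests per minute, distinct endpoints hit, error ratio); a real pipeline would derive far richer features from its own telemetry.

```python
# Minimal anomaly-detection sketch over log-derived features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline behaviour: moderate request rates, few errors (invented numbers).
normal = np.column_stack([
    rng.normal(60, 10, 500),      # requests per minute
    rng.normal(8, 2, 500),        # distinct endpoints hit
    rng.normal(0.02, 0.01, 500),  # error ratio
])

# A couple of suspicious windows: request floods and error spikes.
suspicious = np.array([
    [900, 45, 0.40],
    [750, 60, 0.55],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for anomalies and 1 for normal traffic.
print(model.predict(suspicious))   # expected: [-1 -1]
print(model.predict(normal[:3]))   # expected: mostly [1 1 1]
```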

Going one step further, introducing AI functionality to SIEM (Security Information and Event Management) and SOC (Security Operations Center) tools allows security teams to get ahead of cybercriminals by anticipating potential network threats and addressing system weaknesses before they can be exploited. This proactive approach enables the identification of vulnerabilities and the prediction of attack vectors, allowing for timely intervention and reinforcement of defences. AI-driven insights and automation enhance the efficiency and effectiveness of these tools, ensuring a more robust and resilient security infrastructure.

AI defence scenario 2: What’s all the fuzz about? 

Fuzz testing, or fuzzing, is a kind of software testing. It works by injecting large amounts of random or unexpected input into a piece of software and monitoring its reaction. If the software produces an error or crashes with something like a segmentation fault, you can then examine how the program handled the input that caused the crash. If this sounds familiar, it should: we have already discussed it as part of the exploit development process, but that is not to say it is useful only to attackers. If your organisation develops software of any kind, fuzzing helps you spot potential exploitation vectors early, and it also lets you test how your software handles the data it is given (data files, configuration files, etc.) and avoid the unexpected issues that can arise when such files are malformed.
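
To make the idea concrete, here is a minimal, purely illustrative fuzzing loop in Python. The parse_config() function is a hypothetical stand-in for whatever input handler your own software exposes, and a "crash" here simply means an unhandled exception.

```python
import random

def parse_config(data: bytes) -> dict:
    # Toy target with a deliberate weakness: it assumes valid UTF-8 "key=value" lines.
    text = data.decode("utf-8")  # raises on invalid UTF-8
    return dict(line.split("=", 1) for line in text.splitlines() if line)

def random_input(max_len: int = 64) -> bytes:
    # Generate a short blob of completely random bytes.
    return bytes(random.randrange(256) for _ in range(random.randrange(1, max_len)))

crashes = []
for _ in range(1000):
    data = random_input()
    try:
        parse_config(data)
    except Exception as exc:  # any unhandled exception counts as a finding
        crashes.append((data, repr(exc)))

print(f"{len(crashes)} crashing inputs found")
if crashes:
    print("example:", crashes[0])
```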

Today, security teams can use AI-enhanced fuzzing solutions to speed up the manual injection, analysis, and reporting processes. This means they can bombard their software with significantly more inputs and spot more potential vulnerabilities, all at a much faster pace and with tooling that adapts and learns as it goes.
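
The adaptive element can be sketched as a feedback loop: inputs that provoke previously unseen behaviour are kept in a corpus and mutated further, so the search concentrates on interesting cases. The example below is a deliberately crude illustration of that idea; genuine AI-enhanced fuzzers rely on coverage instrumentation and learned models rather than the toy outcome labels used here.

```python
import random

def target(data: bytes) -> str:
    # Hypothetical target; returns a coarse label describing what happened.
    try:
        text = data.decode("utf-8")
    except UnicodeDecodeError:
        return "decode-error"
    if text.startswith("MAGIC"):
        return "magic-header"
    if len(text) > 40:
        return "long-input"
    return "ok"

def mutate(data: bytes) -> bytes:
    # Insert one to three random bytes at random positions.
    buf = bytearray(data)
    for _ in range(random.randrange(1, 4)):
        pos = random.randrange(len(buf) + 1)
        buf[pos:pos] = bytes([random.randrange(256)])
    return bytes(buf)

corpus = [b"MAGI", b"hello"]
seen = set()

for _ in range(5000):
    candidate = mutate(random.choice(corpus))
    outcome = target(candidate)
    # Crude feedback signal: the outcome label plus a coarse length bucket.
    signature = (outcome, len(candidate) // 8)
    if signature not in seen:  # new behaviour is "interesting", so keep it
        seen.add(signature)
        corpus.append(candidate)

print("behaviours discovered:", sorted({outcome for outcome, _ in seen}))
```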

As they’re AI-based and drawing on machine learning, this new generation of fuzzing solutions should be able to gather information and outputs from new cyber-attacks, analyse them, and inject the necessary test inputs into an organisation’s IT infrastructure. Theoretically, this could keep security teams one step ahead of cybercriminals for quite some time.

AI makes it easier to create malware and exploits, and to discover vulnerabilities. It also speeds up the rate at which attacks can be launched. So far, AI hasn't really created any significant 'new' forms of attack, but given the pace of change it can facilitate, they're probably just around the corner.

Unfortunately, some AI-enhanced cyber defence tools and techniques, such as fuzzing, have been appropriated by bad actors as well. However, it’s still quicker and easier for security teams to analyse and fix their own system vulnerabilities before release than for bad actors to identify them from scratch. 

On balance, it's fair to say that security teams have found a significant ally in AI, at least for now. However, staying on top of regular testing, monitoring, and vulnerability checking is still essential. Although AI, automation, and machine learning are invaluable tools, they are not magic bullets, and human input is still very much needed to drive and direct them. Security teams that use AI should not become complacent. They need to maintain an agile and proactive approach to security systems and processes if they are to keep the cybercriminals at bay.


