AI vs. the Human Mind: The New Ransomware Playbook
Ransomware has always relied on the psychological levers of fear, urgency, and shame to pressure victims. But the rules of engagement are changing.
Cybercriminals are leveraging AI to ratchet up the pressure with more convincing, manipulative techniques, using everything from deepfake videos to precision-targeted spear phishing emails to achieve their aims.
Here we explore how AI-generated ransomware tactics are being used to deceive victims, and why understanding how attackers are weaponizing AI is essential to protecting against this more destructive wave of cybercrime.
Laser-like accuracy, automation, and scalability
The use of AI has seen cyber attackers turbo-charge their capabilities at all stages of the attack lifecycle, from identifying and profiling targets to automating attacks and making payment demands.
Phishing emails, which remain the prime starting point for ransomware attacks, can now be crafted with greater precision, using convincing language that mimics trusted individuals to dupe the victim.
AI can be used across the entire phishing process: LLMs trained on large data sets can generate highly individualized messages that appear to come from a trusted contact, closely imitating that person's tone, language, and style. When combined with psychological techniques that create a sense of urgency, such as requesting immediate action, payment, or information, these methods become even more powerful tools for attackers.
AI also reduces the time-intensive research stage of the attack, so that information can be collected almost instantaneously, making attackers’ efforts more cost-effective. The ability to use personal details from social media, corporate websites and other up-to-date information can make these messages even more believable and more successful. And armed with knowledge about network behaviour, criminals can also determine the optimal window of opportunity to launch an attack.
Once an attack has succeeded, AI can also power 24/7 automated ransom negotiations, adjusting demands in real time based on a victim's financial profile, while also predicting responses, escalating pressure, and refining coercion tactics to make it harder for organizations to refuse payment.
Deepfake videos are adding to the cyber arsenal
Alongside standard phishing, emboldened attackers are adding voice messages and phone calls to their toolbox to convince people to share information or send money. The use of deepfake videos is one of the more advanced ways of using social engineering to coerce individuals into giving up company secrets or making payments.
Last year, the UK engineering firm Arup revealed it had been on the receiving end of one such attack, when criminals used an AI deepfake video call to trick an employee into transferring $25 million.
While no sophisticated malware may have been involved, the attack shows that the 21st-century, AI-driven version of the confidence trick is as effective, dangerous, and damaging as ever. Though the tools are changing rapidly, these attacks still exploit human psychology, gaining the victim's trust and using urgency to create fear and pressure: a modern twist on tactics that criminal fraudsters have used for centuries.
Ransomware reaches record levels
AI is making life much harder for cybersecurity practitioners tasked with mitigating and preventing these attacks. At a time when ransomware has reached unprecedented levels, security teams now have an additional challenge to face in identifying and neutralizing attacks which don’t have the typical ‘red flags’ alerting them to malicious activity.
Email filters that once caught phishing emails before they reached their target by flagging telltale signs such as grammatical errors or typos are increasingly ineffective against adaptive, AI-driven threats.
Additionally, AI is making it easier to launch attacks at scale, and we have already seen ransomware attacks hit record levels in 2024. AI lowers the barriers to entry, providing a 'fast track' to the big league for lower-skilled attackers and leading to the emergence of new groups, new variants, and a surge in attack volume.
Amongst those that have gained notoriety is 'FunkSec', a group reportedly using AI-generated code. Researchers have observed that FunkSec's malware codebase is organized in a manner suggesting the use of generative AI, enabling the group to produce advanced tools with apparent ease. The group also employs a double extortion strategy, combining data encryption with data theft to pressure victims into paying, a tactic now used in 95% of ransomware attacks worldwide. FunkSec demands relatively low ransoms and threatens to release victims' stolen data if payment is not made.
Protecting data to deny attackers their prize
The scales, however, are not tipping entirely in the attackers’ favour. AI can also be a highly effective tool for defenders, with advanced detection and response solutions which can analyze behavioral patterns in real-time, identifying anomalies that signature-based tools often miss.
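At its simplest, this kind of behavioral detection compares live telemetry against a learned baseline and flags sharp deviations. The sketch below is illustrative only; the metric (file writes per minute) and the z-score threshold are assumptions, and real detection products use far richer models than this:

```python
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """Flag a reading that deviates more than `threshold` standard
    deviations from the historical baseline (a simple z-score test)."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Hypothetical baseline: file writes per minute by a process on a normal day.
baseline = [4, 6, 5, 7, 5, 6, 4, 5]
print(is_anomalous(baseline, 5))    # → False: within normal activity
print(is_anomalous(baseline, 480))  # → True: ransomware-like burst of writes
```

The point of a behavioral approach is that the burst itself is the signal, so no prior signature of the malware is needed.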
AI solutions are also important for preventing data exfiltration: by blocking unauthorized data transfers with Anti Data Exfiltration (ADX) technology, organizations can shut down extortion attempts. If attackers cannot profit from their efforts by stealing data, they are effectively rendered powerless.
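The exfiltration-blocking idea can be illustrated with a toy egress policy. The allow-listed domains and byte cap below are entirely hypothetical, and commercial ADX products operate far more deeply than this sketch:

```python
# Hypothetical allow-list and per-session cap; both values are illustrative.
TRUSTED_DOMAINS = {"backup.example-corp.com", "crm.example-corp.com"}
MAX_OUTBOUND_BYTES = 50 * 1024 * 1024  # 50 MB ceiling per session

def allow_transfer(destination: str, num_bytes: int) -> bool:
    """Permit an outbound transfer only to an allow-listed host
    and only below the session size cap; deny everything else."""
    if destination not in TRUSTED_DOMAINS:
        return False
    return num_bytes <= MAX_OUTBOUND_BYTES

# A bulk upload to an unknown host is denied, cutting off the
# data-theft half of a double extortion attack.
print(allow_transfer("evil.example.net", 1024))  # → False
```

Default-deny egress rules like this invert the attacker's advantage: even after a breach, stolen data cannot leave the network.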
While AI is lowering the bar when it comes to providing easy access to sophisticated tools, organizations should not feel powerless in their efforts to detect, prevent, and mitigate an attack. Ransomware has always preyed on the innate tendencies and vulnerabilities of human nature, with criminals sharpening up their practices to ensure they make attacks as personal and effective as possible. There are technological solutions available to thwart the mind games they use and, by shifting toward proactive, AI-powered threat prevention, teams stand the best chances of keeping their employees and data safe and secure.