The Top 4 Forms of AI-Enabled Cyber Threats

The face of cyber threats has transformed dramatically over the decades. At first, they emerged as hacks, viruses and denial-of-service attacks, often hatched by young computer whiz kids chasing thrills and bragging rights. Then, criminal organizations leveraged increasingly sophisticated tools and techniques to steal private customer data, shut down business operations, access confidential and sensitive corporate information and launch ransomware schemes.

Today, artificial intelligence (AI) is empowering threat actors with exponentially greater efficiency, volume, velocity and efficacy. A single AI tool can do the jobs of hundreds – if not thousands – of human hackers and spammers, with the ability to learn, process, adapt and strike with unprecedented speed and precision. What’s more, like the shape-shifting T-1000 assassin in Terminator 2, AI can impersonate anyone – your friends, family, co-workers and even potential romantic partners – to develop and unleash the next generation of attacks.

This evolution of AI tools and the resulting increase in AI-generated cyberattacks have put immense pressure on organizations over the past 12 months. In light of these incidents, the FBI recently issued a warning about “the escalating threat” of AI-enabled phishing/social engineering and voice/video-cloning scams.

“These AI-driven phishing attacks are characterized by their ability to craft convincing messages tailored to specific recipients (while) containing proper grammar and spelling,” according to the FBI, “increasing the likelihood of successful deception and data theft.”

To lend further insight, we conducted a comprehensive analysis of activity from January 2023 to March 2024 to get a better sense of the evolving practices and trends associated with cybercrime and AI. From that analysis, we identified the following top four forms of AI-enhanced threats:

Chatbot abuse. Underground forums have made available exposed ChatGPT login credentials, chatbots that automatically code malicious scripts and ChatGPT “jailbreaks” (prompts crafted to bypass the boundaries and restrictions programmed into AI). However, we noticed that interest in these chatbots declined toward the end of 2023, as cybercriminals learned to manipulate ChatGPT prompts themselves to obtain the desired outcomes.

Social engineering campaigns. In exploring the possibilities of self-directed ChatGPT prompts, cybercriminals have focused intently on social engineering to trigger phishing-linked malware and business email compromise (BEC) attacks, among other exploits. AI makes it all too easy for them to automate translation, construct phishing pages, generate text for BEC schemes and create scripts for call-service operators. As the FBI noted, the increasing sophistication of the technology is making it more difficult than ever to distinguish potentially harmful spam from legitimate emails.
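For defenders, even simple header and content heuristics can surface some of these lures before they reach users. The following Python sketch is purely illustrative (the internal domain, executive names and phrase list are hypothetical placeholders) and shows the kinds of signals a mail-filtering pipeline might flag; as the FBI warning implies, AI-written messages often feature flawless grammar and may evade keyword checks entirely, so rules like these complement rather than replace stronger controls.

```python
import re
from email import message_from_string
from email.utils import parseaddr

# Hypothetical values for illustration; a real deployment would pull these
# from directory services and tune the phrase list against labeled mail.
INTERNAL_DOMAIN = "example.com"
EXECUTIVE_NAMES = {"jane doe", "john smith"}
PRESSURE_PHRASES = [r"wire transfer", r"urgent", r"gift cards?",
                    r"updated? bank(ing)? details", r"keep this confidential"]

def bec_risk_flags(raw_email: str) -> list[str]:
    """Return heuristic red flags for a raw RFC 5322 email message."""
    msg = message_from_string(raw_email)
    flags = []

    from_name, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    from_domain = from_addr.rsplit("@", 1)[-1].lower()

    # Classic BEC tell: Reply-To routes responses to a different domain.
    if reply_addr and reply_addr.rsplit("@", 1)[-1].lower() != from_domain:
        flags.append(f"reply-to domain mismatch: {reply_addr}")

    # Display-name impersonation: an executive's name on an external address.
    if from_name.lower() in EXECUTIVE_NAMES and from_domain != INTERNAL_DOMAIN:
        flags.append(f"possible impersonation of '{from_name}' via {from_addr}")

    # Pressure language common to BEC lures; AI-written lures may evade
    # keyword lists entirely, which is why these are flags, not verdicts.
    body = msg.get_payload()
    if isinstance(body, str):
        for pattern in PRESSURE_PHRASES:
            if re.search(pattern, body, re.IGNORECASE):
                flags.append(f"pressure language matched: /{pattern}/")

    return flags
```

One plausible integration point would be running a function like this over inbound mail and routing any message that accumulates multiple flags to quarantine or manual review.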

Deepfakes. While deepfakes have been around for years, AI is opening new avenues of deception. Previously, creating a convincing deepfake required extensive audio, photo and/or video source material, which is why celebrity deepfakes became so common: the internet abounds with footage of people in the news. AI, however, lets adversaries build convincing fakes from far less material, enabling them to target ordinary individuals and companies with disinformation campaigns, such as social media posts that impersonate and compromise people and businesses.

To cite one prominent example, the “Yahoo Boys” used deepfakes to carry out pseudo-romance/sextortion scams: creating fake personas, gaining victims’ trust, tricking them into sending compromising photos and then forcing them to pay money to avoid having the photos released publicly. In another example, a threat actor in November 2023 advertised synthetic audio and video deepfake services, claiming the ability to generate voices in any language using AI for the purposes of producing bogus advertisements, animated profile pictures, banners and promotional videos.

Know-your-customer (KYC) verification bypasses. Organizations use KYC verification to confirm customers’ identity, financial activities and risk level in order to prevent fraud. Criminals, of course, are always seeking to circumvent the verification process and are now deploying AI to do so. A threat actor using the name John Wick allegedly operated a service called OnlyFake, which used “neural networks” to generate realistic-looking photos of identification cards. Another, going by the name *Maroon, advertised KYC verification bypass services that supposedly can unlock any account requiring facial verification, such as those that direct users to upload photos in real time from their phone camera.
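On the defensive side, such real-time checks are typically hardened with liveness challenges so that a pre-generated image or deepfake clip cannot simply be replayed. The Python sketch below is a hypothetical server-side fragment (the gesture list, key handling and expiry window are illustrative assumptions, not any vendor’s actual protocol) showing one common pattern: issue a random, short-lived, integrity-protected challenge that the live video must satisfy. Actual face matching and deepfake detection would sit on top of this.

```python
import hashlib
import hmac
import os
import secrets
import time

# Illustrative assumptions: in production the key would live in a KMS, and
# the gesture set, TTL and token format would follow the KYC provider's spec.
GESTURES = ["turn head left", "blink twice", "raise right hand", "smile"]
SECRET_KEY = os.urandom(32)
TTL_SECONDS = 30

def issue_challenge() -> dict:
    """Pick a random gesture and return it with a signed, expiring token."""
    gesture = secrets.choice(GESTURES)
    expires = int(time.time()) + TTL_SECONDS
    payload = f"{gesture}|{expires}".encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"gesture": gesture, "expires": expires, "tag": tag}

def verify_challenge(gesture: str, expires: int, tag: str) -> bool:
    """Accept only unexpired tokens whose MAC matches the issued challenge."""
    if time.time() > expires:
        return False
    payload = f"{gesture}|{expires}".encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

The randomness and short expiry are what matter here: an attacker holding a library of synthetic clips cannot know in advance which gesture will be demanded, and cannot reuse an old recording once the token lapses.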

If there is a common theme in our analysis, it’s that AI isn’t changing the intended end game for cybercriminals – it’s just making it much easier, faster and more reliable to get there. The technology allows refinements that directly lead to more sophisticated, less detectable and more convincing threats. That’s why security teams should take heed of these developments and trends, as well as the FBI warning, and pursue new solutions and techniques to counter increasingly formidable AI-enabled attacks. If history has taught us anything, it’s that the first step in effectively confronting a new threat is fully understanding what it is.
