LLMs lower the barrier for entry into cybercrime


Cybercriminals employ evolving attack methodologies designed to breach traditional perimeter security, including secure email gateways, according to Egress.

“Without a doubt chatbots or large language models (LLMs) lower the barrier for entry to cybercrime, making it possible to create well-written phishing campaigns and generate malware that less capable coders could not produce alone,” said Jack Chapman, VP of Threat Intelligence at Egress.

“However, one of the most concerning, but least talked about, applications of LLMs is reconnaissance for highly targeted attacks. Within seconds a chatbot can scrape the internet for open-source information about a chosen target that can be leveraged as a pretext for social engineering campaigns, which are growing increasingly common. I’m often asked if LLMs really change the game, but ultimately it comes down to the defense you have in place. If you’re relying on traditional perimeter detection that uses signature-based and reputation-based detection, then you urgently need to evaluate integrated cloud email security solutions that don’t rely on definition libraries and domain checks to determine whether an email is legitimate or not!”

Evolving attack techniques

As threats evolve, the cybersecurity industry must work together to continue to manage human risk in email.

From RingCentral and alias impersonation attacks to social media lures, security software impersonations, and sextortion, there has been no shortage of phishing attacks in 2023. Missed voice messages were the most common phishing lure, accounting for 18.4% of phishing attacks between January and September 2023. Many of these attacks use HTML smuggling to hide their payload.

The potential for cybercriminals to use chatbots to create phishing campaigns and malware has been cause for concern, but is it possible to tell whether a phishing email was written by a chatbot? The report found that no person or tool can do so definitively. Because detector tools themselves rely on large language models (LLMs), their accuracy increases with longer samples, and most require a minimum of 250 characters to work at all. With 44.9% of phishing emails falling below that 250-character minimum and a further 26.5% falling between 250 and 500 characters, AI detectors currently either won’t work reliably or won’t work at all on 71.4% of attacks.
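To make the length problem concrete, the sketch below buckets email bodies by character count before deciding whether an AI-text detector is even worth running. The thresholds mirror the 250- and 500-character figures above; the detect_ai_text function is a hypothetical stand-in for whatever detector an organization actually uses, not anything from the Egress report.

```python
# Illustrative sketch only: triage phishing email bodies by length before
# attempting AI-text detection. detect_ai_text() is a hypothetical placeholder.

def detect_ai_text(body: str) -> float:
    """Stand-in for a real AI-text detector; returns a dummy score."""
    return 0.5  # hypothetical: no real model behind this

def triage(bodies: list[str]) -> dict[str, int]:
    buckets = {"too_short": 0, "unreliable": 0, "scored": 0}
    for body in bodies:
        n = len(body)
        if n < 250:            # most detectors refuse to score such short text
            buckets["too_short"] += 1
        elif n < 500:          # scored, but accuracy is questionable
            buckets["unreliable"] += 1
            detect_ai_text(body)
        else:                  # long enough to score with reasonable confidence
            buckets["scored"] += 1
            detect_ai_text(body)
    return buckets

if __name__ == "__main__":
    sample = ["Call me back re: your voicemail", "x" * 300, "y" * 600]
    print(triage(sample))  # {'too_short': 1, 'unreliable': 1, 'scored': 1}
```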

The proportion of phishing emails employing obfuscation techniques has jumped by 24.4% in 2023 to 55.2%. Obfuscation enables cybercriminals to hide their attacks from certain detection mechanisms. Egress found that 47% of phishing emails that use obfuscation layer two techniques to improve their chances of bypassing email security defenses and reaching the target recipient, while 31% use only one. HTML smuggling has proven the most popular obfuscation technique, accounting for 34% of instances.
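For readers unfamiliar with HTML smuggling, the payload is typically assembled client-side from data hidden inside an HTML attachment, so nothing malicious crosses the gateway as a recognizable file. The snippet below is a minimal, heuristic sketch of the kind of check a defender might run: flag attachments that combine inline script with a long base64-like blob. Real email security products use far richer signals; the regexes here are illustrative assumptions only.

```python
# Heuristic sketch: flag possible HTML smuggling in an email attachment by
# looking for inline script plus a long base64-style blob the script could
# decode into a payload in the recipient's browser. Illustration only.
import re

SCRIPT_RE = re.compile(r"<script\b", re.IGNORECASE)
BASE64_BLOB_RE = re.compile(r"[A-Za-z0-9+/]{200,}={0,2}")

def looks_like_html_smuggling(html: str) -> bool:
    return bool(SCRIPT_RE.search(html)) and bool(BASE64_BLOB_RE.search(html))

if __name__ == "__main__":
    benign = "<html><body><p>Quarterly newsletter</p></body></html>"
    suspicious = "<html><script>var b='" + "A" * 240 + "'; /* decode & save */</script></html>"
    print(looks_like_html_smuggling(benign))      # False
    print(looks_like_html_smuggling(suspicious))  # True
```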

Graymail dissected

To understand how graymail impacts cybersecurity, Egress researchers analyzed 63.8 million emails that organizations received over four weeks. They found that, on average, 34% of mail flow can be categorized as graymail (bulk but solicited emails such as notifications, updates, and promotional messages). Additionally, Wednesday and Friday are the most popular days of the week to send or receive graymail. The research found a direct correlation between the volume of graymail and the volume of phishing emails received; people with busier inboxes are more likely to be targeted by phishing campaigns.
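As a rough illustration of the kind of analysis behind that correlation claim, the sketch below computes a Pearson correlation between per-mailbox graymail counts and phishing counts over an observation window. The numbers and the per-mailbox data model are invented for illustration and are not Egress’s methodology or data.

```python
# Illustrative only: correlate per-mailbox graymail volume with phishing
# volume. The counts below are invented; a positive r would support the
# finding that busier inboxes are targeted by more phishing.
from statistics import correlation  # Pearson correlation, Python 3.10+

graymail_per_mailbox = [120, 340, 90, 410, 260, 75, 500]   # hypothetical counts
phishing_per_mailbox = [3, 9, 2, 11, 7, 1, 14]             # hypothetical counts

r = correlation(graymail_per_mailbox, phishing_per_mailbox)
print(f"Pearson r = {r:.2f}")
```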

Traditional perimeter detection is falling short

More phishing emails are getting through traditional perimeter detection: while overall volume hasn’t increased, attacks are growing in sophistication, and cybercriminals use a multitude of tactics to slip past perimeter email security.

The percentage of emails that got through Microsoft defenses has increased by 25% from 2022 to 2023. Likewise, the percentage of emails that got through secure email gateways (SEGs) increased by 29% from 2022 to 2023.

Additionally, there’s been an 11% increase in phishing attacks sent from compromised accounts in 2023. Compromised accounts are trusted domains, so these attacks usually get through traditional perimeter detection. 47.7% of the phishing attacks that Microsoft’s detection missed were sent from compromised accounts. The most common type of payload is phishing links to websites (45%), up from 35% in 2022. And all payloads bypassed signature-based detection to some degree.
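To make that reasoning concrete, here is a minimal sketch of a reputation-style check of the sort such gateways rely on: because the sending domain of a compromised account is legitimate and well established, a domain-level verdict passes even though the message itself is malicious. The reputation table and threshold are invented for illustration.

```python
# Minimal illustration of why compromised accounts slip past reputation-based
# checks: the sending domain is genuinely trusted, so a domain-level verdict
# says nothing about the message content. Reputation values are invented.
DOMAIN_REPUTATION = {
    "well-known-partner.com": 0.95,   # long-established, good sending history
    "freshly-registered.xyz": 0.10,   # new domain, no history
}

def passes_perimeter(sender_domain: str, threshold: float = 0.5) -> bool:
    return DOMAIN_REPUTATION.get(sender_domain, 0.0) >= threshold

# A phishing email sent from a compromised mailbox at a trusted partner
# passes the same check a legitimate email from that partner would.
print(passes_perimeter("well-known-partner.com"))   # True, even if compromised
print(passes_perimeter("freshly-registered.xyz"))   # False
```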

Jack Chapman adds: “We produced this report to equip cybersecurity professionals with insights into advanced attacks, and what we found is that real-time teachable moments really do improve people’s ability to accurately identify phishing emails. Legacy approaches to email security rely heavily on quarantine, barring end users from seeing phishing emails, but as our report highlights, phishing emails will inevitably get through. This is one of the reasons why we’ve flipped the quarantine model on its head, adding dynamic banners to neutralize threats within the inbox. These banners are designed to clearly explain the risk in a way that’s easy to understand, timely, and relevant, acting as teachable moments that educate the user. Ultimately, teaching someone to catch a phish is a more sustainable approach for long-term resilience.”


