Social Engineering In The Age Of AI


by Andy Patel, Senior Researcher, WithSecure

Generative models will soon be integrated into the productivity tools we use daily. AI features will be included in word processors, email clients, artistic software, presentation software, search engines, and more.

Extrapolate forward a little, and those models will be built into operating systems, available for use with just a few API calls, precipitating a new generation of apps we haven’t even thought of yet.

Generative AI’s integration into the tools we use every day means that it’ll be used to create both benign and malicious content. Thus, detecting whether a piece of content was created by an AI won’t be enough to determine whether it’s malicious.

Adversaries will use these technologies as much as we do. And they will become more productive just as we will. One way they might use generative AI is to create content designed to socially engineer us. It’s not like AI will refuse to generate social engineering content. Most of it is designed to look like regular business and interpersonal communication anyway.

Social engineering content is, almost by definition, designed to look benign. Ask ChatGPT to write an email from you to a colleague, telling them that you are in a hurry to get their feedback on a presentation that you are working on, and it’ll happily comply.

There’s nothing malicious about that request. The same goes for an email informing someone that you just bumped into their car in the company parking lot. And the same goes for a message from a partner or an authority asking someone to re-upload confidential documents to a new repository created in response to GDPR rules.

In a study I published earlier this year, I used a large language model to generate various types of undesirable content ranging from phishing, to fake news, to online harassment. It complied with my every request and did a great job writing the content I asked of it.

While I was conducting that research, access to GPT-3 wasn’t cheap. So I speculated that criminals would adopt it based upon a return on investment calculation – whether paying to generate content would be cheaper than writing it themselves or sticking to the copy-paste methodology they were already using.

And then it got cheaper. Ten times cheaper, for instance, when gpt-3.5-turbo was released. And then models like Alpaca – which you can run on a laptop – came out.

At this point, using a language model to create content is practically a no-brainer. So why would an adversary use a language model to create phishing and spear phishing content?

For phishing, the case is fairly obvious. You write one prompt. You supply it to the model many times. And each time, a slightly different piece of content is produced. This allows you to send dozens or hundreds of different spam emails out instead of copy-pasting the same one as you might be doing right now.

Also, these models write good English. Phishing messages are notorious for their spelling mistakes, grammatical errors, and clumsy phrasing.

People can often spot a phishing email just by how badly it’s written. But GPT-3 doesn’t make those kinds of errors. Oh, and it can write in other languages, too. A criminal operation utilizing GPT-3 doesn’t need someone capable of writing good English anymore.

They also don’t need people who can write in other languages. And they don’t need to worry about whether Google Translate will do a good enough job.

For spear phishing, the reason to use a language model might seem a little less clear.

Why write a prompt to generate a piece of content you may only use once or a few times? I already mentioned one reason – to produce high-quality writing in a language you may not be fluent in.

The other, less obvious, reason is to do style transfer. It is possible to present a language model with a sample of someone’s writing and then ask it to write new text in that style. This is something that even a trained writer may find difficult.

The ability to have a model generate content in a specific written style enables an attacker to better impersonate someone. And that gives the attack a better chance of succeeding.

And there are other nefarious things you might do with style transfer. Inject a fake document into the trove of documents you are about to release to the public after a hack-and-leak operation. How would the supposed authors of the faked document convincingly deny having written it?

Spear phishers could also use a large language model as a sort of chatbot. Some highly targeted spear phishing tactics involve the attacker building trust with the victim over time, through multiple back-and-forth messages.

A large language model could be used to automate this trust-building process, thus allowing attackers to scale their operations.

There are no obvious technological solutions that can definitively tell us whether we’re being socially engineered, whether by humans or their AI buddies.

So, for now, we’re going to have to rely on vigilance. And that is built by things like media literacy and phishing awareness training.

One approach to phishing awareness that I haven’t seen mentioned much is to teach users about the psychology used in social engineering attacks.

We could be teaching employees about concepts such as confirmation bias, authority bias, and scarcity. The principle of social proof can further help in such training – by encouraging employees who’ve identified and reported threats, or even fallen for them, to share their stories, so that others will learn how to be more vigilant in the future.

Creating an environment where employees help one another identify threats can be useful, provided they do so without forwarding potentially malicious content to each other. And finally, a company might consider rewarding employees for following safety protocols and reporting threats.

There’s no silver bullet against social engineering attacks. But vigilance and awareness are going to beat overconfidence in technological solutions. At least for now.

Another option that hasn’t been explored yet is to use a language model to suggest tactics that might be used to socially engineer an individual.

Present such a model with some of their social media posts, or perhaps a curated list of facts harvested by scraping their presence from the Internet, and ask it the right questions. One might even envision a task-based architecture that can select one of the victim’s contacts and gather the content required to mimic their writing style.




