How ChatGPT—and Bots Like It—Can Spread Malware


The AI landscape has started to move very, very fast: consumer-facing tools such as Midjourney and ChatGPT are now able to produce incredible image and text results in seconds based on natural language prompts, and we’re seeing them get deployed everywhere from web search to children’s books.

However, these AI applications are being turned to more nefarious uses, including spreading malware. Take the traditional scam email, for example: It’s usually littered with obvious mistakes in its grammar and spelling—mistakes that the latest group of AI models don’t make, as noted in a recent advisory report from Europol.

Think about it: A lot of phishing attacks and other security threats rely on social engineering, duping users into revealing passwords, financial information, or other sensitive data. The persuasive, authentic-sounding text required for these scams can now be pumped out quite easily, with no human effort required, and endlessly tweaked and refined for specific audiences.

In the case of ChatGPT, it’s important to note first that developer OpenAI has built safeguards into it. Ask it to “write malware” or a “phishing email” and it will tell you that it’s “programmed to follow strict ethical guidelines that prohibit me from engaging in any malicious activities, including writing or assisting with the creation of malware.”

ChatGPT won’t code malware for you, but it’s polite about it.


However, these protections aren’t too difficult to get around: ChatGPT can certainly code, and it can certainly compose emails. Even if it doesn’t know it’s writing malware, it can be prompted into producing something like it. There are already signs that cybercriminals are working to get around the safety measures that have been put in place.

We’re not particularly picking on ChatGPT here, but pointing out what’s possible once large language models (LLMs) like it are used for more sinister purposes. Indeed, it’s not too difficult to imagine criminal organizations developing their own LLMs and similar tools in order to make their scams sound more convincing. And it’s not just text either: Audio and video are more difficult to fake, but it’s happening as well.

Whether it’s your boss asking for a report urgently, company tech support telling you to install a security patch, or your bank informing you there’s a problem you need to respond to—all these potential scams rely on building trust and sounding genuine, and that’s something AI bots are getting very good at. They can produce text, audio, and video that sounds natural and is tailored to specific audiences, and they can do it quickly and constantly on demand.
