Who would have thought that an individual’s interest in generative AI applications could lead them into a trap?
While artificial intelligence chatbots such as ChatGPT and Google Bard have got people around the world typing enthusiastically into their keyboards to create unique responses, scammers are using this curiosity to deceive people and steal passwords and sensitive data.
Using popular social media platforms such as Facebook, the miscreants are running generative AI application scams, creating fake pages that offer tips, news and enhanced versions of AI services while impersonating renowned AI brands, including ChatGPT, Google Bard, Midjourney, and Jasper, to deceive users into downloading malicious content.
A recent Check Point Research report highlights how scammers are crafting fake Facebook pages with engaging content related to AI brands, luring unsuspecting individuals to like and comment on their posts.
This engagement ensures that the fake content appears on the feeds of the victims’ connections, furthering the spread of the generative AI application scam.
The criminals then offer seemingly enticing services or exclusive content through links, leading users to download malware camouflaged as generative AI applications.
“Cyber criminals are getting smarter. They know everyone is interested in generative AI and use Facebook pages and ads to impersonate ChatGPT, Google Bard, Midjourney and Jasper,” said Sergey Shykevich, Threat Intelligence Group Manager at Check Point Research.
“Unfortunately, thousands of people are falling victim to generative AI application scams. They are interacting with the fake pages, which furthers their spread, and even installing malware disguised as free AI tools. We urge everyone to be vigilant in ensuring they are only downloading files from authentic and trusted sites,” Shykevich added.
Generative AI Application Scams: Fake ChatGPT, Google Bard, AI Tools and Beyond
Once installed, the malware stealthily steals users’ online passwords, crypto wallets, and other sensitive information saved in their browsers.
Shockingly, many users remain unaware that they have fallen victim to a scam. They passionately discuss AI-related topics in the comments and like and share the posts, unwittingly expanding the reach of these free AI tools scams.
The fake pages often claim to offer tips, news, and enhanced versions of popular AI services like Google Bard or ChatGPT, with multiple variations such as Bard New, Bard Chat, GPT-5, G-Bard AI, and others. Some scammers also take advantage of the popularity of other AI services, such as Midjourney, to lure users into their deceitful traps.
The generative AI application scams begin with fraudulent AI Facebook pages that lead users to landing pages that encourage them to download password-protected archive files, supposedly associated with generative AI engines.
Unfortunately, these archives contain nothing but malware, resulting in the theft of sensitive information from the victim’s machine.
The malware utilized in these scams abuses legitimate services such as GitHub, Gofile, and Discord for command-and-control communication and data exfiltration.
The cybercriminals receive stolen information through Discord webhooks, allowing them to monitor and analyze the data extracted from each infected machine.
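To make the Discord webhook detail concrete from a defender’s point of view: webhooks use a well-documented URL pattern, so outbound requests to those endpoints from machines that have no business using Discord can be flagged in proxy or firewall logs. The sketch below is a minimal, hypothetical illustration of that idea; the log format, function name, and sample entries are assumptions and are not taken from the Check Point report.

```python
import re

# Discord webhooks are addressed as https://discord.com/api/webhooks/{id}/{token}.
# Outbound requests matching this pattern can be a useful exfiltration indicator.
WEBHOOK_PATTERN = re.compile(
    r"https://(?:discord|discordapp)\.com/api/webhooks/\d+/\S+"
)

def flag_webhook_traffic(proxy_log_lines):
    """Return log lines that contain a Discord webhook URL."""
    return [line for line in proxy_log_lines if WEBHOOK_PATTERN.search(line)]

# Example usage with made-up log entries.
sample_log = [
    "10.0.0.5 GET https://www.example.com/index.html 200",
    "10.0.0.7 POST https://discord.com/api/webhooks/1234567890/AbCdEf 204",
]
for hit in flag_webhook_traffic(sample_log):
    print("Possible webhook exfiltration:", hit)
```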
To further obfuscate their malicious intentions, most of the comments on these fake Facebook pages are made by bots with Vietnamese names, and the default chat language on the counterfeit Midjourney site is Vietnamese.
In addition to stealing passwords and sensitive data from browsers, the malware also targets cryptocurrency wallets (including Zcash, Bitcoin, Ethereum, etc.), FTP credentials from Filezilla, and sessions from various social and gaming platforms.
Once the data is harvested, it is consolidated into a single archive and uploaded to the file-sharing platform Gofile, from which the cybercriminals notify their Discord channel of the successful theft.
These sophisticated generative AI application scams are constantly evolving. Some campaigns use deceptive Facebook ads and compromised accounts disguised as AI tools to distribute the malware.
One particularly stealthy malware, known as ByosBot, operates under the radar, using the dotnet bundle format to evade detection. ByosBot focuses on stealing Facebook account information, creating a self-sustaining cycle where the stolen data propagates malware through new compromised accounts.
In light of these free AI tools and generative AI application scams, cybersecurity experts urge all internet users to remain vigilant and cautious when downloading files, especially from sources that may appear dubious.
Check Point Research lists quick mitigation strategies to identify and deflect these scams.
Quick mitigation against generative AI application scams
To begin with these precautions, one must understand that if an offer seems too good to be true, it probably is. Whether it involves AI tools, job offers, gift cards, or cash prizes, if an offer seems dubious, there is a high chance it is part of a threat campaign.
Five ways to protect yourself from AI scams
Threat actors continue to employ cunning tactics, posing as legitimate entities to deceive their targets. Identifying these malicious attempts requires vigilance and adherence to the following strategies:
- Don’t Trust Display Names: Phishers manipulate display names to appear genuine. Instead of relying on them, verify the sender’s email address or the web address to ensure it originates from a trusted source.
- Verify the Domain: Be cautious of domains with slight misspellings or plausible-looking variations. Scrutinize URLs closely, as phishers often rely on these tricks to mislead users (see the sketch after this list).
- Download from Trusted Sources: Exercise caution when downloading software; avoid unofficial channels like Facebook groups or forums. Instead, obtain software directly from trusted, official websites to minimize risks.
- Check Links Carefully: Malicious links in emails can lead to phishing sites. Before clicking, hover over the links to confirm their legitimacy.
- Use Verification Tools: Cross-check suspicious links against phishing verification services such as phishtank.com for known phishing patterns. Ideally, access the company’s website directly rather than relying on links provided in emails.
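To make the domain check concrete, here is a minimal sketch of how a suspicious link could be compared against an allow-list of domains the user actually trusts. The allow-list, function name, and example URLs are hypothetical; this illustrates the idea of checking the real hostname rather than the link text, not a complete phishing detector.

```python
from urllib.parse import urlparse

# Hypothetical allow-list of domains the user actually trusts.
TRUSTED_DOMAINS = {"openai.com", "bard.google.com", "midjourney.com"}

def is_trusted(url: str) -> bool:
    """Return True only if the URL's hostname is a trusted domain or one of its subdomains."""
    hostname = (urlparse(url).hostname or "").lower()
    return any(
        hostname == domain or hostname.endswith("." + domain)
        for domain in TRUSTED_DOMAINS
    )

# Lookalike domains fail the check even though they resemble the real thing.
print(is_trusted("https://chat.openai.com/auth"))         # True
print(is_trusted("https://openai.com.login-secure.net"))  # False: real name used as a subdomain of another site
print(is_trusted("https://0penai.com/download"))          # False: misspelled lookalike
```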