Large language models (LLMs) and generative AI are advancing rapidly around the world, offering great utility while also raising misuse concerns.
The rapid progress of generative AI will significantly reshape the future landscape of cybersecurity threats. At the same time, alongside those risks, it is important to appreciate the value generative AI brings to legitimate applications.
Cybersecurity researchers on Avast's Threat Intelligence Team recently reported that hackers are actively abusing ChatGPT to generate malware and social engineering threats.
Hackers Abusing ChatGPT
AI-driven scams have been on the rise, making it easier for threat actors to craft convincing lures such as:
- Emails
- Social scams
- E-shop reviews
- SMS scams
- Lottery scam emails
These rising threats rely on advanced technology and are reshaping the battlefield of AI abuse, mirroring earlier waves of exploitation around topics such as:
- Cryptocurrencies
- COVID-19
- Ukraine conflict
ChatGPT attracts hackers more for its brand recognition than for its underlying AI capabilities, and that hype makes it ripe for exploitation in their campaigns.
Currently, ChatGPT is not an all-in-one tool for advanced phishing attacks; attackers still need templates, kits, and manual work to make their attempts convincing. However, multimodal models and LLM toolchains such as LlamaIndex could let future phishing and scam campaigns produce more varied content.
TTPs & Mediums
Below are the TTPs and mediums the threat actors use to abuse ChatGPT (a simple typosquatting screen follows the list):
- Malvertising
- YouTube scams
- Typosquatting
- Browser extensions
- Installers
- Cracks
- Fake updates
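Typosquatting in particular lends itself to automated screening. The sketch below is a minimal illustration, not a technique from the Avast report: it flags domains within a small edit distance of legitimate brand domains, and the domain list and threshold are assumptions chosen for the example.

```python
# Minimal typosquatting screen: flag domains that sit a small edit
# distance away from legitimate brand domains. The brand list and the
# distance threshold are illustrative assumptions.

LEGIT_DOMAINS = ["chatgpt.com", "openai.com"]

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def looks_typosquatted(domain: str, max_distance: int = 2) -> bool:
    """True if a domain is suspiciously close to, but not identical
    to, a legitimate brand domain."""
    return any(
        0 < edit_distance(domain.lower(), legit) <= max_distance
        for legit in LEGIT_DOMAINS
    )

if __name__ == "__main__":
    for candidate in ["chatgpt.com", "chatgtp.com", "0penai.com", "example.com"]:
        print(f"{candidate}: {looks_typosquatted(candidate)}")
```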
LLMs for Malware and Social Engineering Threats
LLMs lower the bar for generating malicious code, but some expertise is still required, and producing specialized malware that evades security measures remains considerably harder.
Crafting malware-generating prompts likewise demands precision and technical skill, and limits on prompt length combined with security filters cap how complex the output can be.
AI has transformed spam tactics significantly, although spambots sometimes unwittingly reveal themselves by posting ChatGPT's error messages verbatim.
Notably, spambots now exploit user reviews by copying ChatGPT responses wholesale, aiming to deceptively inflate feedback and product ratings.
Manipulated reviews like these mislead consumers into buying lower-quality products, underscoring the need for vigilance in digital interactions; the tell-tale boilerplate also makes such reviews easy to screen for, as the sketch below shows.
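A minimal screen for that boilerplate, assuming a handful of well-known ChatGPT marker phrases; the phrase list is our assumption, not taken from the report:

```python
# Flag product reviews containing tell-tale ChatGPT boilerplate, the
# kind of text a careless spambot pastes verbatim. The marker phrases
# are common examples and an assumption on our part.

MARKERS = [
    "as an ai language model",
    "i cannot fulfill this request",
    "i'm sorry, but i can't",
    "as of my last knowledge update",
]

def is_suspect_review(text: str) -> bool:
    """Return True if a review contains known LLM boilerplate."""
    lowered = text.lower()
    return any(marker in lowered for marker in MARKERS)

reviews = [
    "Great charger, arrived fast and works as advertised.",
    "As an AI language model, I cannot provide a personal opinion, "
    "but this product has excellent build quality.",
]
for review in reviews:
    print(is_suspect_review(review), "-", review[:60])
```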
Bad actors can circumvent ChatGPT's filters, but doing so is time-consuming, so they often fall back on traditional search engines or on "educational-use-only" malware already published on GitHub.
Beyond text, AI-powered deepfakes pose a significant threat: fabricated yet convincing videos can damage reputations, public trust, and even personal security.
Positive Scenario
Security analysts can employ ChatGPT to generate detection rules or clarify existing ones, helping both beginners and experienced analysts work with pattern-matching detection tools such as YARA and Sigma; a sketch of this workflow follows.
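A minimal sketch of that workflow using the OpenAI Python client. The model name and prompt wording are assumptions, and any generated rule must be reviewed and tested before it goes anywhere near production:

```python
# Sketch: ask an LLM to draft a YARA rule from observed strings.
# The model name and prompt are assumptions; always review and test
# a generated rule before deploying it.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

observed_strings = [
    '"cmd.exe /c powershell -enc"',
    '"SOFTWARE\\\\EvilCorp"',
]

prompt = (
    "Draft a YARA rule named Suspect_Loader that matches a PE file "
    "containing all of these strings:\n"
    + "\n".join(observed_strings)
    + "\nReturn only the rule body."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; use whichever you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```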
AI-based assistant tools
Several projects already integrate LLM-based AI assistants, boosting productivity across tasks ranging from office work to deeply technical work.
AI assistants also aid malware analysts by simplifying assembly comprehension, disassembled-code analysis, and debugging, streamlining reverse engineering efforts; tools like Gepetto, listed below, wrap exactly this pattern, as the sketch after this paragraph shows.
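A standalone sketch of that Gepetto-style pattern: send decompiled pseudocode to an LLM and ask for a plain-English summary plus a better function name. The model name is again an assumption, and the output is a hint for the analyst, not ground truth:

```python
# Sketch of the Gepetto-style pattern: ask an LLM to explain
# decompiled pseudocode and suggest a descriptive function name.
# The model name is an assumption; treat the output as a hint only.

from openai import OpenAI

client = OpenAI()

pseudocode = """
int sub_401000(char *a1) {
  int v1 = 0;
  while (*a1) v1 = (v1 * 31) + *a1++;
  return v1;
}
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[{
        "role": "user",
        "content": "Explain in one sentence what this decompiled "
                   "function does, then suggest a descriptive name:\n"
                   + pseudocode,
    }],
)

print(response.choices[0].message.content)
```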
Known AI-based assistant tools include:
- Gepetto for IDA Pro
- VulChatGPT
- WinDbg Copilot
- GitHub Copilot
- Microsoft Security Copilot
- PentestGPT
- BurpGPT
Recommendations
The security researchers offer the following recommendations:
- Be wary of offers that seem too good to be true.
- Make sure to verify the publisher and reviews.
- Understand the product you intend to buy or download.
- Don’t use cracked software.
- Report suspicious activity.
- Update your software regularly.
- Trust your cybersecurity provider.
- Self-education is crucial.