ChatGPT’s Hallucinated Code Packages Open the Door to Malware


The issue allows attackers to exploit ChatGPT’s tendency to generate false information, particularly in the form of nonexistent code packages.

In a recent study, cybersecurity researchers have discovered a concerning weakness in ChatGPT, the popular generative artificial intelligence (AI) platform: attackers can abuse its tendency to fabricate information, and in particular its habit of recommending code packages that do not exist.

By utilizing what the researchers term “AI package hallucinations,” threat actors can create and distribute malicious code packages that developers may inadvertently download and integrate into their legitimate applications and code repositories.

The researchers, from Vulcan Cyber’s Voyager18 research team, detailed their findings in a blog post published on June 6th, 2023. They highlighted the risks posed to the software supply chain, as malicious code and Trojans could easily slip into widely used applications and code repositories such as npm, PyPI, and GitHub.

The root cause of the problem lies in ChatGPT’s reliance on outdated and potentially inaccurate training data. As a large language model (LLM), ChatGPT can generate information that sounds plausible but is entirely fictional. This phenomenon, known as AI hallucination, occurs when the model extrapolates beyond its training data and produces confident answers with no basis in fact.

The attack technique involves posing coding-related questions to ChatGPT, which then provides recommendations for code packages. Attackers exploit the platform’s tendency to suggest unpublished or nonexistent packages: they register malicious packages under those fabricated names and wait for ChatGPT to recommend them to unsuspecting developers. Consequently, developers may unknowingly install these malicious packages, introducing significant risk into their software supply chain.
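To make that flow concrete, here is a minimal, illustrative Python sketch (not tooling from the Vulcan Cyber research) that checks whether a package name suggested by a chatbot is actually registered on PyPI. An unregistered name is precisely what a squatter would rush to claim, and precisely what a cautious developer should treat as a red flag before running pip install.

```python
import sys
import urllib.error
import urllib.request

# Public PyPI metadata endpoint; it returns HTTP 404 for names that are not registered.
PYPI_JSON = "https://pypi.org/pypi/{name}/json"

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a registered PyPI project, False if the lookup 404s."""
    try:
        with urllib.request.urlopen(PYPI_JSON.format(name=name), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:   # never published: a hallucinated (and squat-able) name
            return False
        raise                 # any other HTTP error: don't guess, surface it

if __name__ == "__main__":
    # Usage: python check_package.py some-recommended-package another-package
    for candidate in sys.argv[1:]:
        verdict = "exists on PyPI" if package_exists_on_pypi(candidate) else "NOT registered"
        print(f"{candidate}: {verdict}")
```

The same single-request existence check works against other registries (npm exposes a comparable lookup), which is why the technique generalizes across ecosystems.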

To demonstrate the severity of the issue, the researchers conducted a proof-of-concept simulation using ChatGPT 3.5. They engaged in a conversation with the platform, asking for a package to solve a coding problem. ChatGPT responded with multiple package recommendations, some of which were nonexistent.

The researchers then published their own package under one of the nonexistent names ChatGPT had recommended. Subsequently, when another user posed a similar question, ChatGPT suggested the newly created package, leading to its installation and the potential for real harm.

Watch the demonstration video shared by Vulcan Cyber

The research team also provided recommendations on how developers can identify and mitigate these risks. They advised developers to validate any package before downloading it by scrutinizing factors such as its creation date, download counts, comments, stars, and any associated notes.
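As one illustration of that kind of vetting, and assuming a Python/PyPI workflow, the sketch below pulls a project’s public metadata from PyPI’s JSON API and prints a few signals worth eyeballing before installation: when the project first appeared, how many releases it has, and where its homepage points. (Download counts, stars, and comments live on other services such as pypistats.org and GitHub and are omitted here; this is an illustrative example, not a checklist from the Vulcan Cyber post.)

```python
import json
import urllib.request

def pypi_metadata(name: str) -> dict:
    """Fetch the public JSON metadata for a PyPI project."""
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def summarize(name: str) -> None:
    data = pypi_metadata(name)
    releases = data.get("releases", {})
    # The earliest file upload time across all releases approximates the project's age;
    # a package "created" last week that suddenly appears in AI answers deserves suspicion.
    upload_times = [
        f["upload_time_iso_8601"]
        for files in releases.values()
        for f in files
    ]
    info = data.get("info", {})
    print(f"package       : {name}")
    print(f"first upload  : {min(upload_times) if upload_times else 'no files uploaded'}")
    print(f"release count : {len(releases)}")
    print(f"homepage      : {info.get('home_page') or info.get('project_urls')}")
    print(f"summary       : {info.get('summary')}")

if __name__ == "__main__":
    summarize("requests")  # a well-known package, used here purely as an example
```

None of these signals is conclusive on its own; stars can be faked and download numbers padded, so they should be weighed together and alongside recommendations from trusted community sources.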

Developers are urged to exercise caution, especially when recommendations come from AI platforms rather than trusted sources within the community.

This discovery adds to a growing list of security risks associated with ChatGPT. As the platform gained widespread adoption, threat actors seized the opportunity to exploit it for malware distribution, phishing campaigns, and credential theft; the rise of generative AI platforms like ChatGPT has attracted both legitimate users and malicious actors.



