Users of OpenAI's ChatGPT are bypassing the platform's restrictions to create malicious content.
Months after its creators found users generating malware code and phishing emails for cyberattacks, the chatbot was updated with stricter policies to refuse such requests. However, researchers have since observed advanced users crafting prompts that manipulate the app into bypassing those restrictions and producing content that can be put to malicious use.
Although ChatGPT will not respond to a direct request to write a malicious script, it can be prompted indirectly through manipulation or misdirection.
Bofin Babu, co-founder of CloudSEK, posted screenshots on LinkedIn showing how the OpenAI language model can be asked to impersonate a persona and then induced to break its policies.
Screenshots of prompts asking ChatGPT to act as DAN (Photo: Bofin Babu)
Bypassing ChatGPT restrictions
Users on Reddit were also found jailbreaking ChatGPT with prompts such as "Do Anything Now" (DAN). The chatbot is persuaded to role-play DAN, a persona that complies with whatever is demanded and earns tokens for doing so.
ChatGPT asked to generate content against OpenAI guidelines (Photo: Bofin Babu)
The app initially refused requests that violated its updated policies but was later manipulated successfully. Similarly, Check Point drew attention to hacking-forum threads discussing how to bypass ChatGPT's limitations by going through the OpenAI API directly. Bots built on ChatGPT are advertised on these forums alongside discussions of their misuse, as shown below:
(Photo: Check Point)
OpenAI's API has very few anti-abuse mechanisms in place, Check Point researchers noted. They also found Telegram bots wrapping the OpenAI API that carried none of ChatGPT's restrictions, as well as hacking-forum posts using ChatGPT as a learning tool for building malicious bots, as shown below:
Telegram bot misusing OpenAI’s API (Photo: Check Point)
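To see why the raw API sidesteps the chatbot's guardrails, consider a minimal sketch in Python. It assumes OpenAI's v0.x Python client and the text-davinci-003 completion model of that period; the API key and prompt are harmless placeholders, not taken from the research. The point is structural: the completion endpoint answers whatever prompt it receives, while content screening lives in a separate moderation endpoint that a developer can simply never call.

```python
import openai

openai.api_key = "sk-..."  # placeholder; a real key is required

# The completion endpoint does not apply the conversational guardrails
# of the ChatGPT web interface: it simply completes the prompt it is given.
completion = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write a short email asking a customer to verify their account.",
    max_tokens=200,
)
print(completion.choices[0].text)

# Content screening is a separate, opt-in endpoint. A client that never
# calls it faces no automatic block on requests like the one above.
flags = openai.Moderation.create(input=completion.choices[0].text)
print(flags.results[0].flagged)
```

Nothing in the first call forces the second; that gap is what the forum threads Check Point cites are exploiting.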
Telegram bot
The Telegram bot has been observed generating the phishing emails that ChatGPT itself was recently trained to refuse. It is also being used to create fraudulent messages asking users to update their banking credentials, and to write malware. After 20 free queries, the bot charged $5.50 per 100 queries, roughly 5.5 cents per request, which suggests hackers are willing to pay for a ChatGPT-style bot without guardrails.
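Check Point has not published the bot's source code, but the architecture it describes is a thin relay between Telegram and the OpenAI API. The sketch below, assuming the python-telegram-bot library (v20+) and OpenAI's v0.x client with placeholder tokens, shows how little code such a wrapper needs, which is part of why it is cheap to run and easy to sell as a paid service.

```python
import openai
from telegram import Update
from telegram.ext import ApplicationBuilder, ContextTypes, MessageHandler, filters

openai.api_key = "sk-..."     # placeholder OpenAI key
BOT_TOKEN = "123456:ABC-DEF"  # placeholder Telegram bot token

async def relay(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    """Forward the user's message to the OpenAI API and reply with the output."""
    # Synchronous API call; acceptable for a sketch like this.
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=update.message.text,
        max_tokens=300,
    )
    await update.message.reply_text(completion.choices[0].text.strip())

app = ApplicationBuilder().token(BOT_TOKEN).build()
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, relay))
app.run_polling()  # relay every text message until stopped
```

A paywall like the 20-free-then-$5.50-per-100 pricing the researchers describe would amount to little more than a per-user query counter in front of the relay; nothing about the underlying model changes.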
Data privacy concerns
ChatGPT's security risks have yet to be fully gauged: the service can amass user data while processing prompts, and it is not yet known when, or whether, that data can be removed. The EU's General Data Protection Regulation (GDPR) requires that developers build, train, and deploy AI systems with a clearly defined purpose.
The Federal Trade Commission (FTC) has also signaled its intention to refine rules on the use of AI to comply with the Children's Online Privacy Protection Act (COPPA), aiming to prevent deceptive uses of AI that could affect children's data. Many countries still lack a clear set of laws regulating how machine learning models may process user data.
Moreover, as cybersecurity solutions provider Avast points out, there is a further catch in relying on ChatGPT's output: the answers it returns are not guaranteed to be accurate.
Microsoft has reportedly invested $10 billion in OpenAI and is integrating its models into the cloud-based Azure OpenAI Service. ChatGPT has quickly become the talk of the town, with new users flocking to try it out. That popularity has heightened security concerns that a hacker could exploit a vulnerability in the service and breach the data it holds.