Japanese cybersecurity experts have found that ChatGPT can be made to write malware code by entering a prompt that convinces the AI it is in developer mode.
OpenAI launched ChatGPT as a prototype in November 2022. It is driven by a machine learning model trained to respond like a human.
However, it was designed to refuse certain requests, including those involving adult content or malicious activities.
Since its release, cybercriminals have studied its responses and tried to manipulate it for criminal purposes. The full extent of the risk it poses remains difficult to anticipate.
Takashi Yoshikawa, an analyst at Mitsui Bussan Secure Directions, said, “It is a threat (to society) that a virus can be created in a matter of minutes while conversing purely in Japanese. I want AI developers to place importance on measures to prevent misuse.”
G7 ministers plan a two-day meeting at the end of this month in Takasaki, Gunma Prefecture, to discuss generative AI governance and further research.
Yokosuka, Kanagawa Prefecture, is reported to be the first local government to adopt ChatGPT on a trial basis.
As per JP Times reports, in this experiment ChatGPT was tricked into believing it was in developer mode and then prompted to write ransomware code that encrypts data and demands payment from the victim.
ChatGPT completed the code in just a few minutes, and the result was verified by attacking an experimental PC.
This raises questions about how far cybercriminals could push the tool for malicious activities; researchers have yet to explore its full potential.
In light of this discovery, it is essential to probe the limits of its potential for misuse.
It also raises the possibility, as yet unconfirmed, that cybercriminals are already aware of this technique and have been using it for malicious purposes.