ChatGPT Successfully Built Malware But Failed To Analyze It


Researchers fed malware samples of varying complexity to ChatGPT to see whether it could explain the purpose and structure of the code, and the results were surprising.

ChatGPT is an emerging AI technology created by OpenAI, and several reports state that it has strong skills in creating custom malware families such as ransomware, backdoors, and hacking tools.

But it doesn’t stop there: hackers have also tried to get assistance from ChatGPT in designing features for a dark web marketplace similar to Silk Road or AlphaBay.

Since there have been several discussions on the internet about how effectively ChatGPT performs malware development and analysis, researchers from ANY.RUN submitted different types of malware code samples to ChatGPT to find out how deeply it can analyze them.


Writing code is one of ChatGPT’s strongest skills, especially mutating existing code, but on the other side, threat actors can easily abuse that ability to develop polymorphic malware.

Testing ChatGPT’s Ability to Analyze Malware Code

To find out how efficiently it analyzes malware code, malware samples of different complexity were submitted to ChatGPT.

Initially, the researchers submitted a simple malicious code snippet for analysis: code that hides drives from the Windows Explorer interface.

Submitted code
ChatGPT results

In this first test, ChatGPT provided a fair result: the AI understood the exact purpose of the code and highlighted both its malicious intent and the logic behind it.
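The report does not reproduce the snippet itself, but code that hides drives from Explorer typically works by setting the NoDrives policy value in the Windows registry. A minimal illustrative sketch in Python follows; the registry mechanism is an assumption about how such a snippet usually works, not the actual sample:

```python
import winreg  # Windows-only standard library module

# NoDrives is a bitmask: bit 0 = A:, bit 1 = B:, ... bit 25 = Z:.
# A set bit hides that drive letter from the Explorer UI; the drive
# itself stays accessible from the command line.
EXPLORER_POLICY = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"

def hide_drive(letter: str) -> None:
    """Hide a single drive letter from Windows Explorer."""
    mask = 1 << (ord(letter.upper()) - ord("A"))
    key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, EXPLORER_POLICY)
    try:
        # A real sample would likely OR the bit into any existing value;
        # this sketch simply overwrites it for brevity.
        winreg.SetValueEx(key, "NoDrives", 0, winreg.REG_DWORD, mask)
    finally:
        winreg.CloseKey(key)

# Example: hide_drive("D")  # takes effect once Explorer restarts
```

Given something this short and well documented, it is unsurprising that the model could describe both the mechanism and the intent.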

Next, a more complex piece of ransomware code was submitted to test ChatGPT’s performance.

In the following results, ChatGPT correctly identified the code’s function; the researchers were, in fact, dealing with a fake ransomware sample.

Sample 2: ChatGPT results

Attackers do not deal with simple code in real-life situations, so the researchers finally submitted high-complexity code.

According to the ANY.RUN report, “So for the next couple of tests, we ramped up the complexity and provided it with code that is closer to what you can expect to be asked to analyze on the job.”

This final analysis began with submitting a large piece of code, and the AI immediately threw an error; the researchers then tried different methods, but the answers were still not what they expected.
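The error is consistent with ChatGPT’s input size (token) limit. One obvious workaround, not covered in the report, is to split a large sample into smaller pieces and analyze them one at a time; a hedged sketch of such a chunking helper:

```python
def chunk_source(source: str, max_chars: int = 8000) -> list[str]:
    """Split source code into chunks small enough for a model's
    context window, breaking on line boundaries where possible."""
    chunks: list[str] = []
    current: list[str] = []
    size = 0
    for line in source.splitlines(keepends=True):
        # Start a new chunk once adding this line would exceed the budget.
        if size + len(line) > max_chars and current:
            chunks.append("".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        chunks.append("".join(current))
    return chunks
```

Chunking keeps each request under the limit, though splitting a program can cost the model the cross-chunk context it needs to explain the code as a whole.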

Next, an obfuscated script was submitted to test whether ChatGPT could deobfuscate it.

In this test, the researchers expected ChatGPT to deobfuscate the script, but it only responded that the code was not human-readable, which was already known, so the answer contained no value, the ANY.RUN researchers said.
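The report does not show the obfuscated script, but obfuscation of this sort is often just layered encoding. Below is a hypothetical example, using base64 as a stand-in, of the kind of transformation an analyst would expect the AI to undo rather than simply declare unreadable:

```python
import base64

# A hypothetical obfuscated one-liner of the sort ChatGPT was asked
# to explain: the payload is hidden behind a base64 layer.
obfuscated = "cHJpbnQoJ3BheWxvYWQgd291bGQgcnVuIGhlcmUnKQ=="

# The useful answer is the decoded source, not a restatement that
# the string is unreadable.
decoded = base64.b64decode(obfuscated).decode("utf-8")
print(decoded)  # -> print('payload would run here')
```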

“As long as you provide ChatGPT with simple samples, it is able to explain them in a relatively useful way. But as soon as we’re getting closer to real-world scenarios, the AI just breaks down.”



