OpenAI Confirms Hackers Using ChatGPT to Create Malware


OpenAI has confirmed that hackers are exploiting its ChatGPT artificial intelligence model to create malware and conduct cyberattacks.

The AI research company released a report detailing more than 20 instances since the beginning of 2024 in which threat actors attempted to use ChatGPT for malicious purposes.

The report, titled “Influence and Cyber Operations: An Update,” reveals that state-sponsored hacking groups from countries like China and Iran have been leveraging ChatGPT’s capabilities to enhance their offensive cyber operations. These activities range from debugging malware code to generating content for phishing campaigns and social media disinformation.

One notable case involved a Chinese threat actor dubbed “SweetSpecter,” which attempted to use ChatGPT for reconnaissance, vulnerability research, and malware development. The group even targeted OpenAI directly with unsuccessful spear-phishing attacks against the company’s employees.

Threat Actors Abuse OpenAI

Another significant threat came from “CyberAv3ngers,” an Iranian group associated with the Islamic Revolutionary Guard Corps. This actor utilized ChatGPT to research vulnerabilities in industrial control systems and generate scripts for potential attacks on critical infrastructure.

OpenAI also identified a third Iranian group, “STORM-0817,” which employed the AI model to develop Android malware capable of stealing sensitive user data, including contacts, call logs, and location information.

Figure: Lure content created using ChatGPT

While these findings are alarming, OpenAI emphasized that the use of ChatGPT has not led to any significant breakthroughs in malware creation or the ability to build viral audiences for influence operations.

The company stated that the observed activities are consistent with its assessment of GPT-4’s capabilities, which it does not believe have materially advanced real-world vulnerability exploitation.

Nevertheless, the report highlights growing concern about AI tools being misused for cybercrime. As generative AI becomes more sophisticated and accessible, there is a risk that it could lower the barrier to entry for less skilled hackers, potentially leading to an increase in low-level cyberattacks.

In response to these threats, OpenAI has implemented measures to disrupt malicious activities, including banning accounts associated with the identified operations. The company is also collaborating with industry partners and relevant stakeholders to share threat intelligence and improve collective cybersecurity defenses.
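Threat intelligence sharing of this kind typically relies on standardized, machine-readable formats so that partners can ingest indicators automatically. As a minimal illustrative sketch (not taken from OpenAI's report), the Python snippet below packages a hypothetical spear-phishing indicator as a STIX 2.1 object using the open-source stix2 library; the sender address and pattern are placeholders.

```python
# Minimal sketch: encoding a placeholder indicator of compromise (IOC) as a
# STIX 2.1 object for automated sharing with industry partners.
# The sender address below is hypothetical, not an IOC from OpenAI's report.
# Assumes the open-source `stix2` package: pip install stix2
from datetime import datetime, timezone

from stix2 import Indicator

indicator = Indicator(
    name="Spear-phishing sender address (placeholder)",
    description="Example of the IOC format used for threat-intelligence sharing.",
    pattern="[email-message:from_ref.value = 'attacker@example.com']",
    pattern_type="stix",
    valid_from=datetime.now(timezone.utc),
)

# Serialize to JSON, ready to publish over a sharing channel such as TAXII.
print(indicator.serialize(pretty=True))
```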

Cybersecurity experts warn that this trend is likely to continue as AI technology evolves. They emphasize the need for AI companies to develop robust safeguards and detection mechanisms to prevent the misuse of their models for malicious purposes.
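One building block for such safeguards is automated screening of incoming prompts. The sketch below, which assumes the openai Python package (v1.x) and an API key in the environment, shows how a service might flag a suspicious request using OpenAI's publicly documented Moderation endpoint; the screen_prompt helper and the block-on-any-flag policy are illustrative assumptions, not OpenAI's internal pipeline.

```python
# Illustrative sketch of automated prompt screening with OpenAI's public
# Moderation API. This is NOT OpenAI's internal abuse-detection pipeline.
# Assumes the `openai` package (v1.x) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment


def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt should be blocked before reaching a model.

    Blocking on any flagged category is a simplifying assumption; a real
    system would weigh category scores alongside account-level signals.
    """
    response = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    )
    result = response.results[0]
    if result.flagged:
        # Record which categories fired so analysts can review the attempt.
        fired = [name for name, hit in result.categories.model_dump().items() if hit]
        print(f"Blocked prompt; flagged categories: {fired}")
        return True
    return False
```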

The revelations from OpenAI serve as a wake-up call for the tech industry and policymakers to address the potential risks associated with advanced AI systems.

As AI becomes increasingly integrated into various aspects of our digital lives, striking a balance between innovation and security will be crucial to mitigate the threats posed by malicious actors exploiting these powerful tools.

OpenAI has committed to ongoing efforts to identify, prevent, and disrupt attempts to abuse its models for harmful ends. The company plans to continue sharing its findings with the broader research community and to work on strengthening its multi-layered defenses against state-linked cyber actors and covert influence operations.

As the AI landscape continues to evolve, vigilance and collaboration between AI developers, cybersecurity professionals, and government agencies will be essential in staying ahead of emerging threats and ensuring that the benefits of AI technology can be realized without compromising global security.
