How threat actors can use generative artificial intelligence



Pierluigi Paganini
December 02, 2024

Generative Artificial Intelligence (GAI) is rapidly revolutionizing various industries, including cybersecurity, by enabling the creation of realistic, personalized content at scale.

The same capabilities that make Generative Artificial Intelligence a powerful tool for progress also make it a significant threat in the cyber domain. Its use by malicious actors is becoming increasingly common, enabling a wide range of cyberattacks. From generating deepfakes to enhancing phishing campaigns, GAI is evolving into a tool for large-scale cyber offenses.

GAI has captured the attention of researchers and investors for its transformative potential across industries. Unfortunately, its misuse by malicious actors is altering the cyber threat landscape. Among the most concerning applications of Generative Artificial Intelligence are the creation of deepfakes and disinformation campaigns, which are already proving to be effective and dangerous.

Deepfakes are media content—such as videos, images, or audio—created using GAI to realistically manipulate faces, voices, or even entire events. The increasing sophistication of these technologies has made it harder than ever to distinguish real content from fake. This makes deepfakes a potent weapon for attackers engaged in disinformation campaigns, fraud, or privacy violations.

A 2019 study presented by the Massachusetts Institute of Technology (MIT) found that AI-generated deepfakes could deceive human viewers up to 60% of the time. Given the advances in AI since then, that figure has likely only grown, making deepfakes an even more significant threat. Attackers can use them to fabricate events, impersonate influential figures, or create scenarios that manipulate public opinion.

The use of Generative Artificial Intelligence in disinformation campaigns is no longer hypothetical. According to a report by the Microsoft Threat Analysis Center (MTAC), Chinese threat actors are using GAI to conduct influence operations targeting foreign countries, including the United States and Taiwan. By generating AI-driven content, such as provocative memes, videos, and audio, these actors aim to exacerbate social divisions and influence voter behavior.

For example, these campaigns leverage fake social media accounts to post questions and comments about divisive domestic issues in the U.S. The data collected through these operations can provide insights into voter demographics, potentially influencing election outcomes. Microsoft experts believe that China’s use of AI-generated content will expand to influence elections in countries like India, South Korea, and the U.S.


GAI is also a boon for attackers seeking financial gain. By automating the creation of phishing emails, malicious actors can scale their campaigns, producing highly personalized and convincing messages that are more likely to deceive victims.

An example of this misuse is the creation of fraudulent social media profiles using GAI. In 2022, the Federal Bureau of Investigation (FBI) warned of an uptick in fake profiles designed to exploit victims financially. GAI allows attackers to generate not only realistic text but also photos, videos, and audio that make these profiles appear genuine.

Additionally, platforms like FraudGPT and WormGPT, launched in mid-2023, provide tools specifically designed for phishing and business email compromise (BEC) attacks. For a monthly fee, attackers can access sophisticated services that automate the creation of fraudulent emails, increasing the efficiency of their scams.

Another area of concern is the use of GAI to develop malicious code. By automating the generation of malware variants, attackers can evade detection mechanisms employed by major anti-malware engines. This makes it easier for them to carry out large-scale attacks with minimal effort.

One of the most alarming aspects of GAI is its potential for automating complex attack processes. This includes creating tools for offensive purposes, such as malware or scripts designed to exploit vulnerabilities. GAI models can refine these tools to bypass security defenses, making attacks more sophisticated and harder to detect.

While the malicious use of GAI is still in its early stages, it is gaining traction among cybercriminals and state-sponsored actors. The increasing accessibility of GAI through “as-a-service” models will only accelerate its adoption. These services allow attackers with minimal technical expertise to execute advanced attacks, democratizing cybercrime.

The impact of GAI on disinformation campaigns is already visible. In phishing and financial fraud, tools like FraudGPT show how attackers can scale their operations. The automation of malware development is another worrying trend, as it lowers the barrier to entry for cybercrime.

Leading security companies, as well as major GAI providers like OpenAI, Google, and Microsoft, are actively working on solutions to mitigate these emerging threats. Efforts include developing robust detection mechanisms for deepfakes, enhancing anti-phishing tools, and creating safeguards to prevent the misuse of GAI platforms.

However, the rapid pace of technological advancement means that attackers often remain a step ahead. As GAI becomes more sophisticated and accessible, the challenges for defenders will only grow.

Generative Artificial Intelligence is a double-edged sword. While it offers immense opportunities for innovation and progress, it also presents significant risks when weaponized by malicious actors. The ability to create realistic and personalized content has already transformed the cyber threat landscape, enabling a new era of attacks ranging from deepfakes to large-scale phishing campaigns.

As the technology evolves, so will its misuse. It is imperative for governments, businesses, and individuals to recognize the potential dangers of GAI and take proactive measures to address them. Through collaboration and innovation, we can harness the benefits of GAI while mitigating its risks, ensuring that this powerful tool serves humanity rather than harming it.

Follow me on Twitter: @securityaffairs and Facebook and Mastodon

Pierluigi Paganini

(SecurityAffairs – hacking, generative artificial intelligence)






