Will AI-Generated Cyberattacks Surge In The Future?


by Emily Newton

AI’s popularity has skyrocketed in recent years. Generative tools like ChatGPT have captivated individuals and businesses alike, but as the initial hype has faded, darker implications have emerged. Now that AI is more versatile and accessible than ever, will it create a wave of AI-generated cybercrime?

AI is a powerful tool for anyone who uses it, including cybercriminals. As this technology advances, security professionals may need to consider how to defend against AI-driven attacks.

How Cybercriminals Can Use AI

Quantifying the threat of AI-generated cyberattacks starts with understanding how cybercriminals may use this technology. Like legitimate applications of intelligent systems, AI-driven cybercrime takes many forms.

Optimized Phishing

Phishing is one of the most prominent use cases for AI in cybercrime, partly because phishing remains the most common form of cybercrime today. These attacks can be remarkably effective without any help from AI, but generative models can unlock their full destructive potential.

Generative AI can craft convincing, personalized phishing messages the same way it writes blog posts and marketing copy. As a result, social engineering attempts may lack the traditional telltale signs, such as misspellings and poor grammar. AI's speed also lets criminals produce these messages at far higher volumes.

Research has already verified the efficacy of AI-generated phishing. A test presented at the Black Hat security conference found that users were far more likely to fall for AI-produced phishing attempts than human-written ones.

Building New Malware Strains

Generative tools like ChatGPT can write more than just natural language. They can also produce code, opening the door to AI-generated malware strains.

Writing new code is a time-consuming, error-prone process, but AI can streamline it significantly. That’s great news for developers, but these benefits extend to malicious code, too. Cybercriminals can use generative AI to automate the writing or checking process when developing new malware strains.

Since AI works so quickly, the malware it helps create may prove more threatening than its purely human-programmed counterparts. Cybercriminals could use it to develop and deploy new, detection-resistant strains before security researchers have time to adapt. Zero-day exploits are already a widely recognized security concern, and by streamlining malware development, AI could accelerate how quickly new ones emerge.

Vulnerability Detection

Similarly, cybercriminals can use AI to find new attack vectors. As cybersecurity has become a more prevalent concern, more businesses have implemented extensive protections. Over half of all organizations have a zero-trust framework in place, but no defense is perfect, and AI can help criminals find the gaps in these barriers.

AI can scan business networks and IT infrastructure to find potential vulnerabilities. Today’s complex cyber defenses leave fewer of these opportunities and make them harder to find, but AI can identify them faster and more accurately than people.

Thanks to AI's speed and accuracy, these vulnerability scans can shorten attack timelines, even against a well-protected business. As more of these models become available as off-the-shelf tools on the dark web, they will also lower the barrier to entry for advanced attacks.

Deepfakes

While many of these risks involve using AI to exacerbate existing threats, AI can create entirely new ones, too. Deepfakes — AI-generated media resembling real-world video, audio or image content — could pose security challenges if businesses are not prepared.

Criminals could use deepfakes to create videos or audio messages that look and sound exactly like real, trusted parties. They might impersonate a company's CEO to direct employees to send sensitive information to an attacker-controlled email address, or simply to sow distrust throughout an organization.

Business email compromise is already one of the costliest cybercrimes, and deepfakes could make this type of fraud easier than ever.

Deepfakes could also bypass biometric security or conceal cybercriminals' identities. Because these threats are so new, protections against them remain sparse.

AI-Generated Cyberattacks Today and Tomorrow

These cases are more than theoretical threats. Cybercriminals are already starting to use AI to mount more sophisticated and effective attacks against businesses and individuals alike. Security firm Zscaler has already witnessed deepfake attacks and says AI helped drive the 47% rise in phishing attacks the company observed in 2022.

As AI becomes more versatile and accessible, its role in cybercrime will undoubtedly grow. Cybercriminals started using generative models like ChatGPT almost as soon as the technology became available, and AI is only becoming more powerful. The potential returns for cybercriminals are too great for them to pass up.

If current trends hold, AI-generated cybercrime could become the norm within a matter of months. Even if the shift doesn't happen that quickly, AI will redefine cybercrime over the next few years. The change is already underway.

Defending Against AI-Generated Attacks

As AI-generated cyberattacks grow, the cybersecurity industry must adapt. Specific approaches and protections may vary between organizations, but here are some general steps that shift should include.

Fight Fire With Fire

The best protection against AI attacks is AI itself. Thankfully, cybersecurity is already a leading use case for the technology, with 51% of AI adopters using it for security. That trend must grow as businesses try to keep pace with AI-driven advances in cybercrime.

Security teams can use AI vulnerability detection to find holes in their defenses before criminals do. Similarly, some models can detect deepfakes and other AI-generated content to help protect employees from AI-powered fraud and social engineering. This technology’s speed and accuracy make cybercrime more dangerous, but security professionals can reap the same benefits.
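
To make the defensive side concrete, here is a minimal, purely illustrative Python sketch of the kind of signal-scoring a mail filter might layer beneath an AI classifier. Nothing here comes from a specific product; the phrases, domains and function names are hypothetical, and a real detector would rely on trained models and mail-gateway telemetry rather than hand-written rules.

```python
import re

# Hypothetical illustration: a simple rule-based scorer that flags
# messages deserving extra scrutiny. A real deployment would combine
# signals like these with trained models and gateway telemetry.

URGENCY_PHRASES = ("act now", "urgent", "immediately", "verify your account")

def phishing_risk_score(sender: str, subject: str, body: str) -> int:
    """Return a rough 0-3 risk score; higher means more suspicious."""
    score = 0
    text = f"{subject} {body}".lower()

    # Signal 1: urgency language common in social engineering.
    if any(phrase in text for phrase in URGENCY_PHRASES):
        score += 1

    # Signal 2: the message pushes a login link.
    if re.search(r"https?://\S+", body) and "login" in text:
        score += 1

    # Signal 3: a free-mail sender invoking executive authority.
    if sender.lower().endswith(("@gmail.com", "@outlook.com")) and "ceo" in text:
        score += 1

    return score

if __name__ == "__main__":
    demo = phishing_risk_score(
        sender="ceo.office@gmail.com",
        subject="Urgent: the CEO needs you to verify your account",
        body="Act now: http://portal.example.com/login before payroll closes.",
    )
    print(f"Risk score: {demo}/3")  # 3 for this crafted example
```

The point of the sketch is the layering: cheap, explainable heuristics can triage mail so that slower, model-based checks only run on what matters.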

Increased Scrutiny

As AI heightens the risks of phishing and other forms of social engineering, employees should become increasingly skeptical about unusual communications. Stricter policies around acceptable actions — even if an authority figure seemingly asks for them — may be necessary.

Zero-trust principles are essential to this increased scrutiny. All employees should also receive training on how to spot the signs of AI-generated content and why it's important to verify requests before acting on them.
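
As a rough illustration of what such a policy could look like in code, the hypothetical Python sketch below gates high-risk actions on out-of-band confirmation, regardless of who appears to be asking. The action names and fields are invented for the example.

```python
from dataclasses import dataclass

# Hypothetical sketch of a zero-trust style policy gate: high-risk
# actions require out-of-band confirmation no matter who asks.

HIGH_RISK_ACTIONS = {"wire_transfer", "share_credentials", "export_customer_data"}

@dataclass
class Request:
    action: str                  # what the message asks the employee to do
    requester: str               # apparent sender, e.g. "ceo@company.example"
    confirmed_out_of_band: bool  # verified via a known phone number or channel?

def is_permitted(req: Request) -> bool:
    """Never trust apparent seniority alone; risky actions need a second channel."""
    if req.action in HIGH_RISK_ACTIONS:
        return req.confirmed_out_of_band
    return True

# A convincing deepfake voicemail "from the CEO" still fails this gate
# until the employee confirms through a known channel.
print(is_permitted(Request("wire_transfer", "ceo@company.example", False)))  # False
print(is_permitted(Request("wire_transfer", "ceo@company.example", True)))   # True
```

The design choice worth noting is that the check depends on the action, not the requester: a perfect impersonation gains nothing if the policy never trusts identity alone.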

Adapt to New Norms

Finally, businesses must recognize that AI’s growth will accelerate the rate of change in cybercrime. Adapting to new criminal trends is already a critical part of thorough security, but these trends will change faster with AI.

According to a 2022 report, 42% of organizations penetration test only once every one to two years, but that may have to change. More frequent testing may be necessary to stay on top of rapidly evolving, AI-driven threats.
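
One way teams move toward that cadence is by automating recurring scans rather than treating testing as an annual event. The sketch below is a hypothetical example using Python's third-party schedule package and the nmap scanner against a placeholder internal host; actual tooling, targets and frequency would be organization-specific.

```python
import subprocess
import time

import schedule  # third-party: pip install schedule

def run_scan() -> None:
    # -sV probes service versions on the target; the host below is a
    # placeholder, and the output still needs human review.
    result = subprocess.run(
        ["nmap", "-sV", "scanme.example.internal"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)

# Scan weekly instead of once every year or two.
schedule.every().monday.at("02:00").do(run_scan)

while True:
    schedule.run_pending()
    time.sleep(60)
```

Automated scans are no substitute for full penetration tests, but they narrow the window between a new AI-assisted exploit appearing and a defender noticing the exposed service.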

Outside of these tests, security professionals should closely monitor the overall cybercrime landscape to identify threats facing similar businesses that they may need to account for.

AI Will Redefine Both Cybercrime and Cybersecurity

AI is a revolutionary technology, but that power is available to attackers and defenders alike. Just as this technology is changing the way businesses operate, it's opening new opportunities for criminals.

AI has already become a common part of cybercrime. Security teams must adapt to this trend and implement AI themselves to stay safe.

About the author
Emily Newton is a seasoned tech and industrial writer who explores the impact of technology in different industries. She has over six years of experience providing insights on innovative technologies.




