GenAI is a powerful tool that security teams can use to protect organizations. However, it can also be wielded by malicious actors, making phishing-related attacks a growing and concerning threat vector, according to Ivanti.
Ivanti’s research revealed that when survey participants were asked which threats are increasing in severity due to GenAI, phishing was the top response (45%). Although training is a crucial part of a multi-layered cyber defense, many organizations have not adapted their training strategies to address AI-powered threats.
In fact, 57% of organizations say they use anti-phishing training to protect themselves from sophisticated social engineering attacks, but only 32% believe that such training is “very effective.”
GenAI boosts phishing threats
Attackers are now using GenAI to craft highly believable content to lure victims — all at high scale and low cost. This threat vector will become even more powerful as attackers further personalize their phishing messages based on data found in the public domain.
“As GenAI continues to evolve, so must the understanding of its implications for cybersecurity,” said Robert Grazioli, CIO at Ivanti. “Undoubtedly, GenAI equips cybersecurity professionals with powerful tools, but it also provides attackers with advanced capabilities. To counter this, new strategies are needed to prevent malicious AI from becoming a dominant threat.”
GenAI has the potential to help security teams enhance threat detection, improve predictive capabilities and facilitate real-time responses to emerging threats. To deliver on its immense promise, GenAI requires real-time, highly accessible data, yet 72% of respondents report that their IT and security data remain isolated in silos.
Although GenAI gives tremendous power to threat actors, a notable 90% of respondents believe that GenAI benefits security teams as much as, if not more than, it benefits threat actors. But curiously, security professionals are much more likely — 6x more likely, in fact — to say AI tools will primarily benefit employers, not employees.
Security professionals doubt AI benefits for their roles
Ivanti’s research shows that 1 in 3 security professionals cite a lack of skills and talent as a major challenge. To bring employees along, companies must invest in upskilling their cybersecurity teams, using strategies such as interactive learning opportunities and attack simulations. And given the rapid evolution of AI tools, training must be ongoing and continuously updated. To keep employees engaged, encourage self-directed learning about AI security trends in addition to company-offered training.
Ivanti surveyed over 14,500 executives, IT and security professionals, and office workers to understand how organizations manage AI in cybersecurity and the processes, technology and talent needed to strengthen defenses.