Today, new research by ISACA has revealed that, despite nearly three quarters (73%) of European organisations reporting that their staff use AI at work, only 30% provide limited training to employees in tech-related positions, while 40% offer no training at all.
However, issues around AI aren’t limited to the absence of training in the workplace. Policy on how AI should and shouldn’t be used is also lacking, with only 17% of organisations having a formal, comprehensive AI policy in place.
Aside from a lack of training and policies, business and IT professionals are reporting a gap in education around AI. When asked how familiar they were with AI, almost three quarters (74%) of respondents said they were only somewhat familiar or not very familiar at all. Concerns around the use of AI don’t stop there: when asked about generative AI being exploited by bad actors, 61% of respondents were extremely or very worried that this might happen. Additionally, earlier this year, research by Keeper Security revealed AI-powered attacks as the top emerging attack vector witnessed by organisations, with over 50% of business leaders concerned about the proliferation and impact of such attacks.
AI presents a double-edged sword for cybersecurity. While it can analyse data for threats far faster than humans, cybersecurity experts have outlined several security concerns. One is AI-powered attackers: malicious actors could train AI to craft malware that bypasses current security. AI is also susceptible to “data poisoning”, where attackers feed it bad data to manipulate its decisions. For instance, an AI filtering financial transactions could be tricked into approving fraudulent ones.
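To make the data-poisoning scenario concrete, here is a minimal, entirely hypothetical sketch (not any real fraud system): a naive filter learns a flagging threshold from labelled transaction amounts, and an attacker shifts that threshold by injecting mislabelled training examples, so a fraudulent transaction slips through.

```python
# Hypothetical illustration of "data poisoning": a naive fraud filter learns a
# cutoff from labelled transaction amounts, and mislabelled training data
# injected by an attacker moves that cutoff so fraud goes undetected.

def learn_threshold(transactions):
    """Learn a cutoff: flag amounts above the midpoint between the
    average legitimate amount and the average fraudulent amount."""
    legit = [amt for amt, label in transactions if label == "legit"]
    fraud = [amt for amt, label in transactions if label == "fraud"]
    return (sum(legit) / len(legit) + sum(fraud) / len(fraud)) / 2

clean_data = [(20, "legit"), (35, "legit"), (50, "legit"),
              (900, "fraud"), (1100, "fraud")]

# Poisoning step: the attacker slips large amounts into the training
# set mislabelled as legitimate, dragging the learned cutoff upward.
poisoned_data = clean_data + [(5000, "legit"), (6000, "legit")]

clean_cutoff = learn_threshold(clean_data)        # 517.5
poisoned_cutoff = learn_threshold(poisoned_data)  # 1610.5

suspicious_amount = 800
print(suspicious_amount > clean_cutoff)     # True  -> flagged by clean model
print(suspicious_amount > poisoned_cutoff)  # False -> slips past poisoned model
```

Real models are far more complex than a single threshold, but the failure mode is the same: if attackers can influence the training data, they can influence the decisions.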
AI’s dependence on large datasets raises privacy issues. Breaches of this data could expose sensitive information. Mitigating these risks requires secure data storage and robust AI development practices. Human oversight remains crucial to ensure AI is used ethically and effectively.
Chris Dimitriadis, Chief Global Strategy Officer at ISACA, said: “AI is going to continue to come to the fore, helping to shape the way that IT and cybersecurity industries transform and innovate. AI is being used twofold – bad actors are weaponising it to develop more sophisticated cyberattacks and in response it is being used by cyber professionals to better detect and respond to those threats. If businesses are to see the benefits of using AI, they need to have the right skills in place in order to be able to identify new threat models, risks and controls.”
Upskilling and training are in high demand, with 34% of respondents believing they will need to increase their skills and knowledge of AI in the next six months, and just over a quarter (27%) stating they will need to do so in the next seven months to a year, to retain their job or advance their career. In total, an overwhelming 86% of respondents feel that this training will be necessary within the next two years.
Dimitriadis adds: “As cyber criminals use AI to carry out increasingly sophisticated and targeted attacks, it’s more important than ever for cyber professionals to have formal training and clear company policies on AI to follow. Businesses can upskill their teams and keep pace with the evolving threat that the rise of AI poses, protecting their reputation and strengthening customer trust.”