GenAI is everywhere, but security policies haven’t caught up
Nearly three out of four European IT and cybersecurity professionals say staff are already using generative AI at work, up ten points in a year, but just under a third of organizations have put formal policies in place, according to new ISACA research.
As AI becomes more prevalent in the workplace, regulating its use is best practice. Yet only 31% of organizations have a formal, comprehensive AI policy in place, highlighting the gap between how often AI is used and how closely it is governed.
Policies work twofold to enhance activity and protect businesses
AI is already making a positive impact: 56% of respondents say it has boosted organizational productivity, and 71% report efficiency gains and time savings. Looking ahead, 62% are optimistic that AI will positively affect their organization in the next year.
Yet that same speed and scale make the technology a magnet for bad actors. Some 63% of respondents are extremely or very concerned that generative AI could be turned against them, while 71% expect deepfakes to grow sharper and more widespread in the year ahead. Despite that, only 18% of organizations are investing in deepfake-detection tools, a significant security gap. This disconnect leaves businesses exposed at a time when AI-powered threats are evolving fast.
AI has significant promise, but without clear policies and training to mitigate risks, it becomes a potential liability. Role-specific guidelines are needed to help businesses safely harness AI’s potential.
“With the EU AI Act setting new standards for risk management and transparency, organizations need to move quickly from awareness to action,” says Chris Dimitriadis, ISACA’s Chief Global Strategy Officer. “AI threats, from misinformation to deepfakes, are advancing rapidly, yet most organizations have not invested in the tools or training to counter them. Closing this risk-action gap isn’t just about compliance, it’s critical to safeguarding innovation and maintaining trust in the digital economy.”
Education is the way to get the best from AI
But policies are only as effective as the people who understand them and can confidently put them into practice.
As AI continues to evolve, professionals need to upskill and gain new qualifications. 42% of respondents believe they will need to increase their AI skills and knowledge within the next six months to retain their job or advance their career, up 8% from just last year, and 89% of digital trust professionals recognize they will need to do so within the next two years.
Meanwhile, 32% of organizations aren't currently providing AI training to any employees. Where training is offered, it is often limited to IT staff (35%) and rarely extended to all employees (22%). That stands in stark contrast to the 81% who believe employees in their organization already use AI, whether or not the organization permits it.