CISOs: Don’t block AI, but adopt it with eyes wide open

CISOs: Don't block AI, but adopt it with eyes wide open

The introduction of generative AI (GenAI) tools like ChatGPT, Claude, and Copilot has created new opportunities for efficiency and innovation – but also new risks. For organisations already managing sensitive data, compliance obligations, and a complex threat landscape, it’s essential not to rush into adoption without thoughtful risk assessment and policy alignment.

As with any new technology, the first step should be understanding the intended and unintended uses of GenAI and evaluating both its strengths and weaknesses. This means resisting the urge to adopt AI tools simply because they’re popular. Risk should drive implementation – not the other way around.

Organisations often assume they need entirely new policies for GenAI. In most cases, this isn’t necessary. A better approach is to extend existing frameworks – like acceptable use policies, data classification schemes, and ISO 27001-aligned ISMS documentation – to address GenAI-specific scenarios. Adding layers of disconnected policies can confuse staff and lead to policy fatigue. Instead, integrate GenAI risks into the tools and procedures employees already understand.

A major blind spot is input security. Many people focus on whether AI-generated output is factually accurate or biased but overlook the more immediate risk: what staff are inputting into public LLMs. Prompts often include sensitive details – internal project names, client data, financial metrics, even credentials. If an employee wouldn’t send this information to an external contractor, they shouldn’t be feeding it to a publicly-hosted AI system.

It’s also crucial to distinguish between different types of AI. Not all risks are created equal. The risks of using facial recognition in surveillance are different from giving a developer team access to an open-source GenAI model. Lumping these together under a single AI policy oversimplifies the risk landscape and may result in unnecessary controls – or worse, blind spots.

There are five core risks that cyber security teams should address:

Inadvertent data leakage: Through use of public GenAI tools or misconfigured internal systems.

Data poisoning: Malicious inputs that influence AI models or internal decisions.

Overtrust in AI output: Especially when staff can’t verify accuracy.

Prompt injection and social engineering: Exploiting AI systems to exfiltrate data or manipulate users.

Policy vacuum: Where AI use is happening informally without oversight or escalation paths.

Addressing these risks isn’t just a matter of technology. It requires a focus on people. Education is essential. Staff must understand what GenAI is, how it works, and where it’s likely to go wrong. Role-specific training – for developers, HR teams, marketing staff – can significantly reduce misuse and build a culture of critical thinking.

Policies must also outline acceptable use clearly. For example, is it okay to use ChatGPT for coding help, but not to write client communications? Can AI be used to summarise board minutes, or is that off-limits? Clear boundaries paired with feedback loops – where users can flag issues or get clarification – are key to ongoing safety.

Finally, GenAI use must be grounded in cyber strategy. It’s easy to get swept up in AI hype, but leaders should start with the problem they’re solving – not the tool. If AI makes sense as part of that solution, it can be integrated safely and responsibly into existing frameworks.

The goal isn’t to block AI. It’s to adopt it with eyes open – through structured risk assessment, policy integration, user education, and continuous improvement.


Source link