Why shadow AI could be your biggest security blind spot

From unintentional data leakage to buggy code, here’s why you should care about unsanctioned AI use in your company

Shadow IT has long been a thorn in the side of corporate security teams. After all, you can’t manage or protect what you can’t see. But things could be about to get a lot worse. The scale, reach and power of artificial intelligence (AI) should make shadow AI a concern for any IT or security leader.

Cyber risk thrives in the dark spaces between acceptable use policies. If you haven’t already, it may be time to shine a light on what could be your biggest security blind spot.

What’s shadow AI and why now?

AI tools have been part of corporate IT for quite a while now. They’ve been helping security teams detect unusual activity and filter out threats like spam since the early 2000s. But this time it’s different. Since the breakout success of OpenAI’s ChatGPT, which garnered an estimated 100 million users within two months of its late-2022 launch, employees have been wowed by the potential of generative AI to make their lives easier. Unfortunately, many employers have been slower to get on board.

That’s created a vacuum that frustrated users have been only too keen to fill. Although it’s impossible to accurately measure a trend that, by its very nature, exists in the shadows, Microsoft reckons 78% of AI users now bring their own tools to work. It’s no coincidence that 60% of IT leaders are concerned that senior executives lack a plan to implement the tech officially.

Popular chatbots like ChatGPT, Gemini and Claude are easy to access in a browser or download onto a BYOD handset or home-working laptop. They offer employees the tantalizing prospect of cutting workload, easing deadlines and freeing them up to work on higher-value tasks.

Beyond public AI models

Standalone apps like ChatGPT are a big part of the shadow AI challenge, but they don’t represent the full extent of the problem. The technology can also sneak into the enterprise via browser extensions, or via features in legitimate business software that users switch on without IT’s knowledge.

Then there is agentic AI: the next wave of AI innovation centered around autonomous agents, designed to work independently to complete specific tasks set for them by humans. Without the right guardrails in place, they could potentially access sensitive data stores, and execute unauthorized or malicious actions. By the time anyone realizes, it may be too late.

What are the risks of shadow AI?

All of this raises major security and compliance risks for organizations. Consider first the unsanctioned use of public AI models. With every prompt, the risk is that employees share sensitive and/or regulated data: meeting notes, intellectual property, source code, or customer and employee personally identifiable information (PII). Depending on the service and its settings, whatever goes in may be used to train the model, and could therefore be regurgitated to other users in the future. It’s also stored on third-party servers, potentially in jurisdictions that don’t have the same security and privacy standards as yours.

This will not sit well with regulators enforcing data protection laws such as the GDPR and CCPA. It also further exposes the organization, since employees of the chatbot provider could potentially view your sensitive information. The data could also be leaked or breached by that provider, as happened to Chinese provider DeepSeek.

Chatbots may contain software vulnerabilities and/or backdoors that expose the organization unwittingly to targeted threats. And any employee willing to download a chatbot for work purposes may accidentally install a malicious version, designed to steal secrets from their machine. There are plenty of fake GenAI tools out there designed explicitly for this purpose.

The risks extend beyond data exposure. Unsanctioned use of tools to code, for example, could introduce exploitable bugs into customer-facing products, if output is not properly vetted. Even the use of AI-powered analytics tools may be risky if models have been trained on biased or low-quality data, leading to flawed decision making.

AI agents may also introduce fake content or buggy code, or take unauthorized actions without their human operators even knowing. The accounts such agents need to operate may also become a popular target for hijacking if their digital identities aren’t securely managed.

Some of these risks are still theoretical; others are not. IBM already claims that 20% of organizations suffered a breach last year due to security incidents involving shadow AI. For those with high levels of shadow AI, it calculates, that could add as much as US$670,000 to the average cost of a breach. Breaches linked to shadow AI can wreak significant financial and reputational damage, including compliance fines. But business decisions made on faulty or corrupted outputs may be just as damaging, if not more so, especially as they’re likely to go unnoticed.

Shining a light on shadow AI

Whatever you do to tackle these risks, adding each new shadow AI tool you find to a “deny list” won’t cut it. You need to acknowledge these technologies are being used, understand how extensively and for what purposes, and then create a realistic acceptable use policy. This should go hand in hand with in-house testing and due diligence on AI vendors, to understand where security and compliance risks exist in certain tools.

No two organizations are the same. So build your policies around your corporate risk appetite. Where certain tools are banned, try to have alternatives that users could be persuaded to migrate to. And create a seamless process for employees to request access to new ones you haven’t discovered yet.

Combine this with end-user education. Let staff know what they may be risking by using shadow AI: serious data breaches can result in corporate inertia, stalled digital transformation and even job losses. And consider network monitoring and security tools to mitigate data leakage risks and improve visibility into AI use.
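As a starting point for that visibility, even a simple pass over proxy or secure web gateway logs can show who is reaching public GenAI services. The sketch below illustrates the idea in Python; the log format (a CSV with timestamp, user and host columns) and the domain list are assumptions for illustration only, and a real deployment would lean on DLP, CASB or secure web gateway tooling rather than an ad hoc script.

```python
# Minimal sketch: flag outbound requests to well-known public GenAI services
# in an exported proxy log. The CSV layout ("timestamp,user,host") and the
# domain list below are illustrative assumptions - adapt both to whatever
# your proxy or firewall actually exports.

import csv
from collections import Counter

# Example destinations associated with popular public AI tools (not exhaustive).
GENAI_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
    "api.anthropic.com",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests per (user, host) pair to known GenAI endpoints."""
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # expects timestamp,user,host columns
            host = row.get("host", "").lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    # "proxy_log.csv" is a placeholder path for an exported log file.
    for (user, host), count in scan_proxy_log("proxy_log.csv").most_common(20):
        print(f"{user:<20} {host:<25} {count} requests")
```

Even a rough report like this can inform the conversation about which tools people actually use and which sanctioned alternatives to offer, rather than serving as a blunt blocking mechanism.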

Cybersecurity has always been a balance between mitigating risk and supporting productivity. And overcoming the shadow AI challenge is no different. A big part of your job is to keep the organization secure and compliant. But it’s also to support business growth. And for many organizations, that growth in the coming years will be powered by AI.


