Data Loss, Monetary Damage, and Reputational Harm: How Unsanctioned AI Hurts Companies and 6 Mitigation Strategies


The emergence of AI represents a workplace revolution, transforming virtually every industry and reshaping the daily experiences and responsibilities of employees. However, like all new technologies, it carries risks. One of the most significant is the unauthorized use of AI tools within corporate environments, which can compromise protected data, invite regulatory scrutiny, or lead to legal challenges. The use of AI-enabled tools that have not been approved and validated as safe is an increasing issue commonly referred to as Shadow AI.

Most organizations recognize that their teams are utilizing mainstream Gen AI tools like ChatGPT, Claude, and Copilot, and have started programs to govern their use and mitigate risk. Shadow AI presents a much more complex issue to tackle. An average mid-sized company leverages around 150 SaaS applications. Approximately 35% of those applications are AI-enabled, equating to about 50 AI-enabled applications in use. Identifying those 150 SaaS applications, determining which are AI-enabled, and establishing processes to ensure their risks are effectively managed poses a challenge for many organizations. This Shadow AI problem is expected to worsen, as the percentage of AI-enabled SaaS applications is estimated to double in the coming year.
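The exposure math above can be sketched in a few lines. This is a back-of-the-envelope estimate using only the figures cited in this article (150 apps, roughly 35% AI-enabled, with that share projected to double); the variable names are illustrative, not data from any real inventory:

```python
# Rough estimate of AI-enabled SaaS exposure for a mid-sized company,
# based on the article's cited figures (assumptions, not measured data).
saas_apps = 150
ai_enabled_share = 0.35

current_ai_apps = round(saas_apps * ai_enabled_share)    # roughly 50 today
projected_share = min(ai_enabled_share * 2, 1.0)         # share expected to double
projected_ai_apps = round(saas_apps * projected_share)   # roughly 100+ next year

print(f"AI-enabled apps today: ~{current_ai_apps}")
print(f"Projected next year:  ~{projected_ai_apps}")
```

Even under these conservative assumptions, the number of applications that need AI-specific vetting roughly doubles in a year, which is why an inventory-first approach matters.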

Shadow AI By the Numbers

In May 2025, Cisco reported that up to 60% of organizations lack confidence in their ability to detect the use of Shadow AI. An April 2025 report in SecurityWeek indicates that as many as 50% of employees were already utilizing unapproved AI tools as early as October 2024. Given the pace of AI adoption, that figure is almost certainly higher today.

Understanding the Risks

An alarming report from TechNewsWorld.com reveals that over 73% of work-related ChatGPT queries were processed through non-corporate accounts. Thus, even organizations that have implemented controls to minimize risk (e.g., a centrally managed and optimally configured corporate account) may still be exposed to bias, breaches of regulations such as GDPR or HIPAA, and the potential misuse of sensitive information to train the LLM for future responses.

Shadow AI magnifies these risks, as your organization and users are often not even aware that they are using AI. I recently worked with an attorney who was unaware that the Grammarly app he uses to proofread highly confidential documents is AI-enabled. What would the impact be on his firm if they determined that Grammarly does not have the proper controls to keep that data from being exposed?

Costs of Shadow AI

Shadow AI can lead to the loss of sensitive data, cause reputational harm, and result in monetary damage.

Fortunately, the strategies for securing AI are relatively simple; unfortunately, executing on them isn't.

6 Strategies to Reduce Shadow AI Risk

  • Generate an AI Inventory: You can’t manage Shadow AI use that you aren’t aware of. Use tools already available in your environment (e.g., Microsoft Defender, a CASB, etc.) or purchase tools specifically designed to identify Shadow AI (e.g., BetterCloud, Torii).
  • Establish an AI Acceptable Use Policy: Communicate how employees, contractors, and third parties may safely and responsibly use artificial intelligence tools and technologies within an organization. Well-conceived policies can prevent misuse, ensure compliance, and support ethical, secure, and legally sound use of AI systems. Keep your policies updated to prevent confusion and keep pace with developments in the AI industry.
  • Build an AI Intake/Approval Process: Ensure everyone understands that you have established a process for assessing and approving AI use cases and tools.
  • Update your Third-Party Risk Management Program: Ensure your Third-Party Risk Management (TPRM) strategy reflects the AI era. When assessing vendors, determine whether the engagement involves an AI use case and, if so, whether the vendor has an AI Governance program that minimizes your risk. Ideally, they are ISO 42001 certified; at a minimum, they should be demonstrably compliant with the NIST AI Risk Management Framework.
  • Provide a Secure AI Platform: Provide your employees with a safe, secure, and internally governed AI platform as an alternative to unsanctioned AI, limiting the risks associated with mainstream Gen AI tools like ChatGPT, Claude, and Gemini.
  • Training and Education: Provide employees with the training needed to recognize and manage the risks associated with using AI.

Leverage AI Frameworks and Regulations

As you develop your AI governance programs, leverage best practices and stay aware of evolving laws and regulations related to AI. Designing policies that comply with current regulations will help ensure your AI strategy aligns with the latest industry standards and best practices. Utilize these ongoing regulatory frameworks and standards to shape effective AI policies:

  • NIST AI RMF and ISO 42001: Use these frameworks as the basis of your AI Governance program.
  • EU AI Act: Understand how the EU enforces trustworthy and transparent AI.
  • NYC Local Law 144 and Colorado AI Act: Emerging laws that hold an organization using a third-party AI application responsible if the system exhibits bias, unless the organization has performed an independent assessment before deployment and at least annually thereafter.

Conclusion

As organizations race to harness the benefits of artificial intelligence, Shadow AI presents a growing and often invisible risk that can undermine security, compliance, and operational integrity. By proactively implementing the strategies outlined in this article, businesses can shift from reactive containment to thoughtful governance. The goal isn’t to block innovation, but to guide it, ensuring that AI is adopted responsibly, transparently, and in alignment with organizational values and regulatory expectations. Now is the time to shine a light on Shadow AI and build the framework needed to manage it.

About the Author

John Verry is the Managing Director of CBIZ with over 25 years of experience. As a leading voice in risk management and information security frameworks, John has helped numerous organizations design and implement ISO and NIST-based security programs, ensuring robust and resilient business operations. His expertise spans cybersecurity, privacy, and AI governance. John Verry can be reached at our company website https://www.cbiz.com/contact.


