Employees are quietly bringing AI to work and leaving security behind
While IT departments race to implement AI governance frameworks, many employees have already opened a backdoor for AI, according to ManageEngine.
The rise of unauthorized AI use
Shadow AI has quietly infiltrated organizations across North America, creating blind spots that even the most careful IT leaders struggle to detect.
Despite formal guidelines and sanctioned tools, shadow AI has become the norm rather than the exception: 70% of IT decision makers (ITDMs) have identified unauthorized AI use within their organizations.
60% of employees are using unapproved AI tools more than they were a year ago, and 93% of employees admit to inputting information into AI tools without approval. 63% of ITDMs see data leakage or exposure as the primary risk of shadow AI. Conversely, 91% of employees think shadow AI poses no risk, not much risk, or some risk that’s outweighed by reward.
Summarizing notes or calls (55%), brainstorming (55%), and analyzing data or reports (47%) are the top tasks employees complete with shadow AI. GenAI text tools (73%), AI writing tools (60%), and code assistants (59%) are the top AI tools ITDMs have approved for employee use.
“Shadow AI represents both the greatest governance risk and the biggest strategic opportunity in the enterprise,” said Ramprakash Ramamoorthy, director of AI research at ManageEngine. “Organizations that will thrive are those that address the security threats and reframe shadow AI as a strategic indicator of genuine business needs. IT leaders must shift from playing defense to proactively building transparent, collaborative, and secure AI ecosystems that employees feel empowered to use.”
Identifying the shadow AI gaps
To turn the use of shadow AI from a liability into a strategic advantage, IT leaders need to close the gaps in education, visibility, and governance revealed by the report. Specifically, a lack of education around AI model training, safe user behavior, and organizational impact is driving systematic misuse.
Blind spots continue to grow in organizations, even as IT teams move to approve and integrate AI tools as quickly as possible. Meanwhile, shadow AI proliferates due to inadequate enforcement of established governance policies.
85% report that employees are adopting AI tools faster than their IT teams can assess them. 32% of employees entered confidential client data into AI tools without confirming company approval, while 37% entered private, internal company data.
53% of ITDMs say employees’ use of personal devices for work-related AI tasks is creating a blind spot in their organization’s security posture. And while 91% of organizations have implemented AI governance policies, only 54% both have policies in place and actively monitor for unauthorized use.
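Active monitoring for unauthorized use can start with something as simple as flagging traffic to known AI service domains in egress proxy logs. The sketch below illustrates the idea; the domain list and the space-separated log format are illustrative assumptions, not details from the report:

```python
# Illustrative sketch: flag requests to known AI service domains in a proxy log.
# The domain list and log line format are assumptions for demonstration only.
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com", "copilot.microsoft.com"}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs where a listed AI domain was contacted.

    Assumes each log line is 'timestamp user domain', space-separated.
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in AI_DOMAINS:
            hits.append((parts[1], parts[2]))
    return hits

sample = [
    "2025-01-15T09:12:01 alice claude.ai",
    "2025-01-15T09:13:44 bob intranet.example.com",
]
print(flag_shadow_ai(sample))  # [('alice', 'claude.ai')]
```

In practice this kind of signal is a starting point for conversation, not enforcement: the report's own recommendation is to treat detected shadow AI use as an indicator of genuine business need.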
The future of AI at work
Proactively managing AI means harnessing employee initiative while maintaining security: delivering the business value that shadow AI reveals, but through AI tools approved by IT.
63% of ITDMs advise integrating approved AI tools into standard workflows and business applications, 60% suggest implementing policies on acceptable AI use, and 55% suggest establishing a list of vetted and approved tools.
66% of employees recommend setting policies that are fair and practical, 63% recommend providing official tools that are relevant to their tasks, and 60% advise providing better education on understanding the risks.
“Shadow AI is a fatal flaw for most organizations,” said Sathish Sagayaraj Joseph, regional technical head at ManageEngine. “IT teams can’t manage risk they can’t see, and they can’t enable business value that users won’t divulge. Proactive AI management unites IT and business professionals in their pursuit of common, organizational goals. That means employees are equipped to understand and avoid AI-related risks, and IT is empowered to help them use AI in ways that drive real business outcomes.”