Help Net Security

Shadow AI risks deepen as 31% of users get no employer training


Between one-fifth and one-third of workers use AI outside the influence and governance of the IT function, according to a global survey of 6,000 full-time employees at enterprise organizations. Researchers found a widening gap between employee AI adoption and the controls organizations have in place to manage it.

The Lenovo Work Reborn Research Series 2026 report documents a workforce split into two groups: employees equipped with IT-managed tools, training, and oversight, and those operating independently with consumer AI services.

Training and tooling gaps drive unsanctioned use

Many employees report that their employers fail to supply either AI tools or training, and a sizable share of those who receive training describe it as irregular or ineffective. Half of all employees say better training would help them get more value from AI at work, pointing to a workforce ready to adopt AI faster than its employers can equip it.

Employee enthusiasm for AI continues to climb. Seven in ten use AI tools at least a few times a week, and 80% expect their use of AI to increase over the next year. Lenovo’s research indicates that adoption is outpacing the capacity of enterprises to manage, enable, or align it.

Security implications of unmanaged adoption

The report identifies two concerns tied to shadow AI. First, bypassing compliance controls increases the risk that intellectual property or sensitive data is processed outside governed environments. Second, fragmented workflows produce inconsistent execution and an uneven distribution of productivity gains across teams.

Employee trust in enterprise-provided tools shows measurable cracks. A meaningful share of employees doubt the reliability of information produced by employer-provided AI tools, and a smaller group questions whether their privacy and personal data are safe when using them.

Cybersecurity awareness among employees is uneven. Nearly half of employees are highly concerned about criminals using AI to develop sophisticated cyberattacks against their company, and the same share are highly concerned about themselves or a colleague accidentally leaking sensitive company information through public AI systems such as ChatGPT. Forty percent are highly concerned about deepfake videos and AI-generated phishing emails. Only 23% are highly concerned about criminals attacking their company’s AI systems directly, a gap that becomes consequential as organizations deploy AI agents and internal AI infrastructure.

These employee perceptions track with separate findings from Lenovo’s CIO Playbook, which reports that 61% of IT leaders say AI is increasing cybersecurity risks and only 31% are confident in their ability to address those risks.

What employees want from security teams

Seventy-four percent of employees say more or better cybersecurity training on AI-related risks would reassure them that they and their organization are protected. Seventy-three percent say it would be reassuring to know their company’s cybersecurity team is using AI to address these risks. Seventy percent say stricter policies on how employees can use AI would provide reassurance.

The report recommends building governance before scaling AI, embedding security awareness into the flow of work through continuous in-context learning, and standardizing the AI interface across workflows and endpoints.
