Shadow AI is widespread — and executives use it the most

Dive Brief:

  • More than 80% of workers, including nearly 90% of security professionals, use unapproved AI tools in their jobs, according to a new report from the cyber risk monitoring vendor UpGuard.
  • This unapproved AI use, which can introduce security vulnerabilities, is not just widespread but habitual: half of workers said they use unapproved AI tools regularly, and fewer than 20% said they use only company-approved AI tools.
  • Security leaders were more likely than the average employee to report using unapproved tools and far more likely to say they did so regularly, according to the report.

Dive Insight:

The use of unauthorized AI platforms, known as shadow AI, is a significant problem facing businesses across sectors today, according to UpGuard’s Nov. 10 report.

In a remarkable development, UpGuard found that roughly one-quarter of workers consider their AI tools to be “their most trusted source of information,” nearly on par with their manager and higher than their colleagues or search engines. Employees in manufacturing, finance and health care reported the highest levels of trust in AI tools.

That trust has practical consequences. “Employees who view AI tools as their most trusted source of information are far more likely to use shadow AI tools as part of their regular workflow,” UpGuard said.

Companies in a wide range of industries have shadow AI issues, with consistently high percentages of employees reporting periodic and regular unauthorized AI use across finance, IT, manufacturing and health care, among other sectors. Mid-level managers and low-level employees had the highest levels of overall shadow AI use, while executives had the highest levels of regular use.

Shadow AI use was high across all corporate departments, UpGuard found, though marketing and sales teams reported using it to a greater extent than operations and finance personnel.

For security teams trying to reduce the prevalence of shadow AI, one of UpGuard’s findings is particularly notable: Employees use unapproved tools because they think they know enough to manage the risks.

“We found a positive correlation between users reporting that they understood AI security requirements and that they regularly used unapproved AI tools,” UpGuard said. “This data suggests that as employees’ knowledge of AI risks increases, so does their confidence in making judgments about that risk — even at the expense of following company policies.”

The correlation suggests that security awareness training is not a sufficient safeguard against threats, according to the report. “Such programs need new approaches in order to succeed.”

Indeed, fewer than half of workers said they knew and understood their companies’ policies about AI usage. Meanwhile, 70% said they were aware of employees inappropriately sharing sensitive data with AI tools. That rate was even higher for security leaders, according to the report.

UpGuard’s report is based on two 2024 surveys of 1,500 security leaders and lower-level employees in the U.S., the U.K., Canada, Australia, New Zealand, Singapore and India.


