73% of security professionals admit to using SaaS applications that had not been provided by their company’s IT team in the past year, according to Next DLP.
Unauthorized tool use poses major risks for organizations
This is despite being acutely aware of the risks: respondents named data loss (65%), lack of visibility and control (62%) and data breaches (52%) as the top risks of using unauthorized tools. One in ten also admitted they were certain their organization had suffered a data breach or data loss as a result.
A survey of more than 250 global security professionals also revealed that despite having a laissez-faire attitude towards shadow SaaS, security professionals have taken a more cautious approach to GenAI usage.
Half of the respondents reported that AI use had been restricted to certain job functions and roles in their organization, while 16% had banned the technology completely. Furthermore, 46% of organizations have implemented tools and policies to control employees’ use of GenAI.
“Security professionals are clearly concerned about the security implications of GenAI and are taking a cautious approach,” explains Next DLP’s Chief Security Officer, Chris Denbigh-White. “However, the data protection risks associated with unsanctioned technology are not new. Awareness alone is insufficient without the necessary processes and tools. Organizations need full visibility into the tools employees use and how they use them. Only by understanding data usage can they implement effective policies and educate employees on the associated risks.”
Employees lack understanding of shadow SaaS risks
40% of security professionals do not think employees properly understand the data security risks associated with shadow SaaS and AI. Yet they are doing little to combat this risk: only 37% of security professionals had developed clear policies and consequences for using these tools, and even fewer (28%) were promoting approved alternatives to curb usage.
Only half of respondents had received guidance and updated policies on shadow SaaS and AI in the past six months, while one in five admitted to never having received any. Additionally, nearly one-fifth of security professionals were unaware whether their company had updated policies or provided training on these risks, indicating a need for further awareness and education.
“Clearly, there is a disparity between employee confidence in using these unauthorized tools and the organization’s ability to defend against the risks,” adds Denbigh-White. “Security teams should evaluate the extent of shadow SaaS and AI usage, identify frequently used tools, and provide approved alternatives. This will limit potential risks and ensure confidence is deserved, not misplaced.”