Not all data security risks involving the web browser come from external adversaries. A large portion comes from well-meaning employees or contractors making mistakes. Accidental data exposure has spiked massively, coinciding with the explosion of web-based generative AI tools, SaaS apps, web apps and cloud storage.
Midmarket organizations are particularly vulnerable because they typically lack the comprehensive data loss prevention (DLP) capabilities of large enterprises, instead operating primarily on trust and training. In the past 12 months, 52% of organizations suffered sensitive data loss caused by insiders, whether intentional or accidental, and 43% experienced one or more data-loss incidents related to generative AI usage.
Data loss via generative AI tools is a new vector for a long-standing problem: employees sending sensitive data through the browser to accounts or apps where it doesn't belong. Generative AI is just one of many channels through which employees can unwittingly transmit confidential information. Some rely on unapproved SaaS and web apps; others share sensitive documents to a personal email or cloud-storage account.
The list goes on, but these incidents share something in common: most happen through a web browser, where the average knowledge worker now spends approximately 85% of their time. “There’s a lot of shadow IT happening now, whether it’s generative AI or any other tool,” said Monique Lance, senior product marketing manager at Palo Alto Networks. “In a typical midmarket organization, around 85% to 90% of new applications being used aren’t formally approved by IT, so they can’t be properly protected. You don’t have that last-mile visibility or those controls, so you don’t know what they’re screen sharing or what they’re copy-pasting into other applications, and so forth.”
In many organizations, employees regularly use their own cloud platforms and web apps. That’s not necessarily because they want to circumvent IT’s controls, but because they’re used to those tools and often perceive them as more user-friendly. Often, they are. It’s a perfect example of user experience and data security competing against each other: Deploy extremely rigid security controls, such as whitelisting only a handful of approved apps and websites, and employees may be tempted to find risky workarounds so they can keep working efficiently.
Generative AI is now one of the main single points of failure, and almost all use of genAI apps happens through a web browser. Despite the potential productivity gains, almost 72% of generative AI interactions happen in noncorporate accounts.1 Employees routinely copy and paste information into conversations with genAI tools, and that information may include sensitive corporate data that could resurface if it is used in model training or leaked if the platform is compromised.
The problem is that once sensitive data spills onto an external server, whether it belongs to a genAI, SaaS or web app vendor, it sits outside internal security controls and policies and beyond the organization’s ability to protect it.
It doesn’t help that many organizations also rely on disjointed, siloed protection, especially for data loss prevention. As Lance said, “There’s DLP for email, DLP for the endpoint, DLP for data-at-rest and DLP for data-in-motion. But here, in the browser, you get all the DLP you need for full last-mile protection.”
Another risk vector is unmanaged devices: an employee or external contractor might download a confidential document via their own browser onto a personal device or storage device. Such a device may not be properly secured and could already be compromised, meaning threats living on it can access that sensitive data.
To tackle these internal risks effectively, midmarket teams are turning to secure browsers that provide deep visibility and enforce granular controls during browsing sessions, with the goal of preventing intentional and unintentional leaks without adding friction to the user experience. For instance, AI-powered, browser-native enterprise DLP inspects content for sensitive data as it’s being entered into an application and blocks potentially problematic activity in real time, no matter the website or web app. That way, everything that happens in the browser is recorded, monitored and made visible to security teams, rather than security teams having to consolidate log data from a multitude of point systems.
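To illustrate the mechanism at a high level, here is a minimal, hypothetical TypeScript sketch of browser-side paste inspection, the kind of check a browser-native DLP control might perform before data reaches a genAI prompt box or web form. The pattern names and logic are illustrative assumptions only, not a description of Palo Alto Networks’ or any other vendor’s implementation; production DLP relies on far richer, often ML-based classification than simple regexes.

// Minimal sketch of browser-side, paste-time DLP inspection (illustrative only).

// Hypothetical patterns for a few common sensitive-data types.
const SENSITIVE_PATTERNS: { name: string; pattern: RegExp }[] = [
  { name: "credit-card", pattern: /\b(?:\d[ -]?){13,16}\b/ },
  { name: "us-ssn", pattern: /\b\d{3}-\d{2}-\d{4}\b/ },
  { name: "api-key", pattern: /\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b/ },
];

// Return the name of the first matching sensitive-data pattern, or null.
function findSensitiveMatch(text: string): string | null {
  for (const { name, pattern } of SENSITIVE_PATTERNS) {
    if (pattern.test(text)) return name;
  }
  return null;
}

// Content-script-style listener: inspect clipboard text on paste and block
// the event before it reaches the page.
document.addEventListener(
  "paste",
  (event: ClipboardEvent) => {
    const pasted = event.clipboardData?.getData("text") ?? "";
    const match = findSensitiveMatch(pasted);
    if (match) {
      event.preventDefault();
      event.stopPropagation();
      console.warn(`Paste blocked: detected possible ${match} data.`);
      // A real secure browser would also log the event for security teams.
    }
  },
  { capture: true } // capture phase so the check runs before page handlers
);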
1 LayerX, Enterprise AI and SaaS Data Security Report 2025. https://go.layerxsecurity.com/the-layerx-enterprise-ai-saas-data-security-report-2025
