Most enterprise AI activity is happening without the knowledge of IT and security teams. According to Lanai, 89% of AI use inside organizations goes unseen, creating risks around data privacy, compliance, and governance.
This blind spot is growing as AI features are built directly into business tools. Employees often connect personal AI accounts to work devices or use unsanctioned services, making it difficult for security teams to monitor usage. Lanai says this lack of visibility leaves companies exposed to data leaks and regulatory violations.
AI use cases hiding in plain sight
In healthcare, workers used AI tools to summarize patient data, raising HIPAA concerns. In the financial sector, teams preparing for IPOs unknowingly moved sensitive information into personal ChatGPT accounts. Insurance companies used embedded AI features to segment customers by demographic data in ways that could violate anti-discrimination rules.
Lexi Reese, CEO of Lanai, said one of the most surprising discoveries came from inside tools that had already been approved by IT.
“One of the biggest surprises was how much innovation was hiding inside already-sanctioned apps (SaaS and in-house apps). For example, a sales team discovered that uploading ZIP code demographic data into Salesforce Einstein boosted upsell conversion rates. Great for revenue, but it violated state insurance rules against discriminatory pricing.
“On paper, Salesforce was an ‘approved’ platform. In practice, the embedded AI created regulatory risk the CISO never saw.”
Lanai says these examples reflect a larger trend. AI is often embedded inside tools like Salesforce, Microsoft Office, and Google Workspace. Because these features are part of tools employees already use, they can bypass traditional controls like data loss prevention and network monitoring.
How Lanai’s platform works
To address this problem, Lanai launched an edge-based AI observability agent. The platform installs lightweight detection software directly on employee devices. By working at the edge, it can detect AI activity in real time without routing data through central servers.
Reese explained that this design required solving complex engineering challenges.
“Running AI models at the edge flips the script. The easy path is to take a static list, which isn’t updated at the speed employees adopt new tools, and either analyze pings to that list in the browser or ship every conversation to the cloud for analysis.
“This is what AI security start-ups do, but those architectures either become dated very quickly, because the static list still comes from a top-down committee saying ‘here’s what I think my employees are using,’ and/or create a new data-exfiltration risk.
“We engineered our prompt detection model to run directly on laptops and browsers, without traffic leaving the device perimeter. The hard part was compressing detection into something lightweight enough that it doesn’t hurt performance, while still rich enough to detect prompt interactions, not just app names.
“Once we know an interaction is AI, our SaaS has risk and workflow-intelligence models that cluster prompt patterns instead of scanning for static keywords. That preserves privacy, minimizes latency, and lets us scale across thousands of endpoints without draining performance.”
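The design Reese describes can be pictured as a data-flow constraint: analysis happens on the endpoint, and only derived interaction metadata, never the prompt text itself, leaves the device. The sketch below illustrates that flow in Python with a trivial stand-in for the detection step; the class names, app labels, and regex heuristics are invented for illustration and are not Lanai’s model, which, per Reese, is a compressed on-device model rather than a set of static patterns.

```python
# Illustrative sketch only: Lanai's on-device model is proprietary. This shows
# the general shape of edge-side detection: classify the interaction locally,
# then forward only coarse metadata (never the prompt text) for analysis.
import re
from dataclasses import dataclass

@dataclass
class InteractionEvent:
    app: str              # e.g. "ehr-portal" (hypothetical label)
    feature: str          # which embedded AI feature was invoked
    data_types: list      # coarse data categories detected in the prompt
    # Note: the prompt text itself is deliberately never stored or sent.

# Hypothetical, simplified detectors standing in for the on-device model.
DATA_TYPE_PATTERNS = {
    "phi_mrn": re.compile(r"\bMRN[:#]?\s*\d{6,}\b", re.IGNORECASE),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "zip_code": re.compile(r"\b\d{5}(?:-\d{4})?\b"),
}

def classify_on_device(app: str, feature: str, prompt_text: str) -> InteractionEvent:
    """Runs entirely on the endpoint; only the resulting metadata is emitted."""
    found = [name for name, pattern in DATA_TYPE_PATTERNS.items()
             if pattern.search(prompt_text)]
    return InteractionEvent(app=app, feature=feature, data_types=found)

# The prompt stays on the laptop; only the derived event would leave it.
event = classify_on_device(
    app="ehr-portal",
    feature="visit-summary-ai",
    prompt_text="Summarize the visit for MRN: 0048213 and include recent labs.",
)
print(event)  # InteractionEvent(app='ehr-portal', feature='visit-summary-ai', data_types=['phi_mrn'])
```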
Lanai says the software can be deployed in under 24 hours using standard mobile device management systems. Once in place, it helps organizations understand their AI footprint and create policies to manage usage.
Governance over shutdown
The company emphasizes that its goal is not to block AI outright. Instead, the focus is on giving CISOs and other leaders the information they need to make decisions. By seeing which tools are being used, companies can evaluate them for risk and decide which to approve or limit.
For regulated industries like healthcare, Reese said distinguishing between safe and unsafe AI use requires going beyond app-level monitoring.
“The trick is that ‘approved platform’ doesn’t mean ‘approved workflow.’ We look at the prompt+data pattern, not just the app.
“For example: In a large hospital network, clinicians were using the embedded AI summarization feature inside their web-based EHR portal to auto-draft patient visit summaries. On the surface, this was within a sanctioned EHR platform, but the workflow introduced PHI into an AI model that wasn’t part of the hospital’s HIPAA business associate agreement.
“Lanai can detect the difference, not by flagging ‘EHR use’ in general, but by recognizing the specific prompt+data pattern that carried sensitive patient records into an unsafe AI workflow.
“We detect signals like: what data types are in the prompt, which AI feature was invoked, and whether the workflow matches company or regulator-defined sensitive use cases. That allows us to separate compliant innovation from risky misuse in real time, and do it inside the same SaaS tool, which is where most legacy monitoring fails.”
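As a concrete illustration of those signals, here is a hedged sketch of the kind of policy check they could feed: rules keyed on the data types in a prompt and the AI feature invoked. The rule names, fields, and example policies are hypothetical and do not reflect Lanai’s actual schema or detection logic.

```python
# Hypothetical policy check built on the signals Reese lists: data types in
# the prompt, the AI feature invoked, and whether the workflow matches a
# defined sensitive use case. Rule names and fields are invented for this
# sketch and are not Lanai's schema.
from dataclasses import dataclass

@dataclass
class SensitiveUseCase:
    name: str
    data_types: set     # data categories that make the workflow sensitive
    ai_features: set    # embedded AI features the rule covers
    allowed: bool       # e.g. False if the feature isn't covered by a BAA

# Example rules a hospital or insurer policy might define.
POLICY = [
    SensitiveUseCase("phi-into-unapproved-summarizer",
                     data_types={"phi_mrn", "lab_result"},
                     ai_features={"visit-summary-ai"},
                     allowed=False),
    SensitiveUseCase("demographic-data-into-pricing-ai",
                     data_types={"zip_code"},
                     ai_features={"einstein-scoring"},
                     allowed=False),
]

def evaluate(prompt_data_types: set, ai_feature: str) -> list:
    """Return the names of policy rules the detected interaction violates."""
    return [rule.name for rule in POLICY
            if not rule.allowed
            and ai_feature in rule.ai_features
            and prompt_data_types & rule.data_types]

# The EHR example from the article: PHI flowing into an embedded summarizer.
print(evaluate({"phi_mrn"}, "visit-summary-ai"))  # ['phi-into-unapproved-summarizer']
# The same feature with no sensitive data detected raises nothing.
print(evaluate(set(), "visit-summary-ai"))        # []
```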
Measuring the impact
Lanai says organizations using its platform are seeing significant reductions in AI-related incidents.
“In that healthcare system, ‘data exposure incidents’ are primarily cases where clinicians pasted patient records, lab results, or protected health information into AI features embedded in EHR or productivity apps.
“Within 60 days of Lanai deployment, customers have seen up to an 80% drop, not because people stopped using AI, but because they finally had visibility to flag and redirect unsafe workflows,” Reese said.
Similar patterns are emerging in the financial services sector, where organizations have reported up to a 70% reduction in unapproved AI usage for analyzing confidential financial data within just one quarter. In some cases, this drop occurs because the unsanctioned application is shut down. In others, the organization maintains the productivity benefits by bringing the AI use case into a secure, approved environment within the sanctioned tech stack.