Enterprise teams already run dozens of AI tools across daily work. Usage stretches from code generation and analytics to customer support drafting and internal research. Oversight remains uneven across roles, functions, and industries. A new Larridin survey of enterprise leaders places measurement and governance at the center of this operating environment.
Confidence at the top, blind spots on the ground
Executives frequently express confidence in their understanding of AI activity across the organization. Directors and managers closer to daily operations describe a different condition. Confidence declines as proximity to execution increases, producing a 16-point gap between executive and director views of AI visibility. This gap persists across industries and company sizes.
Shadow AI usage contributes to this disconnect. More than one-fifth of leaders identify employee use of personal or unsanctioned AI tools as a barrier to success, even as most of the same group reports high confidence in visibility. Tool procurement records show which licenses were purchased but reveal little about daily use patterns at the desktop and browser level.
Russ Fradin, CEO of Larridin, said: “The C-suite believes AI is visible, valuable, and under control, while adoption is racing ahead of measurement and governance is inconsistent. Until enterprises can organize their efforts around real-time data, AI could be a strategic liability as well as a strategic asset.”
More tools, less visibility
Most enterprises rely on more than one AI product. Organizations reporting stronger returns use an average of 2.7 tools, compared with 1.1 for lower-performing peers. Specialized tools support distinct workflows such as software development, automation, analysis, and content generation. Centralized platforms account for only part of daily activity.
This diversification introduces redundancy. Some leaders believe overlapping tools are a source of budget waste. Embedded AI features within SaaS platforms add to the count. The average large enterprise now operates 23 AI tools, with 45 percent of adoption occurring outside formal IT procurement channels.
Only 38 percent of organizations maintain a comprehensive inventory of AI applications in use. Inventory gaps complicate governance, budgeting, and risk management, especially as regulatory frameworks such as ISO 42001 require continuous awareness of deployed systems.
Industry context shapes AI returns
Return on investment varies widely by sector. Retail, software, manufacturing, and telecommunications organizations report a high likelihood of realizing ROI within six months. Hospitality, restaurants, and healthcare report lower expectations.
Workflow structure explains much of the difference. Sectors that deconstruct knowledge work into discrete, automatable tasks achieve faster results. Industries anchored in physical operations or tightly regulated processes report slower progress. Healthcare stands out with high executive confidence in visibility paired with the lowest ROI expectations, reflecting governance friction and compliance constraints.
Why IT pulls ahead and support lags
Results also differ by job function. IT teams report the strongest outcomes and the highest confidence in both visibility and ROI. These teams use AI to generate code, automate infrastructure, and accelerate delivery, producing measurable outputs such as deployment frequency and system uptime.
Customer support and logistics report lower confidence. AI use in these functions centers on drafting, summarization, and coordination tasks that deliver incremental gains. Measurement remains limited, and value attribution proves difficult. Customer support roles report the lowest ROI confidence across all functions, despite heavy investment in chatbots and agent assistance tools.
The productivity gap inside the workforce
Most workers report modest time savings from AI. More than 85 percent report fewer than 10 hours saved per month. A small group of power users, roughly six percent of the workforce, reports savings exceeding 20 hours per month. These users engage across multiple tools and advanced capabilities.
Training correlates strongly with proficiency. Organizations with formal AI training programs report higher skill levels, satisfaction, and productivity gains. Utilization metrics alone fail to capture this difference. Login counts and license adoption provide limited insight into effectiveness or value creation.
Structural issues limit enterprise measurement. Thirty percent of respondents cite unclear responsibility for AI measurement as a barrier; fragmented ownership across teams follows closely, while technical limitations rank lower.
Governance policies exist in most organizations, though execution varies. Sixty-nine percent report having AI risk and compliance policies, and more than 80 percent express satisfaction with guardrails. At the same time, many lack visibility into workforce adoption rates, risk exposure, and value metrics. Organizations with formalized governance demonstrate higher likelihood of ROI, reflecting alignment across leadership, security, and operational teams.
The metrics organizations do track emphasize ease of collection: money saved, percentage of users, and time saved per week lead the list. Fewer organizations track investment per tool, maturity by function, or delivery-speed improvements. These gaps limit the ability to connect AI usage with business outcomes.
