More employees get AI tools, fewer rely on them at work

People across many organizations now have access to AI tools, and usage keeps spreading. Some groups rely on AI during regular work, while others treat it as an occasional helper. That gap between access and routine use sits at the center of new research from Deloitte on enterprise AI adoption.

The research draws on a global survey of more than 3,200 business and IT leaders conducted in late 2025. Respondents come from large organizations across industries and regions. Many report progress during the past year, especially around tool access and executive support. The findings also show friction around scaling, governance, and workforce readiness.

Access grows faster than daily use

Survey data shows a rise in sanctioned AI access. About six in ten workers now have approved tools, up from under four in ten a year earlier. A smaller share uses those tools as part of everyday workflows. Usage patterns remain uneven across roles and functions.

Organizations report early productivity gains tied to tasks like summarization, research, and basic automation. Leaders describe AI as helpful during specific steps in a process, but routine, end-to-end use across teams appears less common. Many respondents connect this pattern to training gaps, workflow design, and uncertainty around expectations.

Movement from pilot projects to production remains limited. One quarter of surveyed organizations report moving a significant share of experiments into production environments. More than half expect to reach that level within several months. Respondents describe integration work, security reviews, and compliance checks as central parts of that transition.

From a security standpoint, this shift introduces new operational exposure. “Organizations should expect new classes of attack and failure modes to emerge, particularly around permitted access being misused outside its intended workflow,” Ali Sarrafi, CEO of Kovant, told Help Net Security. “Security teams will need to rethink controls that were originally designed for human activity.”

Sarrafi said many of these risks stem from agents performing valid actions in the wrong context. An agent may have permission to query preference data, for example, yet a specific task such as booking travel should limit access to a single user’s information. Addressing that gap requires permission models that adjust dynamically based on task scope.
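To make the idea concrete, here is a minimal sketch of a task-scoped permission check along the lines Sarrafi describes. The task name, grant string, and function names are hypothetical, not from any real product: the point is that the agent's broad grant is narrowed by the context of the task it is performing.

```python
from dataclasses import dataclass

# Hypothetical sketch: an agent holds a broad grant (e.g. "read:preferences"),
# but each task narrows that grant to the data the task actually needs.

@dataclass(frozen=True)
class TaskContext:
    task: str      # e.g. "book_travel" (illustrative task name)
    user_id: str   # the user on whose behalf the agent is acting

def allowed(agent_grants: set[str], ctx: TaskContext,
            action: str, target_user: str) -> bool:
    """Permit an action only if the agent holds the grant AND the
    target falls inside the task's scope."""
    if action not in agent_grants:
        return False
    # Task-scoped narrowing: booking travel may only touch the
    # requesting user's own preference data.
    if ctx.task == "book_travel":
        return target_user == ctx.user_id
    return True
```

A static grant would answer "yes" to both calls below; the task-scoped check answers differently depending on whose data is being read.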

Transformation looks different across companies

The research groups organizations by how closely AI connects to core operations. About one third report deep changes such as new products, redesigned processes, or revised business models. The rest apply AI through process updates or within existing workflows.

All groups report productivity gains tied to day-to-day work, but revenue impact remains limited. Leaders expect future growth from AI-driven products and services, with current gains centered on operational output and decision support.

Security teams supporting these efforts encounter growing complexity as agentic workflows span more systems. Sarrafi said the number of possible action patterns grows quickly as agents connect across tools and services, making observation at scale a challenge. He emphasized the need for external control mechanisms that interrupt invalid activity without relying on the agent to self-regulate. These controls, he said, need independence from the agent to prevent destructive sequences that could lead to data loss.
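A rough sketch of such an external control point might look like the following. The action names and thresholds are invented for illustration; the essential property is that the guard sits outside the agent and halts a sequence before a disallowed action reaches a real system.

```python
# Hypothetical sketch of an external guard between an agent and the systems
# it acts on. The guard, not the agent, decides whether each action proceeds.

DESTRUCTIVE = {"delete", "drop_table", "revoke_access"}  # illustrative set

class ActionBlocked(Exception):
    pass

class ExternalGuard:
    def __init__(self, max_destructive: int = 1):
        self.destructive_seen = 0
        self.max_destructive = max_destructive

    def check(self, action: str) -> None:
        """Raise before a destructive action beyond the budget executes."""
        if action in DESTRUCTIVE:
            self.destructive_seen += 1
            if self.destructive_seen > self.max_destructive:
                raise ActionBlocked(f"sequence halted at {action!r}")

def run(agent_actions, guard: ExternalGuard):
    executed = []
    for action in agent_actions:
        guard.check(action)      # control lives outside the agent
        executed.append(action)  # stand-in for the real side effect
    return executed
```

Because the guard keeps its own state, a runaway sequence of destructive calls is interrupted even if the agent's internal logic believes each individual call is valid.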

Skills and work design remain open questions

Workforce readiness comes up often. Many organizations focus on basic AI training, with less movement around job design, career paths, or role changes.

Leaders expect automation to change parts of many jobs over the next few years. Entry-level and task-focused roles see the earliest shifts, with managers spending more time overseeing work shared between people and machines.

Sarrafi added that reliability risks extend beyond access control. He described scenarios where agents appear to perform well based on internal signals without connection to real world outcomes. Without independent evaluation tied to external results, organizations lack confirmation that agent actions deliver intended value. He said continuous assessment mechanisms that operate outside the agent provide a way to validate outcomes during live operation.
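A small sketch can show what independent outcome evaluation means in practice. The order-fulfillment scenario, function names, and verdict strings below are all hypothetical: the pattern is simply that an evaluator compares the agent's self-reported result against a check on the system of record rather than trusting the agent's internal signal.

```python
# Hypothetical sketch: compare an agent's self-reported outcomes against an
# external ground-truth source before accepting them.

def external_check(order_id: str, fulfilled_orders: set[str]) -> bool:
    """Stand-in for a real-world verification, e.g. querying the
    system of record instead of the agent's own status flag."""
    return order_id in fulfilled_orders

def evaluate(agent_reports: dict[str, bool],
             fulfilled: set[str]) -> dict[str, str]:
    """Flag cases where the agent's claim and the external outcome disagree."""
    verdicts = {}
    for order_id, claimed_ok in agent_reports.items():
        actual_ok = external_check(order_id, fulfilled)
        if claimed_ok and not actual_ok:
            verdicts[order_id] = "claimed success, not confirmed"
        elif not claimed_ok and actual_ok:
            verdicts[order_id] = "claimed failure, actually done"
        else:
            verdicts[order_id] = "consistent"
    return verdicts
```

Run continuously during live operation, a check like this surfaces exactly the failure mode Sarrafi describes: an agent that looks successful by its own signals while delivering no real-world result.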

Sovereign AI moves into board discussions

Location and control of AI development now shape purchasing and architecture decisions. A large share of surveyed companies factor country of origin into vendor selection. Many report building AI stacks with local providers to meet data residency and regulatory expectations.

Sovereign AI refers to systems designed, trained, and deployed under local laws using controlled infrastructure and data. Respondents describe this topic as part of strategic planning discussions, especially for organizations operating across borders. Requirements vary by region and industry, adding complexity to deployment choices.

Extent of agentic AI usage (Source: Deloitte)

Agentic AI expands alongside governance gaps

Interest in agentic AI grows quickly. These systems set goals, reason through tasks, and act through software interfaces. Nearly three quarters of surveyed companies plan to deploy agentic AI within two years. Current usage remains lower.

Governance maturity lags behind adoption plans. About one fifth of respondents report established governance models for autonomous agents. Leaders describe the need for boundaries around agent actions, approval workflows, monitoring, and audit records. Cross-functional teams that include security, legal, and business leaders play a role in these efforts.

Sarrafi said many governance gaps trace back to identity and privilege management. Integrations often bypass established practices, increasing the scope of agent-driven actions. When exploitation occurs, the impact can extend across connected systems due to elevated permissions.

Physical AI extends AI into operations

Physical AI includes robotics, automated machinery, and systems that sense and act in the physical world. More than half of surveyed organizations report some level of current use. Adoption projections rise during the next two years, especially in manufacturing, logistics, and defense.

Controlled environments such as factories and warehouses support earlier deployment. Leaders cite costs, safety requirements, and regulatory approvals as major considerations. Business cases often include infrastructure changes, maintenance, and downtime planning.

Security blind spots appear most often at integration points between digital systems and physical assets. Sarrafi said agentic systems rely on connections across email, workflow platforms, and third-party services, expanding exposure beyond traditional enterprise boundaries. He added that this integration layer creates shared risk ownership across IT, operational technology, and safety teams.

Readiness varies by domain

Many leaders report higher confidence in AI strategy and governance planning than in infrastructure, data management, or talent readiness. Strategy and policy decisions move quickly at the executive level. System upgrades and skill development take longer to execute across large organizations.

Enterprise AI remains a work in progress, influenced by access, design choices, and organizational structure. Progress continues across tools, agents, and physical systems. Daily integration into work remains uneven, leaving many organizations focused on the next stage of adoption.
