Privacy programs are taking on more operational responsibility across the enterprise. A new Cisco global benchmark study shows expanding mandates, rising investment, and sustained pressure around data quality, accountability, and cross-border data management tied to AI systems.
Privacy programs grow with AI demand
AI projects expanded the scope of privacy work across most enterprises over the past year. Budgets followed that shift, with additional spending planned as AI moves from pilots into production systems.
Privacy teams now support a wider range of responsibilities. These include sourcing data for model training, overseeing AI use cases, and coordinating governance efforts across the business. The study shows privacy functions moving closer to core operations, where data is accessed, shared, and reused at scale.
Respondents link privacy investment to tangible outcomes: faster innovation, improved coordination, and stronger customer trust, reported across regions and industries. These patterns suggest privacy programs now operate as foundational infrastructure rather than standalone compliance efforts.
Governance maturity lags behind adoption
Many enterprises have established governance committees or working groups to oversee AI use, yet only a small share describe those structures as proactive or well integrated across business, legal, and technical teams.
Governance responsibility often sits within IT or security, leaving gaps in executive ownership and product involvement. Even so, governance value is widely recognized, particularly for product quality, regulatory readiness, and alignment with corporate values.
Privacy teams contribute policy guidance, data controls, and risk assessments that influence how AI systems are built and deployed. Governance activity increasingly occurs during routine workflows rather than through static policy documents.
Transparency outweighs formal compliance signals
Customer expectations around data use continue to rise as AI systems handle more personal and behavioral information. Demand has grown for transparency about how data is collected, processed, and used in AI-driven services.
Explanations of how data is used carry more weight than formal compliance claims or breach prevention messaging. Dashboards, contractual disclosures, and direct explanations give users greater visibility into data practices.
Customers show greater willingness to share data when policies are easy to understand. Privacy laws also contribute to that comfort, particularly in AI contexts where data use can feel opaque.
Localization complicates global operations
Cross-border data rules remain a persistent challenge, especially for multinational enterprises. Data localization requirements add cost and operational friction, affecting infrastructure design, vendor management, and deployment timelines.
AI systems rely on large, distributed datasets, increasing demand for cross-border data movement while regulations push toward local storage. Slower service rollouts, duplicated infrastructure, and strain on technical staff emerge as common effects.
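Teams reconciling those pressures often encode residency rules directly in their data pipelines. Below is a minimal sketch of residency-aware storage routing; the region names, the POLICY table, and the choose_region function are illustrative assumptions, not details from the study.

```python
# Minimal sketch: residency-aware routing for stored records.
# Region names, POLICY entries, and choose_region() are illustrative
# assumptions, not part of the Cisco study.

POLICY = {
    "DE": {"must_stay_in": "eu-central"},   # strict localization rule
    "IN": {"must_stay_in": "ap-south"},
    "US": {"preferred": "us-east"},          # movement allowed
}

def choose_region(record_country: str, default: str = "us-east") -> str:
    """Return the storage region a record must (or should) land in."""
    rule = POLICY.get(record_country, {})
    return rule.get("must_stay_in") or rule.get("preferred", default)

print(choose_region("DE"))  # eu-central: localization rule applies
print(choose_region("BR"))  # us-east: no rule, fall back to default
```

Duplicated infrastructure follows directly from rules like the first two entries: each "must_stay_in" jurisdiction needs its own storage and, often, its own copy of the serving stack.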
Views on data security continue to evolve. Confidence in strictly local storage has softened, while trust in providers that manage global data flows has increased. Many support international approaches that allow data movement under shared governance principles.
“To capture the potential of AI, organizations (83%) are advocating for a shift toward harmonized international standards,” said Harvey Jang, Cisco Vice President and Chief Privacy Officer. “They recognize that global consistency is an economic necessity to ensure data can flow securely while maintaining the high standards of protection required for trust.”
Data quality and IP protection surface as weak points
AI exposes gaps in data discipline that were easier to manage in earlier systems. Relevant, high-quality data remains difficult to access at the moment it is needed, and data preparation and classification continue to require significant effort, slowing AI development and deployment.
Intellectual property protection has become a major concern. Sensitivity around training data and the risk of exposing proprietary or customer information both increase as models draw from broader datasets and operate across organizational boundaries.
Tagging systems exist in many environments, though fewer teams describe them as comprehensive or automated. Manual processes and partial coverage create blind spots that complicate governance and oversight.
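To illustrate what automated tagging looks like in practice, here is a minimal sketch of pattern-based classification. The PATTERNS table and tag_record function are illustrative assumptions; production systems typically combine regexes, dictionaries, and ML-based detectors rather than patterns alone.

```python
# Minimal sketch of automated data tagging, assuming regex-detectable
# identifiers. Patterns and labels are illustrative examples only.

import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def tag_record(text: str) -> set[str]:
    """Return the set of sensitivity tags found in a free-text field."""
    return {label for label, pat in PATTERNS.items() if pat.search(text)}

print(tag_record("Contact jane@example.com, SSN 123-45-6789"))
# tags found: {'email', 'ssn'} (set order may vary); a tagged record
# can then be routed to restricted storage or excluded from training
```

The blind spots the study describes arise exactly where coverage like this is partial: any identifier type without a detector passes through untagged.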
Generative AI changes data use
Demand for data continues to rise as generative and agentic AI expand beyond experimentation. Data sources include system logs, customer data, telemetry, synthetic datasets, and collections built specifically for training.
Data quality and unclear ownership remain the largest obstacles to sourcing AI training data. Localization rules add complexity when datasets span jurisdictions.
Governance approaches are moving closer to actual use. Blanket bans on AI tools are becoming less common. User guidance, access controls, and safeguards increasingly operate at the moment data is entered or models are used.
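A point-of-use safeguard of the kind described can be as simple as inspecting input before it reaches a model. The sketch below redacts rather than blocks; guard_prompt and the single email pattern are illustrative assumptions, not any specific product's behavior.

```python
# Minimal sketch of a point-of-use safeguard: inspect a prompt at entry
# time and redact identifiers instead of banning the tool outright.
# All names and patterns here are illustrative assumptions.

import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def guard_prompt(prompt: str) -> str:
    """Redact emails before the prompt reaches a model; a real guard
    would also log the event for governance review."""
    return EMAIL.sub("[REDACTED:email]", prompt)

user_input = "Summarize this thread from jane@example.com"
print(guard_prompt(user_input))
# Summarize this thread from [REDACTED:email]
```

Redact-and-allow designs like this reflect the shift the study notes: controls applied at the moment of use preserve productivity while a blanket ban simply pushes usage out of sight.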
Vendor governance gains importance
Confidence in vendor transparency around data use and system behavior is widespread, and such openness has become a baseline expectation.
Formal accountability mechanisms lag behind those expectations. About half require detailed contractual terms covering data ownership and liability. Teams are strengthening vendor oversight, monitoring alignment with governance principles, and seeking independent certifications during procurement.
Providers appear increasingly willing to negotiate data use terms, reflecting a market adapting to enterprise governance demands.
