AI moves fast, but data security must move faster

Generative AI is showing up everywhere in the enterprise, from customer service chatbots to marketing campaigns. It promises speed and innovation, but it also brings new and unfamiliar security risks. As companies rush to adopt these tools, many are discovering that their data protection strategies are not ready for the challenges AI creates.

The 2025 Thales Data Threat Report, based on a survey of more than 3,000 IT and security professionals, highlights how quickly AI is reshaping enterprise security priorities. It also shows why digital sovereignty is becoming more important as organizations operate across borders and in the cloud.

GenAI adoption outpaces security readiness

One-third of enterprises are already integrating generative AI into their operations or have reached a point where it is transforming their business processes. The shift is happening quickly, and many organizations are moving ahead before security and compliance have been addressed.

Nearly 70% of survey respondents said the fast-changing GenAI ecosystem is their top security concern. This ecosystem includes new SaaS services, emerging infrastructure, and increasingly autonomous AI agents that handle sensitive data.

Data integrity and trustworthiness have become central issues. In traditional security programs, most of the focus went to confidentiality and availability. AI changes that balance: attackers can now target the data itself, injecting false or biased information into models to cause harm. These integrity attacks ranked second on the list of concerns, just behind ecosystem complexity.
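One basic control against tampering, sketched here purely as an illustration rather than anything the report prescribes, is verifying training files against a trusted hash manifest before they ever reach a model pipeline. The manifest path and file layout below are assumptions made for the example.

# Sketch: verify training files against a trusted hash manifest before use.
# The "training_data" directory and "manifest.json" layout are illustrative
# assumptions, not taken from the Thales report.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(data_dir: str, manifest_file: str) -> list[str]:
    """Return the files whose hashes differ from the signed-off manifest."""
    manifest = json.loads(Path(manifest_file).read_text())  # {"relative/path": "hash", ...}
    tampered = []
    for rel_path, expected in manifest.items():
        if sha256_of(Path(data_dir) / rel_path) != expected:
            tampered.append(rel_path)
    return tampered

if __name__ == "__main__":
    suspect = verify_dataset("training_data", "manifest.json")
    if suspect:
        raise SystemExit(f"Integrity check failed for: {suspect}")
    print("All training files match the manifest.")

A check like this only detects tampering after the fact; it does not stop poisoned data from being approved in the first place, which is why the report's emphasis on trustworthy data sources matters.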

Data security is foundational for AI

Generative AI depends on large amounts of reliable, high-quality data. If that data is compromised, the AI cannot function safely. Enterprises are starting to respond with investments in AI-specific security tools. More than 70% of survey participants reported funding these efforts, using a mix of cloud provider offerings and specialized tools.

Even with new investments, there is a gap between adoption and protection. Security teams need better visibility into how data moves through AI systems, especially when these systems are embedded into SaaS products. Without oversight, organizations risk exposing confidential data or breaking privacy rules when information is used for model training or inference.
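To make the visibility problem concrete, here is a minimal sketch of screening outbound prompts for obviously sensitive patterns before they leave the organization. The regexes and the send_to_model() stub are assumptions for illustration, not part of any specific product mentioned in the report.

# Sketch: flag and redact sensitive patterns in prompts before inference.
# Patterns and the send_to_model() placeholder are illustrative assumptions.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace matches with placeholders and report which categories were hit."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt, findings

def send_to_model(prompt: str) -> None:
    """Placeholder for the actual call to an external GenAI service."""
    print("Sending:", prompt)

if __name__ == "__main__":
    raw = "Summarize the complaint from jane.doe@example.com, SSN 123-45-6789."
    safe_prompt, hits = redact(raw)
    if hits:
        print("Redacted categories before inference:", hits)
    send_to_model(safe_prompt)

Pattern matching of this kind catches only the most obvious leaks; embedded AI features inside SaaS products still need contractual and technical controls over how submitted data is retained and used for training.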

Preparing for a hybrid future

For CISOs, the challenge is to align their security programs with both AI risks and sovereignty requirements. The report points to a few practical steps. Mapping data across on-premises and cloud environments is essential. Adopting unified tools can reduce the complexity of fragmented controls. Planning for flexibility will help organizations adapt as regulations and technologies evolve.
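As a rough illustration of what the data-mapping step might start with, the sketch below walks a couple of storage locations and records where files live and whether their names hint at sensitive content. The mount points and keyword-based flagging are assumptions made for the example; a real inventory would rely on proper discovery and classification tooling.

# Sketch: a starting point for mapping where data lives across environments.
# LOCATIONS and SENSITIVE_HINTS are illustrative assumptions.
import csv
from pathlib import Path

LOCATIONS = {
    "on_prem_share": Path("/mnt/fileshare"),
    "cloud_sync": Path("/mnt/cloud-bucket"),
}
SENSITIVE_HINTS = ("customer", "payroll", "contract")

def inventory(output_csv: str = "data_map.csv") -> None:
    """Walk each location and record file path, size, and a rough sensitivity flag."""
    with open(output_csv, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["location", "path", "size_bytes", "possibly_sensitive"])
        for label, root in LOCATIONS.items():
            if not root.exists():
                continue
            for path in root.rglob("*"):
                if path.is_file():
                    flagged = any(hint in path.name.lower() for hint in SENSITIVE_HINTS)
                    writer.writerow([label, str(path), path.stat().st_size, flagged])

if __name__ == "__main__":
    inventory()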

Generative AI will continue to grow, and its success depends on the quality and protection of the data behind it. Digital sovereignty will define where and how that data is stored and processed. Addressing both together will help security leaders manage risk while enabling innovation.

