Data protection and AI: what to know about new UK cyber standard


In a significant move positioning the UK at the forefront of responsible AI adoption, the government has introduced what it calls a “world first” AI-focused cyber security code of practice. Released on 31 January 2025, the Code of Practice for the Cyber Security of AI represents a crucial step in creating a secure environment for AI innovation while protecting digital infrastructure from emerging threats.

This initiative comes at a critical juncture in AI development. With half of UK businesses experiencing cyberattacks in the past year and AI systems increasingly embedded in critical operations, establishing robust security frameworks has become essential for maintaining trust in these transformative technologies.

A framework built for the AI era

The code is reportedly underpinned by 13 software development principles for developers to follow throughout the entire AI system lifecycle − from secure design and development to deployment, maintenance, and eventual disposal. Unlike general software security standards, this code specifically addresses unique AI vulnerabilities including data poisoning, model obfuscation, and indirect prompt injection.

While compliance remains voluntary, the framework establishes a clear hierarchy of provisions categorised as required, recommended, or possible. This tiered approach offers flexibility for organisations at different stages of AI maturity while establishing baseline security measures that must be implemented by those choosing to comply.

The code will serve as the foundation for a new global standard through the European Telecommunications Standards Institute (ETSI), potentially extending the UK’s influence in international AI governance.

Urgent need: AI data leakage crisis

Recent research highlights why this standard arrives not a moment too soon. According to data protection provider Harmonic’s Q4 2024 report on AI data leakage, approximately 8.5% of employee prompts to popular AI tools contain sensitive information. This creates significant security, compliance, and legal vulnerabilities for organisations.

The research found customer data accounted for 45% of sensitive information shared, followed by employee data (26%) and legal and financial information (15%). Most concerning for security professionals, nearly 7% of sensitive prompts contained security-related information including penetration test results, network configurations, and incident reports − essentially providing potential attackers with blueprints for exploitation.

The problem is exacerbated by widespread use of free AI tools. The report found 64% of ChatGPT users relied on the free tier, with 54% of sensitive prompts entered there. Without enterprise security controls and with terms that typically allow training on user queries, these free tools represent significant data loss vectors.

UK’s strategic AI approach

The UK’s approach contrasts notably with the European Union’s more prescriptive AI Act. Instead of comprehensive legislation, the UK has opted for a principles-based, cross-sector framework that applies existing technology-neutral regulations to AI. This reflects the government’s assessment that while legislative action will ultimately be necessary, particularly regarding General Purpose AI systems, acting now would be premature.

This approach aligns with the UK’s broader AI strategy outlined in the AI Opportunities Action Plan, which emphasises a pro-innovation regulatory environment designed to attract technology investment while addressing essential security concerns.

Economic impact

The standard supports the UK’s ambition to become a global AI leader. The sector currently comprises over 3,100 AI companies employing more than 50,000 people and contributing £3.7 billion to the economy. The recently launched AI Opportunities Action Plan aims to boost these figures significantly, potentially adding £47 billion annually by increasing productivity up to 1.5% each year.

Similarly, the AI assurance market − ensuring AI systems work as intended − is projected to grow six-fold by 2035, potentially unlocking more than £6.5 billion in value.

Practical implementation

For businesses looking to implement the standard, the government has published a comprehensive implementation guide. Helping organisations determine which requirements apply to different types of AI systems and provides practical steps for achieving compliance. The guide emphasises the critical importance of advanced governance tracking and AI data gateways to prevent sensitive information exposure to public GenAI models. A necessity highlighted by recent data leakage incidents. Such controls should monitor all AI interactions from employees, contractors, and third parties who might inadvertently share proprietary information.

The recently launched AI Assurance Platform also offers a consolidated destination for information on managing AI-related risks, with tools for conducting impact assessments, evaluating systems, and reviewing data for potential bias. The platform supports implementation of real-time monitoring solutions that can detect and block sensitive data before it reaches public LLMs. Rather than simply blocking access to AI tools, the Code encourages organisations to establish secure channels for AI usage that maintain visibility and control over data flows while still enabling productivity gains from these powerful technologies.

Looking forward

In my view, the UK’s new AI standards represent a balanced approach to fostering innovation while addressing security concerns. By providing frameworks that are both comprehensive and flexible, these initiatives aim to build trust in AI systems and unlock their potential economic benefits.

As AI continues its rapid evolution, a strategic approach favouring guidance and principles over rigid legislation offers businesses the adaptability needed to innovate responsibly. The success of these standards will ultimately depend on their adoption across sectors and how effectively they evolve to address emerging challenges in the increasingly complex AI landscape.

John Lynch is the Director of UK Market Development at Kiteworks.



Source link