The UK National Cyber Security Centre (NCSC) has published new guidelines that can help developers and providers of AI-powered systems “build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties.”
How to put cybersecurity at the core of AI systems
The Guidelines for secure AI system development cover four key stages of the development lifecycle of machine learning (ML) applications.
Secure design hinges on everyone involved – system owners, developers, users – being aware of the unique security risks facing AI systems and being trained to recognise and avoid them.
“Model the threats to your system, and design your system for security as well as functionality and performance,” the guidelines instruct. Also, developers should consider the security benefits and trade-offs when selecting their AI model (more complex isn’t always better).
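For a rough idea of what such threat modelling might look like in practice, here is a minimal sketch of a machine-readable threat register for an ML system. The assets, attack vectors and mitigations listed are hypothetical examples chosen for illustration, not taken from the guidelines:

```python
# Minimal sketch of a threat register for an AI system.
# All entries below are illustrative, not prescribed by the NCSC guidelines.
from dataclasses import dataclass

@dataclass
class Threat:
    asset: str        # what is at risk (model weights, training data, prompts, ...)
    vector: str       # how it could plausibly be attacked
    mitigation: str   # planned control

threat_model = [
    Threat("model weights", "exfiltration via an exposed inference API",
           "rate limiting and query auditing"),
    Threat("training data", "poisoning via unvetted third-party datasets",
           "provenance checks and dataset hashing"),
    Threat("user prompts", "prompt injection through untrusted inputs",
           "input filtering and output sandboxing"),
]

for t in threat_model:
    print(f"{t.asset}: {t.vector} -> {t.mitigation}")
```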
Secure development presupposes securing the supply chain; protecting assets (models, data, prompts, software, logs, etc.); documenting models, datasets and meta- or system-prompts; and managing technical debt.
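One concrete way to harden the supply chain – sketched below on the assumption that model artifacts are distributed as files with pre-vetted digests – is to verify a cryptographic hash before an artifact is ever loaded. The file path and pinned digest here are placeholders:

```python
# Minimal supply-chain integrity check: refuse to use a model artifact
# whose hash does not match a digest recorded when the artifact was vetted.
import hashlib
from pathlib import Path

PINNED_SHA256 = "0" * 64  # placeholder; record the real digest at vetting time

def verify_artifact(path: str, expected_sha256: str) -> None:
    """Raise if the artifact on disk does not match the pinned digest."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"integrity check failed for {path}: got {digest}")

# verify_artifact("models/classifier.bin", PINNED_SHA256)  # call before loading
```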
Secure deployment entails a secure infrastructure (in every part of the system’s lifecycle) and continuous protection of the model and data from direct and indirect access. To address (inevitable) security incidents, incident response, escalation and remediation plans have to be thought out and put in place.
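The guidelines do not prescribe specific controls, but one common defence against indirect access – for example, model extraction through repeated queries to an inference API – is per-client rate limiting. The sketch below is illustrative only, with made-up limits:

```python
# Minimal sliding-window rate limiter for an inference endpoint.
# Limits raise the cost of model-extraction attacks; values are illustrative.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_QUERIES = 100

_history: dict[str, deque] = defaultdict(deque)

def allow_query(client_id: str, now: float | None = None) -> bool:
    now = time.monotonic() if now is None else now
    q = _history[client_id]
    while q and now - q[0] > WINDOW_SECONDS:  # drop queries outside the window
        q.popleft()
    if len(q) >= MAX_QUERIES:
        return False                           # over budget: reject or throttle
    q.append(now)
    return True
```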
AI should be released responsibly, i.e., only after its security has been thoroughly evaluated (and users have been apprised of limitations or potential failure modes).
“Ideally, the most secure setting will be integrated into the system as the only option. When configuration is necessary, the default option should be broadly secure against common threats (that is, secure by default). You apply controls to prevent the use or deployment of your system in malicious ways,” the guidelines say.
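Translated into code, “secure by default” might look like the following hypothetical configuration sketch, where every field defaults to the safe choice and weakening it is an explicit, visible act. The field names are invented for illustration:

```python
# Sketch of secure-by-default configuration: safe defaults, explicit opt-in
# for risky capabilities. Field names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class InferenceConfig:
    require_auth: bool = True              # authentication on by default
    log_prompts: bool = True               # audit logging on by default
    allow_plugin_execution: bool = False   # risky capability off by default

def load_config(overrides: dict) -> InferenceConfig:
    cfg = InferenceConfig(**overrides)
    if cfg.allow_plugin_execution:
        # Make weakening the default a visible, auditable act.
        print("WARNING: plugin execution enabled; ensure compensating controls")
    return cfg

config = load_config({})  # no overrides -> broadly secure defaults
```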
Finally, to ensure secure operation and maintenance, operators are urged to monitor their system’s behaviour and inputs, switch on automated updates by default, and be transparent and responsive, especially when it comes to failures (e.g., vulnerabilities).
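As one illustration of input monitoring, the sketch below logs every prompt and flags a simple anomaly (unusually long inputs) for review. The threshold and logger name are arbitrary choices, not from the guidelines:

```python
# Minimal input-monitoring sketch: log each prompt, flag oversized ones.
# Threshold and logger name are illustrative.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai.input-monitor")

MAX_EXPECTED_CHARS = 4000

def monitor_input(client_id: str, prompt: str) -> None:
    log.info("client=%s prompt_chars=%d", client_id, len(prompt))
    if len(prompt) > MAX_EXPECTED_CHARS:
        log.warning("client=%s oversized prompt; possible abuse probe", client_id)

monitor_input("demo-client", "What is the capital of France?")
```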
Who are the AI cybersecurity guidelines for?
The guidelines have been drawn up with the help of the US Cybersecurity and Infrastructure Security Agency (CISA) and similar agencies and CERTs from around the world, as well as industry experts.
“The new UK-led guidelines are the first of their kind to be agreed globally. They will help developers of any systems that use AI make informed cyber security decisions at every stage of the development process – whether those systems have been created from scratch or built on top of tools and services provided by others,” the NCSC pointed out.
“[The guidelines are] aimed primarily at providers of AI systems, whether based on models hosted by an organisation or making use of external application programming interfaces (APIs). However, we urge all stakeholders (including data scientists, developers, managers, decision-makers and risk owners) to read these guidelines to help them make informed decisions about the design, deployment and operation of their machine learning AI systems.”
The release of the guidelines follows that of an Executive Order that President Joe Biden issued to jumpstart actions aimed at protecting Americans from the potential risks of AI systems (fraud, privacy threats, discrimination and other abuses).