Fixing The Use And Misuse Of AI Security


In a landmark move at the intersection of artificial intelligence (AI) and cybersecurity, the U.S. Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom’s National Cyber Security Centre (NCSC) have jointly revealed the world’s first global guidelines for AI security.

These Guidelines for Secure AI System Development are designed to empower developers across the globe, offering comprehensive insights to inform cybersecurity decisions at every stage of AI system development.

AI Security Guidelines: The Key Takeaways!

These Secure AI System Development guidelines represent a collaborative effort involving 21 international agencies and ministries, including all members of the Group of Seven (G7) major industrial economies. The aim is to address the critical need for robust cybersecurity in the implementation and regulation of artificial intelligence technology.

The Guidelines for Secure AI System Development provide developers with a commonsense path, ensuring that AI systems are designed, developed, and deployed with cybersecurity at their core.

The release of these AI security guidelines follows Australia's recently announced seven-year plan to combat cybercrime and bolster the nation's cybersecurity, which emphasizes collaboration both regionally and globally to defend international cyber norms.

Secure by Design: The UK’s Take on AI Guidelines

At the heart of the UK-led guidelines is the concept of a ‘secure by design’ approach, ensuring that cybersecurity is an integral part of the development process from the outset. 

The guidelines are set to officially launch at an event hosted by the NCSC, featuring discussions with key industry, government, and international partners, including representatives from Microsoft, the Alan Turing Institute, and cybersecurity agencies from the UK, the U.S., Canada, and Germany.

Secretary of Homeland Security Alejandro N. Mayorkas emphasized the significance of these guidelines in the development of AI, calling them a “commonsense path” to building AI systems that are safe, secure, and trustworthy.

CISA Director Jen Easterly highlighted the global dedication to transparency, accountability, and secure practices in AI development, reaffirming the commitment to protect critical infrastructure through international collaboration.

A Global Understanding of AI Cyber Risks

NCSC CEO Lindy Cameron sees the guidelines as a crucial step in shaping a global understanding of cyber risks and mitigation strategies around AI. By placing security at the forefront of development, the guidelines aim to ensure that security is not an afterthought but a fundamental requirement throughout the AI development lifecycle.

UK Science and Technology Secretary Michelle Donelan emphasized the country's global leadership in AI safety, noting that the NCSC's comprehensive guidelines place cybersecurity at the core of AI development to mitigate risks systematically.

Following an international agreement on responsible AI at Bletchley Park, the UK continues its commitment to a globally collaborative approach. Donelan asserts that this initiative aligns with advancing AI to revolutionize the NHS and public services, and to foster high-skilled employment.

The guidelines, covering secure design, development, deployment, and operation, offer suggested behaviors for enhanced security. They are accessible on the NCSC website, where officials provide further insights in an accompanying blog post.

With the release of the Guidelines for Secure AI System Development, the international community takes a significant stride towards fostering transparency, accountability, and secure practices in AI technology. 

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.




