CyberScoop

Major world economies spell out key elements of AI ‘ingredients list’


A group of international government agencies released guidance Tuesday on what they believe any artificial intelligence “ingredients list” tool should include to make AI more secure.

The concept of such a list, known as a “software bill of materials” (SBOM), is to document everything that goes into a particular piece of software so that supply chain risks are easier to identify. Cyber experts have increasingly focused on how SBOMs intersect with AI.

The guidance produced by agencies from the G7 group of nations, including the Cybersecurity and Infrastructure Security Agency, is aimed at setting minimum voluntary standards for what SBOMs for AI should look like. It builds on past efforts to produce other kinds of SBOM guidance.

“While not exhaustive or mandatory, the supplemental minimal elements outlined in this guidance reflect the consensus of G7 experts and will expand over time to keep pace with the rapid advancement of AI technology,” CISA stated. (Some refer to SBOMs for AI as AIBOMs.)

The elements fall into seven categories: information about the SBOM for AI document itself; the AI system as a whole; the models the system uses; the datasets used throughout the model’s life cycle; the physical and virtual infrastructure needed to operate and support the AI system; the cybersecurity measures that apply to AI models and systems; and the AI system’s key performance indicators.
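To make those seven categories concrete, here is a minimal sketch of how such a record might be structured in code. This is purely illustrative: the field names, class name, and example values are assumptions made for this sketch, not terms taken from the G7 guidance itself.

```python
# Hypothetical sketch of a minimal AI SBOM record, grouping the seven
# element categories described in the G7 guidance. Field names and
# values are illustrative assumptions, not part of the guidance.

from dataclasses import dataclass, field, asdict

@dataclass
class AIBom:
    # 1. Information about the SBOM-for-AI document itself
    bom_author: str
    bom_created: str  # ISO 8601 timestamp
    # 2. The AI system as a whole
    system_name: str
    system_version: str
    # 3. Models used by the AI system
    models: list = field(default_factory=list)
    # 4. Datasets used across the model's life cycle
    datasets: list = field(default_factory=list)
    # 5. Physical and virtual infrastructure for operation and support
    infrastructure: list = field(default_factory=list)
    # 6. Cybersecurity measures applied to models and systems
    security_measures: list = field(default_factory=list)
    # 7. Key performance indicators for the AI system
    kpis: dict = field(default_factory=dict)

# Example record for a hypothetical AI-enabled product
bom = AIBom(
    bom_author="example-vendor",
    bom_created="2025-01-01T00:00:00Z",
    system_name="triage-assistant",
    system_version="1.2.0",
    models=[{"name": "clinical-llm", "sha256": "placeholder-digest"}],
    datasets=[{"name": "triage-notes-2024", "phase": "training"}],
)
```

The point of the structure, as the guidance frames it, is that a buyer can inspect one artifact to answer where the models and training data in a product came from.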

A trio of industry professionals who have worked on the topic of AIBOMs told CyberScoop they welcomed the guidance, in each case praising it as a good step that could nonetheless be improved upon.

“Pretty much every piece of software out there is now going to have AI incorporated into it, and when a hospital is buying an AI-enabled medical device, or the Department of War is buying an AI-enabled weapon system, or auto manufacturers are putting AI into cars, we need to be able to trust what AI is in those systems,” said Daniel Bardenstein, CEO of Manifest Cyber. “And the first step to trust is to identify what is this AI, where did it come from? How is it trained?”

“This is a strong, applaudable step towards getting everybody on the same page that this is the future of how we need to think about trusting AI,” said Bardenstein, who has built an AIBOM generator and worked on the topic in the past with CISA and the OWASP Foundation.

Dmitry Raidman, co-founder and chief technology officer at Cybeats — and someone who, like Bardenstein, has built his own AIBOM generator and worked on AIBOMs with CISA and OWASP — said the G7 guidance was “amazing” because it covers 80 to 90% of what’s needed.

“There was no baseline, but it now will put out a clear baseline,” he said.

On the downside, Bardenstein said he had concerns with how easily organizations can implement the guidance, and Raidman said it doesn’t adequately tackle the issue of runtime.

Allan Friedman, sometimes called the “godfather of SBOMs,” said the guidance was a good document, but probably mislabeled because it states that the elements it identifies are not mandatory.

“This document is laying out sets of types of data that could be useful,” said Friedman, who worked on SBOMs in multiple U.S. government roles and is now senior technical adviser at the Institute for Security and Technology and technologist in residence at TPO Group. “And so it is a great, great piece to advance AI transparency and AI system transparency, but it lists potential elements. These aren’t the minimum elements.”

Friedman said the next steps could include mapping the guidance into what is being implemented today, and talking about aligning it with policies in the European Union and G7 governments to make sure there are minimal conflicts.

Written by Tim Starks

Tim Starks is senior reporter at CyberScoop. His previous stops include working at The Washington Post, POLITICO and Congressional Quarterly. An Evansville, Ind. native, he’s covered cybersecurity since 2003. Email Tim here: tim.starks@cyberscoop.com.


