CISA Has a New Road Map for Handling Weaponized AI


Last month, a 120-page United States executive order laid out the Biden administration’s plans to oversee companies that develop artificial intelligence technologies and directives for how the federal government should expand its adoption of AI. At its core, though, the document focused heavily on AI-related security issues—both finding and fixing vulnerabilities in AI products and developing defenses against potential cybersecurity attacks fueled by AI. As with any executive order, the rub is in how a sprawling and abstract document will be turned into concrete action. Today, the US Cybersecurity and Infrastructure Security Agency (CISA) will announce a “Roadmap for Artificial Intelligence” that lays out its plan for implementing the order.

CISA divides its plans to tackle AI cybersecurity and critical infrastructure-related topics into five buckets. Two involve promoting communication, collaboration, and workforce expertise across public and private partnerships, and three are more concretely related to implementing specific components of the EO. CISA is housed within the US Department of Homeland Security (DHS).

“It’s important to be able to put this out and to hold ourselves, frankly, accountable both for the broad things that we need to do for our mission, but also what was in the executive order,” CISA director Jen Easterly told WIRED ahead of the road map’s release. “AI as software is clearly going to have phenomenal impacts on society, but just as it will make our lives better and easier, it could very well do the same for our adversaries large and small. So our focus is on how we can ensure the safe and secure development and implementation of these systems.”

CISA’s plan focuses on using AI responsibly, but also aggressively, in US digital defense. Easterly emphasizes that, while the agency is “focused on security over speed” in developing AI-powered defense capabilities, attackers will be harnessing these tools—and in some cases already are—so it is both necessary and urgent for the US government to utilize them as well.

With this in mind, CISA’s approach to promoting the use of AI in digital defense will center around established ideas that both the public and private sectors can take from traditional cybersecurity. As Easterly puts it, “AI is a form of software, and we can’t treat it as some sort of exotic thing that new rules need to apply to.” AI systems should be “secure by design,” meaning that they’ve been developed with constraints and security in mind rather than attempting to retroactively add protections to a completed platform as an afterthought. CISA also intends to promote the use of “software bills of materials” and other measures to keep AI systems open to scrutiny and supply chain audits.
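To make the “software bill of materials” idea concrete: an SBOM is essentially a machine-readable inventory of everything a piece of software is built from, which for an AI system can cover model artifacts as well as code libraries. The sketch below is only an illustration, loosely modeled on the CycloneDX JSON format; the component names and versions are invented, not drawn from any real system.

```python
import json

# Illustrative SBOM for a hypothetical AI service, loosely following
# the CycloneDX JSON layout. All component names/versions are invented.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        # Ordinary code dependency
        {"type": "library", "name": "torch", "version": "2.1.0"},
        # The trained model itself, tracked like any other supply-chain item
        {
            "type": "machine-learning-model",
            "name": "example-classifier",
            "version": "0.3",
        },
    ],
}

# A supply-chain auditor can then mechanically verify basic hygiene,
# e.g. that every listed component carries a version.
assert all("version" in c for c in sbom["components"])
print(json.dumps(sbom, indent=2))
```

The point of such an inventory is exactly the “radical transparency” Easterly describes: knowing what is inside the software so that each piece can be scrutinized and audited.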

“AI manufacturers [need] to take accountability for the security outcomes—that is the whole idea of shifting the burden onto those companies that can most bear it,” Easterly says. “Those are the ones that are building and designing these technologies, and it’s about the importance of embracing radical transparency. Ensuring we know what is in this software so we can ensure it is protected.”


