The difficulty of defending against the misuse of AI – and possible solutions – was the topic of a U.S. congressional hearing today.
Data security and privacy officials and advocates were among those testifying before the House Committee on Homeland Security at a hearing titled “Advancing Innovation (AI): Harnessing Artificial Intelligence to Defend and Secure the Homeland.” The committee plans to include AI in legislation it is drafting, said Chairman Mark E. Green (R-TN).
From cybersecurity and privacy threats to election interference and nation-state attacks, the hearing highlighted the wide-ranging threats posed by AI and the challenge of mounting a defense. Nonetheless, the four panelists – representing technology and cybersecurity companies and a public interest group – put forth some ideas, both technological and regulatory.
Cybercrime Gets Easier
Much of the testimony – and the concerns raised by committee members – focused on the advantages AI has handed cybercriminals and nation-state actors, advantages that cybersecurity officials say must be countered by building AI just as aggressively into defensive products.
“AI is democratizing the threat landscape by providing any aspiring cybercriminal with easy-to-use, advanced tools capable of achieving sophisticated outcomes,” said Ajay Amlani, senior vice president at biometric company iProov.
“The crime-as-a-service dark web is very affordable. The only way to combat AI-based attacks is to harness the power of AI in our cybersecurity strategies.”
AI can also help cyber defenders make sense of the overwhelming amount of data and alerts they have to contend with, said Michael Sikorski, CTO of Palo Alto Networks’ Unit 42. “To stop the bad guys from winning, we must aggressively leverage AI for cyber defense,” said Sikorski, who detailed some of the “transformative results” customers have achieved from AI-enhanced products.
“Outcomes like these are necessary to stop threat actors before they can encrypt systems or steal sensitive information, and none of this would be possible without AI,” Sikorski added.
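None of the witnesses walked the committee through implementation details, but the alert-triage use case Sikorski pointed to is easy to illustrate in miniature. The toy sketch below is an assumption of this article, not anything presented at the hearing or drawn from a Palo Alto Networks product: it assumes Python with scikit-learn, and the alert strings and cluster count are invented. It groups near-duplicate alerts so an analyst reviews a handful of clusters rather than thousands of raw events.

```python
# Illustrative only: cluster raw alert text so analysts triage groups of
# similar alerts instead of an undifferentiated feed. The alerts below are
# invented; real pipelines use far richer features and models.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

alerts = [
    "Failed SSH login for root from 203.0.113.7",
    "Failed SSH login for admin from 203.0.113.7",
    "Outbound connection to known C2 domain evil.example",
    "Failed SSH login for root from 198.51.100.2",
    "Outbound connection to known C2 domain bad.example",
]

# Turn each alert into a TF-IDF vector, then group similar vectors.
vectors = TfidfVectorizer().fit_transform(alerts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Present one representative per cluster, with a count, for human review.
for cluster in sorted(set(labels)):
    members = [a for a, label in zip(alerts, labels) if label == cluster]
    print(f"cluster {cluster} ({len(members)} alerts): {members[0]}")
```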
Sikorski said organizations must adopt “secure AI by design” principles and oversight of AI usage. “Organizations will need to secure every step of the AI application development lifecycle and supply chain to protect AI data from unauthorized access and leakage at all times,” he said, noting that the principles align with the NIST AI Risk Management Framework.
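Sikorski did not prescribe specific tooling, but one small, concrete slice of securing the AI supply chain is verifying that a model artifact has not been tampered with before it is loaded. The sketch below is a minimal illustration under that assumption, using only the Python standard library; the file path and workflow are hypothetical and are not drawn from the hearing, Palo Alto Networks, or the NIST framework.

```python
# Illustrative only: pin a SHA-256 digest for a model artifact when it is
# produced, then refuse to load any file that no longer matches the digest.
import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hex digest of a file on disk."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify_artifact(path: str, pinned_digest: str) -> None:
    """Raise if the artifact differs from the digest pinned at build time,
    catching a swapped or corrupted model before it reaches production."""
    actual = sha256_of(path)
    if actual != pinned_digest:
        raise RuntimeError(f"Integrity check failed for {path}: got {actual}")

# Hypothetical usage: record the digest when the model is built...
#   pinned = sha256_of("models/classifier-v3.onnx")
# ...store it out-of-band, then verify at deployment time, before loading:
#   verify_artifact("models/classifier-v3.onnx", pinned)
```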
Election Security and Disinformation Loom Large
Ranking member Bennie Thompson (D-MS) asked the panelists what can be done to improve election security and defend against interference, issues of critical importance in a presidential election year.
Amlani said digital identity could play an important role in battling disinformation and interference, an approach called for in section 4.5 of President Biden’s National Cybersecurity Strategy that has yet to be implemented.
“Our country is one of the only ones in the Western world that doesn’t have a digital identity strategy,” Amlani said.
“Making sure that it’s the right person, it’s a real person that’s actually posting and communicating, and making sure that that person is in fact right there at that time, is a very important component to make sure that we know who it is that’s actually generating content online. There is no identity layer to the internet today.”
Public Policy Advocate Proposes Safe AI Use Guidelines
The most detailed proposal for addressing the AI threat came from Jake Laperruque, deputy director of the Security and Surveillance Project at the Center for Democracy & Technology, who argued that the “AI arms race” should proceed responsibly.
“Principles for responsible use of AI technologies should be applied broadly across development and deployment,” Laperruque said.
Laperruque gave the Department of Homeland Security credit for starting the process with its recently published AI roadmap. He said government use of AI should be based on seven principles:
- Built upon proper training data
- Subject to independent testing and high performance standards
- Deployed only within the bounds of the technology’s designed function
- Used exclusively by trained staff and corroborated by human review
- Subject to internal governance mechanisms that define and promote responsible use
- Bound by safeguards to protect human rights and constitutional values
- Regulated by institutional mechanisms for ensuring transparency and oversight
“If we rush to deploy AI quickly rather than carefully, it will harm security and civil liberties alike,” Laperruque concluded. “But if we establish a strong foundation now for responsible use, we can reap benefits well into the future.”