NIST asks public for help securing AI agents

The National Institute of Standards and Technology is asking the public for suggested approaches to managing the security risks of AI agents.

In a Federal Register notice set for publication on Thursday, NIST’s Center for AI Standards and Innovation (CAISI) solicited “information and insights from stakeholders on practices and methodologies for measuring and improving the secure development and deployment of artificial intelligence (AI) agent systems.”

The public engagement reflects persistent concerns about security weaknesses in increasingly ubiquitous AI agents. Many companies have adopted these agents without fully understanding their flaws or developing plans to mitigate them, inadvertently creating new avenues for hackers to penetrate their computer networks. The wide latitude given to poorly secured AI agents could be especially dangerous in critical infrastructure networks, which sometimes control industrial machinery that is essential to health and safety.

“If left unchecked, these security risks may impact public safety, undermine consumer confidence, and curb adoption of the latest AI innovations,” NIST said in its solicitation.

The agency is giving tech companies, academic researchers and other members of the public 60 days to provide "concrete examples, best practices, case studies, and actionable recommendations based on their experience developing and deploying AI agent systems and managing and anticipating their attendant risks."

Looking for guidance

CAISI, created during the Biden administration and overhauled in 2025 under President Donald Trump, is responsible for developing AI security assessment methods, testing AI models for weaknesses and partnering with industry to create voluntary security standards. NIST said public feedback would help CAISI evaluate AI security risks and produce “technical guidelines and best practices to measure and improve the security of AI systems.”

The solicitation asks the public to respond to a number of specific questions, including several about the security risks unique to AI agents, the technical controls available for securing agents and the current maturity level of methods for detecting cyber incidents involving agents.

CAISI also wants to know how agents’ specific capabilities and deployment methods can influence the effectiveness of their security controls and which agent-security research areas deserve the most urgent attention, among other issues.


