The federal government should prioritize interoperable, risk-based standards as it develops security guidance for agentic AI systems, major businesses told the National Institute of Standards and Technology.
NIST’s Center for AI Standards and Innovation is exploring ways to help AI companies and their customers protect agents from tampering or abuse, and as part of that project, it sought public comments through Monday evening. More than 930 organizations and individuals submitted comments, according to the docket, including a group of powerful industry trade groups: the American Bankers Association and the Bank Policy Institute, the software group BSA and the tech industry juggernaut TechNet.
The groups made a wide range of recommendations to NIST, including publishing reference implementations, emphasizing secure-by-design principles, supporting research on agentic AI verification, and mapping new guidance to existing NIST publications.
“A collaborative, iterative approach that is focused on practical guidance, real-world testing, and alignment with existing risk management frameworks will help ensure AI agents can be deployed securely and at scale, enabling the United States to fully capture the economic and societal benefits of this emerging technology,” TechNet said.
What makes agents uniquely risky
NIST asked commenters to address several topics, including the security risks that are unique to AI agents and the ways to mitigate those risks.
In its response, BSA described four unique threats: agents’ autonomous behavior that results in real-world actions requiring oversight; the way agents switch between different tools, which makes “static policy enforcement” difficult; agents’ retention of information over time, which could allow hackers to hijack them by poisoning their data sources; and the way agents’ “non-deterministic behavior” makes it difficult to control them with “rule-based security controls.”
To address these challenges, BSA said, businesses should establish full visibility over AI agents, catalog their permissions (which can help quickly identify unauthorized behavior), verify the supply chain of AI code that powers agents’ activities and monitor their behavior in real time.
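BSA's suggestion to catalog agent permissions so that unauthorized behavior can be spotted quickly could take many forms; the sketch below is a minimal, hypothetical illustration (the class and field names are assumptions, not drawn from any NIST or BSA material) of a registry that answers "is this agent allowed to use this tool?"

```python
# Hypothetical sketch of a permission catalog for AI agents, illustrating
# BSA's point that cataloging permissions helps flag unauthorized actions.
# All names here are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class AgentPermissions:
    agent_id: str
    allowed_tools: set = field(default_factory=set)


class PermissionCatalog:
    """Registry of which tools each agent is authorized to invoke."""

    def __init__(self):
        self._catalog = {}

    def register(self, perms: AgentPermissions) -> None:
        self._catalog[perms.agent_id] = perms

    def is_authorized(self, agent_id: str, tool: str) -> bool:
        perms = self._catalog.get(agent_id)
        return perms is not None and tool in perms.allowed_tools


catalog = PermissionCatalog()
catalog.register(AgentPermissions("billing-agent", {"read_invoices", "send_email"}))

assert catalog.is_authorized("billing-agent", "read_invoices")
assert not catalog.is_authorized("billing-agent", "wire_funds")  # would be flagged
```

In a real deployment the same check would sit in front of every tool invocation, feeding the real-time behavioral monitoring BSA describes.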
What makes agents especially risky, industry groups said, is their ability to connect to third-party databases and physical equipment through protocols like the Model Context Protocol. “Because AI agents can interact with tools, external data, and real-world systems, they introduce distinct security challenges that merit targeted attention,” TechNet said.
Don’t rush to regulate
NIST is not a regulator, and the Trump administration has demonstrated a marked aversion to prescriptive AI security mandates. Industry groups nonetheless reiterated their frequent refrain that onerous rules for any AI systems would hamper innovation without meaningfully improving security.
“The policy objective should be to reduce and manage these risks without slowing innovation through premature, overly prescriptive, or one-size-fits-all requirements,” TechNet said, encouraging NIST to focus on “performance-defining guidelines.”
BPI and ABA similarly encouraged NIST to focus on “voluntary and technology-agnostic” guidance with “practical examples and illustrative validation approaches that can be tailored by risk and operational context.”
“Such guidance would facilitate industry adoption, support integration planning and risk-informed review, including due diligence where appropriate, and support compliance with legal and regulatory obligations without prescribing a single implementation approach,” the financial-services groups said.
TechNet pointed to the aviation industry as an example of the performance-based standards it preferred. “Instead of mandating uniform technical designs, regulators established outcome-oriented standards tied to risk exposure and operational context,” the group explained. “This model created clarity around expectations while enabling innovation in aircraft design, autonomy, and operational practices.”
AI agents present different levels of risk depending on how and where they are used and how much autonomy they have, TechNet added, which makes a risk-based approach “particularly important.”
The agentic AI field is still in its infancy, TechNet said, and NIST’s guidance “should preserve meaningful room for experimentation as agentic AI security practices continue to mature.”
“Overly rigid or premature mandates,” the group warned, “could freeze security approaches in place before the field has identified best-in-class techniques.”
Advice and research on thorny issues
The AI industry could benefit from government advice and research sponsorship on a range of problems that developers still haven’t solved, according to the trade groups.
BSA encouraged NIST to study ways to verify the identity of AI agents, as well as the use of “cryptographic chains of custody” to document what agents are authorized to do. TechNet similarly cited the importance of reliable agent-identification solutions and said they should be interoperable to avoid locking out new participants in the market.
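One way to read BSA's "cryptographic chains of custody" idea is as a tamper-evident log of what an agent was authorized to do: each record carries a hash link to the prior record and a keyed MAC, so any alteration breaks the chain. The sketch below is an assumption-laden illustration (record fields, key handling, and function names are all invented for demonstration), not a description of any proposal NIST has adopted.

```python
# Illustrative chain-of-custody sketch: each authorization record links to
# the previous record's MAC and is itself authenticated with HMAC-SHA256,
# so tampering with any record is detectable. Details are assumptions.
import hashlib
import hmac
import json

SECRET_KEY = b"demo-key"  # in practice: a managed per-agent signing key


def append_record(chain: list, agent_id: str, action: str) -> None:
    """Append an authenticated record that links to the previous one."""
    prev_mac = chain[-1]["mac"] if chain else "genesis"
    body = {"agent_id": agent_id, "action": action, "prev": prev_mac}
    payload = json.dumps(body, sort_keys=True).encode()
    body["mac"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    chain.append(body)


def verify_chain(chain: list) -> bool:
    """Recompute every MAC and hash link; any mismatch means tampering."""
    prev_mac = "genesis"
    for record in chain:
        body = {k: v for k, v in record.items() if k != "mac"}
        if body.get("prev") != prev_mac:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, record["mac"]):
            return False
        prev_mac = record["mac"]
    return True


chain = []
append_record(chain, "payments-agent", "approve_transfer")
append_record(chain, "payments-agent", "notify_customer")
assert verify_chain(chain)

chain[0]["action"] = "approve_transfer_x10"  # tampering with a record...
assert not verify_chain(chain)               # ...breaks verification
```

A production system would use asymmetric signatures and proper key management rather than a shared secret, but the hash-linking structure is the core of the tamper-evidence property.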
With more and more banks seeking to use AI agents to handle asset exchanges, ABA and BPI encouraged NIST to offer guidance specific to the financial industry, including reference materials for “secure counterparty interactions.”
TechNet, along with ABA and BPI, also asked NIST to incorporate its agent-specific guidance into existing publications, like the Risk Management Framework.