The U.S. National Institute of Standards and Technology (NIST), through its Information Technology Laboratory (ITL), is supporting critical infrastructure sectors by launching the development of the AI RMF Trustworthy AI in Critical Infrastructure Profile. The profile will guide critical infrastructure operators toward specific risk management practices to consider when adopting AI-enabled capabilities. It will also help them communicate their trustworthiness requirements in an actionable way to teams, developers, and other stakeholders across the AI and critical infrastructure lifecycles and supply chains.
“To meet the demand for enhanced safety, security, reliability, capacity, and efficiency, the nation’s Critical Infrastructure (CI) will increasingly rely on technological advancements such as Artificial Intelligence (AI) across Information Technology (IT), Operational Technology (OT), and Industrial Control Systems (ICS),” Raymond Sheh and Martin Stanley, NIST researchers, wrote in a concept note published last week. “Adopting AI in these high-stakes environments relies on AI systems being worthy of trust. The NIST AI Risk Management Framework (AI RMF) was developed to define and promote trustworthiness in AI systems through a repeatable, full lifecycle approach that organizations can use to unlock the benefits of AI while appropriately managing risks.”
The AI RMF Trustworthy AI in Critical Infrastructure Profile will address AI trustworthiness characteristics as defined in the NIST AI RMF. Examples of AI systems that may be used in CI, with features that can improve their trustworthiness, include, but are not limited to: AI agents for autonomous cybersecurity incident response that include tested, evaluated, validated, and verified guardrails; AI-enabled facility and plant monitoring systems that are hardened against adversarial input and monitored for changes in the environment outside verified regions of validity; and AI-enhanced deterministic diagnostic assistants that use AI bills of materials to provide traceable, auditable rationales for their recommendations.
It also covers physics-informed neuro-symbolic AI systems for predicting and maintaining system stability that include verifiable performance guarantees; autonomous robots and vehicles that leverage multimodal sensing, redundant safety systems, and deterministic fail-safe controllers; and AI-powered digital twins for proactively managing distributed critical data centers to maintain operation during emergencies without overloading fragile utility infrastructure.
Furthermore, the profile will tackle AI optimization systems that degrade gracefully and transparently in response to adverse conditions while alerting human supervisors to take additional measures; and AI-enabled, transparent, and explainable compliance and risk monitoring systems to improve governance responsiveness while maintaining human-in-the-loop oversight.
NIST said that the AI RMF Trustworthy AI in Critical Infrastructure Profile aims to align with, contextualize, reference, interpret, adapt, and facilitate the operationalization of existing and upcoming guidance documents at the intersection of AI, IT, OT, ICS, software development, cybersecurity, and critical infrastructure. The profile and associated resources will apply the AI RMF in ways that include, but are not limited to, harmonizing and bridging definitions for key terms and concepts at the intersection of AI, critical infrastructure, and related domains to facilitate efficient, effective cross-sector cooperation and interoperability.
It will also guide requirements analysis to tailor the risk management of AI systems to the performance and reliability expectations and operational realities of critical infrastructure, including legacy systems, physically distributed assets, and resourcing constraints.
Sheh and Stanley noted that the profile will also address requirements specific to the critical infrastructure sector, including the need for deterministic behavior, explainability, graceful degradation, and fail-safe operation. It will also emphasize the heightened need for adversarial robustness across all lifecycle stages of AI in critical infrastructure, and support critical infrastructure needs for rigorous testing, evaluation, validation, and verification (TEVV) of systems, including those that incorporate AI.
Furthermore, the AI RMF Trustworthy AI in Critical Infrastructure profile is set to illuminate critical infrastructure-specific capabilities and trade-offs with competing and complementary AI techniques, technologies, and approaches, and promote visibility and collaboration across the supply chain of AI to address the unique needs, challenges, and risks of AI in critical infrastructure. It will also highlight practical, actionable, and measurable steps that can be taken by stakeholders at any level of AI expertise and risk management maturity.
To this end, Sheh and Stanley note that NIST is inviting stakeholders to engage with its community of interest and contribute input through seminars, working sessions, and responses to requests for information, position papers, and draft publications. The agency is seeking information on current and emerging use cases for AI in critical infrastructure applications, as well as governance challenges unique to AI deployment, particularly in OT, ICS, and cyber-physical environments. Input is also requested on existing AI, cybersecurity, and risk management policies that may need to be reinterpreted to apply effectively in critical infrastructure contexts.
Stakeholders are further encouraged to highlight common questions, pain points, and areas of confusion or ambiguity surrounding AI adoption, along with relevant standards, policies, and industry frameworks with which the profile should align. In addition, identifying gaps in practical, actionable guidance across different sectors and stakeholder groups remains a key focus.
NIST looks forward to engaging with industry, user groups, regulators, policymakers, academia, other stakeholders, and the broader community. Working collaboratively, NIST will develop a profile that gives critical infrastructure sectors greater confidence to deploy AI agents and tools as part of their overall strategy. The profile should also offer developers and vendors the guidance and certainty needed to catalyze the development of solutions that manage risk and are worthy of trust.
NIST is creating a Trustworthy AI in Critical Infrastructure Profile Community of Interest to gather feedback on the effort. The agency welcomes participation from across the entire critical infrastructure ecosystem, spanning sectors, organizational roles, and supply chain partners.
Back in November 2024, the U.S. Department of Homeland Security (DHS) released recommendations for the secure development and deployment of AI in critical infrastructure. The resource was crafted for all levels of the AI supply chain, covering cloud and compute providers, AI developers, and critical infrastructure owners and operators, as well as civil society and public sector entities that protect consumers. Developed in collaboration with industry and civil society, the guidelines promote responsible AI use in America's essential services.


