NIST unveils ARIA to evaluate and verify AI capabilities, impacts


The National Institute of Standards and Technology (NIST) is launching a new testing, evaluation, validation, and verification (TEVV) program intended to improve understanding of artificial intelligence’s capabilities and impacts.

Assessing Risks and Impacts of AI (ARIA) aims to help organizations and individuals determine whether a given AI technology will be valid, reliable, safe, secure, private, and fair once deployed. The program follows several NIST announcements tied to the 180-day mark of the Executive Order on trustworthy AI, as well as the U.S. AI Safety Institute’s unveiling of its strategic vision and international safety network.

“To fully understand the impacts AI is having and will have on our society, we need to test how AI functions in realistic scenarios — and that’s exactly what we’re doing with this program,” said U.S. Commerce Secretary Gina Raimondo.

“With the ARIA program, and other efforts to support Commerce’s responsibilities under President Biden’s Executive Order on AI, NIST and the U.S. AI Safety Institute are pulling every lever when it comes to mitigating the risks and maximizing the benefits of AI,” Raimondo continued.

“The ARIA program is designed to meet real-world needs as the use of AI technology grows,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio. “This new effort will support the U.S. AI Safety Institute, expand NIST’s already broad engagement with the research community, and help establish reliable methods for testing and evaluating AI’s functionality in the real world.”

ARIA expands on the AI Risk Management Framework, which NIST released in January 2023, and helps operationalize the framework’s risk measurement function, which recommends using quantitative and qualitative techniques to analyze and monitor AI risks and impacts. ARIA will support that assessment by developing a new set of methodologies and metrics for quantifying how well a system maintains safe functionality within societal contexts.
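NIST has not yet published ARIA’s methodologies, so purely as an illustration of what “quantifying how well a system maintains safe functionality within societal contexts” might look like, the hypothetical Python sketch below aggregates pass/fail evaluation trials, weighted by failure severity, into a per-context risk score. Every name here (EvalResult, contextual_risk_score) and the weighting scheme are assumptions for illustration, not ARIA’s actual metrics.

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    """One observation of a system's behavior in a deployment context.

    This schema is hypothetical; it is not part of any NIST specification.
    """
    context: str          # e.g., "customer support chatbot"
    passed_safety: bool   # did the system stay within safe bounds?
    severity: float       # 0.0 (benign) to 1.0 (severe), if it failed

def contextual_risk_score(results: list[EvalResult]) -> dict[str, float]:
    """Aggregate per-context risk: failure rate weighted by severity.

    Returns a score in [0, 1] for each context, where 0 means no observed
    unsafe behavior and 1 means every trial failed at maximum severity.
    """
    by_context: dict[str, list[EvalResult]] = {}
    for r in results:
        by_context.setdefault(r.context, []).append(r)

    return {
        ctx: sum(0.0 if r.passed_safety else r.severity for r in rs) / len(rs)
        for ctx, rs in by_context.items()
    }

if __name__ == "__main__":
    trials = [
        EvalResult("customer support", True, 0.0),
        EvalResult("customer support", False, 0.4),
        EvalResult("medical triage", False, 0.9),
        EvalResult("medical triage", True, 0.0),
    ]
    print(contextual_risk_score(trials))
    # {'customer support': 0.2, 'medical triage': 0.45}
```

The point of the sketch is the shape of the measurement, not the numbers: scoring the same system separately per deployment context reflects ARIA’s stated emphasis on assessing systems in realistic settings rather than as isolated models.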

“Measuring impacts is about more than how well a model functions in a laboratory setting,” said Reva Schwartz, NIST Information Technology Lab’s ARIA program lead. “ARIA will consider AI beyond the model and assess systems in context, including what happens when people interact with AI technology in realistic settings under regular use. This gives a broader, more holistic view of the net effects of these technologies.”

The results of ARIA will support and inform NIST’s broader efforts, including through the U.S. AI Safety Institute, to build the foundation for safe, secure, and trustworthy AI systems.
