CISOOnline

US government agency to safety test frontier AI models before release

The three join Anthropic and OpenAI, which signed similar agreements almost two years ago during the Biden administration, when CAISI was known as the US Artificial Intelligence Safety Institute.

An August 2024 release about those agreements indicated that the institute planned to provide feedback to both companies on “potential safety improvements to their models, in close collaboration with its partners at the UK AI Safety Institute (AISI).”

In a blog post published Tuesday, Microsoft said the latest agreement, and others like it, is essential to building trust and confidence in advanced AI systems. As AI capabilities advance, the company added, so too must the rigor of the testing and safeguards that underpin them.

A shift toward proactive security

Fritz Jean-Louis, principal cybersecurity advisor at Info-Tech Research Group, said the CAISI agreements signal a shift toward proactive security for agentic AI by enabling government-led testing of advanced models before and after deployment.
