CISOOnline

Anthropic Mythos spurs White House to weigh pre-release reviews for high-risk AI models

What a review might mean

Pre-release evaluation of AI models is not a new idea, but it remains poorly defined in the US policy context. The Biden executive order Trump revoked had required developers of the largest AI systems to notify the government and share safety test results before deployment — one of several provisions the Trump administration characterized as burdensome obstacles to innovation.

The institutional picture has also shifted. The US AI Safety Institute, created under the Biden order to conduct pre-deployment evaluation and housed within the National Institute of Standards and Technology, was substantially reorganized after Trump took office. In June 2025, the agency was renamed the Center for AI Standards and Innovation, and its mission was revised.

Commerce Secretary Howard Lutnick framed the change as a repudiation of what he called the use of safety as a pretext for censorship and regulation. The renamed center’s mandate now includes leading unclassified evaluations of AI capabilities that may pose risks to national security, with a stated focus on demonstrable threats such as cybersecurity, biosecurity, and chemical weapons. That remit could position it to play a role in any future pre-release review process.
