OpenAI said it is expanding its Trusted Access for Cyber program to “thousands of individuals and organizations,” who will use the company’s technology to root out bugs and vulnerabilities in their products.
The program will also incorporate GPT 5.4 Cyber, a new variant of ChatGPT that OpenAI says is specifically optimized for cybersecurity tasks. OpenAI’s goal with this release is to make advanced cybersecurity tools more widely accessible.
The company said access to the program and cybersecurity-focused model will still be governed by “strong” Know-Your-Customer and identity verification rules to help prevent the model’s spread to bad actors.
“Our goal is to make these tools as widely available as possible while preventing misuse,” the company said in a blog posted Tuesday. “We design mechanisms which avoid arbitrarily deciding who gets access for legitimate use and who doesn’t.”
OpenAI’s announcement comes one week after Anthropic rolled out Project Glasswing, a similar effort that seeks to provide major tech companies with Claude Mythos, an unreleased model that Anthropic officials have claimed is too dangerous to sell commercially.
OpenAI officials noted that they publicly announced the Trusted Access for Cyber program months earlier, and they have quietly avoided drawing direct comparisons between Mythos and GPT 5.4 Cyber.
Cybersecurity experts in the U.S. and UK have described Mythos as a significant improvement over previous frontier models at identifying (and potentially exploiting) cybersecurity vulnerabilities, though debate and speculation remain about the model’s ultimate impact on information security.
Similarly, GPT 5.4 Cyber has been fine-tuned for testing and vulnerability research, and OpenAI says it plans to make iterative improvements to the program as lessons are learned.
The company has plans to allow a broader group of cyber operators to use the model to protect critical infrastructure, public services and other digital systems. The company said it is also leery of having too much influence over which industries or sectors ultimately take part in the program.
“We don’t think it’s practical or appropriate to centrally decide who gets to defend themselves,” the blog stated. “Instead, we aim to enable as many legitimate defenders as possible, with access grounded in verification, trust signals, and accountability.”