DarkReading

Trusted Access For Cyber Program Scales Up At OpenAI


OpenAI has announced a major expansion of its Trusted Access for Cyber (TAC) program, alongside the introduction of GPT 5.4 Cyber, a model designed to support defensive cybersecurity use cases. The move comes as the company prepares for more advanced AI systems in the coming months, with a focus on strengthening cyber defense while managing risks tied to increasingly capable models.

The expansion of the Trusted Access for Cyber initiative aims to onboard thousands of verified individual defenders and hundreds of security teams responsible for protecting critical software and infrastructure.

The program is positioned as part of a broader strategy to scale cybersecurity defenses in parallel with advances in artificial intelligence.

Trusted Access for Cyber Program Expands for Wider Defender Use

At the center of the announcement is the scaling of the Trusted Access for Cyber program, which was first introduced earlier this year. The initiative is designed to provide vetted cybersecurity professionals with controlled access to advanced AI tools that may otherwise be restricted due to their dual-use nature.

With this expansion, OpenAI is introducing additional access tiers based on identity verification and trust signals. Individual users can now verify themselves through structured onboarding, while enterprises can request access for their teams. The goal is to extend advanced defensive capabilities to a broader group of legitimate users without opening the door to misuse.

The company says this approach reflects a shift away from manually deciding who gets access. Instead, it relies on objective verification methods such as identity checks and usage signals to determine eligibility.


GPT 5.4 Cyber Built for Defensive Cybersecurity Workflows

A key component of the expanded Trusted Access for Cyber program is the launch of GPT 5.4 Cyber, a specialized version of OpenAI's latest model fine-tuned for cybersecurity tasks.

Unlike general-purpose models, GPT 5.4 Cyber is designed to be more permissive in handling cyber-related queries. This allows security professionals to perform advanced tasks such as binary reverse engineering, vulnerability analysis, and malware investigation without facing restrictive safeguards that might otherwise block legitimate work.

However, access to GPT 5.4 Cyber is currently limited. OpenAI is deploying the model in a controlled manner to vetted security vendors, organizations, and researchers. This phased rollout reflects concerns around the dual-use nature of such capabilities, which could be exploited if widely accessible without safeguards.

Cybersecurity Strategy Focuses on Scaling Defenses with AI

The expansion of the Trusted Access for Cyber program is part of OpenAI’s broader cybersecurity strategy, which is built on three principles: democratized access, iterative deployment, and ecosystem resilience.

The company argues that cyber risks are already widespread and growing, even before the rise of advanced AI. At the same time, AI tools are increasingly being used by both defenders and attackers. This dual-use reality has shaped OpenAI’s approach to gradually expanding access while strengthening safeguards.

Since 2023, OpenAI has supported cybersecurity efforts through initiatives such as its Cybersecurity Grant Program and the development of safety frameworks for AI deployment. More recently, it introduced tools like Codex Security, which helps identify and fix vulnerabilities across codebases.

According to the company, Codex Security has already contributed to fixing thousands of high- and critical-severity vulnerabilities, highlighting the potential for AI to accelerate defensive workflows.

Balancing Access and Risk in Trusted Access for Cyber

A central challenge addressed by the Trusted Access for Cyber program is how to balance accessibility with security. Cyber capabilities are inherently dual-use, meaning the same tools that help defenders can also be used by threat actors.

To address this, OpenAI is combining broader access to general models with stricter controls for more advanced capabilities. Higher levels of access require stronger verification, clearer intent signals, and greater accountability.

The company also notes that some limitations will remain in place, particularly in environments where visibility into usage is restricted. This includes scenarios involving zero-data retention or third-party platforms where monitoring is limited.

A Shift Toward Structured Cyber Defense Access

The expansion of the Trusted Access for Cyber program reflects a growing recognition that restricting access alone is not a sustainable cybersecurity strategy. As AI capabilities advance, defenders require equally powerful tools to keep pace with evolving threats.

By focusing on verification and trust-based access rather than blanket restrictions, OpenAI is attempting to create a more structured model for deploying sensitive capabilities. This approach acknowledges the complexity of modern cybersecurity, where access to advanced tools can be both necessary and risky.

At the same time, the controlled rollout of GPT 5.4 Cyber suggests that concerns around misuse remain significant. The success of this model will likely depend on how effectively access controls and monitoring mechanisms can scale alongside adoption.

As AI continues to reshape cybersecurity, initiatives like the Trusted Access for Cyber program highlight the challenge of enabling defenders without inadvertently empowering attackers.
