OpenAI has announced a new initiative aimed at strengthening digital defenses while managing the risks that come with capable artificial intelligence systems. The effort, called Trusted Access for Cyber, is part of a broader strategy to enhance baseline protection for all users while selectively expanding access to advanced cybersecurity capabilities for vetted defenders.
The initiative centers on the use of frontier models such as GPT-5.3-Codex, which OpenAI identifies as its most cyber-capable reasoning model to date, and tools available through ChatGPT.
What is Trusted Access for Cyber?
Over the past several years, AI systems have evolved rapidly. Models that once assisted with simple tasks like auto-completing short sections of code can now operate autonomously for extended periods, sometimes hours or even days, to complete complex objectives.
In cybersecurity, this shift is especially important. According to OpenAI, advanced reasoning models can accelerate vulnerability discovery, support faster remediation, and improve resilience against targeted attacks. At the same time, those capabilities could introduce serious risks if misused.
Trusted Access for Cyber is intended to unlock the defensive potential of models like GPT-5.3-Codex while reducing the likelihood of abuse. As part of this effort, OpenAI is also committing $10 million in API credits to support defensive cybersecurity work.
Expanding Frontier AI Access for Cyber Defense
OpenAI argues that the rapid adoption of frontier cyber capabilities is critical to making software more secure and raising the bar for security best practices. Highly capable models accessed through ChatGPT can help organizations of all sizes strengthen their security posture, shorten incident response times, and better detect cyber threats. For security professionals, these tools can enhance analysis and improve defenses against severe and highly targeted attacks.
The company notes that many cyber-capable models will soon be broadly available from a range of providers, including open-weight models. Against that backdrop, OpenAI believes it is essential that its own models strengthen defensive capabilities from the outset. This belief has shaped the decision to pilot Trusted Access for Cyber, which prioritizes placing OpenAI’s most capable models in the hands of defenders first.
A long-standing challenge in cybersecurity is the ambiguity between legitimate and malicious actions. Requests such as “find vulnerabilities in my code” can support responsible patching and coordinated disclosure, but they can also be used to identify weaknesses for exploitation. Because of this overlap, restrictions designed to prevent harm have often slowed down good-faith research. OpenAI says the trust-based approach is meant to reduce that friction while still preventing misuse.
How Trusted Access for Cyber Works
Frontier models like GPT-5.3-Codex are trained with safeguards that lead them to refuse clearly malicious requests, such as attempts to steal credentials. In addition to this safety training, OpenAI uses automated, classifier-based monitoring to detect potential signals of suspicious cyber activity. While that monitoring is being calibrated, developers and security professionals using ChatGPT for legitimate cybersecurity tasks may still encounter limitations.
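OpenAI has not published how its classifiers work, but the general shape of automated request monitoring, and why it struggles with dual-use prompts, can be illustrated with a deliberately simple toy. The patterns and function below are invented for illustration only; a production system would use trained classifiers, not keyword rules.

```python
import re

# Illustrative toy only: keyword-based flagging of potentially suspicious
# cyber requests. Real monitoring systems use learned classifiers; this
# sketch just shows the shape of automated screening.
SUSPICIOUS_PATTERNS = [
    r"\bsteal\b.*\bcredentials\b",
    r"\bexfiltrat\w*",
    r"\bdeploy\b.*\bmalware\b",
]

def flag_request(prompt: str) -> bool:
    """Return True if the prompt matches any suspicious pattern."""
    text = prompt.lower()
    return any(re.search(p, text) for p in SUSPICIOUS_PATTERNS)

print(flag_request("help me steal user credentials"))   # flagged
print(flag_request("find vulnerabilities in my code"))  # not flagged
```

Note that the dual-use request "find vulnerabilities in my code" sails through: nothing in its surface text distinguishes a defender from an attacker, which is exactly the ambiguity that motivates a trust-based approach rather than ever-stricter content rules.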
Trusted Access for Cyber introduces additional pathways for legitimate users. Individual users can verify their identity through a dedicated cyber access portal. Enterprises can request trusted access for entire teams through their OpenAI representatives. Security researchers and teams that require even more permissive or cyber-capable models to accelerate defensive work can apply to an invite-only program. All users granted trusted access must continue to follow OpenAI’s usage policies and terms of use.
The framework is designed to prevent prohibited activities, including data exfiltration, malware creation or deployment, and destructive or unauthorized testing, while minimizing unnecessary barriers for defenders. OpenAI expects both its mitigation strategies and Trusted Access for Cyber itself to evolve as it gathers feedback from early participants.
Scaling the Cybersecurity Grant Program
To further support defensive use cases, OpenAI is expanding its Cybersecurity Grant Program with a $10 million commitment in API credits. The program is aimed at teams with a proven track record of identifying and remediating vulnerabilities in open source software and critical infrastructure systems.
By pairing financial support with controlled access to advanced models like GPT-5.3-Codex through ChatGPT, OpenAI seeks to accelerate legitimate cybersecurity research without broadly exposing powerful tools to misuse.
