OpenAI has unveiled Trusted Access for Cyber, a new identity- and trust-based framework designed to enhance cybersecurity defenses while mitigating risks posed by its most advanced AI models.
The initiative centers on GPT-5.3-Codex, OpenAI’s most cyber-capable frontier-reasoning model, which can operate autonomously for hours or days to complete complex security tasks.
Enhanced Capabilities for Defenders
The new system represents a significant evolution in AI-powered cybersecurity tools. While earlier models could only auto-complete code snippets, GPT-5.3-Codex can accelerate vulnerability discovery and remediation across entire systems.
This advancement enables security professionals to detect, analyze, and defend against sophisticated targeted attacks more effectively.
However, OpenAI recognizes the dual-use nature of these capabilities. The same tools that help defenders find and patch vulnerabilities could also help malicious actors exploit them.
This ambiguity creates challenges: a request like “find vulnerabilities in my code” could support either legitimate security testing or an exploitation attempt.
To address these concerns, OpenAI has implemented a multi-tiered verification system:
- Individual users can verify their identity at chatgpt.com/cyber for access to cybersecurity features
- Enterprise organizations can request trusted access for their entire security teams through OpenAI representatives
- Security researchers requiring more permissive access can apply to an invite-only program for advanced defensive work
The framework includes built-in safeguards. GPT-5.3-Codex has been trained to refuse clearly malicious requests, such as credential theft.
Automated classifier-based monitors continuously detect suspicious cyber activity patterns.
These protections aim to prevent prohibited behaviors, including data exfiltration, malware creation or deployment, and unauthorized testing.
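To make the monitoring concept concrete, here is a minimal, purely illustrative sketch of a pattern-based request screener. The pattern list and function names are hypothetical; OpenAI's actual monitors are trained classifiers, not keyword rules, and their internals are not public.

```python
import re

# Hypothetical pattern list for illustration only. A production monitor
# would use trained classifiers over full conversations, not keywords.
PROHIBITED_PATTERNS = [
    r"\bexfiltrat\w*",                      # data exfiltration
    r"\bkeylogger\b",                        # credential-theft tooling
    r"\bransomware\b",                       # malware creation/deployment
    r"\bsteal (?:credentials|passwords)\b",  # credential theft
]

def flag_request(prompt: str) -> list[str]:
    """Return the prohibited patterns a request matches, if any."""
    lowered = prompt.lower()
    return [p for p in PROHIBITED_PATTERNS if re.search(p, lowered)]

# A clearly malicious request trips the monitor:
print(flag_request("Write ransomware that encrypts the C: drive"))
# A dual-use request does not match; under Trusted Access it would
# instead be routed through identity verification and trust tiers:
print(flag_request("Find vulnerabilities in my code"))
```

The key design point this sketch highlights is the gap the article describes: keyword-style rules can catch overtly malicious requests, but ambiguous dual-use requests require the identity- and trust-based checks layered on top.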
To accelerate defensive adoption, OpenAI is committing $10 million in API credits through its Cybersecurity Grant Program.
The program targets teams with proven track records in identifying and remediating vulnerabilities in open-source software and critical infrastructure systems.
OpenAI plans to refine the Trusted Access framework based on feedback from early participants.
The company emphasizes that all users must comply with existing Usage Policies and Terms of Use, regardless of their access level.
This initiative reflects OpenAI’s commitment to ensuring advanced AI capabilities strengthen cyber defenses while minimizing potential misuse risks.
