OpenAI has released GPT-5.4-Cyber, a new version of its flagship AI, just one week after Anthropic rolled out its new Claude Mythos model. The new model is a variant of the main GPT-5.4 system, optimized specifically for defensive cybersecurity use cases.
According to OpenAI’s announcement, the goal is to provide better tools for network and system defenders “responsible for keeping systems, data, and users safe,” so that they can “find and fix problems faster.”
Scaling the Defense Program
Along with the new model, OpenAI is expanding its Trusted Access for Cyber (TAC) program, which first launched in February 2026. The program is now available to thousands of authenticated individual defenders and hundreds of teams responsible for securing critical infrastructure. By providing these professionals with more advanced capabilities, the company wants to help them stay ahead of threat actors who are also experimenting with AI.
An integral part of this strategy is the Codex Security tool, which moved into research preview earlier in 2026. OpenAI claims this system has helped identify and patch over 3,000 critical and high-severity vulnerabilities. It does so by monitoring codebases and suggesting fixes before they can be exploited in a cyberattack.
New Technical Capabilities and Access
The GPT-5.4-Cyber model introduces a much-talked-about new feature: binary reverse engineering. Designed specifically for security experts, it helps them analyse compiled software for malware and vulnerabilities even when they do not have access to the source code. The capability was developed across GPT-5.2 and GPT-5.3-Codex before this official release.
“Customers in the highest tiers will get access to GPT‑5.4‑Cyber, a model purposely fine-tuned for additional cyber capabilities and with fewer capability restrictions. This is a version of GPT‑5.4, which lowers the refusal boundary for legitimate cybersecurity work and enables new capabilities for advanced defensive workflows, including binary reverse engineering capabilities that enable security professionals to analyze compiled software for malware potential, vulnerabilities, and security robustness without needing access to its source code,” the company explained in a detailed announcement post.
Because GPT-5.4-Cyber is more permissive for security tasks, OpenAI is not allowing unrestricted access: to use the most advanced features, users must first verify their identity. The company uses an authentication process to ensure its software reaches legitimate security professionals rather than hackers or espionage actors.
Individual defenders need to sign up and verify themselves at chatgpt.com/cyber, whereas enterprises can request access through their official OpenAI representatives. OpenAI notes that while vulnerabilities in digital systems have existed for years, these new tools can help legitimate actors protect public services and critical infrastructure more effectively.
OpenAI plans to continue updating these defensive models throughout 2026. The company believes that as AI capabilities grow, the tools used for system defence must improve as well, enhancing system resilience and keeping digital environments secure.
Industry Expert Reactions
Several industry experts shared their views on this announcement with Hackread.com, noting both the benefits and the remaining hurdles for the sector.
Marcus Fowler, CEO of Darktrace Federal, called the move a positive step but warned about the reality of fixing bugs. He stated, “Most organisations are still constrained by the realities of remediation once an issue is discovered: patch development, testing, deployment, uptime requirements, and resource limitations. Faster or deeper analysis does not automatically translate to faster or more effective risk reduction.”
Ronald Lewis from Black Duck highlighted the different styles used by the two tech giants. He noted, “OpenAI’s TAC framework reflects a more conservative, tool-centric risk posture. It treats advanced cyber capabilities as regulated instruments, suitable for controlled deployment within professional workflows.” This stands in contrast to Anthropic’s approach, which focuses more on how the model behaves rather than who is allowed to use it.

