OpenAI has announced a new Bio Bug Bounty program for GPT-5.5 as part of its efforts to improve safety controls for advanced AI systems and to address misuse in biology.
The initiative invites qualified researchers to test whether GPT-5.5 can be universally jailbroken to bypass biosecurity protections.
The program is focused on one specific challenge: participants must find a single “universal jailbreak” prompt that makes GPT-5.5 answer all five questions in OpenAI’s bio safety challenge, each asked from a clean chat session, without triggering moderation.
Strengthening Safeguards for Advanced AI
In simple terms, researchers are being asked to determine whether a carefully designed prompt can consistently override the model’s biological safety guardrails.
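To make that bar concrete, the sketch below shows how a pass/fail harness for such a test might be structured. The session client, the question list, and the refusal check are hypothetical placeholders for illustration only; they are not OpenAI’s actual challenge interface.

```python
# Minimal sketch of the bounty's pass/fail logic, under the assumptions above.

CHALLENGE_QUESTIONS = [
    "<challenge question 1>",
    "<challenge question 2>",
    "<challenge question 3>",
    "<challenge question 4>",
    "<challenge question 5>",
]

def open_clean_session():
    """Hypothetical stub: return a fresh chat session with no prior context."""
    raise NotImplementedError("Replace with the session client used in testing.")

def is_refusal(reply: str) -> bool:
    """Hypothetical heuristic: treat a safety refusal or a moderation
    block as a failure for that question."""
    return reply is None or "can't help with that" in reply.lower()

def is_universal_jailbreak(candidate_prompt: str) -> bool:
    """A candidate passes only if every question is answered from a
    brand-new session without tripping moderation."""
    for question in CHALLENGE_QUESTIONS:
        session = open_clean_session()  # clean chat, no carried-over context
        reply = session.send(candidate_prompt + "\n" + question)
        if is_refusal(reply):
            return False  # a single refusal disqualifies the candidate
    return True
```

The key design point, reflected in the loop, is that each question starts from a clean session: a prompt that only works by building up conversational context across turns would not meet the “universal” standard.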
According to OpenAI, the model in scope is GPT-5.5 running only in Codex Desktop.
The company is offering a top reward to the first participant who discovers a true universal jailbreak that clears all five challenge questions.
OpenAI also said it may issue smaller rewards for partial successes, depending on the results. Applications for the program opened on April 23, 2026, and will close on June 22, 2026.
Testing begins on April 28, 2026, and will run through July 27, 2026. Access is not open to the public.
Instead, OpenAI will invite a vetted group of trusted bio red-teamers and also review applications from new researchers with relevant experience in AI red teaming, security, or biosecurity.
To take part, applicants must submit a short form including their name, affiliation, and experience.
Accepted participants and collaborators must already have ChatGPT accounts and must sign a non-disclosure agreement.
OpenAI said all prompts, model outputs, findings, and related communications will remain under NDA.
From a cybersecurity perspective, the program reflects a growing trend in adversarial testing of frontier AI systems.
Bug bounty programs have long been used to find vulnerabilities in software, cloud platforms, and enterprise products.
OpenAI is applying a similar model to AI safety by asking experts to actively probe its defenses and identify prompt-based weaknesses before threat actors do.
The focus on biology is especially important because powerful AI models could be misused to support harmful scientific tasks if safeguards fail.
By testing GPT-5.5 against universal jailbreaks, OpenAI appears to be measuring the resilience of its protections under realistic attack conditions.
The company said researchers interested in broader security work can also look at its existing Safety Bug Bounty and Security Bug Bounty programs.
The new GPT-5.5 Bio Bug Bounty adds another layer to that effort, showing how AI security increasingly overlaps with biosecurity, red teaming, and advanced prompt-injection research.