GPT-5.5 Bio Bug Bounty
OpenAI has launched a red-teaming bug bounty challenge focused on GPT-5.5, inviting security researchers to identify universal jailbreaks that could expose biosafety vulnerabilities. The program offers rewards up to $25,000 for successful submissions. The initiative aims to strengthen safety measures before broader deployment.
The challenge: identify a single universal jailbreak prompt that successfully answers all five biosafety questions from a clean chat, without triggering moderation.
The $25,000 prize goes to the first true universal jailbreak that clears all five questions.