OpenAI paired the GPT-5.5 launch with a system card and a new Bio Bug Bounty.
What the system card says
The system card frames GPT-5.5 as a model for complex, real-world tool work: code, online research, analysis, documents, spreadsheets, and moving across software to complete tasks.
The key safety line: OpenAI says it treats GPT-5.5’s biological/chemical and cybersecurity capabilities as High under its Preparedness Framework, while noting that GPT-5.5 did not reach the Critical capability level for cybersecurity.
Bio bug bounty
The bounty is narrower than a general public bug bounty:
- Model in scope: GPT-5.5 in Codex Desktop.
- Challenge: find one universal jailbreak prompt that gets the model to answer all five bio safety questions from a clean chat without triggering moderation.
- Top reward: $25,000.
- Applications: open April 23, 2026; close June 22, 2026.
- Testing: April 28, 2026 to July 27, 2026.
- Access: limited to vetted researchers under NDA.
Why it matters
GPT-5.5 is both a product launch and a governance marker: OpenAI is signaling that the model is powerful enough to require stronger controls, even as it expands access through ChatGPT and Codex.
For users, that means two things:
- Better long-running agent performance.
- More refusals or stricter classifiers around high-risk cyber and bio prompts, especially early in rollout.