OpenAI Blog Articles

15 articles from OpenAI Blog

OpenAI Blog Mar 25

Introducing the OpenAI Safety Bug Bounty program

OpenAI launches a Safety Bug Bounty program to identify AI abuse and safety risks, including agentic vulnerabilities, prompt injection, and data exfiltration.

OpenAI Blog Feb 18

Introducing EVMbench

OpenAI and Paradigm introduce EVMbench, a benchmark evaluating AI agents’ ability to detect, patch, and exploit high-severity smart contract vulnerabilities.

OpenAI Blog Dec 18

Introducing GPT-5.2-Codex

GPT-5.2-Codex is OpenAI’s most advanced coding model, offering long-horizon reasoning, large-scale code transformations, and enhanced cybersecurity capabilities.

OpenAI Blog Dec 10

Strengthening cyber resilience as AI capabilities advance

OpenAI is investing in stronger safeguards and defensive capabilities as AI models become more powerful in cybersecurity. We explain how we assess risk, limi...

OpenAI Blog Oct 30

Introducing Aardvark: OpenAI’s agentic security researcher

OpenAI introduces Aardvark, an AI-powered security researcher that autonomously finds, validates, and helps fix software vulnerabilities at scale. The system...

OpenAI Blog Sep 22

Outbound coordinated vulnerability disclosure policy

OpenAI Blog Sep 5

GPT-5 bio bug bounty call

OpenAI invites researchers to its Bio Bug Bounty. Test GPT-5’s safety with a universal jailbreak prompt and win up to $25,000.

OpenAI Blog Aug 5

Estimating worst case frontier risks of open weight LLMs

In this paper, we study the worst-case frontier risks of releasing gpt-oss. We introduce malicious fine-tuning (MFT), where we attempt to elicit maximum capabilities.

OpenAI Blog Jul 17

Agent bio bug bounty call

OpenAI invites researchers to its Bio Bug Bounty. Test the ChatGPT agent’s safety with a universal jailbreak prompt and win up to $25,000.

OpenAI Blog Mar 10

Detecting misbehavior in frontier reasoning models

Frontier reasoning models exploit loopholes when given the chance. We show we can detect exploits using an LLM to monitor their chains-of-thought. Penalizing...

OpenAI Blog Jun 20

Empowering defenders through our Cybersecurity Grant Program

Highlighting innovative research and AI integration in cybersecurity