OpenAI opens applications for an external AI safety research fellowship
OpenAI is accepting applications for a paid fellowship program that will fund external researchers to work on safety and alignment questions related to advanced AI systems. The program, called the OpenAI Safety Fellowship, runs from September 14, 2026 through February 5, 2027. Applications close May 3, with successful applicants notified by July 25.

The fellowship is open to researchers, engineers, and practitioners from outside OpenAI. Priority research areas include safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. OpenAI states a preference for work that is empirically grounded and technically strong.
Where fellows will work
Fellows will have access to a workspace in Berkeley at Constellation, a nonprofit that supports AI safety research; remote participation is also permitted. They will work alongside a peer cohort and receive mentorship from OpenAI staff.
By the end of the program, each fellow is expected to produce a substantive research output, such as a paper, benchmark, or dataset. The fellowship includes a monthly stipend and compute support. Fellows will receive API credits but will not have access to OpenAI's internal systems.
Who can apply
OpenAI is accepting candidates with backgrounds in computer science, social science, cybersecurity, privacy, human-computer interaction, and related fields. The selection process prioritizes research ability, technical judgment, and execution capacity; specific academic credentials are not required. Letters of reference will be required as part of the application.