OpenAI Launches Safety Fellowship for External Researchers with Compute Power and Stipends

According to monitoring by 1M AI News, OpenAI has announced that applications are open for the Safety Fellowship, a pilot program aimed at external researchers, engineers, and practitioners. The fellowship will run approximately five months, from September 14, 2026, to February 5, 2027, and focus on AI safety and alignment research. Priority areas include safety evaluation, ethics, robustness, scalable mitigation strategies, privacy-preserving safety methods, agent supervision, and high-risk abuse scenarios.

Selected fellows will receive a monthly stipend, compute support, and mentorship from OpenAI, with the option to work at the Constellation co-working space in Berkeley or remotely. By the end of the program, participants are expected to produce substantial research outputs such as papers, benchmarks, or datasets. Fellows will receive API credits and related resources but will not have access to OpenAI's internal systems.

Applications are open to candidates from multidisciplinary backgrounds, including computer science, the social sciences, cybersecurity, privacy, and human-computer interaction, and require a letter of recommendation. The application deadline is May 3, with results to be announced by July 25.
