The AI Safety Research Program was a four-month project that brought together talented students and junior researchers with a deep interest in long-term AI safety. Its aims were to create a creative and inspiring research environment, help prospective alignment researchers work on AI safety research problems, deepen their understanding of the AI alignment field, improve their research skills, and bring them closer to the existing research community.
Update June 2020: Updated the project outputs and status.
To hear about similar events in the future, you can subscribe to our mailing list.
The 2019 program ran from October 2019 to March 2020, with the following structure:
- Application phase: 44 people went through the application process, which included writing a research proposal for a topic in AI safety. Out of these, we accepted 20 to participate in the program.
- Preparation phase: The participants then had two weeks to remotely comment on and expand each other’s ideas.
- Project selection workshop: The participants then attended a five-day workshop at the Luckley Farm near Oxford, where they discussed their project ideas in more detail. They were further helped by ten senior researchers, who gave invited talks, held discussions with the participants, and provided feedback on their research ideas (either by attending the full workshop or by visiting for a shorter time). The workshop concluded with the participants forming six smaller teams, each pursuing a specific research project.
- Remote phase: After the initial workshop, each team had two weeks to finalize its research proposal and give feedback on the other teams’ proposals, followed by ten weeks of remote work on the topic.
- Research retreat: The teams then reconvened for an eight-day research retreat near Prague for intensive collaboration on their projects and further feedback. For each team, this effort culminated in a draft of a technical report and a plan for further work. You can find the informal reports on the Projects page.
- Finalizing projects and future work: After the retreat, the teams had three weeks to finalize the drafts produced during the program. We believe that some of the participants will pursue research on topics related to their project in the future, independently of AISRP.
The program focused on giving the participants technical and strategic feedback on their projects, improving their high-level understanding of the AI alignment research landscape, providing a friendly and productive environment, and continuously offering motivational and research mentoring.