TLDR: We're hosting a 3-month, fully-funded fellowship to do AI safety research drawing inspiration from fields like evolutionary biology, neuroscience, dynamical systems theory, and more. Past fellows have been mentored by John Wentworth, Davidad, Abram Demski, Jan Kulveit and others, and gone on to work at places like Anthropic, Apart Research, or as full-time PIBBSS research affiliates.

​Apply here: https://www.pibbss.ai/fellowship (deadline Feb 4, 2024)


"Principles of Intelligent Behavior in Biological and Social Systems" (PIBBSS) is a research initiative focused on supporting AI safety research by making a specific epistemic bet: that we can understand key aspects of the alignment problem by drawing on parallels between intelligent behavior in natural and artificial systems.

Over the last few years we've financially supported around 40 researchers for 3-month full-time fellowships, and we are currently hosting 5 affiliates for a 6-month program, while seeking funding to support even longer-term roles. We also organise research retreats, run speaker series, and maintain an active alumni network.

We're now excited to announce the 2024 round of our fellowship series!

The fellowship

Our Fellowship brings together researchers from fields studying complex and intelligent behavior in natural and social systems, such as evolutionary biology, neuroscience, dynamical systems theory, economic/political/legal theory, and more. 

Over the course of 3 months, you will work on a project at the intersection of your own field and AI safety, under the mentorship of experienced AI alignment researchers. In past years, mentors have included John Wentworth, Abram Demski, Davidad, and Jan Kulveit, and a handful of new mentors join us every year.

In addition, you'd get to attend in-person research retreats with the rest of the cohort (past programs have taken place in Prague, Oxford, and San Francisco), and can choose to join our regular speaker events, where we host scholars working in areas adjacent to our epistemic bet, such as Michael Levin, Alan Love, and Steve Byrnes; we have also co-organised an event with Karl Friston.

The program is primarily aimed at Ph.D. or postdoctoral researchers. However, we encourage interested individuals with substantial prior research experience in their field of expertise to apply regardless of their credentials.

Past fellows have pursued projects with titles such as "Detecting emergent capabilities in multi-agent AI systems", "Constructing Logically Updateless Decision Theory", and "Tort law as a tool for mitigating catastrophic risk from AI". You can meet our alumni here, and learn more about their research by checking out the PIBBSS Summer Symposium talks on our YouTube channel.

Our alumni have gone on to work at organisations including OpenAI, Anthropic, ACS, the AI Objectives Institute, and Apart Research, or as full-time researchers in our own PIBBSS research affiliate program.

Apply!

For any questions, you can reach out to us at fellowship@pibbss.ai, or join one of our information sessions.

Feel free to share this post with others who might be interested in applying!

Apply here: https://www.pibbss.ai/fellowship (deadline Feb 4, 2024)

The 2023 cohort at our research retreat outside Prague.
A retreat session with our 2022 cohort, near Oxford.
Snapshots from talks at our 2023 Fellowship Symposium. Videos are available on YouTube here.