Application deadline:
- Main deadline: Midnight, 7th December, Sydney time
- If we have unfilled slots, we may continue to accept applications until 14th December
Location: Sydney (definite); Melbourne (likely; contingent on sufficient high-quality applications)
When: January/February 2026, with remote activities before and after the main fellowship (details further down)
Apply now
We are looking for a small group of strongly motivated, highly agentic individuals with good strategic judgement (from technical researchers to governance thinkers to entrepreneurs) who want to spend the summer developing situational awareness, figuring out where they can best contribute, and working on a project that demonstrates their potential.
Our core promise: Other programs in AI safety are primarily designed to accelerate your career as fast as possible. In contrast, whilst we believe that participating in this fellowship will increase your chances of breaking into the field, that objective is secondary for us. Our promise to you is that we will do our best to equip you to make a difference, difficult as this is given the extreme uncertainty around AGI.
Program details - 10-week hybrid structure:
- Main program (i.e. the in-person components) will run over 7 weeks, from 10th January to 1st March:
- The opening unconference will be on the 10th-11th of January
- Saturday will be the main day (mandatory for all fellows; 10am-6pm)
- We'll have a secondary day on a weekday.
- Pre-program: You will have the opportunity to meet the other fellows on a call before the start of the main program. You will also be expected to spend time figuring out or refining your project idea before the fellowship begins.
- Follow-up phase: In the three weeks following the in-person components, we'll continue to provide support to help you bring your project to a close and to figure out your next steps.
- Standard weekly activities (illustrative only): 1 hour of readings; 30 minutes of discussion prep; a 2-hour discussion; a 1-hour speaker session; a 2-hour lunch; 30 minutes of mentorship or research management; 1 hour of self-organised activities; 5-10 hours working on your project (we expect more senior fellows to be able to make progress with fewer hours)
This fellowship offers:
- An opportunity to build skills and develop intellectually as part of a cohort
- An opening unconference followed by 6 weekly discussions customised to the needs of the cohort, NOT a standardised curriculum (this may be Sydney-exclusive)
- Speakers chosen based on who we think you'll learn the most from, not who's highest-status. We expect the most common format to be a short talk followed by a Q&A. Participants in the last iteration gave strongly positive feedback on the choice of speakers.
- Access to a co-working space (with a proper coffee machine) two days per week. We'll be providing a free lunch.
- An opening dinner, 2-3 socials and a closing dinner
- Compute for those conducting empirical research
- And, of course, mentorship, networking and career/application advice
- Potentially: reimbursement of flights for top candidates. Unfortunately, we're unable to assist with visas, and any reimbursement will be capped based on the cost of regional flights (i.e. from within Australia or New Zealand). The bar will also be significantly higher for international applicants.
Sadly, we're unable to offer stipends or accommodation.
More information is available on our website, including a guide to help you figure out whether this program is for you and a discussion of alumni outcomes.
Fellowship organisers - Sydney: Chris Leong and Jack Payne; Melbourne: Michael Kerrison
- Chris is an experienced program organiser, having co-founded AI Safety Australia and New Zealand and run the first two iterations of this fellowship. He has an unusually wide range of AI safety experience, including conducting agent foundations research, being selected as an ERA Technical Governance fellow, leading an AI Safety Camp project, attending the Co-operative AI Summer School and participating in ARENA. In addition, he has facilitated courses for BlueDot Impact and the Center for AI Safety.
- Jack is an ML engineer and aspiring technical safety researcher. He's been involved in the AI Safety ANZ community for 18 months, helping with field-building and community outreach. He's completed technical safety programs such as ARBOx and TARA, and is currently participating in SPAR under the mentorship of Georg Lang, working on Automating Interpretability with Agents; he will likely be conducting independent research during the program period. His primary safety concern is increasingly non-myopic behaviour, such as deceptive alignment and self-preservation.
- Michael is the Principal at Proof Consulting. He previously worked at Nous Group as a manager and senior data scientist.
Apply now