The Fund for Alignment Research (FAR) is hiring research engineers and communication specialists to work closely with AI safety researchers. We believe these roles are high-impact, contributing to some of the most interesting research agendas in safety. We also think they offer an excellent opportunity to build skills and connections via mentorship and working closely with researchers at a variety of labs.
We were inspired to start FAR after noticing that many academic and independent researchers wanted to work with more collaborators, but often lacked the institutional framework and hiring pipeline to do so. On the other hand, there are many talented people who would like to contribute to AI safety but lack a stepping stone into the field. Our hope is that FAR can fill this gap, both directly accelerating valuable research and helping to strengthen the field's talent pipeline.
In the remainder of this post we'll give a bit more information about FAR and our current collaborators, and then summarize our current openings. Please consider applying or forwarding them to a friend who might be interested! We are also actively brainstorming other ways that FAR could be useful to the community. If you have any ideas, we’d love to hear from you!
About Us
FAR is a non-profit led by Ethan Perez, Adam Gleave, Scott Emmons, and Claudia Shi: a group of AI safety researchers looking to reduce existential risk from artificial intelligence. Ethan recently graduated from the PhD program at New York University, Adam and Scott are PhD candidates at UC Berkeley, and Claudia is a PhD candidate at Columbia University.
FAR provides services to AI safety researchers to accelerate their research agendas. We are currently focused on supporting the agendas of Ethan, Adam, Scott, and Claudia. We are also trialing a collaboration with the labs of David Krueger and Jacob Steinhardt, professors at the University of Cambridge and UC Berkeley, respectively.
Our services are currently provided to recipients free of charge, paid for out of FAR's general support funding. In the future we plan to charge partners who use large quantities of our services on an at-cost basis. This could be paid for from a partner's existing grant, or we can assist the partner with fundraising for this purpose.
We anticipate supporting many of the same people that BERI currently works with. However, our organizations have differing emphases. First, our core services are different: to a first approximation, BERI provides "operations as a service" whereas FAR provides "a technical team as a service". That is, FAR recruits, manages, and trains its own team, whereas BERI hires people primarily at the request of its partners. Second, FAR works primarily with individual researchers whereas BERI works primarily with entire labs, although this distinction may blur in the future. Finally, FAR is more opinionated than BERI: if demand for our services exceeds what our team can support, we will prioritize based on our internal view of which agendas are most promising.
Although FAR is a new organization, our research has already led to a method for learning from language feedback as a data-efficient alternative to RL from human feedback. We have analyzed challenges associated with treating a language model as an RL policy and launched a competition on inverse scaling for language models. We are currently pursuing several other early-stage AI safety projects. Once we have beta-tested our model of support, we plan to expand the number of partners we work with. Feel free to get in touch at hello@alignmentfund.org if you think you might be a good fit!
Operations Manager
We are seeking an Operations Manager to help us scale to support more AI alignment researchers. Your job will be varied, including tasks such as recruitment, procurement, finances, fundraising, and events management. You might be a good fit if you're impact-motivated and organized, and you like developing scalable business processes.
You will work closely with the FAR executive team. You will be responsible for FAR's ongoing operations across areas such as recruitment, finances, and procurement. Additionally, you will be expected to identify and work on whatever is currently a bottleneck for FAR's growth. This could range from researching non-profit regulations to helping write grant proposals to interviewing the researchers we support to understand their needs.
This is a full-time contractor role compensated at $30-$70/hour ($60-$140k/year), depending on experience and location. Ideally, you would work in person from our Berkeley, CA office. However, we are open to hiring remote candidates, especially those willing and able to visit our office periodically.
You can read more about the role and apply here!
Research Engineer
We are hiring research engineers to help with AI safety projects in reinforcement learning, natural language processing, and adversarial robustness. Specifically, our research engineers will initially work on one of the following topics:
Reward and imitation learning. Developing a reliable set of baseline implementations for algorithms that can learn from human feedback (see the sketch after this list). Extensions may include developing standardized benchmark environments, datasets, and evaluation procedures.
Natural language processing. Training language models with language feedback. Integrating large language models with reinforcement learning in order to better understand human intentions. Creating text datasets and using them to fine-tune large language models.
Adversarial robustness. Applying reinforcement learning techniques to test for vulnerabilities in narrowly superhuman systems, such as KataGo.
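To give a flavor of this kind of engineering work, here is a minimal sketch of reward learning from pairwise human preferences, the Bradley-Terry setup commonly used in RL from human feedback. It is illustrative only: the network, shapes, and random data below are our own assumptions, not FAR's codebase.

```python
# Minimal sketch: learning a reward model from pairwise preferences
# (Bradley-Terry model, as in RL from human feedback).
# All names and shapes are illustrative assumptions, not FAR code.
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    """Maps an (observation, action) pair to a scalar reward estimate."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def preference_loss(reward_net, seg_a, seg_b, prefs):
    """Bradley-Terry loss: P(A preferred over B) = sigmoid(R(A) - R(B)).

    seg_a, seg_b: (obs, act) tensors of shape (batch, T, dim) for two
    trajectory segments; prefs: 1.0 where the human preferred segment A.
    """
    ret_a = reward_net(*seg_a).sum(dim=-1)  # predicted return of segment A
    ret_b = reward_net(*seg_b).sum(dim=-1)  # predicted return of segment B
    return nn.functional.binary_cross_entropy_with_logits(ret_a - ret_b, prefs)

# Toy training step on random data, just to show the loop.
obs_dim, act_dim, T, batch = 4, 2, 10, 8
net = RewardNet(obs_dim, act_dim)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
seg_a = (torch.randn(batch, T, obs_dim), torch.randn(batch, T, act_dim))
seg_b = (torch.randn(batch, T, obs_dim), torch.randn(batch, T, act_dim))
prefs = torch.randint(0, 2, (batch,)).float()
loss = preference_loss(net, seg_a, seg_b, prefs)
opt.zero_grad(); loss.backward(); opt.step()
```

In practice the segments would come from policy rollouts and the preferences from human labelers; a baseline suite like the one described above would wrap a loop like this in standardized environments and evaluation procedures.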
We are looking to offer contracts with the following details:
Location: Both remote and in-person (Berkeley, CA) are possible.
Hours: Full-time preferred, part-time possible (minimum 20 hours/week).
Contract length: 6–12 months with possibility for renewal.
Compensation: $50-$100/hour depending on experience and location. We will also pay for work-related travel and equipment expenses.
You can read more about the role and apply here!
Communications Specialist
We are seeking freelance science communicators with skills spanning technical writing, graphic design, presentation development, and/or web/social media management. In this role you will help communicate research results to the AI research community and key stakeholders.
Our communications specialists will work on a variety of projects led by different researchers. We want this role to help our hires evaluate their personal fit for this type of work and launch their careers in the field. Communications contributions will be recognized with either co-authorship or acknowledgements as appropriate, enabling our hires to develop a portfolio. We would also be excited for our hires to pursue independent side-projects, such as starting a blog that could become part of the Effective Ideas ecosystem.
In terms of logistics:
Applications: Applications will be evaluated on a rolling basis, so we encourage you to apply soon. We would prefer to hire candidates by the end of August 2022.
Compensation: This is a contractor role compensated at $30-$50/hour (communications specialist) or $50-$80/hour (senior communications specialist), depending on experience.
Duration: This role is temporary but may lead to a long-term role, either with us or with the labs you will be collaborating with. This depends both on your fit for the role and on the demand for communication services from our collaborators.
Hours: 10-40 hours/week, depending on which tasks you can perform and how many individuals we hire.
Location: The position is remote, although we can pay for periodic travel to visit collaborators.
You can read more about the role and apply here (communications specialist) and here (senior communications specialist)!
Cross-posted to the EA Forum