The Center on Long-Term Risk, formerly the Foundational Research Institute, is an effective altruist research group affiliated with the Swiss/German Effective Altruism Foundation. It investigates cooperative strategies to reduce risks of astronomical suffering (s-risks) in humanity's future. This includes not only (post-)human suffering, but also the suffering of non-human animals and potential digital sentience. Its research is interdisciplinary, drawing on insights from artificial intelligence, anthropic reasoning, international relations, sociology, philosophy, and other fields. Its research agenda focuses on encouraging cooperative behavior in, and avoiding conflict between, transformative AI systems.

See also: Abolitionism, Mindcrime

External links: Effective Altruism Wiki article on FRI