Effective Altruism (EA) is a movement that tries to invest time and money in the causes that do the most good per unit of effort. The label applies broadly: it names a philosophy, a community, a set of organisations, and a set of behaviours. It is also variously used to mean donating effectively to charities, choosing one's career, doing the most good per dollar, doing good in general, or ensuring that the most good happens. Each of these framings has slightly different implications.
The basic insight behind EA is that you would struggle to donate 100 times more money or time to charity than you currently do, but spending a little time researching whom to donate to can increase your impact by roughly that order of magnitude. The same argument applies to doing good with your career or volunteer hours.
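A minimal sketch of this multiplier arithmetic, assuming purely hypothetical cost-per-life-saved figures (the specific numbers below are illustrative, not sourced):

```python
# Hypothetical cost-per-life-saved figures, for illustration only.
typical_charity_cost = 500_000   # $ per life saved at an average charity (assumed)
effective_charity_cost = 5_000   # $ per life saved at a top charity (assumed)

donation = 3_000  # a fixed yearly donation, in $

lives_typical = donation / typical_charity_cost      # 0.006 lives
lives_effective = donation / effective_charity_cost  # 0.6 lives

# Redirecting the same donation yields a ~100x multiplier: the same effect
# as donating 100x as much money to the average charity.
print(f"Multiplier: {lives_effective / lives_typical:.0f}x")  # Multiplier: 100x
```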
The Effective Altruism movement also has its own forum, the EA Forum, which runs on the same software as LessWrong.
Despite a broad diversity of ideas within the EA community about which areas are most pressing, a handful of criteria are generally agreed to make an area potentially impactful to work on (either directly or through donation). These are:
Scale: how large the problem is.
Tractability: how much progress can be made per unit of effort.
Neglectedness: how few resources are already directed at the problem.
A fourth semi-area is:
To challenge my students to think about the ethics of what we owe to people in need, I ask them to imagine that their route to the university takes them past a shallow pond. One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.
I then ask the students: do you have any obligation to rescue the child? Unanimously, the students say they do. The importance of saving a child so far outweighs the cost of getting one’s clothes muddy and missing a class, that they refuse to consider it any kind of excuse for not saving the child. Does it make a difference, I ask, that there are other people walking past the pond who would equally be able to rescue the child but are not doing so? No, the students reply, the fact that others are not doing what they ought to do is no reason why I should not do what I ought to do.
Once we are all clear about our obligations to rescue the drowning child in front of us, I ask: would it make any difference if the child were far away, in another country perhaps, but similarly in danger of death, and equally within your means to save, at no great cost – and absolutely no danger – to yourself?[1]
It is not clear why, under many moral systems, we should care more about people in our own country than about those elsewhere. Yet people in developing nations can often be helped roughly 100x more cheaply than those in the US.
The question is not, Can they reason?, nor Can they talk? but, Can they suffer?
If states of wellbeing matter, then they matter regardless of a being's ability to express or change its situation. A sleeping person can be tormented by nightmares, but we still consider that suffering meaningful. Likewise, animals are capable of states of pleasure and pain regardless of their ability to tell us about them.
And there are many animals. Moreover, they cannot vote or earn money, so they are unable to change their own situation. This suggests that supporting animal welfare legislation might be a very cheap way to improve wellbeing.
On a deeper level, EAs argue that species membership is not the marker of moral worth: if we had evolved from dolphins rather than apes, would we be less deserving of moral consideration? If this logic holds, it implies significant low-cost opportunities to improve animal welfare.
A large portion of the EA community is longtermist. Longtermism is the idea that if there are many future generations (hundreds, thousands, or more) and their lives are as valuable as ours, then even very small impacts on all of their lives (or moving good changes forward in time and bad ones back) far outweigh impacts on people who are currently alive. Because this idea is less widely accepted than charity aimed at people alive today, longtermist solutions are also generally considered neglected. Longtermist interventions typically focus on S-risks (risks of astronomical suffering) or X-risks (existential risks).
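A toy expected-value calculation shows how this weighting plays out; every number below is hypothetical, chosen only to illustrate the reasoning:

```python
# Hypothetical figures, purely to illustrate the reasoning.
people_per_generation = 10**10  # ~10 billion people alive at once (assumed)
future_generations = 1_000      # "1000s" of generations, per the argument above

future_people = people_per_generation * future_generations  # 10 trillion

# A benefit 100x smaller per person, applied to every future person...
small_benefit_per_future_person = 0.01  # arbitrary wellbeing units (assumed)
# ...versus a full-sized benefit to everyone alive today.
benefit_per_present_person = 1.0

future_total = future_people * small_benefit_per_future_person      # 1e11
present_total = people_per_generation * benefit_per_present_person  # 1e10

# The small-but-broad future impact comes out 10x larger in total.
print(future_total / present_total)  # 10.0
```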
Examples of longtermist interventions include AI safety, pandemic preparedness, and nanotechnology security. Examples of other popular EA interventions include global poverty alleviation, malaria treatment, and vitamin supplementation in sub-Saharan Africa.
If many unrelated factors all point towards the same action, beware that you may be engaging in motivated reasoning[2].
From the criteria above (scale, tractability, neglectedness), we can see that a vast number of charities fail to meet all, or indeed any, of them. A major difficulty for EA is that progress is much easier to track in some areas than in others (compare tracking the cost per life saved of malaria nets with tracking reductions in existential AI risk). What is clear, however, is that the most effective charities (among those that are easy to track) provide far more benefit than the average charity, perhaps as much as 100x (10,000%) as much.
Zvi wrote a set of axioms of EA, along with his disagreements with them, in Criticism of EA Criticism Contest. The list here is very roughly based on his, though with substantial changes.
Lives saved: 90% CI [50,000, 10 million] (estimate: Nathan Young)
The Against Malaria Foundation has distributed more than 70 million bednets to protect people (mostly children) from a debilitating parasite. (Source) [number of lives saved]
GiveDirectly has facilitated more than $100 million in direct cash transfers to families living in extreme poverty, who determine for themselves how best to spend the money. (Source) [number of lives saved]
The Schistosomiasis Control Initiative and Deworm the World Initiative invest in people's health and future well-being by treating preventable diseases that often get little attention. They have given out hundreds of millions of deworming treatments to fight intestinal parasites, which may help people earn higher incomes later in life. (Sources for SCI and DWI)
Chicken-equivalent lives saved per year: 90% CI [10 million, 100 trillion] (estimate: Nathan Young)
The Humane League and Mercy for Animals, alongside many other organizations, have orchestrated corporate campaigns and legal reforms to fight the use of battery cages. Because of this work, more than 100 million hens that would have been caged instead live cage-free. (This includes all cage-free reform work, of which a sizable fraction was funded by EA-aligned donors.)
The Good Food Institute works with scientists, entrepreneurs, and investors to develop and promote meat alternatives that don't require the suffering of farmed animals.
[How much lower or higher is the risk of existential catastrophe as a result?][3]
Organizations like the Future of Humanity Institute and the Centre for the Study of Existential Risk work on research and policy related to some of the biggest threats facing humanity, from pandemics and climate change to nuclear war and superintelligent AI systems.
Some organizations in this space, like the Center for Human-Compatible AI and the Machine Intelligence Research Institute, focus entirely on solving issues posed by advances in artificial intelligence. AI systems of the future could be very powerful and difficult to control, a dangerous combination.
Sherlock Biosciences is developing a diagnostic platform that could reduce threats from viral pandemics. (They are a private company, but much of their capital comes from a grant made by Open Philanthropy, an EA-aligned grantmaker.)
Stefan Schubert's criticisms and responses
Kuhn, Ben (2013) A critique of effective altruism, Ben Kuhn’s Blog, December 2.
McMahan, Jeff (2016) Philosophical critiques of effective altruism, The Philosophers’ Magazine, vol. 73, pp. 92–99.
Nielsen, Michael (2022) Notes on effective altruism, Michael’s Notebook, June 2.
Rowe, Abraham (2022) Critiques of EA that I want to read, Effective Altruism Forum, June 19.
Wiblin, Robert & Keiran Harris (2019) Vitalik Buterin on effective altruism, better ways to fund public goods, the blockchain’s problems so far, and how it could yet change the world, 80,000 Hours, September 3.
Zhang, Linchuan (2021) The motivated reasoning critique of effective altruism, Effective Altruism Forum, September 14.
The winners of the EA Criticism and Red Teaming Contest: https://forum.effectivealtruism.org/posts/YgbpxJmEdFhFGpqci/winners-of-the-ea-criticism-and-red-teaming-contest
Peter Singer, "The Drowning Child and the Expanding Circle", New Internationalist (1997): https://newint.org/features/1997/04/05/peter-singer-drowning-child-new-internationalist
"Beware surprising and suspicious convergence", Effective Altruism Forum: https://forum.effectivealtruism.org/posts/omoZDu8ScNbot6kXS/beware-surprising-and-suspicious-convergence
Because existential risk is so important compared to anything else, there is some chance that EA has made it slightly worse, and is therefore a net-negative enterprise overall.
https://twitter.com/KerryLVaughan/status/1545063368695898112?s=20&t=xgaSuh22V6y44Wkcebo22Q
https://twitter.com/xriskology/status/1579832304503259136?s=20&t=e8IFDZuxC5gLO2vdCldwyg