Anthropic Decision Theory (ADT) replaces anthropic probabilities (SIA and SSA) with a decision theory that doesn't need anthropic probabilities to function. And, roughly speaking, ADT shows that total utilitarians will have a behaviour that looks as if it was using SIA, while average utilitarians look like they are using SSA.
That means that the various paradoxes of SIA and SSA can be translated into ADT format. This post will do that, and show how the paradoxes feel a lot less counter-intuitive under ADT. Some of these have been presented before, but I wanted to gather them in one location. The paradoxes examined are:
The first three are paradoxes of SSA (which increases the probability of "small" universes with few observers), while the last three are paradoxes of SIA (which increases the probability of "large" universes with many observers).
No Doomsday, just a different weighting of rewards
The famous Doomsday Argument claims that, because of SSA's preferences for small numbers of observers, the end of the human species is closer than we might otherwise think.
How can we translate that into ADT? I've found it's generally harder to translate SSA paradoxes into ADT than SIA ones, because average utilitarianism is a bit more finicky to work with.
But here is a possible formulation: a disaster may happen 10 years from now, with 50% probability, and will end humanity with a total of ω humans. If humans survive the disaster, there will be Ω humans total.
The agent has the option of consuming X resources now, or consuming Y resources in 20 years' time. If this were a narrow-minded selfish agent, it would consume early if X>Y/2, and late if X<Y/2 (since the late consumption only happens if humanity survives, with probability 1/2).
However, if the agent is an average utilitarian, the expected utility they derive from consuming early is (X/ω+X/Ω)/2 (the average utility of X, averaged over doom and survival), while the expected utility for consuming late is (Y/Ω)/2 (since consuming late means survival).
This means that the breakeven point for the ADT average utilitarian is when:
Y=X(1+Ω/ω).
If Ω is much larger than ω, then the ADT agent will only delay consumption if Y is similarly larger than X.
This looks like a narrow-minded selfish agent that is convinced that doom is almost certain. But it's only because of the weird features of average utilitarianism.
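As a sanity check on that breakeven point, here is a minimal sketch in Python; the values of ω, Ω and X are invented for illustration, and the functions simply encode the two expected average utilities above:

```python
# Minimal sketch of the breakeven point Y = X(1 + Omega/omega).
# omega = total humans if doom, Omega = total humans if survival (illustrative values).
omega, Omega = 10**10, 10**12

def eu_early(X):
    # Average utility of consuming X now, averaged over doom and survival (50/50)
    return (X / omega + X / Omega) / 2

def eu_late(Y):
    # Average utility of consuming Y in 20 years; only the survival branch counts
    return (Y / Omega) / 2

X = 1.0
Y_breakeven = X * (1 + Omega / omega)
print(eu_early(X), eu_late(Y_breakeven))  # the two expected utilities coincide
```

With Ω/ω = 100, the agent demands Y more than 101 times larger than X before delaying, exactly the doom-convinced-looking behaviour described above.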
Adam and Eve and differentially pleasurable sex and pregnancy
In the Adam and Eve thought experiment, the pair of humans want to sleep together, but don't want to get pregnant. The snake reassures them that because a pregnancy would lead to billions of descendants, SSA's preferences for small universes means that this is almost impossibly unlikely, so, time to get frisky.
There are two utilities to compare here: the positive utility of sex (+S), and the negative utility of pregnancy (−P). Assume a 50% chance of pregnancy from having sex, and a subsequent Ω descendants.
Given an average utilitarian ADT couple, the expected utility derived from sex is (S/2+S/(2+Ω))/2, while the expected disutility from pregnancy is −P/(2(2+Ω)). For large enough Ω, those terms will be approximately S/4 and 0.
So the disutility of pregnancy is buried in the much larger population.
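A minimal sketch of that comparison, with made-up values for S, P and Ω (nothing below comes from the thought experiment beyond the formulas above):

```python
# Illustrative values only: S = pleasure of sex, P = pain of pregnancy, Omega = descendants
S, P, Omega = 1.0, 100.0, 10**9

eu_sex = (S / 2 + S / (2 + Omega)) / 2        # averaged over not-pregnant / pregnant branches
eu_pregnancy = -0.5 * P / (2 + Omega)         # expected average disutility of pregnancy

print(eu_sex, eu_pregnancy)                   # approximately S/4 and approximately 0
print(eu_sex + eu_pregnancy > 0)              # True: the couple goes ahead
```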
There are more extreme versions of the Adam and Eve problem, but they are closely related to the next paradox.
UN++: more people to dilute the sorrow
In the UN++ thought experiment, a future world government seeks to prevent damaging but non-fatal gamma ray bursts by committing to creating many, many more humans if the bursts happen. The paradox is that SSA implies that this should lower the probability of the bursts.
In ADT, this behaviour is perfectly rational: if we assume that the gamma ray bursts will cause pain to the current population, then creating a lot of new humans (of the same baseline happiness) will dilute this pain, by averaging it out over a larger population.
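A toy illustration of the dilution effect; the current population, the pain of a burst, and the number of extra humans below are all invented for this sketch:

```python
# Illustrative values only
N = 10**10       # current population, each at baseline happiness h
h = 1.0
pain = 0.5       # per-person drop in happiness if a burst hits
extra = 10**12   # new humans at baseline happiness, created only if a burst hits

avg_without_policy = (N * (h - pain)) / N                     # average utility after a burst
avg_with_policy = (N * (h - pain) + extra * h) / (N + extra)  # the pain is averaged away

print(avg_without_policy, avg_with_policy)  # 0.5 vs roughly 0.995: the average climbs back towards h
```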
So in ADT, the SSA paradoxes just seem to be artefacts of the weirdness of average utilitarianism.
Philosopher: not presumptuous, but gambling for high rewards
We turn now to SIA, replacing our average utilitarian ADT agent with a total utilitarian one.
In the Presumptuous Philosopher thought experiment, there are only two possible theories about the universe: T1 and T2. Both posit large universes, but T2 posits a much larger universe than T1, with trillions of times more observers.
Physicists are about to do an experiment to see which theory is true, but the SIA-using Presumptuous Philosopher (PP) interrupts them, saying that T2 is almost certain because of SIA. Indeed, they are willing to bet on T2 at odds of up to a trillion-to-one.
With that betting idea, the problem is quite easy to formulate in ADT. Assume that all PPs are total utilitarians towards each other, and that they will all reach the same decision. Then there are a trillion times more PPs in T2 than in T1, which means that winning a bet in T2 is a trillion times more valuable than winning it in T1.
Thus, under ADT, the Presumptuous Philosopher will indeed bet on T2 at odds of up to a trillion to one, but the behaviour is simple to explain: they are simply going for a low-probability, high-utility bet with higher expected utility than the opposite. There does not seem to be any paradox remaining.
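To make that expected-utility comparison concrete, here is a minimal sketch with invented stakes and an invented number of philosophers; only the trillion-to-one ratio comes from the thought experiment:

```python
# Illustrative values; only the trillion-to-one ratio comes from the thought experiment
ratio = 10**12           # T2 has a trillion times more PPs than T1
n_T1 = 100               # number of PPs in T1 (invented)
n_T2 = n_T1 * ratio
win = 1.0                # what each PP wins if T2 turns out to be true
loss = 0.9 * ratio       # what each PP loses if T1 turns out to be true (just under trillion-to-one)

# Total utility of all PPs taking the bet, at the physicists' 50/50 odds
eu_bet = 0.5 * n_T2 * win - 0.5 * n_T1 * loss
print(eu_bet > 0)        # True: positive expected total utility despite the terrible odds
```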
SIA Doomsday: care more about mosquito nets in large universes
Still on SIA: the SIA Doomsday Argument, somewhat simplified, says that since SIA means we should expect there to be a lot of observers like ourselves, it is more likely that the Fermi paradox is explained by a late Great Filter (which kills civilizations that are more advanced than us) than by an early Great Filter (which kills life at an earlier stage, or stops it from evolving in the first place). The reason is that, obviously, there are more observers like us under a late Great Filter than under an early one.
To analyse this in decision theory, use the same setup as for the standard Doomsday Argument: choosing between consuming X now (or donating X to AMF, or similar), or Y in twenty years, with a risk of human extinction in ten years.
To complete the model, assume that if the Great Filter is early, there will be no human extinction, while if it is late, there is a 99% chance of extinction. If the Great Filter is late, there are Ω advanced civilizations across the universe, while if it is early, there are only ω<Ω. Assume that the agent currently estimates late-vs-early Great Filters as 50-50.
With the usual ADT agent assuming that all their almost-copies reach the same decision in every civilization, the utility from early consumption is ωX/2+ΩX/2 (total utility averaged over late vs early Great Filters), while the utility from late consumption is ωY/2+ΩY/(2×100).
For large Ω, these approximate to ΩX/2 and ΩY/(2×100).
So a total utilitarian ADT agent will be more likely to go for early consumption than the objective odds would imply. And the more devastating the late Great Filter, the stronger this effect.
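To see the effect numerically, here is a minimal sketch with invented values for ω, Ω, X and Y, encoding the two expected total utilities above:

```python
# Illustrative values only
omega, Omega = 10**3, 10**6   # civilizations under an early vs a late Great Filter
p_ext_late = 0.99             # extinction chance under a late filter

def eu_early_consumption(X):
    # Every almost-copy consumes X now; 50/50 over early vs late filter
    return 0.5 * omega * X + 0.5 * Omega * X

def eu_late_consumption(Y):
    # Consumption in 20 years only happens if the civilization survives
    return 0.5 * omega * Y + 0.5 * Omega * Y * (1 - p_ext_late)

X, Y = 1.0, 50.0
print(eu_early_consumption(X), eu_late_consumption(Y))
# A selfish agent facing the objective ~50.5% survival chance would wait for Y = 50,
# but once Omega dominates, the total utilitarian ADT agent consumes X = 1 early.
```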