Stuart, what's your view on the problem I described in Is the potential astronomical waste in our universe too small to care about? Translated to this setting, the problem is that if you do a normalisation while you're uncertain about the size of the universe (i.e., the normalising factor is computed under this uncertainty), and then later find out the actual size of the universe (or just get some information that shifts your expectation of its size, or of how many lives or observer-moments it can support), you'll end up putting almost all of your efforts into Total Utilitarianism (if the shift is towards the universe being bigger) or almost none of your efforts into it (if the shift is in the opposite direction).
Hum... It seems that we can stratify here. Let X represent the values of a collection of variables that we are uncertain about, and that we are stratifying on.
When we compute the normalising factor Z for a utility U under two policies π and π′, we normally do it as:

Z=Eπ[U]−Eπ′[U].

And then we replace U with U/Z.
Instead we might normalise the utility separately for each value x of X:

Zx=Eπ[U|X=x]−Eπ′[U|X=x].
The problem is that, since we're dividing by the Zx, the expectation of U/ZX is not the same as the expectation of U/Z.
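A minimal numerical sketch of this mismatch, using made-up conditional expectations (π standing in for the optimal policy, π′ for some default policy):

```python
# Toy sketch of the stratification problem (all numbers are made up).
# X is the quantity we are uncertain about (e.g. the size of the universe),
# with two equally likely values.
p_x = {"small": 0.5, "big": 0.5}
e_opt = {"small": 2.0, "big": 100.0}   # E_pi[U | X=x] for the optimal policy
e_def = {"small": 1.0, "big": 20.0}    # E_pi'[U | X=x] for the default policy

# Unstratified: one normalising factor Z from the unconditional expectations.
Z = sum(p_x[x] * (e_opt[x] - e_def[x]) for x in p_x)
unstratified = sum(p_x[x] * e_opt[x] for x in p_x) / Z

# Stratified: a separate factor Z_x for each value of X.
Z_x = {x: e_opt[x] - e_def[x] for x in p_x}
stratified = sum(p_x[x] * e_opt[x] / Z_x[x] for x in p_x)

# Dividing inside the expectation is not the same as dividing outside it:
# unstratified is about 1.26, while stratified is 1.625.
```

Note how the stratified version damps the "big universe" branch: its large gap Zx=80 shrinks its contribution, which is exactly the "total utilitarianism gets less weight in large universes" effect mentioned below.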
Is there an obvious improvement on this?
Note that here, total utilitarianism gets less weight in large universes, and more in small ones.
I'll think more...
For some time, others and I have been looking at ways of normalising utility functions, so that we can combine candidate utilities U1 and U2 into something we can maximise, without having to worry about how each is scaled (since utility functions are only defined up to positive affine transformations).
I've long liked the mean-max normalisation; in this view, what matters is the difference between a utility's optimal policy and a random policy. So, in a sense, each utility function has an equal shot at moving the outcome away from an expected random policy and towards itself.
The intuition still seems good to me, but the "random policy" is a bit of a problem. First of all, it's not all that well defined: are we talking about a policy that just spits out random outputs, or one that picks randomly among outcomes? Suppose there are three options: option A (if "A" is output), option B' (if "B'" is output), or do nothing (any other output). Should we really say that A happens twice as often as B', since randomly typing out "A" is twice as likely as randomly typing out "B'"?
Relatedly, if we add another option C that is completely equivalent to A for all possible utilities, then this redefines the random policy. There's also a problem with branching: if option A leads to twenty choices later, while B leads to no further choices, are we talking about twenty-one equivalent choices, or twenty equivalent choices and one other that is as likely as all of them put together? And the concept has problems with infinite option sets.
A more fundamental problem is that the random policy includes options that neither U1 nor U2 would ever consider sensible.
Random dictator policy
These problems can be solved by switching to the random dictator policy as the default, rather than a random policy.
Assume we are hesitating between utility functions U1, U2, ... Un, with π∗i the optimal policy for utility Ui. Then the random dictator policy is just πrd, which picks a π∗i at random and then follows that. So

πrd=(1/n)π∗1+(1/n)π∗2+…+(1/n)π∗n.
Normalising to the random dictator policy
This πrd is an excellent candidate for replacing the random policy in the normalisation. It is well defined, it would never choose options that all utilities object to, and it doesn't care about how options are labelled or about how to count them.
Therefore we can present the random dictator normalisation: if you are hesitating between utility functions U1, U2, ... Un, then normalise each one to ˆUi as follows:

ˆUi=Ui/(Eπ∗i[Ui]−Eπrd[Ui]),
where Eπ∗i[Ui] is the expected utility of Ui given optimal policy, and Eπrd[Ui] is its expected utility given the random dictator policy.
Our overall utility to maximise then becomes:

ˆU1+ˆU2+…+ˆUn.
Note that this normalisation has a singularity when Eπ∗i[Ui]=Eπrd[Ui]. But realise what that means: it means that the random dictator policy is optimal for Ui. That means that every single π∗j is optimal for Ui. So, though the explosion in the normalisation means that we must pick an optimal policy for Ui, this set is actually quite large, and we can use the normalisations of the other Uj to pick from among it (so maximising Ui becomes a lexicographic preference for us).
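A minimal sketch of the procedure, assuming a finite option set, uniform credence over the Ui, and deterministic optimal policies (each utility's argmax option); all numbers are illustrative:

```python
import numpy as np

def random_dictator_normalise(utilities):
    """Normalise each utility by the gap between its optimal policy and
    the random dictator policy (a uniform mixture of the optimal policies).

    utilities: (n, m) array -- n utility functions over m options.
    Returns the normalised utilities and the option maximising their sum.
    """
    U = np.asarray(utilities, dtype=float)
    n, m = U.shape
    best = U.argmax(axis=1)            # pi*_i: each utility's optimal option
    # E_{pi_rd}[U_i]: a dictator is drawn uniformly and plays its own optimum
    e_rd = U[:, best].mean(axis=1)
    e_opt = U[np.arange(n), best]      # E_{pi*_i}[U_i]
    gaps = e_opt - e_rd                # normalising factors (assumed nonzero)
    U_hat = U / gaps[:, None]
    return U_hat, int(U_hat.sum(axis=0).argmax())

# Three options; U1 prefers the first, U2 the second, and the third is a
# decent compromise for both.
U = [[3.0, 0.0, 1.0],
     [0.0, 2.0, 1.8]]
U_hat, choice = random_dictator_normalise(U)   # choice is the compromise option
```

Here the compromise option wins even though it is optimal for neither utility: after normalisation, each Ui's gap between its own optimum and the random dictator baseline is exactly 1, so a broadly acceptable option can outscore either optimum.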
Normalising a distribution over utilities
Now suppose that there is a distribution over the utilities: we're not equally sure of each Ui, but instead assign a probability pi to each. Then the random dictator policy is defined quite naturally as:

πrd=p1π∗1+p2π∗2+…+pnπ∗n.
And the normalisation can proceed as before, generating the ˆUi and maximising the normalised sum:

p1ˆU1+p2ˆU2+…+pnˆUn.
Properties
The random dictator normalisation has all the good properties of the mean-max normalisation in this post, namely that the utility is continuous in the data and that it respects indistinguishable choices. It is also invariant under cloning (i.e. adding another option that is completely equivalent to one of the options already there), which the mean-max normalisation is not.
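A small sanity check of the cloning claim, with made-up utilities; the "random policy" baseline here is taken to be the uniform distribution over options:

```python
# Toy check of cloning invariance (illustrative numbers). Two utilities
# over options (A, B); then clone A as a new, fully equivalent option C.
U1, U2 = [3.0, 0.0], [0.0, 2.0]
U1c, U2c = [3.0, 0.0, 3.0], [0.0, 2.0, 0.0]

# Uniform-random baseline for U1: shifts when the option is cloned.
uniform_before = sum(U1) / len(U1)    # 1.5
uniform_after = sum(U1c) / len(U1c)   # 2.0

def rd_baseline(Ui, utilities):
    """E_{pi_rd}[Ui]: each dictator plays its own argmax option."""
    bests = [max(range(len(u)), key=lambda k: u[k]) for u in utilities]
    return sum(Ui[b] for b in bests) / len(bests)

# Random-dictator baseline for U1: unchanged by cloning.
rd_before = rd_baseline(U1, [U1, U2])      # 1.5
rd_after = rd_baseline(U1c, [U1c, U2c])    # 1.5
```

The uniform-random baseline moves from 1.5 to 2.0 when A is duplicated, so the mean-max normalising factor changes; the random dictator baseline only depends on each utility's optimal choice, so duplication leaves it alone.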
But note that, unlike all the normalisations in that post, it is not a case of normalising each Ui without looking at the other Uj, and only then combining them. Each normalisation of Ui takes the other Uj into account, because of the definition of the random dictator policy.
Problems? Double counting, or the rich get richer
Suppose we are hesitating between utilities U1 (with 9/10 probability) and U2 (with 1/10 probability).
Then πrd=(9/10)π∗1+(1/10)π∗2 is the random dictator policy, and is likely to be closer to optimal for U1 than for U2.
Because of this, we expect U1 to get "boosted" more by the normalisation process than U2 does (since the normalising factor is the inverse of the gap in expected utility between πrd and the optimal policy).
But then when we take the weighted sum, this advantage is compounded, because the boosted ˆU1 is weighted 9/10 versus 1/10 for the relatively unboosted ˆU2. It seems that the weight of U1 thus gets double-counted.
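A toy calculation of the double counting, with two maximally opposed utilities and the illustrative credences 9/10 and 1/10:

```python
# Two opposed utilities over options (a, b): U1 prefers a, U2 prefers b.
p = [0.9, 0.1]
U = [[1.0, 0.0],   # U1
     [0.0, 1.0]]   # U2
best = [0, 1]      # pi*_1 plays a, pi*_2 plays b

# E_{pi_rd}[U_i]: dictator j is drawn with probability p_j and plays best[j].
e_rd = [sum(p[j] * U[i][best[j]] for j in range(2)) for i in range(2)]
e_opt = [U[i][best[i]] for i in range(2)]

# The normalising "boost" is the inverse of the optimal-vs-dictator gap.
boost = [1.0 / (e_opt[i] - e_rd[i]) for i in range(2)]

# Effective weight of each utility in the final weighted sum: p_i * boost_i.
weight = [p[i] * boost[i] for i in range(2)]
ratio = weight[0] / weight[1]
```

The effective weight ratio comes out as (9/10 ÷ 1/10)² = 81 rather than 9: the credence enters once through πrd (via the boost) and again in the weighted sum.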
A similar phenomenon happens when we assign equal probability to utilities U1, U2, ... U10, if U1, ... U9 all roughly agree with each other while U10 is completely different: the similarity of the first nine utilities seems to give them a double boost effect.
There are some obvious ways to fix this (maybe use √pi rather than pi), but they all have problems with continuity, either when pi→0, or when Ui→Uj.
I'm not sure how much of a problem this is.