I've thought of a framework that puts most of the methods of intertheoretic utility normalisation and bargaining on the same footing. See this first post for a reminder of the different types of utility function normalisation.
Most of the normalisation techniques can be conceived of as a game with two outcomes, where each player can pay a certain amount of their utility to flip from one outcome to the other. Then we can use the maximal amount of utility they are willing to pay as the common measuring stick for normalisation.
Consider for example the min-max normalisation: this rescales the utility function so that the agent's expected utility is 0 if they make the worst possible decisions, and 1 if they make the best possible ones.
So, if your utility function is u, the question is: how much utility would you be willing to pay to prevent your nemesis (a −u maximiser) from controlling the decision process, and let you take it over instead? Dividing u by that amount[1] will give you the min-max normalisation (up to the addition of a constant).
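As a quick sketch (my own illustration, for a finite set of outcome utilities), that price is the gap between the best and worst achievable values, and dividing by it yields the min-max normalisation:

```python
def minmax_normalise(utilities):
    """Rescale a list of outcome utilities so the worst maps to 0 and the best to 1."""
    worst, best = min(utilities), max(utilities)
    # The price: what you'd pay to replace your nemesis (-u maximiser)
    # with yourself as the decision-maker.
    price = best - worst
    return [(u - worst) / price for u in utilities]

print(minmax_normalise([3.0, 7.0, 5.0]))  # [0.0, 1.0, 0.5]
```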
Now consider the mean-max normalisation. For this, the game is as follows: how much would you be willing to pay to prevent a policy from choosing randomly amongst the outcomes ("mean"), and let you take over the decision process instead?
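A minimal sketch of that price, assuming "mean" means a uniformly random choice over a finite outcome set (the function name is mine):

```python
def meanmax_price(utilities):
    """What you'd pay to replace a uniformly random policy with optimal control."""
    mean = sum(utilities) / len(utilities)
    return max(utilities) - mean

print(meanmax_price([0.0, 0.0, 6.0]))  # 4.0
```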
Conversely, the min-mean normalisation asks how much you would be willing to pay to prevent your nemesis from controlling the decision process, and shift to a random process instead.
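Under the same uniform-randomness assumption, this price is the mirror image of the previous one:

```python
def minmean_price(utilities):
    """What you'd pay to replace a -u maximiser with a uniformly random policy."""
    mean = sum(utilities) / len(utilities)
    return mean - min(utilities)

print(minmean_price([0.0, 0.0, 6.0]))  # 2.0
```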
The mean difference method is a bit different: here, two outcomes are chosen at random, and you are asked how much you are willing to pay to shift from the worse outcome to the better one. The expectation of that amount is used for normalisation.
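Assuming the two outcomes are drawn independently and uniformly (one reading of "at random"), the expected payment is just the mean absolute difference over all pairs:

```python
def mean_difference(utilities):
    """Expected price to move from the worse to the better of two
    independently, uniformly drawn outcomes."""
    n = len(utilities)
    return sum(abs(a - b) for a in utilities for b in utilities) / (n * n)

print(mean_difference([0.0, 1.0]))  # 0.5
```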
The mutual worth bargaining solution has a similar interpretation: how much would you be willing to pay to move from the default option to one where you controlled all decisions?
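A sketch of that price, under the assumption that the default is itself one of the outcomes (the function name and signature are my own):

```python
def mutual_worth_price(utilities, default):
    """What you'd pay to move from the default outcome (given by index)
    to full control of the decision process."""
    return max(utilities) - utilities[default]

print(mutual_worth_price([1.0, 4.0, 2.0], default=0))  # 3.0
```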
A few normalisations don't seem to fit into this framework, most especially those that depend on the square of the utility, such as variance normalisation or the Nash bargaining solution. The Kalai–Smorodinsky bargaining solution uses a similar normalisation to the mutual worth bargaining solution, but chooses the outcome differently: if the default point is at the origin, it will pick the point (x,x) with largest x.
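For a finite feasible set with the default at the origin, a rough sketch of the Kalai–Smorodinsky choice (my own approximation: normalise each player's utility by their best achievable value, then take the feasible point whose smaller normalised coordinate is largest; on a convex frontier this recovers the (x,x) point):

```python
def kalai_smorodinsky(points):
    """Approximate KS solution for a finite set of joint outcomes (u1, u2),
    all dominating the default at the origin."""
    m1 = max(p[0] for p in points)  # player 1's ideal
    m2 = max(p[1] for p in points)  # player 2's ideal
    scaled = [(x / m1, y / m2) for x, y in points]
    # Pick the point closest to the diagonal with the largest common value.
    return max(scaled, key=min)

print(kalai_smorodinsky([(4, 0), (3, 6), (0, 8)]))  # (0.75, 0.75)
```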
This, of course, would incentivise you to lie - but that problem is unavoidable in bargaining anyway. ↩︎