A previous post introduced the theory of intertheoretic utility comparison. This post will give examples of how to do that comparison, by normalising individual utility functions.

The methods

All methods presented here obey the axioms of Relevant data, Continuity, Individual normalisation, and Symmetry. Later, we'll see which ones follow Utility reflection, Cloning indifference, Weak irrelevance, and Strong irrelevance.

Max, min, mean

Let $S$ be the finite set of deterministic strategies available to the agent. The maximum of a utility function $u$ is $\max_{s \in S} u(s)$, while the minimum is $\min_{s \in S} u(s)$. The mean of $u$ is $\mu(u) = \frac{1}{|S|} \sum_{s \in S} u(s)$.

  • The max-min normalisation of $u$ is the $u' = a u + b$, with $a > 0$, such that the maximum of $u'$ is $1$ and the minimum is $0$.

  • The max-mean normalisation of $u$ is the $u' = a u + b$, with $a > 0$, such that the maximum of $u'$ is $1$ and the mean is $0$.

The max-mean normalisation has an interesting feature: it is precisely the amount of utility that an agent completely ignorant of its own utility would pay to discover that utility (as otherwise the agent would have to employ a random 'mean' strategy).
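To spell this out: an agent that knows nothing of its utility beyond the normalisation picks a strategy uniformly at random, for an expected utility of $\mu(u') = 0$; an agent that learns $u'$ plays the best strategy, for $\max_{s \in S} u'(s) = 1$. The value of learning the utility is therefore $1 - 0 = 1$, exactly one unit of the normalised scale.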

For completeness, there is also a third combination (all three normalisations are sketched in code below):

  • The mean-min normalisation of $u$ is the $u' = a u + b$, with $a > 0$, such that the mean of $u'$ is $1$ and the minimum is $0$.
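As a concrete illustration, here is a minimal Python sketch of all three normalisations, assuming the agent's deterministic strategies give a finite list of expected utilities; the helper `affine_normalise` and the toy numbers are mine, not from the post. The solver relies on the fact that max, min, and mean all commute with positive affine maps.

```python
from statistics import mean

def affine_normalise(utilities, fix_one, fix_zero):
    """Solve u' = a*u + b (a > 0) so that fix_one(u') = 1 and fix_zero(u') = 0.

    Works because max, min and mean all satisfy f(a*u + b) = a*f(u) + b."""
    hi = fix_one(utilities)
    lo = fix_zero(utilities)
    a = 1.0 / (hi - lo)   # positive scale; assumes hi > lo
    b = -a * lo           # shift so that fix_zero lands on 0
    return [a * u + b for u in utilities]

# Toy expected utilities, one per deterministic strategy in S:
u = [3.0, -1.0, 0.0, 2.0]

print(affine_normalise(u, max, min))    # max-min:  max is 1, min is 0
print(affine_normalise(u, max, mean))   # max-mean: max is 1, mean is 0
print(affine_normalise(u, mean, min))   # mean-min: mean is 1, min is 0
```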

Controlling the spread

The last two methods find ways of controlling the spread of possible utilities. For any utility $u$, define the mean difference: $\mathrm{MD}(u) = \frac{1}{|S|^2} \sum_{s, s' \in S} |u(s) - u(s')|$. And define the variance: $\mathrm{Var}(u) = \frac{1}{|S|} \sum_{s \in S} (u(s) - \mu(u))^2$, where $\mu(u)$ is the mean defined previously.

These lead naturally to:

  • The mean difference normalisation of $u$ is a $u' = a u + b$, with $a > 0$, such that $u'$ has a mean difference of $1$.

  • The variance normalisation of $u$ is a $u' = a u + b$, with $a > 0$, such that $u'$ has a variance of $1$.
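In the same hedged spirit (toy numbers and function names mine): since the mean difference and the variance are translation-invariant, these two methods pin down only the scale $a$ of $u' = a u + b$, leaving $b$ free; the sketch below sets $b = 0$.

```python
from itertools import product
from statistics import mean

def mean_difference(utilities):
    """MD(u): average of |u(s) - u(s')| over all ordered pairs of strategies."""
    n = len(utilities)
    return sum(abs(x - y) for x, y in product(utilities, repeat=2)) / n**2

def variance(utilities):
    """Var(u): average squared deviation from the mean."""
    m = mean(utilities)
    return sum((x - m) ** 2 for x in utilities) / len(utilities)

def mean_difference_normalise(utilities):
    a = 1.0 / mean_difference(utilities)     # MD(a*u) = a * MD(u)
    return [a * u for u in utilities]        # translation b left at 0

def variance_normalise(utilities):
    a = 1.0 / variance(utilities) ** 0.5     # Var(a*u) = a^2 * Var(u)
    return [a * u for u in utilities]        # translation b left at 0

u = [3.0, -1.0, 0.0, 2.0]
print(mean_difference(mean_difference_normalise(u)))   # 1.0
print(variance(variance_normalise(u)))                 # 1.0
```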

Properties

The different normalisation methods obey the following axioms:

Property               Max-min   Max-mean   Mean-min   Mean difference   Variance
Utility reflection     YES       NO         NO         YES               YES
Cloning indifference   YES       NO         NO         NO                NO
Weak irrelevance       YES       YES        YES        NO                YES
Strong irrelevance     YES       YES        YES        NO                NO

As can be seen, max-min normalisation, despite its crudeness, is the only one that obeys all the properties. If we have a measure on the set of strategies $S$, then ignoring the cloning axiom becomes more reasonable. Strong irrelevance can in fact be seen as an anti-variance axiom: it is because of the variance's second-order aspect that variance normalisation fails it.
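To make the first row of the table concrete: utility reflection asks that normalising $-u$ give, up to translation, the reflection of normalising $u$. A quick numeric check of the scale factor each method applies (the helper `scale` and the numbers are mine) shows max-min passing and max-mean failing:

```python
from statistics import mean

def scale(utilities, fix_one, fix_zero):
    """The positive factor a that the normalisation multiplies u by."""
    return 1.0 / (fix_one(utilities) - fix_zero(utilities))

u = [4.0, 0.0, 0.0, 0.0]
neg_u = [-x for x in u]

# Max-min: u and -u get the same scale, so normalising -u yields a translated
# reflection of normalising u; utility reflection holds.
print(scale(u, max, min), scale(neg_u, max, min))      # 0.25 0.25

# Max-mean: the scales differ (1/(max - mean) vs 1/(mean - min)), so
# reflecting u changes the normalised spread; utility reflection fails.
print(scale(u, max, mean), scale(neg_u, max, mean))    # 0.333... 1.0
```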

Comments

This is very interesting - I hadn't thought about utility aggregation for a single agent before, but it seems clearly important now that it has been pointed out.

I'm thinking about this in the context of both the human brain as an amalgamation of sub-agents, and organizations as an amalgamation of individuals. Note that we can treat organizations as rationally maximizing some utility function in the same way we can treat individuals as doing so - but I think that for many or most voting or decision structures, we should be able to rule out the claim that they are following any weighted combination of normalized utilities of the agents involved in the system using any intertheoretic comparison. This seems like a useful result if we can prove it. (Alternatively, it may be that certain decision rules map to specific intertheoretic comparison rules, which would be even more interesting.)