I'll build on my model from the previous post, and consider close and distant situations.
Recall that each world in W is defined by 0≤n≤100, which is the number of people smiling in it. I'll extend the model by adding another Boolean variable b, which determines, say, whether the world contains chocolate (b=0) or art (b=1). So worlds can be described by the pair (n,b).
The default situation - the one if the AI does nothing - is (45,1), say. So 45 smiling people in a world of art.
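To make the setup concrete, here is a minimal Python sketch of the world space and the default world (the encoding as tuples is just an illustrative choice of mine):

```python
from itertools import product

# A world is a pair (n, b): n = number of people smiling (0..100),
# b = 0 for a chocolate world, b = 1 for an art world.
W = list(product(range(101), (0, 1)))

# The default world, if the AI does nothing: 45 smiling people, art.
DEFAULT = (45, 1)
assert DEFAULT in W
```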
Then let's introduce two new partial preferences/pre-orders:
P3 = {(n,1) ≤ (m,1) ∣ 40 ≤ n ≤ m ≤ 50}.
P4 = {(n,b) ≤ (m,1) ∣ 40 ≤ m ≤ 50}.
So P3 says that, within a range around the default world (art, with the number of smiling people within 5 of 45), the more smiling people, the better. P4, meanwhile, says that worlds in this range are better than worlds outside it.
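As a sketch, the two pre-orders can be written out explicitly as lists of (worse, better) pairs; the encoding below is illustrative rather than anything canonical:

```python
# P3: among art worlds with 40 <= n <= 50, more smiles is better.
P3 = [((n, 1), (m, 1)) for n in range(40, 51) for m in range(n, 51)]

# P4: any world at all is at most as good as an art world in that range.
P4 = [((n, b), (m, 1))
      for n in range(101) for b in (0, 1)
      for m in range(40, 51)]
```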
These result in utility functions U3 and U4; after normalisation, these become Û3 and Û4. Again, I've felt free to translate Û4 to improve the clarity of the normalised version.
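To make this concrete, here is one simple way of turning a pre-order into a utility function: score each world by the number of worlds it beats minus the number that beat it, then rescale. This is an illustrative stand-in, continuing the sketch above; the previous post's actual construction and normalisation may differ in detail:

```python
def utility_from_preorder(pairs, worlds):
    """Score each world by (# of worlds it beats) - (# that beat it).
    Worlds the pre-order is silent about stay at 0."""
    u = {w: 0.0 for w in worlds}
    for worse, better in pairs:
        if worse != better:
            u[better] += 1.0
            u[worse] -= 1.0
    return u

def normalise(u):
    # Rescale to maximum absolute value 1; the scores above already
    # have mean 0, since each pair contributes a +1 and a -1.
    scale = max(abs(v) for v in u.values()) or 1.0
    return {w: v / scale for w, v in u.items()}

uhat3 = normalise(utility_from_preorder(P3, W))
uhat4 = normalise(utility_from_preorder(P4, W))
# Translating uhat4 by a constant (for display clarity) shifts every
# world's total equally, so it leaves all rankings unchanged.
```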
If we plot Û3, we get the following:
Here I've slightly offset the b=1 (purple) from the b=0 (blue) worlds, for clarity of exposition, though they would of course be on top of each other.
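The plot can be roughly reproduced with matplotlib, reusing uhat3 from the sketch above (the original figure may differ in its details):

```python
import matplotlib.pyplot as plt

ns = list(range(101))
plt.scatter(ns, [uhat3[(n, 0)] for n in ns], s=8,
            color="blue", label="b=0 (chocolate)")
# Slightly offset the art worlds so the two series don't overlap.
plt.scatter([n + 0.3 for n in ns], [uhat3[(n, 1)] for n in ns], s=8,
            color="purple", label="b=1 (art)")
plt.xlabel("n (number of people smiling)")
plt.ylabel("normalised utility")
plt.legend()
plt.show()
```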
Note that Û3 does not in itself avoid distant situations, as this post recommends doing. Five close worlds are ranked above the distant worlds; but, conversely, five close worlds are ranked below them.
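Continuing the stand-in sketch, this can be checked directly:

```python
# Close worlds: art worlds within 5 smiles of the default (45, 1).
close = [(n, 1) for n in range(40, 51)]
distant = [w for w in W if w not in close]

d = uhat3[distant[0]]  # every distant world gets the same value under Û3
print(sum(uhat3[w] > d for w in close))  # -> 5 close worlds ranked above
print(sum(uhat3[w] < d for w in close))  # -> 5 close worlds ranked below
```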
To avoid distant situations, we need to add in Û4, which explicitly punishes distant worlds, and hence plot Û3+Û4:
This is much more like it! All the close worlds are now ranked above the more distant ones.
But this is a close-run thing: the difference between the worst close world and the distant worlds is small. So, in general, when penalising distant worlds, we have to do so with some care, and maybe use a different standard of normalisation (since the downgrading of distant worlds is not really a preference of ours, but rather a meta-tool of the synthesis method).
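Both points can be checked in the stand-in sketch: every close world outranks every distant one under Û3+Û4, but only by a thin margin:

```python
total = {w: uhat3[w] + uhat4[w] for w in W}

worst_close = min(total[w] for w in close)
best_distant = max(total[w] for w in distant)

assert worst_close > best_distant  # all close worlds beat all distant ones
print(worst_close - best_distant)  # ...but the margin is small
```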