In a previous post, I glibly suggested a utility function $u^\#$ that would allow a $u$-maximising agent to print out the expectation of the utility it was maximising.
But $u^\#$ turns out to be very peculiar indeed.
For this post, define
$$u^\# = -q^2 + 2qu,$$
for some $q$ that is a future output of the AI at time $t$ (assume that $q$ won't be visible or known to anyone except the AI). Assume that $u$ only takes non-negative values.
Peculiar subjectivity
I'd always thought that utility functions were clear on the difference between objectivity and subjectivity - that probabilities of probabilities and the like didn't make sense, and that we couldn't, e.g., ask the AI to maximise $E(u)^2$ (though we could ask it to maximise $E(u^2)$, no problem).
But $u^\#$ blurs this. It seems a perfectly respectable utility function - the fact that one component is user-defined shouldn't change this. What will a $u^\#$-maximiser do?
Well, first of all, at time $t$, it will pick $q = E_t(u)$ and, afterwards, maximise $E(u)$. This will give it utility $f(E_t(u))$.
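To check this: at time $t$, the expected value of $u^\#$ as a function of the output $q$ is
$$E_t(u^\#) = -q^2 + 2q\,E_t(u),$$
which is maximised at $q = E_t(u)$, where it equals $-E_t(u)^2 + 2E_t(u)^2 = E_t(u)^2$.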
Here $f(q) = q^2$, but it turns out that there are versions that work with any differentiable convex function - $q^4$, $1/q$, $-\log(q)$, $\exp(q)$, $\exp(-q)$, ...
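One construction consistent with the $q^2$ case above - a sketch of mine, not necessarily the version intended - is the tangent-line form
$$u^\# = f(q) + f'(q)\,(u - q),$$
which recovers $-q^2 + 2qu$ when $f(q) = q^2$. Since a convex function lies above all of its tangent lines, $f(q) + f'(q)\,(E_t(u) - q) \le f(E_t(u))$ for every $q$, with equality at $q = E_t(u)$; so the optimal report is again $q = E_t(u)$, with value $f(E_t(u))$.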
Thus, maximising $u^\#$ before $t$ involves maximising $E_t(u)^2$. Note that this is distinct from maximising either $E_t(u)$ or $E_t(u^2)$.
Consider the following three options the AI can take:
A) $u = 1$.
B) $u = 0$ with 51% probability, $u = 2$ with 49% probability; the AI will not know which happens before $t$.
C) $u = 0$ with 52% probability, $u = 2$ with 48% probability; the AI will know which happens before $t$.
Then a $u$-maximiser will choose $A > B > C$, while a $u^2$-maximiser will choose $B > C > A$. But a $u^\#$-maximiser will choose $C > A > B$.
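As a sanity check, here is a minimal Python sketch (the encoding of the options is mine) that computes the quantity each maximiser cares about:

```python
# Minimal sketch: compute, for each option, the quantities that a
# u-maximiser, a u^2-maximiser and a u#-maximiser care about.
# Each option is a lottery of (probability, u) pairs; "known" records
# whether the AI learns the outcome before time t.

options = {
    "A": ([(1.00, 1)], False),
    "B": ([(0.51, 0), (0.49, 2)], False),
    "C": ([(0.52, 0), (0.48, 2)], True),
}

def e_u(lottery):
    """Expected u."""
    return sum(p * u for p, u in lottery)

def e_u2(lottery):
    """Expected u^2."""
    return sum(p * u ** 2 for p, u in lottery)

def u_sharp(lottery, known):
    """Value to a u#-maximiser: the expectation of E_t(u)^2.
    If the outcome is known before t, E_t(u) equals the realised u,
    so this is E(u^2); otherwise E_t(u) is just E(u), so it is E(u)^2."""
    return e_u2(lottery) if known else e_u(lottery) ** 2

for name, (lottery, known) in options.items():
    print(f"{name}: E(u)={e_u(lottery):.4g}, "
          f"E(u^2)={e_u2(lottery):.4g}, "
          f"u#-value={u_sharp(lottery, known):.4g}")

# Output:
# A: E(u)=1, E(u^2)=1, u#-value=1
# B: E(u)=0.98, E(u^2)=1.96, u#-value=0.9604
# C: E(u)=0.96, E(u^2)=1.92, u#-value=1.92
```

The three orderings $A > B > C$, $B > C > A$ and $C > A > B$ fall out directly from the three columns.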
Note that since $f(q) = q^2$ is convex, the AI will always benefit from finding out more information about $u$ (and will never suffer from it, in expectation).
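This is Jensen's inequality at work: write $X$ for the AI's refined estimate of $u$ after learning more, so that $E_t(X) = E_t(u)$ by the tower property; then
$$E_t\big(f(X)\big) \ge f\big(E_t(X)\big) = f\big(E_t(u)\big)$$
for any convex $f$, so gathering information never lowers the expected payoff.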
And note that this happens without there being any explicit definition of $E_t(u)$ in the utility function.
Peculiar corrigibility
The AI will shift smoothly from an $f(E_t(u))$-maximiser before time $t$ to a simple $u$-maximiser after time $t$, making this a very peculiar form of corrigibility.
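The after-$t$ half is immediate: once output, $q$ is a fixed number $q_0 = E_t(u) \ge 0$, so
$$u^\# = -q_0^2 + 2q_0\,u$$
is an increasing affine function of $u$ whenever $q_0 > 0$, and maximising $u^\#$ is then exactly maximising $u$ (if $q_0 = 0$, the AI is simply indifferent).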