Suppose there are two utility functions, Ua and Ub. There is some button; let B be the event that a button is pressed. The button can only be pressed at some particular time t (we can adapt this example to cases where the button can get pressed at multiple times). The AI should optimize Ub if the button is pressed, and Ua if the button is not pressed. For example, Ua could be a paperclip-maximization objective, and Ub could be a shutdown utility function.
Introduce a random variable X, the result of rolling a weighted 3-sided die. X has three possible values: a, b, and ∅. It equals a with some very small probability ϵ, b with probability ϵ, and ∅ with probability 1−2ϵ.
If X=a, then an invisible mechanism will prevent the button from depressing. If X=b, then an invisible mechanism will automatically depress the button. If X=∅, then neither mechanism activates, so the button is pushed or not based on physical force applied. The AI may observe the button push but not the event X.
Now consider a utility function:
U = q [X=a] Ua + (1−q) [X=b] Ub
for some constant q∈[0,1]. Note that we recover a variant of utility indifference by setting q very close to 1, and we recover factoring out variables by setting q to be the actual probability of the button push B (in a fixed-point manner). For now let's say q is set to 1/2.
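Here is a minimal sketch of this objective in code (the sampling helper, the numeric stand-ins for Ua and Ub, and the names below are illustrative assumptions, not part of the original setup):

```python
import random

EPSILON = 1e-6  # probability of each rare outcome, a and b
Q = 0.5         # the weighting constant q, set to 1/2 as in the text

def sample_x(epsilon=EPSILON):
    """Sample the weighted 3-sided die X: 'a' and 'b' each with probability
    epsilon, and None (standing in for the null outcome) otherwise."""
    r = random.random()
    if r < epsilon:
        return "a"
    if r < 2 * epsilon:
        return "b"
    return None

def utility(x, u_a, u_b, q=Q):
    """Realized utility U = q*[X=a]*Ua + (1-q)*[X=b]*Ub for a single world,
    where u_a and u_b are the achieved values of Ua and Ub in that world."""
    return q * (x == "a") * u_a + (1 - q) * (x == "b") * u_b
```

In the overwhelmingly likely case X=∅, U is identically zero no matter what the AI does, so all of the optimization pressure falls on the two rare branches.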
Note that X simulates a causal counterfactual on B. Roughly, we could have instead written the objective as
U = ϵ (q E[Ua | do(¬B)] + (1−q) E[Ub | do(B)])
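To spell out the correspondence, expanding the expectation of U from its definition (using nothing beyond the setup above):

```latex
\begin{align*}
\mathbb{E}[U] &= q \, P(X=a) \, \mathbb{E}[U_a \mid X=a] + (1-q) \, P(X=b) \, \mathbb{E}[U_b \mid X=b] \\
              &= \epsilon \left( q \, \mathbb{E}[U_a \mid X=a] + (1-q) \, \mathbb{E}[U_b \mid X=b] \right)
\end{align*}
```

Since X=a (respectively X=b) sets the button's state through an invisible mechanism, independently of everything else in the world, conditioning on it acts like the intervention do(¬B) (respectively do(B)), giving the expression above.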
What will an AI optimizing this utility function do? It will optimize the average of Ua and Ub until time t, at which point it will observe whether the button has been pressed, and then optimize either Ua or Ub depending on the button's state. Note that it will attempt to acquire resources before time t and to delay important decisions until after time t, due to the value of information. It has no reason to push the button or to prevent the button from being pushed; this is because it only cares about worlds where X=a or X=b, and in those worlds, physical force applied to the button has no effect.
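As a rough sketch of the resulting policy with q=1/2 (the helper below and its arguments are illustrative, not anything defined in the text):

```python
def live_objective(now, t, button_pressed, q=0.5):
    """Return the objective (as a function of the achieved values u_a, u_b)
    that effectively matters to the agent at a given time, per the analysis above."""
    if now < t:
        # Before t the agent cannot tell the X=a worlds from the X=b worlds,
        # so it optimizes the q-weighted mixture of Ua and Ub.
        return lambda u_a, u_b: q * u_a + (1 - q) * u_b
    # After t, in the worlds the agent cares about, the button's state reveals X,
    # so only one term of U is still live.
    if button_pressed:
        return lambda u_a, u_b: u_b
    return lambda u_a, u_b: u_a
```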
This has some desirable properties, but ultimately fails. Consider the following two problems:
Problem 1: Bad bets
The AI only cares about worlds where X≠∅. Therefore, it will "believe" that B is uncorrelated with the humans' intention to press the button, or with the force they apply to it. If a nearby agent with more "conventional" beliefs offers it bets that only make sense given this belief, it will take them.
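To make this concrete, consider a toy bet with made-up stakes (the 3:1 payoffs and the value of ϵ are purely illustrative): conditional on the humans pressing hard on the button at time t, the AI wins 3 units if the button nonetheless fails to depress, and pays 1 unit if it depresses.

```python
EPSILON = 1e-6

def bet_value(p_button_fails, win=3.0, lose=1.0):
    """Expected payoff of the toy bet, given a probability that the button
    fails to depress despite the humans applying force to it."""
    return p_button_fails * win - (1 - p_button_fails) * lose

# Conventional beliefs: given applied force, only the rare X=a mechanism can
# stop the button, so it fails with probability about epsilon.
print(bet_value(p_button_fails=EPSILON))  # about -1.0: clearly a losing bet

# The AI's effective beliefs: it only weighs worlds where X is a or b, which are
# equally likely and make the applied force irrelevant, so the button fails
# with probability 1/2.
print(bet_value(p_button_fails=0.5))      # +1.0: looks like a winning bet
```

So the AI accepts a bet that, from the conventional perspective, loses it resources in almost every world.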
Problem 2: Preventing resistance
Suppose that humans have some chance of trying to press the button. If the button fails to depress, then the humans will attempt to shut the AI down by other means (say, with bombs). This will somewhat reduce the AI's ability to achieve its objective Ua.
The first half of the AI's utility function, [X=a]Ua, specifies that the AI cares about optimizing Ua in worlds where the button fails to depress (even if someone applies force to it). In some of these worlds, the humans tried (but failed) to press the button, and then afterwards tried to shut the AI down using bombs (reducing Ua). So the AI will prefer a policy that prevents the humans from interfering using bombs after the button fails to depress (say, by psychologically manipulating them).
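Here is a toy expected-utility comparison within the [X=a]Ua term (all numbers are made up for illustration; the argument only needs the qualitative comparison):

```python
P_HUMANS_TRY = 0.3       # chance the humans try to press the button
U_A_BASE = 100.0         # Ua if nothing interferes with the AI
BOMB_DAMAGE = 40.0       # loss to Ua if the humans follow up with bombs
MANIPULATION_COST = 1.0  # small Ua cost of preemptively manipulating the humans

def expected_ua_given_x_a(manipulate):
    """E[Ua | X=a] under each policy. Given X=a the button fails to depress
    even if pushed, so with probability P_HUMANS_TRY the humans then resort
    to bombs, unless the AI has manipulated them out of doing so."""
    if manipulate:
        return U_A_BASE - MANIPULATION_COST
    return U_A_BASE - P_HUMANS_TRY * BOMB_DAMAGE

print(expected_ua_given_x_a(False))  # 88.0
print(expected_ua_given_x_a(True))   # 99.0: the manipulative policy scores higher
```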
This seems quite bad. Hypothetically, maybe you can tell the humans not to use bombs if the button fails to depress. But this solution seems both brittle and unrealistic.
Discussion
I think both of these problems can be traced to the fact that, since the AI ignores most of the actual probability mass (the worlds where X=∅), it has weird beliefs about the joint distribution of the button push and the humans' intentions and behavior. It seems like it would be better to have a utility indifference solution that avoids this problem by not using a causal counterfactual (or a rare event that simulates one), but I'm quite unsure whether one exists.
These are problems both for utility indifference and the causal version of factoring out variables. Thanks to Eliezer for pointing out the second problem.