Stuart Russell: "A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable."
It is not immediately obvious that this is true. If we are maximising (or minimising) some of these k variables, then it is likely true, provided that maximisation or minimisation pushes them to unusual values that we wouldn't encounter "naturally". But things might be different if the variables need only be set to certain "plausible" values.
Suppose, for example, that an AI is building widgets, and that it is motivated to increase widget production w. It can choose the following policies, with the following consequences:
π1: build widgets the conventional way; w=10.
π2: build widgets efficiently; w=15.
π3: introduce new innovative ways of building widgets; w=30.
π4: dominate the world's widget industry; w=10,000.
π5: take over the world, optimise the universe for widget production; w=10^60.
If the AI's goal is to maximise w without limit, then the fifth option becomes attractive to it. Even if it just wants to set w to a limited but high value - w=10,000 - it benefits from more control of the world. In short:
The more unusual the values of the variables the AI wants to reach, the more benefit it gets from strong control over the universe.
But what if the AI was designed to set w to 15, or in the range 20−30? Then it would seem that it has a lower incentive for control, and might just "do its job", the way we'd like it to; the other variables would not be set to extreme values, since there is no need for the AI to change things much.
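To make this concrete, here is a toy sketch (the w values are just the ones from the list above; the utility functions, including the −(w−15)^2 target used later in this post, are illustrative stand-ins): a simple expected-utility maximiser picks π5 when rewarded for raw w, still picks the world-dominating π4 when given the high target w=10,000, and settles for π2 when the target is the modest w=15 - at least while we ignore randomness.

```python
# Toy model: the five policies above and the widget production w each one yields.
policies = {
    "pi1_conventional":      10,
    "pi2_efficient":         15,
    "pi3_innovative":        30,
    "pi4_dominate_industry": 10_000,
    "pi5_take_over_world":   10**60,
}

def best_policy(utility):
    """Pick the policy whose w value gets the highest utility."""
    return max(policies, key=lambda name: utility(policies[name]))

print(best_policy(lambda w: w))                    # maximise w        -> pi5
print(best_policy(lambda w: -(w - 10_000) ** 2))   # target w = 10,000 -> pi4
print(best_policy(lambda w: -(w - 15) ** 2))       # target w = 15     -> pi2
```

In this noiseless toy, the 15-target really does look safe; the rest of the post is about why that stops being true once the world is uncertain.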
Eliezer and Nick and others have made the point that this is still not safe, in posts that I can't currently find. They use examples like the AI taking over the world and building cameras to be sure that it constructed 15 widgets exactly. These scenarios seem extreme as intuition pumps, to some, so I thought it would be simpler to rephrase this as: moving the variance to unusual values.
Variance control
Suppose that the AI was designed to keep w at 15. We could give it the utility function −(w−15)^2, for instance. Would it then stick to policy π2?
Now assume further that the world is not totally static. Random events happen, increasing or decreasing the production of widgets. If the AI follows policy π, then its expected reward is:
E[−(w−15)^2∣π] = −E(w^2∣π) + 30E(w∣π) − 15^2 = −var(w∣π) − (E(w∣π)−15)^2.
The second term, −(E(w∣π)−15)^2, is something the AI can control by "doing its job" and picking a human-safe policy. But it also wants to control the variance of w; specifically, it wants to lower it. Even more specifically, it wants to move that variance to a very low, highly unusual value.[1]
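Here is a minimal numeric sketch of that decomposition (the noise levels and the Monte Carlo check are my own illustrative assumptions, not anything from the original argument): two policies both hit E(w∣π)=15 exactly, but the one that squeezes the natural variability out of widget production scores strictly higher.

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = 15

def expected_utility(mean, std, samples=1_000_000):
    """Monte Carlo estimate of E[-(w - 15)^2 | pi] for w ~ Normal(mean, std)."""
    w = rng.normal(mean, std, samples)
    return np.mean(-(w - TARGET) ** 2)

# "Just do the job": mean production 15, ordinary randomness left in the world.
print(expected_utility(mean=15, std=2.0))    # about -4 = -var(w)

# "Control everything": same mean 15, variance crushed to almost nothing.
print(expected_utility(mean=15, std=0.01))   # about -0.0001

# Analytic check of the decomposition: -var(w) - (E(w) - 15)^2.
print(-(2.0 ** 2) - (15 - TARGET) ** 2)      # exactly -4
```

Both policies are perfect on the (E(w∣π)−15)^2 term; the whole advantage of the second comes from variance reduction - which is exactly the term the AI is incentivised to chase with unusual levels of control.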
So the previous problem appears again: it wants to move a variable - the variance of w - to a very unusual value. In the real world, this could translate into building excess capacity, taking control of its supply chains, removing any humans that might get in the way, and so on. Since "humans that might get in the way" would end up being most humans - few nations would tolerate a powerful AI limiting their power and potential - this tends towards the classic "take control of the world" scenario.
Conclusion
So, minimising or maximising a variable, or setting it to an unusual value, is dangerous, as it incentivises the AI to take control of the world to achieve those unusual values. But setting a variable to a usual value can also be dangerous, in an uncertain world, as it incentivises the AI to take control of the world to set the variability of that variable to unusually low levels.
Thanks to Rebecca Gorman for the conversation which helped me clarify these thoughts.
This is not a specific feature of using a square in −(w−15)^2. To incentivise the AI to set w=15, we need a function of w that peaks at 15. This makes it concave-ish around 15, which is what penalises spread, uncertainty, and variance. ↩︎
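To spell the footnote's claim out slightly (a standard second-order expansion around the mean, added here as a gloss rather than taken from the original): for a smooth utility u that peaks at 15, writing m = E(w∣π),

```latex
% Second-order Taylor expansion of u around the mean m = E(w | pi).
% The first-order term vanishes because E[w - m | pi] = 0.
\mathbb{E}[u(w)\mid\pi]
  \;\approx\; u(m) + u'(m)\,\mathbb{E}[w-m\mid\pi] + \tfrac{1}{2}u''(m)\,\mathrm{var}(w\mid\pi)
  \;=\; u(m) + \tfrac{1}{2}u''(m)\,\mathrm{var}(w\mid\pi).
```

Since u′′(m)<0 near the peak, any variance in w strictly lowers expected utility, whatever the exact shape of the peaked reward.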