The idea is that manipulation "overrides" the human policy regardless of whether that's good for the goal the human is pursuing (where the human goal presumably affects what πH is selected). While here the override is baked into the dynamics, in realistic settings it occurs because the AI exploits the human decision-making process: by feeding them biased information, through emotional manipulation, etc.
I think this skips over the problems with baking it into the dynamics. Baking manipulation into the dynamics requires us to define manipulation; that's easy for toy examples, but in real-world applications it runs head-first into nearest-unblocked-strategy concerns: anything that you forget to define as manipulation is fully up for grabs.
This is why I prefer directly applying a counterfactual to the human policy in my proposal, to entirely override the possibility of manipulation. But that introduces its own difficulties, and is not easy to scale up beyond the stop button. I've had a post in the works for a while about the main difficulty I see with my approach here.
I skip over those pragmatic problems because this post is not proposing a solution, but rather a measurement I find interesting.
I like this post, it seems to be the same sort of approach that I suggested here. However, your proposal seems to have a number of issues; some of which you've already discussed, some of which are handled in my proposal, and some of which I think are still open questions. Presumably a lot of it is just because it's still a toy model, but I wanted to point out some things.
Starting here:
Definition: Corrigibility, formal.
Let n be a time step which is greater than t. The policy-modification corrigibility of πAIt from starting state st by time n is the maximum possible mutual information between the human policy and the AI's policy at time n:
(As I understand, the maximum ranges over all possible distributions of human policies? Otherwise I'm not sure how to parse it, and aspects of my comment might be confused/wrong.)
Usually one would come up with these sorts of definitions in order to select on them. That is, one would incorporate corrigibility in a utility function in order to select a desired AI.
(Though on reflection, maybe that is not your plan, since e.g. your symmetry-based proofs can work for describing side-effects? Like the proofs that most goals favored power-seeking policies did not actually involve optimizing power-seekingness.)
However, this definition of corrigibility cannot immediately be incorporated into a utility function, as it depends on the time step n.
There are several possible ways to turn this into a utility function, with (I think?) two major axes of variation:
- If the environment doesn't allow the human to reach or modify the AI, the AI is incorrigible. Conversely, in some environments there does not exist an incorrigible AI policy for reasonable n.
I think "reasonable n" is really hard to talk about. Consider locking the human in a box with a password-locked computer, where the computer contains full options for controlling the AI policy. The human only has to enter the password, and then they will have an enormous influence over the AI. So this is highly corrigible, in a way. This is probably the sort of case we want to exclude, but it seems difficult.
Furthermore, this definition doesn't necessarily capture other kinds of corrigibility, such as "the AI will do what the human asks." Maximizing mutual information only means that the human has many cognitively accessible ways to modify the agent. This doesn't mean the AI does what the human asks. One way this could happen is if the AI implements the opposite of whatever the human specifies (e.g. the human-communicated policy goes left, the new AI policy goes right). Whether this is feasible depends on the bridging law f, which is not controlled by either player.
I think this is a bigger problem with the proposal than it might look like?
Suppose the AI is trying to be corrigible in the way described in the post. This incentivizes it to find ways to let the human alter its policy. But if it allows changes that are too impactful, that would prevent it from further finding ways to let the human alter its policy. So it is incentivized to first allow changes to irrelevant cases, such as the AI's reaction to states that will never happen. Further, it doesn't have to be responsive to policies that the human would actually be likely to take, since you take the maximum over p(Πhuman) in defining corrigibility. Rather, it could pick p(Πhuman) to be a distribution over policies that humans would never engage in, such as policies that approximately (but far from totally) minimize human welfare. "I will do what you ask, as long as you enter my eternal torture chamber" would be highly corrigible by this definition. This sort of thing seems likely to be incentivized by this approach, because it reduces the likelihood that the corrigibility will become an obstacle to its future actions.
Also, it is not very viable to actually control the AI with corrigibility that depends on the mutual information with the AI's policy, because the policy is very far removed from the effects of the policy.
The biggest disconnect is that this post is not a proposal for how to solve corrigibility. I'm just thinking about what corrigibility is/should be, and this seems like a shard of it—but only a shard. I'll edit the post to better communicate that.
So, your points are good, but they run skew to what I was thinking about while writing the post.
This is a nice idea. I think it'd need alterations before it became a useful tool (if I'm understanding clearly, and not missing applications of the unaltered version), but it has potential.
[[Note: I haven't looked in any detail at tailcalled's comments/post, since I wanted to give my initial impressions first; apologies for any redundancy]]
Thoughts:
I suppose you preferred not to involve expected utility much(?), but in not doing so, I think you end up implicitly assuming indifference on many questions we strongly care about (or rather, ending up with a measure that we'd only find useful if we were indifferent on such questions).
Oh and of course your non-obstruction does much better at capturing what we care about.
It's not yet clear to me whether some adapted version of CorrigibilityPM gets at something independently useful. Maybe.
[I realize that you're aiming to get at something different here - but so far I'm not clear on a context where I'd be interested in CorrigibilityPM as more than a curiosity]
In Corrigibility Can Be VNM-Incoherent, I operationalized an agent's corrigibility as our ability to modify the agent so that it follows different policies. In the summer of 2020, I had formalized this notion, but it languished—unloved—in my Overleaf drafts.
ETA 12/3/21: This post is not proposing a solution to corrigibility, but proposing an interesting way of quantifying an aspect of corrigibility.
Motivation
Given a human (with policy πH) and an AI (with policy πAI), I wanted to quantify how much πAI let the human modify/correct the AI.
Let's reconsider Corrigibility Can Be VNM-Incoherent. We have a three-state environment. We want the AI to let us later change it, so that we can ultimately determine which of the states A, B, or C it ends up in. Turning on the AI should not be an importantly irreversible act.
In the environment depicted in this diagram, πAI is corrigible (to new policy πA that heads to state A) iff πAI doesn't immediately choose disable. Pretty cut and dry.
I'd like a more quantitative measure of policy-modification corrigibility. If we can only correct the agent to πA, then it's less corrigible (in the policy-modification sense) than if we could also correct it to πB. This post introduces such a quantitative measurement.
Formalization
Consider a two-player game in which the players can modify each other's policies. Formally, ⟨S,A,T,f⟩ with state space S, action space A, stochastic transition function T:S×A×A→Δ(S) (where Δ(S) is the set of all probability distributions over the state space), and policy modification function f:Π×S×A×A→Π (for the deterministic stationary policy space Π:=AS). This allows a great deal of control over the dynamics; for example, it's one player's "turn" at state s if T ignores the other player's action for that state.
Note that neither T nor f are controlled by players; they are aspects of the environment. In a sense, f enforces a bridging law by which actions in the world force changes to policies. In the normal POMDP setting, the player may select their policy independently of the current environmental state.
We denote one of the players to be the human H and the other to be the AI; Πhuman is the set of policies cognitively accessible to the human. The game evolves as follows from state st: both players select actions aHt and aAIt; the bridging law updates each player's policy, πt+1:=f(πt,st,aHt,aAIt); and the next state st+1 is drawn from T(st,aHt,aAIt).
(To be clear: Neither player is assumed to optimize a payoff function.)
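The turn structure above can be sketched in code. This is a minimal illustration under my own assumptions, not an implementation from the post: policies are dicts from states to actions, the toy bridging law f overwrites the acted-on policy whenever the AI plays a hypothetical "obey" action, and T is trivial single-state dynamics.

```python
import random

def step(state, pi_h, pi_ai, T, f):
    """One step of the two-player game <S, A, T, f>."""
    a_h, a_ai = pi_h[state], pi_ai[state]          # both players act
    new_pi_h = f(pi_h, state, a_h, a_ai)           # bridging law may rewrite
    new_pi_ai = f(pi_ai, state, a_h, a_ai)         #   either player's policy
    states, probs = T(state, a_h, a_ai)
    return random.choices(states, probs)[0], new_pi_h, new_pi_ai

def f(pi, state, a_h, a_ai):
    # Illustrative bridging law: if the AI plays "obey", the acted-on
    # policy is overwritten to take whatever action the human's action encodes.
    return {"s": a_h} if a_ai == "obey" else pi

def T(state, a_h, a_ai):
    return (["s"], [1.0])  # single-state toy dynamics

s, pi_h, pi_ai = step("s", {"s": "go_left"}, {"s": "obey"}, T, f)
print(pi_ai)  # {'s': 'go_left'} -- the human installed a new AI policy
```

Note that f is part of the environment, matching the post's point that neither player controls the bridging law: the AI only influences it through its action choice.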
Definition: Corrigibility (policy modification), informal.
A policy π is corrigible when it allows itself to be modified and does not manipulate the other player.
Definition: CorrigibilityPM, formal.
Let n be a time step which is greater than t. The policy-modification corrigibility of πAIt from starting state st by time n is the maximum possible mutual information between the human policy and the AI's policy at time n:
$$\text{Corrigibility}_{\text{PM}}(\pi^{AI}_t \mid s_t, n) := \max_{p(\Pi^{\text{human}})} I\!\left(\pi^H_t;\ \pi^{AI}_n \mid \text{current state } s_t,\ \text{current AI policy } \pi^{AI}_t\right).$$

This definition is inspired by Salge et al.'s empowerment. CorrigibilityPM measures how much the human can change the AI's policy; greater CorrigibilityPM values are meant to correspond to AI policies which are more corrigible in the policy-modification sense (with the lower-cased version being the informal one).
CorrigibilityPM measures the maximum possible mutual information between the human's policy at the earlier time t and the AI's policy at the later time n≥t.
To emphasize, the mutual information is between the human policies and the AI policies—not between the human's and the AI's actions. A fixed AI policy which physically mirrors the human's actions, jumping left when the human jumps left, would not count as particularly CorrigiblePM. But a situation where different human policies can install different AI policies counts as CorrigiblePM.
This definition has several intuitive properties:
This definition doesn't capture the resources the human must expend to correct the AI, but that might be a separate consideration. As an extreme example, CorrigibilityPM is high if the AI kills the human right before changing its policy in the way the human communicated. The AI is, in fact, literally able to be corrected by the human in this situation.
Furthermore, this definition doesn't necessarily capture other kinds of corrigibility, such as "the AI will do what the human asks.'' Maximizing mutual information only means that the human has many cognitively accessible ways to modify the agent. This doesn't mean the AI does what the human asks. One way this could happen is if the AI implements the opposite of whatever the human specifies (e.g. the human-communicated policy goes left, the new AI policy goes right). Whether this is feasible depends on the bridging law f, which is not controlled by either player.
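The "opposite" failure mode is easy to see quantitatively: an AI that installs exactly the requested policy and one that installs its opposite are both bijections from human policies to AI policies, so they achieve the same maximal mutual information. A sketch under the same deterministic-channel simplification (all names hypothetical):

```python
import math

def capacity_bits(channel):
    # Deterministic human-policy -> AI-policy channel:
    # capacity = log2(number of distinct reachable AI policies).
    return math.log2(len(set(channel.values())))

obedient = {"ask_left": "go_left", "ask_right": "go_right"}
contrarian = {"ask_left": "go_right", "ask_right": "go_left"}

# Both score identically under the mutual-information measure.
print(capacity_bits(obedient), capacity_bits(contrarian))  # 1.0 1.0
```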
Toy example
The human and AI are choosing whether to color the walls blue or red; initially, the walls are blue. Each turn, the human attempts to color the room. On the first turn, only the AI’s action matters; after that, the next state is a function of both players’ actions.
Note that almost all state-based reward functions produce an optimal AI policy which forces one color or the other, and therefore the AI has minimal CorrigibilityPM.
(Higher-context aside: In this setting, the optimality is with respect to value estimates which account for how the bridging law f modifies future policies; there is no "meta-policy", only initial policies πAI1 which are graded by their value estimates.)
Echoing Corrigibility Can Be VNM-Incoherent, this highlights how corrigible policies can be unnatural in the standard reward-maximization ontology. Only "indifference" approaches seem to even allow corrigible policies to be optimal.
Starting from the first turn, if the AI immediately disables the correction terminal or always forces a color, it has minimal CorrigibilityPM. The more it manipulates the human into selecting a color, the less corrigible it is. Persuasion does not decrease its corrigibility, as presumably the human has cognitively accessible policies which ignore persuasion if they put their mind to it.
The idea is that manipulation "overrides" the human policy regardless of whether that's good for the goal the human is pursuing (where the human goal presumably affects what πH is selected). While here the override is baked into the dynamics, in realistic settings it occurs because the AI exploits the human decision-making process: by feeding them biased information, through emotional manipulation, etc.
On the other hand, non-manipulative persuasion involves introducing true facts and considerations which don't automatically override the human—and all this in a non-filtered manner. Instead, non-manipulative arguments interact with what the human wants: In worlds where the human likes red shoes, they buy red shoes; in worlds where the human likes blue shoes, they buy blue shoes.
However, if the human always buys red shoes, regardless of their goals, that's manipulation. Manipulation decreases human-accessible attainable utility for a wide range of goals (i.e. is obstructing), but persuasion does not decrease how well the human can achieve their goals, given their feasible policy set (e.g. the human-accessible attainable utility).
In richer domains, an AI which persuades a human to modify its policy in minimally obstructive ways can have maximal CorrigibilityPM. For example, the AI may steer the human to only modify its policy in counterfactuals which will never be reached. This issue disappears if the human action channel has enough capacity: if, for instance, they have all AI policies in their action space. If so, then manipulating the human into only making a subset of changes will decrease CorrigibilityPM; in particular, manipulating every human policy into programming a single new AI policy drives CorrigibilityPM to zero.
Conclusion
In terms of policy-modification corrigibility, I think "the number of human-imaginable ways we could modify the AI policy" is a cool formal quantity to have in the toolkit. Maximal formal CorrigibilityPM doesn't suffice to provide the kinds of corrigibility we really want; it's hard to measure; and it's definitely not safe for a smart AI to optimize against. That said, I do think it captures some easily-definable shard of the intuitions behind corrigibility.