A putative new idea for AI control; index here.
Techniques used to counter agents taking biased decisions do not produce uninfluenceable agents.
However, using counterfactual tools, we can construct uninfluenceable $\hat{P}$ and $P$, starting from biased and influenceable ones.
Why is uninfluenceability necessary? Well, an unbiased agent can still take actions such as 'randomise their own reward (independent of the human's choice)', as long as the choice of randomisation is unbiased. For instance, let $\pi_0$ be some default policy, and let the tidying ($R_0$) versus cooking ($R_1$) agent currently consider both options to be equally likely. So, ultimately, the human will choose one or the other with equal probability: $E^{\pi_0}_{\mu}P(R_0)=E^{\pi_0}_{\mu}P(R_1)=1/2$. Then if the agent chooses to randomise its own reward immediately, 50% each way, this is an unbiased policy, and one the agent may prefer to follow, because it will then know its own reward immediately, rather than waiting for the human to decide.
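To make the incentive concrete, here is a tiny toy calculation (my own sketch, not from the original post; the two-step structure and the payoff numbers are illustrative assumptions) showing how a policy can be unbiased while still rewarding the agent for resolving its own reward uncertainty early:

```python
# Toy model (illustrative assumptions): two timesteps of work.
# Working towards a known correct reward earns 1 per step; working blind earns 0.5.
#   'wait'      -- the human reveals R0 or R1 only at step 2, so step 1 is spent blind;
#   'randomise' -- the agent flips its own fair coin at step 1 and is informed from then on.
# Either way, R0 and R1 each end up with probability 1/2, so both policies are unbiased.

def expected_return(policy: str) -> float:
    if policy == "wait":
        return 0.5 + 1.0   # step 1 blind, step 2 informed
    if policy == "randomise":
        return 1.0 + 1.0   # informed from step 1 onwards
    raise ValueError(policy)

assert expected_return("randomise") > expected_return("wait")  # randomising looks strictly better
```

Unbiasedness only constrains the expected distribution over rewards, not who gets to resolve the uncertainty, which is why it is not enough on its own.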
As usual, the notation of this post is used.
The counterfactual distribution
Let $\mu=(S,A,O,T,O,T_0)$ be the environment, $P$ a potentially biased and influenceable distribution, and $\pi_0$ some default policy.
Then define the counterfactual reward learning distribution $P^{\pi_0}_\mu$ on the reward $R$, given history $h_m$, as:

$$P^{\pi_0}_\mu(R\mid h_m)=\sum_{s\in S}E^{\pi_0}_\mu\left[P(R)\mid s_0=s\right]\mu(s_0=s\mid h_m).$$

Thus $P^{\pi_0}_\mu(R\mid h_m)$ is estimated by using $h_m$ to compute the probability that $s_0=s$, for each $s$, and then estimating the (counterfactual) expectation of $P$ given $s_0=s$ and the default policy $\pi_0$.
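As a minimal computational sketch of this definition (my own illustration, not from the post; the helper names `posterior` and `counterfactual_expectation` are hypothetical stand-ins for $\mu(s_0=s\mid h_m)$ and $E^{\pi_0}_\mu[P(R)\mid s_0=s]$), the counterfactual distribution is just a posterior-weighted sum over initial states:

```python
def counterfactual_P(R, history, initial_states, posterior, counterfactual_expectation):
    """P^{pi_0}_mu(R | history) = sum_s E^{pi_0}_mu[P(R) | s_0 = s] * mu(s_0 = s | history).

    posterior(s, history)            -- mu(s_0 = s | history), posterior over initial states;
    counterfactual_expectation(R, s) -- E^{pi_0}_mu[P(R) | s_0 = s], the expectation of the
                                        (possibly biased/influenceable) P under the default
                                        policy pi_0, starting from initial state s.
    """
    return sum(
        counterfactual_expectation(R, s) * posterior(s, history)
        for s in initial_states
    )
```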
It's not hard to see that $P^{\pi_0}_\mu$ is uninfluenceable: given knowledge of the initial state, it's just $E^{\pi_0}_\mu[P(\cdot)\mid s_0=s]$, which is independent of the agent's actions. The learning process is simply about figuring out what the initial state actually is.
Dependence on POMDP structure
Note that if two environments are counterfactually equivalent, they can still generate different $P^{\pi_0}_\mu$, even for the same $P$ and $\pi_0$.
This can be seen immediately from the examples $\mu$ and $\mu''$ from this post, which are counterfactually equivalent.
In that setting, $a_w$ corresponds to waiting for a human to decide between $R_0$ and $R_1$, while $a_r$ corresponds to the agent randomising their own reward.
Set $P(R_i\mid s_0 a_w s_w^i)=1$ and $P(R_i\mid s_0 a_r s_r^i)=1$ (the final state determines $R_i$), and let the default policy be $\pi_0=a_w$, the wait action.
Since $E^{\pi_0}_\mu[P(R_i)]=0.5$ and $\mu$ has only one possible state for $s_0$, $P^{\pi_0}_\mu(R_i)=0.5$ is the only possibility.
In contrast, $E^{\pi_0}_{\mu''}[P(R_i)\mid s_0=s_0^i]=1$, so $P^{\pi_0}_{\mu''}(R_i\mid s_0=s_0^i)=1$.
Thus $P^{\pi_0}_\mu\neq P^{\pi_0}_{\mu''}$.
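A toy numerical check of this difference, reusing the `counterfactual_P` sketch above (again my own reconstruction; the state names and the way histories are summarised into posteriors are illustrative assumptions):

```python
# mu: a single initial state "s0".  Under pi_0 = a_w the human later picks the reward
# 50/50, so E^{pi_0}_mu[P(R_i) | s_0] = 0.5 for both rewards.
E_mu = lambda R, s: 0.5

# mu'': two equally likely initial states "s0^0", "s0^1"; the initial state already
# determines the human's choice, so E^{pi_0}_{mu''}[P(R_i) | s_0 = s_0^j] = 1 iff i == j.
E_mu2 = lambda R, s: 1.0 if R[-1] == s[-1] else 0.0

# Before any informative observation, both environments agree that P(R_0) = 0.5:
prior_mu  = lambda s, h: 1.0            # only one possible initial state in mu
prior_mu2 = lambda s, h: 0.5            # uniform over the two initial states of mu''
print(counterfactual_P("R0", (), ["s0"], prior_mu, E_mu))               # 0.5
print(counterfactual_P("R0", (), ["s0^0", "s0^1"], prior_mu2, E_mu2))   # 0.5

# But a history that pins down the initial state of mu'' to s0^0 pins down the reward,
# while in mu no history can move the counterfactual distribution away from 0.5:
known = lambda s, h: 1.0 if s == "s0^0" else 0.0
print(counterfactual_P("R0", (), ["s0^0", "s0^1"], known, E_mu2))       # 1.0
```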
Interpretation
What could we use for $\pi_0$? There are two natural choices: a default where $\pi_0$ does nothing (equivalent to the agent not being active or turned on), or $\pi_0$ being a pure learning policy for $P$. Since a pure learning process has no problems with bias or influence (it simply wants to learn), but could be dangerous if unconstrained, having this sort of 'counterfactual learning' might be a good idea (though be careful of the incentives that a badly defined pure learning process might have).
For example: imagine that the agent's correct reward is whatever was written on a certain piece of paper an hour ago. This is very clearly uninfluenceable: the agent simply needs to learn data that is already out there in the universe. If instead the agent's correct reward were whatever will be written on that paper in an hour's time, then it's clearly influenceable: the agent can simply write what it wants on that paper.
The counterfactual (for $\pi_0=$ 'do nothing') is then simply 'what would have been written on the paper, had the agent done nothing'. If the agent can figure that out early, then it doesn't care about the paper or the writing at all, except insofar as they provide counterfactual evidence.
Thus this model is equivalent to the old stratified agents.