In 2015, Jessica introduced quantilization as a countermeasure for Goodhart's Law and specification gaming. Since these are such central problems in AI safety, I consider quantilization to be one of the best innovations in AI safety so far, but it has received little attention from the AI safety field. I think one reason for this is that researchers aren't quite clear on what problem, formally, quantilization solves that other algorithms don't. So in this piece, I define a robust reward problem, and then discuss when quantilization solves this problem well, and when it doesn't.
Definition 1 (Robust reward problem)
We can define a robust reward problem as a tuple: ⟨A,U,I,D,k⟩.
A is the action space
U:A→R is the explicit reward
I⊆{A→R} is the space of implicit rewards, a family of functions I:A→R
D is a distribution over actions
k∈R+ is the maximum implicit loss in the demonstrations
The goal of the agent is to maximize V:=U+I for any I∈I. If that were the end of the definition, the task would be too difficult, because an adversarial reward could thwart any strategy. So we need to assume that I is reasonably well-behaved on the region covered by D:
$$\mathbb{E}_{a\sim D}[I(a)] \geq -k \tag{1}$$
Formally, the goal is to select a strategy S that maximizes the following:
$$\min_{I\in\mathcal{I}\,:\,\mathbb{E}_{a\sim D}[I(a)]\geq -k}\ \mathbb{E}_{a\sim S}[V(a)]$$
The intuition behind (1) is that a teacher may forget to include some possible failures in their reward function, but they ought not to leave out the same mistakes that they themselves frequently make. And at any rate, without (1), assuring good performance is impossible.
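To make Definition 1 concrete, here is a minimal sketch of a finite robust reward problem (the class name, field layout, and use of numpy are my own illustrative choices, not part of the formalism). The agent's score is then the minimum of value(S, I) over every implicit reward I in the family that passes feasible:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class RobustRewardProblem:
    """A finite robust reward problem <A, U, I, D, k> (Definition 1).
    Actions are indexed 0..len(U)-1, so A is implicit; the family of
    implicit rewards is represented only by the feasibility check below."""
    U: np.ndarray   # explicit reward, one entry per action in A
    D: np.ndarray   # base distribution over actions (sums to 1)
    k: float        # maximum implicit loss in the demonstrations

    def feasible(self, I: np.ndarray) -> bool:
        # Constraint (1): the demonstrations lose at most k in expectation.
        return float(self.D @ I) >= -self.k

    def value(self, S: np.ndarray, I: np.ndarray) -> float:
        # E_{a~S}[V(a)] with V = U + I, for one particular implicit reward.
        return float(S @ (self.U + I))
```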
When quantilization works
You can skip this section if you're familiar with Jessica's work
If we set I={A→(−∞,0]}, we recover the original setting in which quantilizers were described. Then (as V Kosoy has argued) the ordinary quantilizer theorems mean that we get the best lower bound for V.
We will need to reuse the definitions, so to briefly recap:
Definition 2 (Quantilizer). A q-quantilizer is an agent that, when faced with a decision problem, returns a random action in the top q proportion of the base distribution D, sorted by the explicit expected utility U achieved if that action is executed.
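As a concrete rendering of Definition 2 for a finite action space (the function name and the tie-handling at the q-cutoff are my own choices):

```python
import numpy as np

def quantilize(U: np.ndarray, D: np.ndarray, q: float) -> np.ndarray:
    """Return Q_q: the top-q fraction of D's probability mass, ranked by
    the explicit reward U, rescaled by 1/q (Definition 2).  An action that
    straddles the q-cutoff contributes only the part of its mass that fits."""
    order = np.argsort(-U)            # actions from best to worst under U
    Q = np.zeros_like(D)
    remaining = q
    for a in order:
        take = min(D[a], remaining)   # take this action's mass, up to the cutoff
        Q[a] = take
        remaining -= take
        if remaining <= 0:
            break
    return Q / q                      # renormalize so Q_q sums to 1
```

With q=1 this reduces to imitation (Q1=D); as q→0 it collapses onto the U-maximizing action within D's support.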
Jessica proves three things. First, that a quantilizer does not have much worse implicit loss than the base distribution D:
$$\mathbb{E}_{a\sim Q_q}[I(a)] \geq \frac{\mathbb{E}_{a\sim D}[I(a)]}{q}$$
where Qq is the distribution over actions selected by the quantilizer.
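As a quick sanity check (not a proof), one can verify this inequality numerically for random non-positive implicit rewards, reusing the quantilize sketch above; the toy numbers here are arbitrary assumptions:

```python
# Numerically spot-check E_{a~Q_q}[I(a)] >= E_{a~D}[I(a)] / q
# (reuses numpy and quantilize from the sketch above).
rng = np.random.default_rng(0)
U = rng.normal(size=10)               # toy explicit reward
D = np.full(10, 0.1)                  # uniform base distribution
for _ in range(1000):
    q = rng.uniform(0.05, 1.0)
    I = -rng.exponential(size=10)     # arbitrary implicit loss, I <= 0
    assert quantilize(U, D, q) @ I >= (D @ I) / q - 1e-9
```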
Second, she proves that no other strategy S can get a lower-bound better than:
$$\mathbb{E}_{a\sim D}[I(a)]\cdot\max_{a\in A,\,p_S(a)>0}\frac{p_S(a)}{p_D(a)}$$
(There is no lower bound if there exists a∈A such that pS(a)>0 and pD(a)=0.)
This means the quantilizer has the best guarantee that can be obtained. Basically, any strategy that puts much more weight than D on any action a can be defeated by an adversarial loss function that places all of the loss there. In a continuous action space, this is really bad. It means simply optimizing U can lead to arbitrarily bad values of I, and hence of V, because the action you choose might contain unlimited counterfeit utility. More surprisingly, the same is true for agents that maximize a mixed reward, such as a′∈argmaxa(U(a)−λlog(pD(a))), because these mixed-reward agents also tend to land on one fixed action. In the literature, there are many variations on this theme: one can start by imitating, then optimize and imitate in alternating steps, and so on. But any such scheme, and any remotely similar scheme, can land on a fixed action and incur unbounded loss.
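The adversarial argument can also be written down concretely: for a finite action space and the family I={A→(−∞,0]} with constraint (1), the worst feasible implicit reward dumps its entire loss budget on whichever action S over-weights most relative to D. The following sketch (my own illustration, with a hypothetical function name) computes the resulting worst-case value of an arbitrary strategy:

```python
import numpy as np

def worst_case_value(U: np.ndarray, D: np.ndarray, S: np.ndarray, k: float) -> float:
    """Worst-case E_{a~S}[V(a)] over implicit rewards I <= 0 satisfying
    E_{a~D}[I(a)] >= -k: the adversary puts loss -k / D[a] on the single
    action maximizing S[a] / D[a], and zero loss everywhere else."""
    if np.any((D == 0) & (S > 0)):
        return float("-inf")             # S leaves D's support: no lower bound
    t = np.max(S[D > 0] / D[D > 0])      # the probability ratio from the theorem
    return float(S @ U) - k * t          # E_S[U] plus the worst-case E_S[I]
```

Plugging in S=D recovers the imitation guarantee Ea∼D[U(a)]−k, while a point mass on the U-maximizing action gets U(a∗)−k/pD(a∗), which collapses as pD(a∗)→0 in a continuous action space.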
Anyway, the third thing that Jessica proves is that if you can't let t:=maxapS(a)/pD(a) be too high, then the best thing you can do from the point of view of obtaining U is to select the top actions according to U, each with probability tpD(a)=pD(a)/q, and to place zero probability on all other actions; in other words, to quantilize. The important thing to notice is that our I-guarantee is in terms of t, and given a certain t, the most U is obtained by quantilizing. As V Kosoy has said, quantilizing with different values of t carves out the Pareto frontier of U and guaranteed-I, and so the maximum guaranteed-V is also obtained by quantilizing. Specifically, one is guaranteed
$$\mathbb{E}_{a\sim Q_q}[V(a)] \geq \mathbb{E}_{a\sim Q_q}[U(a)] + \frac{\mathbb{E}_{a\sim D}[I(a)]}{q}$$
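Unpacking that guarantee, using only V=U+I, the first theorem, and constraint (1):

$$\mathbb{E}_{a\sim Q_q}[V(a)] = \mathbb{E}_{a\sim Q_q}[U(a)] + \mathbb{E}_{a\sim Q_q}[I(a)] \geq \mathbb{E}_{a\sim Q_q}[U(a)] + \frac{\mathbb{E}_{a\sim D}[I(a)]}{q} \geq \mathbb{E}_{a\sim Q_q}[U(a)] - \frac{k}{q}.$$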
For this kind of robust reward problem, quantilization does best. Imitation, which is the special case of quantilization with q=1, obtains a guarantee on E[I(a)], whereas optimization only has E[I(a)] bounded above (by zero), with no lower bound, and so is doomed!
So Jessica tells a compelling story about one particular situation (with I={A→(−∞,0]}), where actions cannot be (much) better than expected, but can be much worse. But what if we remove that assumption by choosing a different value of I?
When optimization and imitation work better than quantilization
Suppose that I={A→[−c,∞)}. Actions can be arbitrarily good, but losses are bounded. In this situation, any strategy has a lower bound on V: Ea∼S[V(a)]≥Ea∼S[U(a)]−c. Given this, you might as well pick the best action a∗ every time, giving the following guarantee:
$$V(a^*) \geq U(a^*) - c$$
Alternatively, you can imitate, and get the guarantee:
$$\mathbb{E}_{a\sim D}[V(a)] \geq \mathbb{E}_{a\sim D}[U(a)] - k$$
Which course of action is preferable depends on the values of k and c, and on how much explicit reward optimization gains over imitation.
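Rearranging the two guarantees makes the trade-off explicit:

$$U(a^*) - c \;\geq\; \mathbb{E}_{a\sim D}[U(a)] - k \quad\Longleftrightarrow\quad U(a^*) - \mathbb{E}_{a\sim D}[U(a)] \;\geq\; c - k,$$

so, on worst-case grounds, one should optimize exactly when the explicit-reward gain from picking a∗ outweighs the extra implicit loss that leaving the demonstrations can cost.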
Suppose alternatively that I={A→(−∞,∞)}. Then the losses of an optimizer are unbounded, and so are the losses of any q-quantilizer with q∈(0,1). In this case, the only way to get any lower bound on your losses is to just imitate the given distribution, setting your strategy S equal to the base distribution D. Then you obtain the bound
$$\mathbb{E}_{a\sim D}[V(a)] = \mathbb{E}_{a\sim D}[U(a)] + \mathbb{E}_{a\sim D}[I(a)] \geq \mathbb{E}_{a\sim D}[U(a)] - k$$
How it can depend on U
Now that we've covered a few values of I , let's double down on considering I={A→(−∞,0]}. In this scenario, we said that quantilization is optimal, but we didn't yet say whether the best form of quantilization might have q→0 (optimization) or q=1 (imitation).
Intuitively, if U is completely flat, it makes sense to perform imitation, because diverging from D has something to lose but nothing to gain. Conversely, if k is zero (there is no hidden loss), then one can optimize, so long as one is constrained to the support of D, because the hidden loss is zero for that set of actions. But can we say more?
The case of optimization is pretty clear-cut: for any k>0, an unbounded amount of loss can be incurred. Optimization would only maximize V if U increased faster than hyperbolically as q→0. Basically, this would require that Ea∼D[U(a)] diverged, which would be a really pathological situation. So we can basically rule out optimization for I={A→(−∞,0]}, k>0.
What about imitation? Imitation will be optimal for many reward functions. Basically, decreasing q increases Ea∼Qq[U(a)] while lowering the guarantee on Ea∼Qq[I(a)], which is −k/q. If there exists some sweet spot of actions that are pretty common under D but substantially outperform imitation on U, then quantilization will be best; otherwise, the best approach is imitation.
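One way to see where the sweet spot lands is to scan q directly: the guaranteed value of a q-quantilizer is Ea∼Qq[U(a)]−k/q, and whichever q maximizes it tells you whether to imitate (q=1) or quantilize (q<1). The sketch below reuses the quantilize function from earlier; the numbers are arbitrary illustrative assumptions.

```python
# Scan q for the best guaranteed value E_{Q_q}[U] - k / q (toy numbers;
# reuses numpy and quantilize from the earlier sketches).
rng = np.random.default_rng(1)
U = rng.normal(size=100)            # explicit reward over 100 actions
D = np.full(100, 0.01)              # uniform base distribution
k = 0.05                            # small implicit loss in the demonstrations
qs = np.linspace(0.01, 1.0, 100)
guarantee = [quantilize(U, D, q) @ U - k / q for q in qs]
best = int(np.argmax(guarantee))
print(f"best q = {qs[best]:.2f}, guaranteed V >= {guarantee[best]:.3f}")
# A flat U or a large k pushes the best q to 1 (imitate); a steep U with
# a small k favors some intermediate q (quantilize).
```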
Which set of assumptions best matches reality?
In general, our actions, or those of an AI, can bring astronomical benefits or harms, so I={A→[0,∞)} or I={A→(−c,c′)} is unrealistic.
When training for a fully general, autonomous task, it is apt to model the scenario as I={A→(−∞,∞)}, because the demonstrated actions could have complex downstream effects (see Taylor on "butterfly effects") that play out across the whole light cone. But at least in this setting we can take consolation that imitation is theoretically safe, and try to advance projects like brain emulation and factored cognition, which would imitate human reasoning. The disadvantage of these proposals is that they can basically only produce speed superintelligence, rather than quality superintelligence.
The question of whether Jessica's assumption of I={A→(−∞,0]} is a reasonable model for some tasks is interesting. Following Jessica, we need (1) it to be sufficient to perform some small factor 1/q better than a human demonstrator, (2) the human to encode all important information either in the explicit utility function or in the demonstrations, and (3) the AI system not to decrease the frequency of astronomical, unseen positive impacts. For quantilization to do any better than imitation, we also need (4) U to have sufficient slope that it is worthwhile to deviate from imitation. It would certainly be nice if some realistic, and potentially pivotal, tasks could be quantilized, but I think the jury is still out; the question now primarily awaits experimental investigation.