Director of AI research at ALTER, where I lead a group working on the learning-theoretic agenda for AI alignment. I'm also supported by the LTFF. See also LinkedIn.
E-mail: {first name}@alter.org.il
You now understand correctly. The reason I switch to colored operads is to add even more generality. My key use case is when the operad consists of terms-with-holes in a programming language, in which case the colors are the types of the terms/holes.
The following are my thoughts on the definition of learning in infra-Bayesian physicalism (IBP), which is also a candidate for the ultimate prescriptive agent desideratum.
In general, learning of hypotheses about the physical universe is not possible because of traps. On the other hand, learning of hypotheses about computable mathematics is possible in the limit of ample computing resources, as long as we can ignore side effects of computations. Moreover, learning computable mathematics implies approximating Bayesian planning w.r.t. the prior about the physical universe. Hence, we focus on this sort of learning.
We consider an agent composed of three modules, which we call the Simulator, the Learner and the Controller. The agent's history consists of two phases. In the Training phase, the Learner interacts with the Simulator, and in the end produces a program for the Controller. In the Deployment phase, the Controller runs the program.
Roughly speaking:
We will refer to this as the SiLC architecture.
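To make the two-phase control flow concrete, here is a minimal Python sketch. All of the interfaces (`learner_step`, `simulator`, `finalize`) are hypothetical stand-ins for illustration, not the formal objects defined below.

```python
# Minimal sketch of the SiLC control flow. All interfaces here (learner_step,
# simulator, finalize) are hypothetical stand-ins, not the formal objects below.
from typing import Callable, List, Tuple

Query = str                          # a computation the Learner asks the Simulator to run
Answer = str                         # the Simulator's result for that computation
Program = Callable[[str], str]       # the program the Learner hands to the Controller

def training_phase(
    learner_step: Callable[[List[Tuple[Query, Answer]]], Query],
    simulator: Callable[[Query], Answer],
    finalize: Callable[[List[Tuple[Query, Answer]]], Program],
    horizon: int,
) -> Program:
    """Training phase: the Learner interacts with the Simulator for `horizon`
    steps and then outputs a program for the Controller."""
    transcript: List[Tuple[Query, Answer]] = []
    for _ in range(horizon):
        query = learner_step(transcript)   # Learner picks the next computation
        answer = simulator(query)          # Simulator runs it
        transcript.append((query, answer))
    return finalize(transcript)            # the Learner's output

def deployment_phase(program: Program, observation: str) -> str:
    """Deployment phase: the Controller just runs the program it was given."""
    return program(observation)
```

The only point of the sketch is the division of labor between the two phases; the actual content of the definition below is about what the Learner must achieve, not about this control flow.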
Let be our hypothesis class about computable mathematics. Let be our prior about the physical universe[1]. These have to satisfy the coherence conditions
Here, means that .
Together, these ensure that is a coherent IBP hypothesis. Notice that for any satisfying the first condition[2], there is a unique minimal coherent s.t. . Moreover, given a coherent and any , there is a unique minimal coherent s.t. .
The duration of the Training phase will be denoted by [3]. We can think of it as "computational time".
Let the source codes of the Learner (obtained by quining), the Simulator and the Controller respectively be denoted by
Here, the argument of corresponds to and is a probability distribution in which all probabilities are rational numbers[4].
We assume that the Simulator can indeed run any computation, and that any given halting computation would run fast for . These are assumptions on (or, on some combination of (i) , (ii) the definition of , and (iii) the support of all ) that we will not spell out here.
We will say that a policy is a mapping of type and a metapolicy is a mapping of type .
Given any , we can compose it with and in the obvious way[5] to yield
In particular, we can take for some metapolicy by postulating no dependence on the argument.
Denote by the set of all policies. Given metapolicy and , we define by
Given any , we say that is a -consistent counterpossible when the following conditions hold:
We denote by the set of -consistent counterpossibles.
A (deterministic) copolicy is a mapping of signature . We denote the set of copolicies by . Given a policy and a copolicy , we define in the obvious way. Given policies , we define their total variation distance[6] to be
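Purely as an illustration of how such a distance can be unpacked (the notation below, including $\pi \bowtie c$ for the history distribution induced by policy $\pi$ interacting with copolicy $c$, is an assumption rather than necessarily the post's actual definition):

```latex
% One natural way to define a distance between policies via copolicies (assumed notation):
\[
  d(\pi_1, \pi_2) \;:=\; \sup_{c}\; d_{\mathrm{TV}}\!\big(\pi_1 \bowtie c,\; \pi_2 \bowtie c\big),
\]
% where the supremum ranges over deterministic copolicies and d_TV is the total
% variation distance between the induced distributions over histories.
```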
Given , , and metapolicy , we will use the notation
Intuitively, should be thought of as the counterfactual expectation of loss function assuming metapolicy , while adding a "bonus" to account for "fair" treatment of randomization by the agent. More on that below.
Given a metapolicy and , we define by
Intuitively, is the set of universe states for which at least one copy of the agent exists which followed the metapolicy until computational time .
Given a loss function [7] (which we allow to explicitly depend on computational time for greater generality), the learning condition on a metapolicy and hypothesis is
Here, is the "regret bound" function which should vanish in the limit.
Some remarks on the particulars of this definition:
This framework assumes all our hypotheses are disintegrable w.r.t. the factorization into and . It is an interesting question to understand whether we should or can relax this assumption.
For example, we can imagine to be a Solomonoff-like prior along the following lines. Every hypothesis comprising is defined by a Turing machine with access to two oracles representing and two tapes of random and "ambiguous" bits respectively. is defined by running with one oracle fielding queries about (i.e. given a program, we can request to know its counterpossible output ) and the other oracle fielding queries about some s.t. we want to decide whether for . is only allowed to return NO if there was at least one query to which the two oracles gave different answers.
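A loose Python sketch of just the "NO requires a witnessed disagreement" constraint described above. The query interface, and the way the constraint is enforced (coercing an illegal NO to YES), are assumptions made purely for illustration, not the actual construction.

```python
# Loose sketch of the constraint "only return NO if the two oracles disagreed on
# some query". The query interface, and coercing an illegal NO to YES, are
# assumptions for illustration; they are not the actual construction.
from typing import Callable, Dict

Oracle = Callable[[str], str]

def check_membership(
    hypothesis_machine: Callable[[Oracle, Oracle], str],
    oracle_math: Oracle,        # fields queries about counterpossible outputs
    oracle_candidate: Oracle,   # fields queries about the candidate being tested
) -> str:
    """Run the hypothesis machine with wrapped oracles; veto a NO verdict unless
    some query was answered differently by the two oracles."""
    answers_math: Dict[str, str] = {}
    answers_candidate: Dict[str, str] = {}

    def wrap(oracle: Oracle, log: Dict[str, str]) -> Oracle:
        def wrapped(q: str) -> str:
            log[q] = oracle(q)
            return log[q]
        return wrapped

    verdict = hypothesis_machine(wrap(oracle_math, answers_math),
                                 wrap(oracle_candidate, answers_candidate))
    disagreement = any(
        q in answers_candidate and answers_candidate[q] != a
        for q, a in answers_math.items()
    )
    if verdict == "NO" and not disagreement:
        return "YES"  # an illegal NO: no disagreement was witnessed
    return verdict
```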
We use the "duration" interpretation for simplicity, but more generally can be some parameter controlling the computing resources available in the Training phase, and we can also allow the computing resources of the Controller to scale with .
The reason we restrict to rational numbers is that we need a notion of computing the distribution. It is in principle possible to generalize further to computable numbers. On the other hand, it might be more realistic to constrain even further to e.g. dyadic rationals (which can be implemented via fair coin flips; see the sketch after these notes). We stick to for simplicity.
We let the Learner interact with the Simulator for timesteps, producing some output , and then run the Controller with as an input.
This is not technically a distance, since two distinct policies can be at distance zero as long as they only disagree on histories that are inconsistent with these policies. Such policies are morally equivalent.
We could also allow to have a argument, but then we would have to remove the factor from the learning condition, because the choice of policy would matter intrinsically even if the agent doesn't exist. Alternatively, we could modify the definition of to avoid that. Or perhaps use some normalization factor more complicated than .
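Returning to the note above about dyadic rationals: here is a minimal Python sketch of sampling from a dyadic-rational distribution using only fair coin flips. The function and variable names are mine, purely for illustration.

```python
# Sketch: sampling from a distribution whose probabilities are dyadic rationals,
# using only fair coin flips. Names and interface are illustrative assumptions.
import random
from fractions import Fraction
from typing import Dict, Hashable

def sample_dyadic(dist: Dict[Hashable, Fraction]) -> Hashable:
    """Sample from `dist`; every probability must have a power-of-two denominator."""
    assert sum(dist.values()) == 1
    # Common denominator 2^k of all probabilities.
    k = max(p.denominator for p in dist.values()).bit_length() - 1
    # k fair coin flips give a uniform integer r in [0, 2^k).
    r = 0
    for _ in range(k):
        r = (r << 1) | random.randint(0, 1)
    # Pick the outcome whose sub-interval of [0, 2^k) contains r.
    threshold = 0
    for outcome, p in dist.items():
        threshold += int(p * 2 ** k)
        if r < threshold:
            return outcome
    raise AssertionError("unreachable when the probabilities sum to 1")
```

For example, `sample_dyadic({"heads": Fraction(1, 2), "tails": Fraction(1, 4), "edge": Fraction(1, 4)})` uses exactly two flips per sample.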
No? The elements of an operad have fixed arity. When defining a free operad you need to specify the arity of every generator.
Another excellent catch, kudos. I've really been sloppy with this shortform. I corrected it to say that we can approximate the system arbitrarily well by VNM decision-makers. Although, I think it's also possible to argue that a system that selects a non-exposed point is not quite maximally influential, because it's selecting something that's very close to delegating some decision power to chance.
Also, maybe this cannot happen when is the inverse limit of finite sets? (As is the case in sequential decision making with finite action/observation spaces). I'm not sure.
Example: Let , and consist of the probability intervals , and . Then, it is (I think) consistent with the desideratum to have .
Not only does interpreting this example require an unusual decision rule (which I will be calling a "utility hyperfunction"), but applying any ordinary utility function to it yields a non-unique maximum. This is another point in favor of the significance of hyperfunctions.
You're absolutely right, good job! I fixed the OP.
TLDR: Systems with locally maximal influence can be approximately described as VNM decision-makers.
There are at least 3 different motivations leading to the concept of "agent" in the context of AI alignment:
Motivation #1 naturally suggests a descriptive approach, motivation #2 naturally suggests a prescriptive approach, and motivation #3 is sort of a mix of both: on the one hand, we're describing something that already exists; on the other hand, the concept of "preferences" inherently comes from a normative perspective. There are also reasons to think these different motivations should converge on a single, coherent concept.
Here, we will focus on motivation #1.
A central reason why we are concerned about powerful unaligned agents is that they are influential. Agents are the sort of system that, when instantiated in a particular environment, is likely to heavily change this environment, potentially in ways inconsistent with the preferences of other agents.
Consider a nice space[1] of possible "outcomes", and a system that can choose[2] out of a closed set of distributions . I propose that an influential system should satisfy the following desideratum:
The system cannot select which can be represented as a non-trivial lottery over other elements in . In other words, has to be an extreme point of the convex hull of .
Why? Because a system that selects a non-extreme point leaves something to chance. If the system can force outcome , or outcome but chooses instead outcome , for and , this means the system gave up on its ability to choose between and in favor of a -biased coin. Such a system is not "locally[3] maximally" influential[4].
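Spelling the desideratum out in symbols (the notation $D$ for the feasible set of distributions and $\mu^*$ for the selected one is mine):

```latex
% The selected distribution must be an extreme point of conv(D) (notation mine):
\[
  \neg\,\exists\; \mu_1, \mu_2 \in \mathrm{conv}(D),\ p \in (0,1): \quad
  \mu_1 \neq \mu_2 \;\wedge\; \mu^* = p\,\mu_1 + (1-p)\,\mu_2 .
\]
```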
[EDIT: The original formulation was wrong, h/t @harfe for catching the error.]
The desideratum implies that there is a convergent sequence of utility functions s.t.
In other words, such a system can be approximated by a VNM decision-maker within any precision. For finite , we don't need the sequence; instead, there is some s.t. is the unique maximum of over . This observation is mathematically quite simple, but I haven't seen it made elsewhere (though I would not be surprised if it appears somewhere in the decision theory literature).
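For intuition, here is a sketch of the standard argument in the finite-dimensional case (the notation $u$, $C$, $\mu^*$ is mine): an exposed point of the convex hull is the unique maximizer of the expectation of some utility function, and by Straszewicz's theorem every extreme point is a limit of exposed points, which is where the convergent sequence of utility functions comes from.

```latex
% Sketch, finite outcome space X (so distributions live in a simplex in R^{|X|}); notation mine.
% mu* is an exposed point of C := conv(D) iff some utility function u : X -> R
% attains its expectation-maximum over C uniquely at mu*:
\[
  \exists\, u : X \to \mathbb{R} \quad \text{s.t.} \quad
  \mathbb{E}_{\mu^*}[u] \;>\; \mathbb{E}_{\mu}[u]
  \qquad \text{for all } \mu \in C,\ \mu \neq \mu^* .
\]
% Straszewicz's theorem: every extreme point of a compact convex subset of R^n is a
% limit of exposed points; applying it to mu* gives a sequence u_n of utility functions
% whose unique maximizers converge to mu*.
```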
Now, let's say that the system is choosing out of a set of credal sets (crisp infradistributions) . I propose the following desideratum:
[EDIT: Corrected according to a suggestion by @harfe, original version was too weak.]
Let be the closure of w.r.t. convex combinations and joins[5]. Let be selected by the system. Then:
The justification is that a locally maximally influential system should leave the outcome neither to chance nor to ambiguity (the two types of uncertainty we have with credal sets).
We would like to say that this implies that the system is choosing according to maximin relatively to a particular utility function. However, I don't think this is true, as the following example shows:
Example: Let , and consist of the probability intervals , and . Then, it is (I think) consistent with the desideratum to have .
Instead, I have the following conjecture:
Conjecture: There exists some space , some and convergent sequence s.t.
As before, the maxima should be unique.
Such a "generalized utility function" can be represented as an ordinary utility function with a latent -valued variable, if we replace with defined by
However, using utility functions constructed in this way leads to issues with learnability, which probably means there are also issues with computational feasibility. Perhaps in some natural setting, there is a notion of "maximally influential under computational constraints" which implies an "ordinary" maximin decision rule.
This approach does rule out optimistic or "mesomistic" decision-rules. Optimistic decision makers tend to give up on influence, because they believe that "nature" would decide favorably for them. Influential agents cannot give up on influence, therefore they should be pessimistic.
What would be the implications in a sequential setting? That is, suppose that we have a set of actions , a set of observations , , a prior and
In this setting, the result is vacuous because of an infamous issue: any policy can be justified by a contrived utility function that favors it. However, this is only because the formal desideratum doesn't capture the notion of "influence" sufficiently well. Indeed, a system whose influence boils down entirely to its own outputs is not truly influential. What motivation #1 asks of us is to talk about systems that influence the world-at-large, including relatively "faraway" locations.
One way to fix some of the problem is to take and define accordingly. This singles out systems that have influence over their observations rather than only their actions, which is already non-vacuous (not every policy is like that). However, such a system can still be myopic. We can take this further, and select for "long-term" influence by projecting onto late observations or some statistics over observations. However, in order to talk about actually "far-reaching" influence, we probably need to switch to the infra-Bayesian physicalism setting. There, we can set , i.e. select for systems that have influence over physically manifest computations.
I won't keep track of topological technicalities here; probably everything works at least for compact Polish spaces.
Meaning that the system has some output, and different counterfactual outputs correspond to different elements of .
I say "locally" because it refers to something like a partial order, not a global scalar measure of influence.
See also Yudkowsky's notion of efficient systems "not leaving free energy".
That is, if then their join (convex hull) is also in , and so is for every . Moreover, is the minimal closed superset of with this property. Notice that this implies is closed w.r.t. arbitrary infra-convex combinations, i.e. for any , and , we have .
Master post for selection/coherence theorems. Previous relevant shortforms: learnability constraints decision rules, AIT selection for learning.
Do you mean that seeing the opponent make dumb moves makes the AI infer that its own moves are also supposed to be dumb, or something else?
Thanks for this!
What I was saying up there is not a justification of Hurwicz' decision rule. Rather, it is that if you already accept the Hurwicz rule, it can be reduced to maximin, and for a simplicity prior the reduction is "cheap" (produces another simplicity prior).
Why accept the Hurwicz decision rule? Well, at least you can't be accused of a pessimism bias there. But if you truly want to dig deeper, we can start instead from an agent making decisions according to an ambidistribution, which is a fairly general (assumption-light) way of making decisions. I believe that a similar argument (easiest to see in the LF-dual cramble set representation) would allow reducing that to maximin on infradistributions (credal sets).
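For reference, the standard Hurwicz criterion with optimism parameter $\alpha \in [0,1]$, evaluated against a credal set $\Theta$ (notation mine), is:

```latex
% Hurwicz criterion (standard form; notation mine):
\[
  H_\alpha(\Theta) \;=\; \alpha \max_{\mu \in \Theta} \mathbb{E}_{\mu}[u]
  \;+\; (1-\alpha) \min_{\mu \in \Theta} \mathbb{E}_{\mu}[u],
\]
% which recovers maximin at alpha = 0 and maximax ("pure optimism") at alpha = 1.
```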
To make such an approach even more satisfactory, it would be good to add a justification for a simplicity ambi/infra-prior. I think this should be possible by arguing from "opinionated agents": the ordinary Solomonoff prior is the unique semicomputable one that dominates all semicomputable measures, which decision-theoretically corresponds to something like "having preferences about as many possible worlds as we can". Possibly, the latter principle can be formalized into something which ends up picking out an infra-Solomonoff prior (and, replacing "computability" by a stronger condition, some other kind of simplicity infra-prior).
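For reference, "dominates" here is meant in the standard algorithmic-information-theory sense, i.e. up to a multiplicative constant:

```latex
% Dominance of the universal semimeasure M over every lower-semicomputable semimeasure nu:
\[
  \exists\, c_{\nu} > 0 \;\; \forall x: \qquad M(x) \;\geq\; c_{\nu}\, \nu(x).
\]
```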