Vanessa Kosoy

Director of AI research at ALTER, where I lead a group working on the learning-theoretic agenda for AI alignment. I'm also supported by the LTFF. See also LinkedIn.

E-mail: {first name}@alter.org.il

Comments

Thanks for this!

What I was saying up there is not a justification of Hurwicz' decision rule. Rather, it is that if you already accept the Hurwicz rule, it can be reduced to maximin, and for a simplicity prior the reduction is "cheap" (produces another simplicity prior).

Why accept the Hurwicz decision rule? Well, at least you can't be accused of a pessimism bias there. But if you truly want to dig deeper, we can instead start from an agent making decisions according to an ambidistribution, which is a fairly general (assumption-light) way of making decisions. I believe that a similar argument (easiest to see in the LF-dual cramble set representation) would allow reducing that to maximin on infradistributions (credal sets).

To make such an approach even more satisfactory, it would be good to add a justification for a simplicity ambi/infra-prior. I think this should be possible by arguing from "opinionated agents": the ordinary Solomonoff prior is the unique semicomputable one that dominates all semicomputable measures, which decision-theoretically corresponds to something like "having preferences about as many possible worlds as we can". Possibly, the latter principle can be formalized into something which ends up picking out an infra-Solomonoff prior (and, replacing "computability" by a stronger condition, some other kind of simplicity infra-prior).

You now understand correctly. The reason I switch to colored operads is to add even more generality. My key use case is when the operad consists of terms-with-holes in a programming language, in which case the colors are the types of the terms/holes.

The following are my thoughts on the definition of learning in infra-Bayesian physicalism (IBP), which is also a candidate for the ultimate prescriptive agent desideratum.

In general, learning of hypotheses about the physical universe is not possible because of traps. On the other hand, learning of hypotheses about computable mathematics is possible in the limit of ample computing resources, as long as we can ignore the side effects of computations. Moreover, learning computable mathematics implies approximating Bayesian planning w.r.t. the prior about the physical universe. Hence, we focus on this sort of learning.

We consider an agent composed of three modules, which we call the Simulator, the Learner and the Controller. The agent's history consists of two phases. In the Training phase, the Learner interacts with the Simulator, and in the end produces a program for the Controller. In the Deployment phase, the Controller runs the program.

Roughly speaking:

  • The Simulator is a universal computer whose function is performing computational experiments, which we can think of as "thought experiments" or coarse-grained simulations of potential plans. It receives commands from the Learner (which computations to run / threads to start/stop) and reports to the Learner the results. We denote the Simulator's input alphabet by  and output alphabet by .
  • The Learner is the machine learning (training) module. The algorithm whose desideratum we're specifying resides here.
  • The Controller (as in "control theory") is a universal computer connected to the agent's external interface (environmental actions  and observations ). It's responsible for real-time interaction with the environment, and we can think of it as the learned policy. It is programmed by the Learner, for which purpose it has input alphabet .

We will refer to this as the SiLC architecture.
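To make the module boundaries concrete, here is a minimal Python sketch of the SiLC interfaces. All class names, method signatures and the driver function are illustrative assumptions (the formal setup only fixes the alphabets and the two-phase interaction pattern); strings stand in for the various alphabets.

```python
# Illustrative sketch only: names, types and the driver loop are assumptions,
# not part of the formal SiLC definition.
from typing import Protocol


class Simulator(Protocol):
    def step(self, command: str) -> str:
        """Run the requested computational experiment(s) and report the results."""


class Controller(Protocol):
    def load(self, program: str) -> None:
        """Receive the program produced by the Learner at the end of Training."""

    def act(self, observation: str) -> str:
        """Choose the next environmental action during Deployment."""


class Learner(Protocol):
    def train(self, simulator: Simulator, horizon: int) -> str:
        """Interact with the Simulator for `horizon` steps and emit a Controller program."""


def run_silc(learner: Learner, simulator: Simulator, controller: Controller,
             horizon: int, env, n_steps: int) -> None:
    # Training phase: the Learner only talks to the Simulator (no external observations).
    program = learner.train(simulator, horizon)
    # Deployment phase: the Controller runs the produced program against the environment.
    controller.load(program)
    obs = env.reset()
    for _ in range(n_steps):
        obs = env.step(controller.act(obs))
```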

Let  be our hypothesis class about computable mathematics. Let  be our prior about the physical universe[1]. These have to satisfy the coherence conditions

Here,  means that .

Together, these ensure that  is a coherent IBP hypothesis. Notice that for any  satisfying the first condition[2], there is a unique minimal coherent  s.t. . Moreover, given a coherent  and any , there is a unique minimal coherent  s.t. .

The duration of the Training phase will be denoted by [3]. We can think of it as "computational time".

Let the source codes of the Learner (obtained by quining), the Simulator and the Controller respectively be denoted by

Here, the  argument of  corresponds to  and  is a probability distribution in which all probabilities are rational numbers[4].

We assume that the Simulator can indeed run any computation, and that any given halting computation would run fast for . These are assumptions on  (or, on some combination of (i) , (ii) the definition of , and (iii) the support of all ) that we will not spell out here.

We will say that a policy is a mapping of type  and a metapolicy is a mapping of type 

Given any , we can compose it with  and  in the obvious way[5] to yield

In particular, we can take  for some metapolicy  by postulating no dependence on the  argument.

Denote by  the set of all policies. Given metapolicy  and , we define  by

Given any , we say that  is a -consistent counterpossible when the following conditions hold:

  • For all  and 
  • For all  and 

We denote by  the set of -consistent counterpossibles.

A (deterministic) copolicy is a mapping of signature . We denote the set of copolicies by . Given a policy  and a copolicy , we define  in the obvious way. Given policies , we define their total variation distance[6] to be
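Concretely, for finitely supported distributions the total variation part can be computed as in the sketch below; how it is lifted to a distance between policies (e.g. taking a worst case over copolicies) is an assumption of this sketch, not a quotation of the definition.

```python
# Sketch under an assumption: the policy-level distance is a worst case, over
# copolicies, of the total variation distance between induced history
# distributions. Only the finitely supported TV computation is shown here.
def total_variation(p: dict, q: dict) -> float:
    """TV distance between finitely supported distributions given as {outcome: prob}."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)


# Example: two distributions over the same pair of histories.
print(total_variation({"h1": 0.9, "h2": 0.1}, {"h1": 0.6, "h2": 0.4}))  # 0.3
```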

Given  and metapolicy , we will use the notation

Intuitively,  should be thought of as the counterfactual expectation of loss function  assuming metapolicy , while adding a "bonus" to account for "fair" treatment of randomization by the agent. More on that below.

Given a metapolicy  and , we define  by

Intuitively,  is the set of universe states for which at least one copy of the agent exists which followed the metapolicy  until computational time .

Given a loss function [7] (which we allow to explicitly depend on computational time for greater generality), the learning condition on a metapolicy  and hypothesis  is

Here,  is the "regret bound" function which should vanish in the  limit.

Some remarks on the particulars of this definition:

  • There are several reasons we impose  rather than :
  • First, we want to ignore the side effects of running computations on the Simulator (both causal side effects and "mindcrime", i.e. the direct contribution of those computations to ). This is because taking side effects into account is usually inconsistent with the unlimited experimentation needed for learning.
  • Second, learning requires trusting the reports of the Simulator, which means we should only impose the policy on copies of  that are actually connected to .
  • Third, we should also be able to trust the Controller, because otherwise we lose the semantic grounding of the agent's external interface (even though this is not necessary for learning per se).
  • On the other hand, we impose  in the computational past because that's valid additional information that doesn't interfere with the learning or decision theory.
  • The learning criterion treats computational time myopically, so that we won't have to worry about traps in computational time.
  • The reason we need randomization is that it's often necessary for efficient learning. In the simplest non-trivial examples, we learn by IID sampling a distribution over computations (e.g. we simulate the interaction between a particular policy and our physical prior ). If we sampled deterministically instead, Murphy would be able to fool us by changing behavior precisely at the sampled instances (see the toy sketch after this list).
  • The reason we need  is that randomization only helps if low-probability events can be ignored. However, if sufficiently many copies of the agent are instantiated, even a low-probability event would be detectable. Hence, we use a "forgiving" metric that assigns low loss even to distributions that technically have high loss but are close to a different distribution with low loss.
  • We can consider Newcombian problems where Omega makes decisions based on the agent's action probabilities. I suspect that if Omega's policy is Lipschitz in the agent policy, the behavior advised by the  counterfactual will converge to FDT-optimal in the limit of sufficiently many iterations.
  • Both in the case of ignoring side effects of computations and in the case of the treatment of randomization, we can be accused of departing from priorism ("updatelessness"). However, I think that this departure is justified. In the original TDT paper, Yudkowsky addressed the "Omega rewards irrationality" objection by postulating that a decision problem is "fair" when it only depends on the agent's decisions rather than on how the agent makes those decisions. Here, we use the same principle: the agent should not be judged based on its internal thought process (side effects), and it should in some sense be judged based on its decisions rather than the probabilities assigned to those decisions.
  • Also about priorism: this kind of agent will not endorse iterated-in-computational-time "logical" counterfactual mugging when the same coin is reused indefinitely, but will endorse it when a new coin is used every time, for an appropriate definition of "new" (or e.g. we switch to a new coin every  rounds). Arguably, this solves the tension between priorism and learning observed by Demski. Formulating the precise criterion for when Learning-IBP behavior converges to priorist / FDT-optimal is left for further research.
  • The dependence of  on  should ultimately involve some kind of description complexity. However, it will also involve something in the vein of "what are the computational resource bounds, according to the belief , for running certain computations, selected for their importance in testing ". In particular, we won't require the agent to learn anything about non-halting computations. Indeed, any hypothesis about such computations will either assert a time bound on running the non-halting computations (in which case it is false) or will fail to assert any such bound, in which case its learning complexity is known to be infinite.
  • We could make do without the  factor but that would make the learning criterion weaker. The presence of this factor means that, roughly speaking, regret should be low even conditional on the agent existing, which seems like a reasonable demand.
  • Given an AI designed along these principles, we might worry about the impact of the side effects that are ignored. Maybe these can produce non-Cartesian daemons. However, during the Training phase, the algorithm has no access to external observation, which arguably makes it unlikely anything inside it can learn how to cyberattack. Moreover, during the Deployment phase, any reason for concern would be mediated by the particular algorithm the Controller runs (rather than the particulars of how it's implemented), which is what we do take into account in our optimization. Finally, the agent might be able to self-modify to make itself safer: we can even intentionally give it the means to do so (as part of its action space ). This probably requires careful prior-shaping to work well.
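Here is a toy illustration of the deterministic-vs-random probing point from the list above; the numbers and the adversary model are made up purely for illustration.

```python
import random

N = 10_000   # computations the hypothesis makes claims about (made-up number)
K = 100      # computations the Learner actually probes (made-up number)

# Deterministic probing: Murphy knows the schedule in advance, so it can behave
# correctly exactly on the probed computations and misbehave everywhere else.
det_probes = set(range(K))

def behaves_well_vs_deterministic(i: int) -> bool:
    return i in det_probes

print(all(behaves_well_vs_deterministic(i) for i in det_probes))  # True: fooled

# IID probing: Murphy must commit before the probes are drawn, say to misbehaving
# on half of the computations; random probes then catch it with overwhelming
# probability (roughly 1 - 2**(-K)).
def behaves_well_committed(i: int) -> bool:
    return i >= N // 2   # misbehaves on the first half

rand_probes = random.sample(range(N), K)
print(any(not behaves_well_committed(i) for i in rand_probes))  # almost surely True
```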
  1. ^

    This framework assumes all our hypotheses are disintegrable w.r.t. the factorization into  and . It is an interesting question to understand whether we should or can relax this assumption.

  2. ^

    For example, we can imagine  to be a Solomonoff-like prior along the following lines. Every hypothesis comprising  is defined by a Turing machine  with access to two oracles representing  and two tapes of random and "ambiguous" bits respectively.  is defined by running  with one oracle fielding queries about  (i.e. given a program  we can request to know its counterpossible output ) and the other oracle fielding queries about some  s.t. we want to decide whether  for  is only allowed to return NO if there was at least one query to which the two oracles gave different answers.

  3. ^

    We use the "duration" interpretation for simplicity, but more generally  can be some parameter controlling the computing resources available in the Training phase, and we can also allow the computing resources of the Controller to scale with .

  4. ^

    The reason we restrict to rational numbers is that we need a notion of computing the distribution. It is in principle possible to generalize further to computable numbers. On the other hand, it might be more realistic to constrain even further to e.g. dyadic rationals (which can be implemented via fair coinflips). We stick to  for simplicity.
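As a small illustration of the remark about dyadic rationals and fair coinflips (the function and its interface below are made up for illustration):

```python
import random

def sample_dyadic(dist: dict, precision: int) -> str:
    """Sample from a distribution whose probabilities are dyadic rationals with
    denominator 2**precision, using only fair coin flips.
    `dist` maps outcomes to integer numerators, summing to 2**precision."""
    assert sum(dist.values()) == 2 ** precision
    # `precision` fair coin flips give a uniform integer in [0, 2**precision).
    r = sum(random.getrandbits(1) << i for i in range(precision))
    cumulative = 0
    for outcome, numerator in dist.items():
        cumulative += numerator
        if r < cumulative:
            return outcome

# Example: P(a) = 3/8, P(b) = 1/8, P(c) = 4/8.
print(sample_dyadic({"a": 3, "b": 1, "c": 4}, precision=3))
```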

  5. ^

    We let the Learner interact with the Simulator for  timesteps, producing some output , and then run the Controller with  as an input.

  6. ^

    This is not technically a distance since it is possible to have  if  so long as  and  only disagree on histories that are inconsistent with these policies. Such  and  are morally equivalent.

  7. ^

    We could also allow  to have a  argument, but then we would have to remove the  factor from the learning condition, because the choice of policy would matter intrinsically even if the agent doesn't exist. Alternatively, we could modify the definition of  to avoid that. Or perhaps use some normalization factor more complicated than .

No? The elements of an operad have fixed arity. When defining a free operad you need to specify the arity of every generator.

Another excellent catch, kudos. I've really been sloppy with this shortform. I corrected it to say that we can approximate the system arbitrarily well by VNM decision-makers. Although I think it's also possible to argue that a system that selects a non-exposed point is not quite maximally influential, because it's selecting something that's very close to delegating some decision power to chance.

Also, maybe this cannot happen when  is the inverse limit of finite sets? (As is the case in sequential decision making with finite action/observation spaces). I'm not sure.

Example: Let , and  consist of the probability intervals  and . Then, it is (I think) consistent with the desideratum to have .

 

Not only does interpreting  require an unusual decision rule (which I will be calling a "utility hyperfunction"), but applying any ordinary utility function to this example yields a non-unique maximum. This is another point in favor of the significance of hyperfunctions.

You're absolutely right, good job! I fixed the OP.

TLDR: Systems with locally maximal influence can be described as VNM decision-makers.

There are at least 3 different motivations leading to the concept of "agent" in the context of AI alignment:

  1. The sort of system we are concerned about (i.e. which poses risk)
  2. The sort of system we want to build (in order to defend from dangerous systems)
  3. The sort of systems that humans are (in order to meaningfully talk about "human preferences")

Motivation #1 naturally suggests a descriptive approach, motivation #2 naturally suggests a prescriptive approach, and motivation #3 is sort of a mix of both: on the one hand, we're describing something that already exists; on the other hand, the concept of "preferences" inherently comes from a normative perspective. There are also reasons to think these different motivations should converge on a single, coherent concept.

Here, we will focus on motivation #1.

A central reason why we are concerned about powerful unaligned agents is that they are influential. Agents are the sort of system that, when instantiated in a particular environment, is likely to heavily change this environment, potentially in ways inconsistent with the preferences of other agents.

Bayesian VNM

Consider a nice space[1]  of possible "outcomes", and a system that can choose[2] out of a closed set of distributions . I propose that an influential system should satisfy the following desideratum:

The system cannot select  which can be represented as a non-trivial lottery over other elements in . In other words,  has to be an extreme point of the convex hull of .

Why? Because a system that selects a non-extreme point leaves something to chance. If the system can force outcome , or outcome  but chooses instead outcome , for  and , this means the system gave up on its ability to choose between  and  in favor of a -biased coin. Such a system is not "locally[3] maximally" influential[4].
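For a finite choice set, the desideratum can be checked directly: the selected distribution violates it exactly when it lies in the convex hull of the remaining candidates. Below is a minimal sketch of this check as a linear-programming feasibility problem (numpy/scipy and all names are implementation choices, not part of the claim).

```python
import numpy as np
from scipy.optimize import linprog

def is_nontrivial_lottery(p: np.ndarray, others: np.ndarray) -> bool:
    """True iff distribution p (shape [n]) is a convex combination of the rows
    of `others` (shape [k, n]), i.e. choosing p leaves something to chance."""
    k, _ = others.shape
    # Feasibility LP: find lambda >= 0 with sum(lambda) = 1 and others^T lambda = p.
    A_eq = np.vstack([others.T, np.ones((1, k))])
    b_eq = np.concatenate([p, [1.0]])
    res = linprog(c=np.zeros(k), A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * k)
    return res.success

# Example over 3 outcomes: the choice set is the three point masses plus uniform.
D = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1/3, 1/3, 1/3]])
print(is_nontrivial_lottery(D[3], D[:3]))  # True: uniform is a lottery over the others
print(is_nontrivial_lottery(D[0], D[1:]))  # False: a point mass is an extreme point
```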

[EDIT: The original formulation was wrong, h/t @harfe for catching the error.]

The desideratum implies that there is a convergent sequence of utility functions  s.t.

  • For every  has a unique maximum  in .
  • The sequence  converges to .

In other words, such a system can be approximated by a VNM decision-maker to within any precision. For finite , we don't need the sequence; instead, there is some  s.t.  is the unique maximum of  over . This observation is mathematically quite simple, but I haven't seen it made elsewhere (though I would not be surprised if it appears somewhere in the decision theory literature).
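To see the finite case concretely: an extreme point of the choice set can be strictly separated from the other candidates by a linear functional, and that functional serves as the utility function. The sketch below exhibits such a functional via a max-margin LP; the box bound on the utility vector is an arbitrary normalization, and the construction is only one way to produce the claimed utility.

```python
import numpy as np
from scipy.optimize import linprog

def separating_utility(p: np.ndarray, others: np.ndarray) -> np.ndarray:
    """Find a utility vector u with <u, p> strictly greater than <u, q> for every
    row q of `others`, assuming p is an extreme point of the finite choice set."""
    k, n = others.shape
    # Variables (u_1, ..., u_n, t); maximize the margin t subject to <u, q - p> + t <= 0.
    c = np.concatenate([np.zeros(n), [-1.0]])
    A_ub = np.hstack([others - p, np.ones((k, 1))])
    b_ub = np.zeros(k)
    bounds = [(-1, 1)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    u, margin = res.x[:n], res.x[-1]
    assert margin > 0, "no strict separation: p is not an extreme point"
    return u

D = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
u = separating_utility(D[0], D[1:])
print(u @ D.T)  # D[0] attains the strict maximum of this utility over D
```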

Infra-Bayesian VNM?

Now, let's say that the system is choosing out of a set of credal sets (crisp infradistributions) . I propose the following desideratum:

[EDIT: Corrected according to a suggestion by @harfe, original version was too weak.]

Let  be the closure of  w.r.t. convex combinations and joins[5]. Let  be selected by the system. Then:

  • For any  and , if  then .
  • For any , if  then .

The justification is that a locally maximally influential system should leave the outcome neither to chance nor to ambiguity (the two types of uncertainty we have with credal sets).

We would like to say that this implies that the system is choosing according to maximin relatively to a particular utility function. However, I don't think this is true, as the following example shows:

Example: Let , and  consist of the probability intervals  and . Then, it is (I think) consistent with the desideratum to have .

Instead, I have the following conjecture:

Conjecture: There exists some space , some  and convergent sequence  s.t.

As before, the maxima should be unique.

Such a "generalized utility function" can be represented as an ordinary utility function with a latent -valued variable, if we replace  with  defined by

However, using utility functions constructed in this way leads to issues with learnability, which probably means there are also issues with computational feasibility. Perhaps in some natural setting, there is a notion of "maximally influential under computational constraints" which implies an "ordinary" maximin decision rule.

This approach does rule out optimistic or "mesomistic" decision rules. Optimistic decision-makers tend to give up on influence because they believe that "nature" will decide favorably for them. Influential agents cannot give up on influence, therefore they should be pessimistic.

Sequential Decision-Making

What would be the implications in a sequential setting? That is, suppose that we have a set of actions , a set of observations , a prior  and

In this setting, the result is vacuous because of an infamous issue: any policy can be justified by a contrived utility function that favors it. However, this is only because the formal desideratum doesn't capture the notion of "influence" sufficiently well. Indeed, a system whose influence boils down entirely to its own outputs is not truly influential. What motivation #1 asks of us is to talk about systems that influence the world at large, including relatively "faraway" locations.

One way to fix some of the problem is to take  and define  accordingly. This singles out systems that have influence over their observations rather than only their actions, which is already non-vacuous (some policies are not such). However, such a system can still be myopic. We can take this further, and select for "long-term" influence by projecting onto late observations or some statistics over observations. However, in order to talk about actually "far-reaching" influence, we probably need to switch to the infra-Bayesian physicalism setting. There, we can set , i.e. select for systems that have influence over physically manifest computations.

  1. ^

    I won't keep track of topological technicalities here; probably everything works at least for compact Polish spaces.

  2. ^

    Meaning that the system has some output, and different counterfactual outputs correspond to different elements of .

  3. ^

    I say "locally" because it refers to something like a partial order, not a global scalar measure of influence.

  4. ^

    See also Yudkowsky's notion of efficient systems "not leaving free energy".

  5. ^

    That is, if  then their join (convex hull)   is also in , and so is  for every . Moreover,  is the minimal closed superset of  with this property. Notice that this implies  is closed w.r.t. arbitrary infra-convex combinations, i.e. for any  and , we have .

Do you mean that seeing the opponent make dumb moves makes the AI infer that its own moves are also supposed to be dumb, or something else?
