I appreciate this generalization of the results - I think it's a good step towards showing the underlying structure involved here.
One point I want to comment on is the transitivity of $\geq_{\text{most}}$ as a relation on the induced functions: namely, it isn't transitive, and can even contain cycles of non-equivalent elements. (This came up when I was trying to apply a version of these results, hoping that it would be the preference relation I was looking for out of the box.) Quite possibly you noticed this, since you give 'limited transitivity' in Lemma B.1 rather than full transitivity, but to give a concrete example:
Let and . The permutations are with the usual action on . Then we have [1] (and ). This also works on retargetability directly, with being , , retargetable. Notice also that is invariant under joint permutations (constant diagonals), and I think can be represented as EU-determined, so neither of these save it.
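To make the cycle concrete, here's a small Python check. The specific values are an illustrative choice consistent with the footnote's description (a single shared orbit, constant diagonals, and two 'winning' columns for each adjacent-row comparison), not necessarily the exact numbers from my example; I'm also reading $\geq_{\text{most}}$ on a single orbit as "wins on at least as many orbit elements as it loses":

```python
# Hypothetical reconstruction of the cyclic counterexample: columns are the
# orbit elements theta_0, theta_1, theta_2; row i gives the values of f_i.
F = [
    [2, 0, 1],  # f_0
    [1, 2, 0],  # f_1
    [0, 1, 2],  # f_2
]

def most_geq(fa, fb):
    """fa >=_most fb on this single orbit: fa beats fb on at least as many
    orbit elements as fb beats fa."""
    wins_a = sum(a > b for a, b in zip(fa, fb))
    wins_b = sum(b > a for a, b in zip(fa, fb))
    return wins_a >= wins_b

for i in range(3):
    j = (i + 1) % 3
    print(f"f_{i} >=_most f_{j}: {most_geq(F[i], F[j])}, "
          f"f_{j} >=_most f_{i}: {most_geq(F[j], F[i])}")
# Every adjacent comparison comes out strictly in favour of f_i over f_{i+1},
# so the relation cycles: f_0 > f_1 > f_2 > f_0, with no two functions equal.
```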
A narrow point is that, for a non-transitive relation, I think the notation should be something other than $\geq_{\text{most}}$.
But more importantly, I think we would really rather have a transitive (or at least acyclic) relation if we want to interpret this as 'most prefer' or any kind of preference / aggregation of preferences. If our theorem gives us only an intransitive relation as our conclusion, then we should tweak it.
One way you can do this: aim for a stronger relation, like the following:
Definition (Orbit-mean dominance?): Let $f, g : \Theta \to \mathbb{R}$. Write $f \geq_{\text{mean}} g$ if, for every $\theta \in \Theta$, $\sum_{\theta' \in S_d \cdot \theta} f(\theta') \geq \sum_{\theta' \in S_d \cdot \theta} g(\theta')$.
Since the orbits are orbits under the finite group $S_d$, i.e. finite, it's easy to just sum over them. More generally, you could parameterize this with an arbitrary aggregator in place of summation; I'm not sure whether this general form or the summation case should be the focus.
This is transitive for and acyclic for[2] (consider by ); and possibly any orbit-based transitive relation is representable in basically this form[3] (with some ), since I'd guess any partial order on sets with cardinality can be represented as a pointwise inequality of functions, but I haven't thought about this too carefully.
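For concreteness, here's a minimal sketch of the sum-aggregated version (the names and the single-orbit setup are just illustrative): transitivity is immediate because the relation reduces to comparing per-orbit sums, and on the cyclic example above the strict cycle disappears because all three row sums are equal.

```python
from typing import Callable, Sequence

def orbit_sum_dominates(f: Callable, g: Callable, orbits: Sequence[Sequence]) -> bool:
    """Orbit-mean dominance with summation as the aggregator:
    on every (finite) orbit, the sum of f is at least the sum of g."""
    return all(sum(map(f, orbit)) >= sum(map(g, orbit)) for orbit in orbits)

# Transitivity reduces to transitivity of >= on the per-orbit sums.
# On the cyclic example above, every row sums to 3 over the single orbit, so
# all three functions tie under this relation -- the strict cycle disappears.
rows = [[2, 0, 1], [1, 2, 0], [0, 1, 2]]
fs = [lambda theta, row=row: row[theta] for row in rows]
orbits = [[0, 1, 2]]  # the single orbit of parameters
print([[orbit_sum_dominates(fa, fb, orbits) for fb in fs] for fa in fs])
# -> [[True, True, True], [True, True, True], [True, True, True]]
```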
With this notion of dominance, we also need a stronger version of retargetability for the main theorem to hold. For the version above, this could be:
Definition (scalar-retargetability): Write is if there exists such that for all with we have (and likewise multiply scalar-retargetable).
Then scalar-retargetability from $f$ to $g$ will imply $g \geq_{\text{mean}} f$.
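To illustrate the intended implication, here's a sketch under one reading of the condition (my paraphrase, so treat the exact inequality as an assumption): every $\theta$ in the orbit with $f(\theta) > g(\theta)$ is assigned a distinct retarget $\theta'$ in the same orbit with $g(\theta') - f(\theta') \geq C \cdot (f(\theta) - g(\theta))$. For $C \geq 1$, each of $g$'s losses is then covered by a matched, at-least-as-large gain, so the orbit sums satisfy $\sum g \geq \sum f$:

```python
def check_scalar_retargetable(f_vals, g_vals, retarget, C=1.0):
    """Check an assumed reading of C-scalar-retargetability on one orbit.
    f_vals, g_vals: dicts theta -> value.  retarget: maps each theta where
    f beats g to a distinct theta' in the same orbit."""
    losers = [t for t in f_vals if f_vals[t] > g_vals[t]]
    targets = [retarget[t] for t in losers]
    assert len(set(targets)) == len(targets), "retargets must be distinct"
    return all(
        g_vals[retarget[t]] - f_vals[retarget[t]] >= C * (f_vals[t] - g_vals[t])
        for t in losers
    )

# Toy orbit {0, 1, 2, 3}: f beats g only at theta=0, and the retarget theta=3
# gives g a margin (2.5) at least as large as f's gap (2.0), so with C=1 the
# condition holds and the orbit sums come out g-dominant.
f_vals = {0: 3.0, 1: 1.0, 2: 1.0, 3: 0.0}
g_vals = {0: 1.0, 1: 1.0, 2: 2.0, 3: 2.5}
assert check_scalar_retargetable(f_vals, g_vals, retarget={0: 3}, C=1.0)
assert sum(g_vals.values()) >= sum(f_vals.values())  # 6.5 >= 5.0
```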
And: I think many (all?) of the main power-seeking results are already secretly in this form. For example, -wise comparison of gives a preference relation identical to the relation . Assuming this also works for the other rationalities, then the cases we care about were transitive all along exactly because the relations can be expressed in this way.
What do you think?
We get the same single orbit for all $\theta$ (namely, the whole parameter set); the orbit elements where one function exceeds another are the columns where the corresponding row exceeds the other row. There are always two such columns when comparing row $i$ and row $i+1$ (mod 3). ↩︎
We exclude s.t. in this version of the definition to match the behaviour of with , and allow -scalar-retargetability to imply . There's a case that you should include them, in which case you do get transitivity, and even the stronger property: if , then . I think this corresponds to looking at likelihood ratios of vs. . ↩︎
Compare also what would give you a total order (instead of a partial order): aggregating over all of $\Theta$ at once, instead of aggregating orbitwise at each $\theta$. ↩︎
This is a nice contribution, thank you!
I agree with the parts I could verify within about 10 minutes of staring (it's been a while). The scalar-retargetability is nice, and I like the delineation of what definitions yield what properties. Seems like an additional hour of work would yield a good AF post, where I'd expect most of the useful additional work to come from fleshing out the example more and justifying the claims in a bit more detail.
To clarify:
This also works on retargetability directly, with being , , retargetable. Notice also that is invariant under joint permutations (constant diagonals), and I think can be represented as EU-determined, so neither of these save it.
What are here?
Thanks for the reply. I'll clean this up into a standalone post and/or cover this in a related larger post I'm working on, depending on how some details turn out.
What are here?
Variables I forgot to rename, when I changed how I was labelling the arguments of in my example. This should be , , retargetable (as arguments to ).
I'm finally engaging with this after having spent too long afraid of the math. Initial thoughts:
This paper, accepted as a poster at NeurIPS 2022, is the sequel to Optimal Policies Tend to Seek Power. The new theoretical results are extremely broad, discarding the requirements of full observability, optimal policies, and even a finite number of options.
Abstract:
Examples of agent designs the power-seeking theorems now apply to:
The key insight is that the original results hinge not on optimality per se, but on the retargetability of the policy-generation process via a reward or utility function or some other parameter. See Satisficers Tend To Seek Power: Instrumental Convergence Via Retargetability for intuitions and illustrations.
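A toy illustration (mine, not from the paper) of what "retargetable" means here: the decision procedure can be an argmax, a satisficer, or a noisy sampler; what matters is that permuting the utility parameter over options redirects which option gets chosen.

```python
import math
import random

# Toy example: two hypothetical options and three qualitatively different
# decision procedures, all retargetable via the utility parameter u.
options = ["shutdown", "stay_alive"]

def optimal(u):               # pick the u-maximizing option
    return max(options, key=lambda o: u[o])

def satisficer(u, bar=0.5):   # pick uniformly among options clearing the bar
    ok = [o for o in options if u[o] >= bar] or options
    return random.choice(ok)

def boltzmann(u, temp=0.1):   # softmax sampling, mostly follows u
    weights = [math.exp(u[o] / temp) for o in options]
    return random.choices(options, weights=weights)[0]

u = {"shutdown": 1.0, "stay_alive": 0.0}
u_retargeted = {"shutdown": 0.0, "stay_alive": 1.0}  # permute the parameter
for decide in (optimal, satisficer, boltzmann):
    print(decide.__name__, decide(u), "->", decide(u_retargeted))
```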
Why am I only now posting this?
First, I've been way more excited about shard theory. I still think these theorems are really cool, though.
Second, I think the results in this paper are informative about the default incentives for decision-makers which "care about things," i.e., which make decisions on the basis of e.g. how many diamonds or how many paperclips that decision leads to, and so on. However, I think that conventional accounts and worries around "utility maximization" are subtly misguided. Whenever I imagined posting this paper, I felt like "ugh, sharing this result will just make it worse." I'm not looking to litigate that concern right now, but I do want to flag it.
Third, Optimal Policies Tend to Seek Power makes the "reward is the optimization target" mistake super strongly. Parametrically retargetable decision-makers tend to seek power makes the mistake less severely, both because it discusses utility functions and learned policies instead of optimal policies, and also thanks to edits I've made since realizing my optimization-target mistake.
Conclusion
This paper isolates the key mechanism—retargetability—which enables the results in Optimal Policies Tend to Seek Power. This paper also takes healthy steps away from the optimal policy regime (which I consider to be a red herring for alignment) and lays out a bunch of theory I found—and still find—beautiful.
This paper is published in a top-tier conference and, unlike the previous paper, actually has a shot at being applicable to realistic agents and training processes. Therefore, compared to the original[1] optimal policy paper, I think this paper is better for communicating concerns about power-seeking to the broader ML world.
I've since updated the optimal policy paper with disclaimers about Reward is not the optimization target, so the updated version is at least passable in this regard. I still like the first paper, am proud of it, and think it was well-written within its scope. It also takes a more doomy tone about AGI risk, which seems good to me.