Reflective oracles can be approximated by computing Nash equilibria. But is there some procedure that produces a Pareto-optimal equilibrium in a game, aka, a point produced by a Cooperative oracle? It turns out there is. There are some interesting philosophical aspects to it, which will be typed up in the next post.

The result is not original to me, it's been floating around MIRI for a while. I think Scott, Sam, and Abram worked on it, but there might have been others. All I did was formalize it a bit, and generalize from the 2-player 2-move case to the n-player n-move case. With the formalism here, it's a bit hard to intuitively understand what's going on, so I'll indicate where to visualize an appropriate 3-dimensional object.


Definitions:

First, we will define cell sets and cell products.

Cell Set and Cell Product:

A family of sets is a cell set of some convex polytope in finite-dimensional Euclidean space, iff all elements of the family are compact convex polytopes, the union of the family equals the original convex polytope, and for all pairs of distinct elements of the family, the intersection has Lebesgue measure 0.
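Written out symbolically (this is just a restatement of the definition above; the symbols $Q$, $Q_t$, and $\mu$ for Lebesgue measure are my notation, not the post's):

$$\{Q_t\}_{t \in T} \text{ is a cell set of } Q \iff \text{each } Q_t \text{ is a compact convex polytope}, \quad \bigcup_{t \in T} Q_t = Q, \quad \text{and} \quad \mu(Q_t \cap Q_{t'}) = 0 \text{ for all } t \neq t'.$$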

More intuitively, a cell set is just a set of cells where they perfectly cover some space you want, the cells only overlap at boundaries, and the cells are all finitely big, convex, and include their boundaries (so a point may be part of multiple cells if it's on a boundary, that's intentional). If you make a big cube out of a bunch of little cubes arranged in a space-filling honeycomb pattern, the set of all those little cubes is a cell set of the big cube.

Now the cell product will be defined. Fix two cell sets, $A$ (a cell set of some polytope $X$) and $B$ (a cell set of some polytope $Y$). Index their elements with the index sets $I$ and $J$. Let $A_i$ be the cell in $A$ with the index $i$, and $B_j$ be the cell in $B$ with the index $j$. Define the cell product $A \times B$ as follows:

$$A \times B := \{A_i \times B_j : i \in I, j \in J\}$$

This family of sets has the following properties:

1: All elements are compact convex polytopes, because the product of two convex compact polytopes is a convex and compact polytope.

2: Given a point in $X \times Y$, the projection of that point to $X$ lies inside some cell in $A$, and the same goes for $Y$ and $B$. Therefore, the point lies in the cell that is the product of the cells in $A$ and $B$ it was projected into. Similarly, given some arbitrary point in an arbitrary cell of the product, that cell is associated with a pair of indices $(i, j)$, so the projection of the point to the space containing $A$ lies in the cell $A_i$ associated with the first index, so it lies in $X$. A similar argument applies for $Y$. Therefore, the union of all cells in the cell product of $A$ and $B$ equals the space $X \times Y$.

3: The intersection of two distinct cells with indices $(i, j)$ and $(i', j')$ is the product of the intersection of cell $A_i$ and cell $A_{i'}$, and the intersection of cell $B_j$ and cell $B_{j'}$. Since the two index pairs differ, at least one of these component intersections has Lebesgue measure 0 (and the other has finite measure), so the intersection of those two cells must also have Lebesgue measure 0.

Therefore, the cell product of any two cell sets $A$ and $B$ is a cell set of $X \times Y$.

Now, let's visualize a concrete example of a cell product. Let $X$ be the unit side length triangle, and $A$ be the set of $\epsilon$-side-length triangles in the triangular tessellation of $X$. Let $Y$ be the unit length line segment, and $B$ be the set of $\epsilon$-length line segments that go end-to-end to cover the unit length line segment. The set $X \times Y$ would be a unit side length triangular prism.

The cell product $A \times B$ would be the set of little triangular prisms of side length $\epsilon$. These little cells completely fill the big triangular prism, each one of them is the product of a cell from $A$ and a cell from $B$, and all pairs of indices from the index sets correspond to a unique little triangular prism.
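As a quick check on the sizes involved (this count is not in the original post, it just follows from the definitions): with $\epsilon = \frac{1}{x}$, the triangular tessellation of the unit triangle contains $x^2$ little triangles and the unit segment splits into $x$ little segments, so

$$|A \times B| = |A| \cdot |B| = x^2 \cdot x = x^3$$

little prisms, one for each pair of indices.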

Given a sequence of cell sets $A^{(1)}, A^{(2)}, \ldots, A^{(n)}$, let $\prod_{i=1}^{n} A^{(i)}$ be the cell product of the sequence (formed by taking pairwise cell products repeatedly).

Game Theory Definitions, With Cells:

$\Delta_i$ is the space of probability distributions over possible moves for player $i$, and $\sigma_i$ denotes an element of it. $\sigma_i(m)$ is the probability of player $i$ playing move $m$.

$S := \prod_i \Delta_i$ is the space of strategies for the finite game, a product of probability distributions over finitely many moves for each player.

$\epsilon$ is some small finite number of the form $\frac{1}{x}$, where $x$ is a large integer.

Given some player's space of probability distributions $\Delta_i$, divide it into cells as follows:

Slice it along the lines/planes/spaces where $\sigma_i(m_1)$ is a multiple of $\epsilon$, the lines/planes/spaces where $\sigma_i(m_2)$ is a multiple of $\epsilon$, and so on for every move (where $m_1, m_2, \ldots$ enumerate player $i$'s moves). As a simple example, in the 2-move case, this is splitting a line segment into smaller line segments, in the 3-move case, this is splitting a triangle into smaller triangles via a triangular tiling, and in the 4-move case, this is splitting a tetrahedron into tetrahedra and octahedra via the tetrahedral-octahedral honeycomb.

$C_i$ is the resulting cell set of $\Delta_i$. Let $K_i$ be the index set for some indexing of the cells in $C_i$.
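To make the indexing concrete in the simplest (2-move) case — this snippet is not from the original post, and the function name and the left-to-right indexing convention are mine — a strategy is the single probability $p$ of playing move 1, and the cells of $\Delta_i$ are the intervals $[k\epsilon, (k+1)\epsilon]$:

```python
from fractions import Fraction

def two_move_cells(p, eps):
    """2-move case: a strategy is the single probability p in [0, 1] of playing move 1,
    and the cells are the intervals [k*eps, (k+1)*eps].  Returns the indices of every
    cell containing p (two indices when p lands exactly on a slice boundary)."""
    p, eps = Fraction(p), Fraction(eps)
    n_cells = int(1 / eps)
    k = int(p // eps)
    cells = {min(k, n_cells - 1)}      # the cell whose interval contains p (clamped at p = 1)
    if 0 < k and p == k * eps:         # p sits on a slice boundary, so the cell to its left also contains it
        cells.add(k - 1)
    return sorted(cells)

print(two_move_cells("1/2", "1/4"))   # [1, 2] -- boundary points belong to two cells
print(two_move_cells("3/5", "1/4"))   # [2]
```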

$C := \prod_i C_i$ is defined in the obvious way, as the cell product of the cell sets $C_i$ for each player, which is also a canonical cell set for $S$.

Let $K := \prod_i K_i$. This is the index set for $C$. An element of this set, $k$, specifies a particular cell $C_k = \prod_i (C_i)_{k_i}$. Let $k_i$ be the $i$'th element of the sequence $k$. This is an element of $K_i$ and specifies a particular cell $(C_i)_{k_i}$.

Given a point $s \in S$, let $s_i$ be the $i$'th component of the point.

A Kakutani strategy for player $i$ is a set-valued function $f_i : S \to \mathcal{P}(\Delta_i)$ that is nonempty and convex for all inputs, and has closed graph.

If all players are using a Kakutani strategy, then the function $f : S \to \mathcal{P}(S)$ where $f(s) := \prod_i f_i(s)$ is nonempty and convex for all inputs, and has closed graph, and by Kakutani's fixed point theorem, there are some fixed points $s \in f(s)$ in $S$.
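For reference, the version of Kakutani's fixed point theorem being invoked is the standard one for set-valued maps (the statement below is not spelled out in the post):

$$S \subseteq \mathbb{R}^m \text{ nonempty, compact, and convex},\ f : S \to \mathcal{P}(S) \text{ nonempty- and convex-valued with closed graph} \implies \exists\, s \in S \text{ with } s \in f(s).$$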

Given a Kakutani strategy $f_i$ for player $i$ and a point $s \in S$, let $f_i^s : \Delta_i \to \mathcal{P}(\Delta_i)$ be the function such that:

$$f_i^s(\sigma) := f_i(s_1, \ldots, s_{i-1}, \sigma, s_{i+1}, \ldots, s_n)$$

This could be thought of as the function where everyone else has locked in their move, and only player $i$ may respond with some other move.

A cell is ruled out by player $i$ if there is no point $s$ in the cell such that $s_i \in f_i(s)$ (equivalently, such that $s_i$ is a fixed point of $f_i^s$). Pretty much, no matter the point in the cell, if everyone else tries to play it, player $i$ will go "nope, not playing this point". If a cell is ruled out by some player, a fixed point of $f$ cannot lie within that cell.

Given some utility function $U_i$ for each player $i$ over the outcome of the game, $U(C_k)$ is the utility vector made of the utility each player attains, assuming the center point of the cell $C_k$ is played in the game. Yes, not all points in $C_k$ attain the same utility, but for a sufficiently small $\epsilon$, the cell is really small, and for any player, the utility over the cell is well-approximated by the utility attained at the middle point in the cell. $U_i(C_k)$ is the utility that the $i$'th player attains.

Every player $i$ also has a ranking $\succ_i$ of cells, that works as follows:

If $U_i(C_k) > U_i(C_{k'})$, then $C_k \succ_i C_{k'}$.

In short, the ranking is ordered by player $i$'s utility function first and foremost.

Also, if $C_k$ is a Pareto improvement on $C_{k'}$ (that is, $U_j(C_k) \ge U_j(C_{k'})$ for every player $j$, with strict inequality for at least one), then

$C_k \succ_i C_{k'}$.

This can be extended to a ranking of a subset of cells in the obvious way, by just removing all cells outside of the subset.

The Procedure:

The following game is played: There is a set of ruled out cells in $C$. A random number generator is consulted, and used to assign every player a set of ruled out cells that they are responsible for. The players may modify their strategy to whatever they want, subject to three constraints.

1: Their new strategy must be a Kakutani function.

2: Their new strategy must rule out all cells that they are responsible for.

3: The set of cells that haven't been ruled out yet, that their new strategy rules out, must be an initial segment (starting from the least-favored end) of the ranking made by applying $\succ_i$ to the set of cells that aren't ruled out.

If there is a player who cannot modify their strategy accordingly, or if no additional cells can be removed, then the game repeats with a new random assignment, and only ends if one cell is left.

The starting set of ruled out cells is the empty set, $\emptyset$.

Intuitively, the game is something like "everyone sorts the list of outcomes by their own utility first, and by everyone else's second, and then you loop through game rounds where everyone has to implement a realistic strategy that rules out some strategies that someone else has vowed to rule out first, and also rules out personally abhorrent points if it's possible, until it's narrowed down and everyone agrees which cell to play".

This isn't quite as cooperative as it seems. Sorting equivalent outcomes by Pareto-optimality for everyone else isn't motivated by purely selfish interest, but it's at least compatible with it. Everyone can simulate what strategy everyone else picked, so everyone knows which cells have gotten ruled out. And taking responsibility for ruling out someone else's cells isn't really an imposition, because if the game ended right there, those cells couldn't possibly have been played anyways.

If this process eventually converges to one cell remaining, then that cell must be on the Pareto frontier of the set of cells. The argument is very simple. If there was a Pareto improvement on the final cell, it would have had a higher rank than the final cell in everybody's ranking, and cells are only ever ruled out from the bottom of someone's ranking over the not-yet-ruled-out cells. Since the final cell was never ruled out, the Pareto improvement can't possibly have been eliminated either, contradicting the assumption that only one cell remains.
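To make the loop and this claim concrete, here is a minimal, abstracted simulation (not from the original post): it works purely at the level of cells and their center-point utility vectors, ignores the Kakutani-strategy bookkeeping, and each round lets one randomly chosen player rule out a single least-favored remaining cell. The function names and the example utility vectors are made up for illustration.

```python
import random

def pareto_dominates(u_a, u_b):
    """u_a is a Pareto improvement on u_b: at least as good for every player, strictly better for one."""
    return all(x >= y for x, y in zip(u_a, u_b)) and any(x > y for x, y in zip(u_a, u_b))

def ranked_above(i, u_a, u_b):
    """Player i ranks the cell with utilities u_a above the cell with utilities u_b."""
    return u_a[i] > u_b[i] or pareto_dominates(u_a, u_b)

def run_procedure(cell_utilities, seed=0):
    """cell_utilities: dict mapping cell id -> tuple of utilities, one per player."""
    rng = random.Random(seed)
    n_players = len(next(iter(cell_utilities.values())))
    remaining = set(cell_utilities)
    while len(remaining) > 1:
        i = rng.randrange(n_players)   # a randomly chosen player acts this round
        # A least-favored remaining cell for player i: no remaining cell is ranked below it.
        minimal = [c for c in remaining
                   if not any(ranked_above(i, cell_utilities[c], cell_utilities[b])
                              for b in remaining if b != c)]
        remaining.remove(rng.choice(minimal))
    return remaining.pop()

# Made-up example: three players, four cells identified by their center-point utility vectors.
cells = {"A": (2, 2, 3), "B": (1, 3, 2), "C": (3, 1, 1), "D": (2, 2, 2)}
winner = run_procedure(cells, seed=42)
# The surviving cell admits no Pareto improvement among the original cells (the claim above).
assert not any(pareto_dominates(u, cells[winner]) for u in cells.values())
print(winner, cells[winner])
```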

The hard part is showing that this process eventually converges to one cell remaining and doesn't get stuck. It will be proved by assuming there are at least 2 cells that have not been ruled out, and showing that there exists an assignment of cells to players that results in at least one more cell getting ruled out, so the game can always progress.

Proof of Above-Mentioned Hard Part:

Let $R$ be the set of ruled-out cells.

Assuming there is some cell $C_b$ (with $b \in K$) being used as a backup cell:

Let the domain of player $i$, $D_i$, be the set of cells

$$D_i := \{C_k : k_j = b_j \text{ for all } j > i\} \setminus \{C_k : k_j = b_j \text{ for all } j \ge i\}$$

By the way the cell product was defined, both of these sets are subsets of $C$. To visualize it, consult this image of a cube in Minecraft. The glowstone is the backup cell, the green is the domain of player 3, the black is the domain of player 2, and the purple is the domain of player 1. This can be verified by plugging into the definition of domain. For player 3, the first half of the domain specifies all the cells in the cube, and the second half specifies the plane of cells whose third coordinate equals the third coordinate of the backup cell, $b_3$. Thus, all the cells that don't share the same plane as the backup point are green/in the domain of player 3. For player 2, the definition specifies the plane of black cells, minus the column of purple cells. And so on.
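Here is a small sanity check of that picture (not from the post; the 3x3x3 grid, the backup index, and the function name are mine, and players are 0-indexed in the code, so "player 2" below is the post's player 3):

```python
from itertools import product

def in_domain(i, k, b):
    """Is cell index k in player i's domain D_i, given backup cell index b?
    Equivalent to the set-difference above: k matches b on every coordinate
    after i, but differs from b at coordinate i."""
    n = len(b)
    return all(k[j] == b[j] for j in range(i + 1, n)) and k[i] != b[i]

# A 3-player "Minecraft cube": a 3x3x3 block of cells with the backup at (1, 1, 1).
b = (1, 1, 1)
cells = list(product(range(3), repeat=3))
domains = {i: [k for k in cells if in_domain(i, k, b)] for i in range(3)}
print([len(domains[i]) for i in range(3)])   # [2, 6, 18]: the column, the plane minus the column, the rest
# Together the domains cover every cell except the backup, each exactly once.
assert sorted(sum(domains.values(), [])) == sorted(k for k in cells if k != b)
```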

Let the ruled out cells that player $n$ is responsible for be $R_n := R \cap D_n$, the ruled out cells for player $n-1$ be $R_{n-1} := R \cap D_{n-1}$, and so on. (Since the domains partition all the cells other than the backup cell, this assigns every ruled-out cell to exactly one player, as long as the backup cell itself isn't ruled out.)

Fix some $i$. Consider the function $f_i$ that takes a point $s \in S$ as input, measures the distance $d$ to the nearest cell in $R_i$, gets the point $s_i$, and plays the point that is distance $\max(1 - d, 0)$ along the vector from $s_i$ towards the center point of $(C_i)_{b_i}$.

This function is nonempty for all inputs. It is convex for all inputs because it plays a set consisting of a single point. It has closed graph because it is continuous. Therefore, it is a Kakutani function. The center point of $(C_i)_{b_i}$ picks out the series of cells where the $i$'th cell of the product that defines them is $(C_i)_{b_i}$, so the point that this function maps towards is outside $D_i$. Furthermore, for all points in the set of cells $R_i$, $d = 0$, so the point is moved 1 unit away from its starting point, so all these cells are ruled out. Finally, for all cells outside of $R_i$, select a point on the interior of the cell that is more than 1 away from any cells that must be ruled out. Then this function doesn't change player $i$'s coordinate of that point, so $f_i$ has a fixed point in that cell, so this cell has not been ruled out. This argument applies for all $i$.

Therefore, conditions 1 and 2 are fulfilled by this assignment of cells. To go back to the Minecraft cube, if you pick out some subset of the green glass cubes as ones to be prevented, all points in or really close to that subset would get mapped towards the plane of black glass cubes by player 3. Similarly, if you pick out some subset of the black glass cubes as ones to be prevented by player 2, player 2 would map all points on or really close to those cubes towards the column of purple glass cubes. Of course, the players don't have to use this exact Kakutani function, it just proves that for every player, there is some Kakutani function that lets them fulfill their obligations.

Finally, we will show that if there are at least 2 cells remaining, there is always a pair of cells $C_k$ and $C_{k'}$ such that $C_k$ is minimally ranked for some player $i$, and when $C_{k'}$ is fixed as the backup cell, $C_k \in D_i$ (which means it can be eliminated by player $i$ in the same Kakutani way as detailed before, finishing up the proof by showing that a cell can always get eliminated).

Take player $n$, and let $C_k$ be their least-favored cell among those not ruled out. If there exists some other cell $C_{k'}$ that isn't ruled out, with $k_n \neq k'_n$ (ie, the detested cell and the prospective backup cell have different final indices), then $C_{k'}$ can be the backup cell. Because both these cells have different final indices, $C_k \in D_n$.

The quick visual version of this is that if you delete some blocks out of the cube in the Minecraft image (these block holes correspond to cells that aren't ruled out yet), then if the most-hated block hole has a different player 3 coordinate than some other block hole, then you can plug the other block hole with the glowing block, and the resulting black plane won't contain the most-hated block hole, and the most-hated block hole will be in the green (which means it can be safely eliminated, just adopt a Kakutani function that maps everything in or near that cell to the black plane).

But what if all the cells remaining/holes in the Minecraft cube have the same final index in $K_n$/lie in the same plane? This makes it so player $n$ doesn't have a backup cell that lets them eliminate their least favorite one.

Consider player $n-1$. By the same argument as before, player $n-1$ can eliminate their most detested cell, unless all the non-eliminated cells have both the same player $n$ index and the same player $n-1$ index.

Consider player $n-2$, and so on. Either there is some player that can get a crack at eliminating their least favorite cell if the random number generator assigns things right, or all the remaining cells share the exact same sequence of indices. Since at least 2 cells remain, that would mean two distinct cells are the same cell, which is impossible.
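The same argument, written as a small search (not from the post; the function name and the representation of cells as index tuples are mine, and players are 0-indexed, so the post's player $n$ is index $n-1$ here): scan players from the last one down, and hand the first player with a usable backup their least-favored remaining cell.

```python
def find_elimination(remaining, least_favored):
    """remaining: a set of at least two distinct cell-index tuples.
    least_favored: dict mapping each player to their least-favored cell in `remaining`.
    Returns (player i, doomed cell, backup cell) with the doomed cell lying in player i's
    domain D_i relative to the backup, so player i can rule it out as described above."""
    n = len(next(iter(remaining)))
    for i in reversed(range(n)):       # try the last player first, then the next, ...
        doomed = least_favored[i]
        backups = [k for k in remaining
                   if k[i] != doomed[i]
                   and all(k[j] == doomed[j] for j in range(i + 1, n))]
        if backups:
            return i, doomed, backups[0]
    # Unreachable: if every player fails, all remaining cells share every index,
    # so two distinct cells would have to be equal.
    raise AssertionError("impossible")

# Tiny example: three remaining cells in a 3-player game, everyone's least-favored is (0, 0, 0).
remaining = {(0, 0, 0), (1, 0, 0), (0, 1, 0)}
print(find_elimination(remaining, {0: (0, 0, 0), 1: (0, 0, 0), 2: (0, 0, 0)}))
```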

Comments:

Cool! I'm happy to see this written up finally. It's been a big source of intuitions for me, so it's good to see that the proof checks out.

A possible next step to all this is to try to specify proof-based DT agents which could play this game (or something similar) based on Löbian handshakes. (In fact, part of the original motivation was to try to bring the cooperative-oracle model closer to the Löb-based cooperation you can get in prisoner's dilemma with visible source code.)

It's unfortunate that you had to add the Pareto-improvement condition to the cell rank. That part seems especially unlikely to drop out of a more general decision theory.

I think I see another serious complication:

Yes, not all points in $C_k$ attain the same utility, but for a sufficiently small $\epsilon$, the cell is really small, and for any player, the utility over the cell is well-approximated by the utility attained at the middle point in the cell.

So, for any desired utility-approximation accuracy, you can choose $\epsilon$ sufficiently small to achieve it. But a Pareto optimum in the set of middle points can be arbitrarily worse for some player than any Pareto optimum of the full game; IE, taking the midpoints can hide arbitrarily large Pareto improvements.

For example, suppose that $\epsilon = 0.001$. A Pareto optimum of the midpoints might give the utility vector (2, 2, 3) for the three players. There could be another midpoint with utilities (100, 100, 2.9999999), very near where the true game contains a point with utilities (100, 100, 3).

So, it seems the Pareto optimum of the game on midpoints which is found by the process in the post can be arbitrarily sub-optimal for all but one player, with no guarantee that this gets better as $\epsilon$ shrinks.

If you drop the Pareto-improvement condition from the cell rank, and just have "everyone sorts things by their own utility", then you won't necessarily get a Pareto-optimal outcome (within the set of cell center-points), but you will at least get a point where there are no strict Pareto improvements (no points that leave everyone better off).

The difference between the two is... let's say we've got a 2-player 2-move game that, in utility-space, makes some sort of quadrilateral. If the top and right edges join at 90 degrees, the Pareto frontier would be the point on the corner, and the set of "no strict Pareto improvements" would be the top and the right edges.

If that corner is obtuse, then both "Pareto frontier" and "no strict Pareto improvements" agree that both line edges are within the set, and if the corner is acute, then both agree that only the corner is within the set. It actually isn't much of a difference; it only manifests when the utilities for a player are exactly equal, and is easily changed by a little bit of noise.

The utility-approximation issue you pointed out seems to be pointing towards the impossibility of guaranteeing limiting to a point on the Pareto frontier (when you make the cell size smaller and smaller), precisely because of that "this set is unstable under arbitrarily small noise" issue.

But, the "set of all points that have no strict Pareto improvements by more than for all players", ie, the -fuzzed version of "set of points with no strict pareto improvement", does seem to be robust against a little bit of noise, and doesn't require the Pareto-improvement condition on everyone's ranking of cells.

So I'm thinking that if that's all we can attain (because of the complication you pointed out), then it lets us drop that inelegant Pareto-improvement condition.

I'll work on the proof that for a sufficiently small cell size $\epsilon$, you can get an outcome within $\delta$ of the set of "no strict Pareto improvements available".

Nice job spotting that flaw.