Intelligent Agent Foundations Forum
Distributed Cooperation
post by Alex Appel 70 days ago | Abram Demski and Scott Garrabrant like this | 2 comments

Reflective oracles can be approximated by computing Nash equilibria. But is there some procedure that produces a Pareto-optimal equilibrium in a game, aka, a point produced by a Cooperative oracle? It turns out there is. There are some interesting philosophical aspects to it, which will be typed up in the next post.

The result is not original to me, it’s been floating around MIRI for a while. I think Scott, Sam, and Abram worked on it, but there might have been others. All I did was formalize it a bit, and generalize from the 2-player 2-move case to the n-player n-move case. With the formalism here, it’s a bit hard to intuitively understand what’s going on, so I’ll indicate where to visualize an appropriate 3-dimensional object.


First, we will define cell sets and cell products.

Cell Set and Cell Product:

A family of sets is a cell set of some convex polytope in finite-dimensional euclidean space, iff all elements of the family are compact convex polytopes, the union of the family equals the original convex polytope, and for all pairs of elements of the family, the intersection has Lebesgue measure 0.

More intuitively, a cell set is just a set of cells where they perfectly cover some space you want, the cells only overlap at boundaries, and the cells are all finitely big, convex, and include their boundaries (so a point may be part of multiple cells if it’s on a boundary, that’s intentional). If you make a big cube out of a bunch of little cubes arranged in a space-filling honeycomb pattern, the set of all those little cubes is a cell set of the big cube.
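A quick 1-D sketch of these conditions in Python, with cells taken to be closed subintervals of the unit interval (the function names here are made up for illustration):

```python
from fractions import Fraction

def interval_cell_set(n):
    """Cells [k/n, (k+1)/n] covering the unit interval [0, 1]."""
    return [(Fraction(k, n), Fraction(k + 1, n)) for k in range(n)]

def overlap_length(c1, c2):
    """Lebesgue measure of the intersection of two closed intervals."""
    lo, hi = max(c1[0], c2[0]), min(c1[1], c2[1])
    return max(hi - lo, 0)

cells = interval_cell_set(10)
# The cells cover [0, 1] end to end (adjacent cells share a boundary point)...
assert cells[0][0] == 0 and cells[-1][1] == 1
assert all(a[1] == b[0] for a, b in zip(cells, cells[1:]))
# ...and any two distinct cells intersect in a measure-zero set.
assert all(overlap_length(cells[i], cells[j]) == 0
           for i in range(10) for j in range(10) if i != j)
```

Adjacent cells share an endpoint, so a boundary point genuinely belongs to two cells, as noted above.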

Now the cell product will be defined. Fix two cell sets, \(A^{\diamond}\) and \(B^{\diamond}\). Index their elements with the index sets \(I_{A}\) and \(I_{B}\). Let \(C_{A,i}\) be the cell in \(A^{\diamond}\) with the index \(i\in I_{A}\). Define the cell product \(\tilde{\times}\) as follows:

\[A^{\diamond}\tilde{\times}B^{\diamond}:=\{C_{A,i}\times C_{B,j}|(i,j)\in I_{A}\times I_{B}\}\]

This family of sets has the following properties:

1: All elements are compact convex polytopes, because the product of two convex compact polytopes is a convex and compact polytope.

2: Given a point in \(A\times B\), the projection of that point to \(A\) lies inside some cell in \(A^{\diamond}\), and the same goes for \(B\). Therefore, the point lies in the cell that is the product of the cells in \(A^{\diamond}\) and \(B^{\diamond}\) it was projected into. Similarly, given some arbitrary point in an arbitrary cell of the product, that cell is associated with a pair of indices, so the projection of the point to the space containing \(A\) lies in the cell associated with the first index, so it lies in \(A\). A similar argument applies for \(B\). Therefore, the union of all cells in the cell product of \(A^{\diamond}\) and \(B^{\diamond}\) equals the space \(A\times B\).

3: The intersection of two cells with indices \((i,j)\) and \((k,l)\) is the product of the intersection of cell \(i\) and cell \(k\), and the intersection of cell \(j\) and cell \(l\). All these intersections have Lebesgue measure 0, so the intersection of those two cells must also have Lebesgue measure 0.

Therefore, the cell product of any two cell sets \(A^{\diamond}\) and \(B^{\diamond}\) is a cell set of \(A\times B\).

Now, let’s visualize a concrete example of a cell product. Let \(A\) be the unit side length triangle, and \(A^{\diamond}\) be the set of \(\epsilon\)-length triangles in the triangular tessellation of \(A\). Let \(B\) be the unit length line segment, and \(B^{\diamond}\) be the set of \(\epsilon\)-length line segments that go end-to-end to cover the unit length line segment. The set \(A\times B\) would be a unit side length triangular prism.

The cell product \(A^{\diamond}\tilde{\times}B^{\diamond}\) would be the set of little triangular prisms of side length \(\epsilon\). These little cells completely fill the big triangular prism, each one of them is the product of a cell from \(A^{\diamond}\) and a cell from \(B^{\diamond}\), and all pairs of indices from the index sets correspond to a unique little triangular prism.
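The definition can be sketched with each factor in one dimension, so the product cells are little rectangles rather than prisms (illustrative code; the interval representation is an assumption of this sketch):

```python
from itertools import product as cartesian

def cell_product(cells_a, cells_b):
    """Cell product: one product cell per pair of indices (i, j)."""
    return {(i, j): (ca, cb)
            for (i, ca), (j, cb) in cartesian(enumerate(cells_a),
                                              enumerate(cells_b))}

# 2 cells of one segment and 4 of another give 2 * 4 rectangular cells,
# mirroring how the eps-prisms above tile the triangular prism.
segs_a = [(0.0, 0.5), (0.5, 1.0)]
segs_b = [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]
boxes = cell_product(segs_a, segs_b)
assert len(boxes) == len(segs_a) * len(segs_b)   # one cell per index pair
assert boxes[(0, 3)] == ((0.0, 0.5), (0.75, 1.0))
```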

Given a sequence of \(m\) cell sets, let \(A_{\le m}^{\diamond}\) be the cell product of the sequence.

\[A_{\le m}^{\diamond}:=\displaystyle\tilde{\prod_{k\le m}}A_{k}^{\diamond}\]

Game Theory Definitions, With Cells:

\(\Delta_{m}\) is the simplex of probability distributions over possible moves for player \(m\). Abusing notation slightly, \(\Delta_{m}(A)\) is the probability of player \(m\) playing move \(A\).

\(S:=\Delta_{1}\times\Delta_{2}...\times\Delta_{n}\) is the space of strategies for the finite game, a product of probability distributions over finitely many moves for each player.

\(\epsilon\) is some small positive number of the form \(\frac{1}{x}\), where \(x\) is a large integer.

Given the simplex \(\Delta_{m}\), divide it into cells as follows:

Slice it along the lines/planes/spaces where \(\Delta_{m}(A)=\epsilon,2\epsilon...,1-\epsilon\), and the lines/planes/spaces where \(\Delta_{m}(B)=\epsilon,2\epsilon...,1-\epsilon\), and so on. As a simple example, in the 2-move case, this is splitting a line segment into smaller line segments, in the 3-move case, this is splitting a triangle into smaller triangles via a triangular tiling, and in the 4-move case, this is splitting a tetrahedron into tetrahedra and octahedra via the tetrahedral-octahedral honeycomb.
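For the 2-move case, the slicing can be written down explicitly; a hedged sketch (higher-move cases need actual simplex tilings and aren't attempted here):

```python
from fractions import Fraction

def two_move_cells(x):
    """Cells of the 2-move simplex {(p, 1-p) : 0 <= p <= 1}, sliced where
    the first move's probability equals eps, 2*eps, ..., 1-eps, with
    eps = 1/x.  Each cell is the pair of its endpoint distributions."""
    eps = Fraction(1, x)
    return [((k * eps, 1 - k * eps), ((k + 1) * eps, 1 - (k + 1) * eps))
            for k in range(x)]

cells = two_move_cells(10)
assert len(cells) == 10          # 1/eps little segments
assert cells[0][0] == (0, 1)     # one end: always play the second move
assert cells[-1][1] == (1, 0)    # other end: always play the first move
```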

\(\Delta_{m}^{\diamond}\) is the resulting cell set of \(\Delta_{m}\). Let \(I_{m}\) be the index set for some indexing of the cells in \(\Delta_{m}^{\diamond}\).

\(S^{\diamond}\) is defined in the obvious way, as the cell product of the cell sets for each player, which is also a canonical cell set for \(S\).

Let \(\vec{I}:=\prod_{m\le n}I_{m}\). This is the index set for \(S^{\diamond}\). An element of this set, \(\vec{i}\), specifies a particular cell \(C_{\vec{i}}\in S^{\diamond}\). Let \(\vec{i}_{m}\) be the \(m\)’th element of the sequence \(\vec{i}\). This is an element of \(I_{m}\) and specifies a particular cell \(C_{\vec{i}_{m}}\in\Delta_{m}^{\diamond}\).

Given a point \(s\in S\), let \(s_{k}\in\Delta_{k}\) be the \(k\)’th component of the point.

A Kakutani strategy for player \(m\) is a function \(f_{m}:S\to\mathcal{P}(\Delta_{m})\) whose output is nonempty and convex for all inputs, and whose graph is closed.

If all players are using a Kakutani strategy, then the function \(f\) defined by \(f(s):=\prod_{m\le n}f_{m}(s)\) also has nonempty convex outputs and closed graph, so by Kakutani’s fixed-point theorem, \(f\) has a fixed point in \(S\).

Given a Kakutani strategy \(f_{m}\) for player \(m\), let the function \(F_{m}:S\to\mathcal{P}(S)\) be the function such that:

\[F_{m}(s):=\left(\prod_{k\le n,k\neq m}\{s_{k}\}\right)\times f_{m}(s)\]

This could be thought of as the function where everyone else has locked in their move, and only player \(m\) may respond with some other move.
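As a sketch, assuming for simplicity that \(f_{m}\) returns a single point rather than a set (as the explicit strategy constructed later in the post does):

```python
# Sketch of F_m: everyone except player m locks in their component of s,
# and player m responds with f_m(s).  Single-point strategies only.
def F(m, s, f_m):
    return tuple(f_m(s) if k == m else s_k for k, s_k in enumerate(s))

# Player 1 (0-indexed) deviates to 0.5 while players 0 and 2 stay put:
s = (0.1, 0.2, 0.3)
assert F(1, s, lambda s: 0.5) == (0.1, 0.5, 0.3)
# A point s is a fixed point for player m exactly when s == F(m, s, f_m):
assert F(1, s, lambda s: s[1]) == s
```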

A cell is ruled out by player \(m\) if there is no point \(s\) in the cell such that \(s\in F_{m}(s)\). Pretty much, no matter the point in the cell, if everyone else tries to play it, player \(m\) will go “nope, not playing this point”. If a cell is ruled out by some player, the fixed point of \(f\) cannot lie within that cell.

Given some utility function for each player over the outcome of the game, \(\vec{U}(C_{\vec{i}})\) is the utility vector made of the utility each player attains, assuming the center point of the cell \(C_{\vec{i}}\) is played in the game. Yes, not all points in \(C_{\vec{i}}\) attain the same utility, but for a sufficiently small \(\epsilon\), the cell is really small, and for any player, the utility over the cell is well-approximated by the utility attained at the middle point in the cell. \(\vec{U}(C_{\vec{i}})_{m}\) is the utility that the \(m\)’th player attains.

Every player also has a ranking of cells, \(R_{m}:S^{\diamond}\to\mathbb{N}\) that works as follows:

\[\vec{U}(C_{\vec{i}})_{m}<\vec{U}(C_{\vec{i}'})_{m}\to R_{m}(C_{\vec{i}})<R_{m}(C_{\vec{i}'})\]

In short, the ranking is ordered by player \(m\)’s utility function first and foremost.

Also, if \(C_{\vec{i}'}\) is a Pareto improvement on \(C_{\vec{i}}\) (at least as good for every player, and strictly better for at least one), then

\[R_{m}(C_{\vec{i}})<R_{m}(C_{\vec{i}'})\]

This can be extended to a ranking of a subset of cells in the obvious way, by just removing all cells outside of the subset.
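One concrete ranking satisfying both conditions sorts by own utility first and by the sum of everyone else's utilities second (among own-utility ties, a Pareto improvement strictly raises that sum, so it always ranks higher). A sketch with made-up names:

```python
def ranking(m, utilities):
    """Return R_m as a dict: cell index -> rank, with 0 the lowest rank.
    Sort key: player m's own utility first, everyone else's sum second.
    Illustrative only; any ranking meeting the two conditions works."""
    def key(i):
        u = utilities[i]
        return (u[m], sum(u) - u[m])
    return {i: r for r, i in enumerate(sorted(range(len(utilities)), key=key))}

# Three cells with utility vectors for 2 players.  From player 0's view,
# cell 1 Pareto-improves on cell 0 (own utilities tie), so it ranks above:
R0 = ranking(0, [(1, 0), (1, 5), (2, 0)])
assert R0[0] < R0[1] < R0[2]
```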

The Procedure:

The following game is played: There is a set of ruled out cells in \(S^{\diamond}\). A random number generator is consulted, and used to assign every player a set of ruled out cells that they are responsible for. The players may modify their strategy \(f_{m}\) to whatever they want, subject to three constraints.

1: Their new strategy must be a Kakutani function.

2: Their new strategy must rule out all cells that they are responsible for.

3: The set of cells that haven’t been ruled out yet, that their new strategy rules out, must be an initial segment of the ranking made by applying \(R_{m}\) to the set of cells that aren’t ruled out.

If, under a given assignment, some player cannot modify their strategy accordingly, or no additional cells can be removed, the round is redone with a new random assignment. The game repeats round after round, and ends only when one cell is left.

The starting set of ruled out cells is \(\emptyset\).

Intuitively, the game is something like “everyone sorts the list of outcomes by their own utility first, and by everyone else’s second, and then you loop through game rounds where everyone has to implement a realistic strategy that rules out some strategies that someone else has vowed to rule out first, and also rules out personally abhorrent points if it’s possible, until it’s narrowed down and everyone agrees which cell to play”.

This isn’t quite as cooperative as it seems. Sorting equivalent outcomes by Pareto-optimality for everyone else isn’t motivated by purely selfish interest, but it’s at least compatible with it. Everyone can simulate what strategy everyone else picked, so everyone knows which cells have gotten ruled out. And taking responsibility for ruling out someone else’s cells isn’t really an imposition, because if the game ended right there, those cells couldn’t possibly have been played anyways.

If this process eventually converges to one cell remaining, then that cell must be on the Pareto frontier of the set of cells. The argument is very simple. If there were a Pareto improvement on the final cell, it would have ranked above the final cell in everybody’s ranking (by the Pareto condition on the \(R_{m}\)), so it couldn’t possibly have been eliminated first.
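A toy simulation of the elimination order (this abstracts away the Kakutani strategies and the random assignment entirely; each round, one player simply deletes the lowest-ranked surviving cell of their own ranking):

```python
def pareto_optimal(i, utilities, alive):
    """True iff no cell in `alive` is at least as good for everyone and
    strictly better for someone, compared to cell i."""
    return not any(all(a >= b for a, b in zip(utilities[j], utilities[i]))
                   and any(a > b for a, b in zip(utilities[j], utilities[i]))
                   for j in alive if j != i)

def eliminate(utilities):
    """Toy elimination: players take turns deleting the bottom cell of
    their ranking (own utility first, others' sum second) until one cell
    is left.  Models only the elimination order, not the strategies."""
    alive = set(range(len(utilities)))
    n_players = len(utilities[0])
    m = 0
    while len(alive) > 1:
        worst = min(alive, key=lambda i: (utilities[i][m],
                                          sum(utilities[i]) - utilities[i][m]))
        alive.remove(worst)
        m = (m + 1) % n_players
    return alive.pop()

utilities = [(0, 0), (1, 0), (0, 1), (2, 2)]
winner = eliminate(utilities)
assert winner == 3
assert pareto_optimal(winner, utilities, range(len(utilities)))
```

In this toy version, a dominated cell always ranks below its dominator for every player, so the dominator can never be the minimum while the dominated cell survives; this mirrors the argument above.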

The hard part is showing that this process eventually converges to one cell remaining and doesn’t get stuck. It will be proved by assuming there are at least 2 cells that have not been ruled out, and showing that there exists an assignment of cells to players that results in at least one more cell getting ruled out, so the game can always progress.

Proof of Above-Mentioned Hard Part:

Let \(C_{R}\) be the set of ruled-out cells.

Assuming there is some cell \(C_{\vec{i}'}\) being used as a backup cell:

Let the domain of player \(m\) be the set of cells

\[D_{m}:=\left(\Delta_{\le m}^{\diamond}\tilde{\times}\tilde{\prod_{m<k\le n}}\{C_{\vec{i}'_{k}}\}\right)\setminus\left(\Delta_{<m}^{\diamond}\tilde{\times}\tilde{\prod_{m\le k\le n}}\{C_{\vec{i}'_{k}}\}\right)\]

By the way the cell product was defined, both of these sets are subsets of \(S^{\diamond}\). To visualize it, consult this image of a cube in Minecraft. The glowstone is the backup cell, the green is the domain of player 3, the black is the domain of player 2, and the purple is the domain of player 1. This can be verified by plugging \(m=n=3\) into the definition of domain. The first half of the domain specifies all the cells in the cube, and the second half specifies the plane of cells where their third coordinates equal the third coordinate of the backup cell, \(\vec{i}'_{3}\). Thus, all the cells that don’t share the same plane as the backup cell are green/in the domain of player 3. For \(m=2,n=3\), this equation specifies the plane of black cells, minus the column of purple cells. And so on.

Let the ruled out cells for player \(n\) be \(C_{R}\cap D_{n}\), the ruled out cells for player \(n-1\) be \(C_{R}\cap D_{n-1}\), and so on.

Fix some \(\delta\ll\epsilon\). Consider the function \(f_{m}\) that takes a point \(s\) as input, measures the distance \(d\) from \(s\) to the nearest cell in \(C_{R}\cap D_{m}\), takes the point \(s_{m}\in\Delta_{m}\), and plays the point that lies a fraction \(\max(1-\frac{d}{\delta},0)\) of the way along the vector from \(s_{m}\) to the center point of \(C_{\vec{i}'_{m}}\in\Delta_{m}^{\diamond}\).

This function is nonempty for all inputs. It is convex for all inputs because its output is a set consisting of a single point. It has closed graph because it is continuous. Therefore, it is a Kakutani function. The center point of \(C_{\vec{i}'_{m}}\) lies only in cells whose \(m\)’th factor is \(C_{\vec{i}'_{m}}\), and those cells are excluded from \(D_{m}\), so the point this function maps towards is outside \(D_{m}\). Furthermore, for all points in the set of cells \(C_{R}\cap D_{m}\), \(d=0\), so \(s_{m}\) is mapped all the way to the center of \(C_{\vec{i}'_{m}}\), so all these cells are ruled out. Finally, for any cell outside of \(C_{R}\cap D_{m}\), select a point in the interior of the cell that is more than \(\delta\) away from any cell that must be ruled out. This function doesn’t change any coordinate of that point, so \(F_{m}\) has a fixed point in that cell, and the cell is not ruled out. This argument applies for all \(m\).
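A 1-D numerical sketch of this strategy, with ruled-out cells represented as closed intervals (illustrative only, not the actual simplex-valued version):

```python
def kakutani_response(s_m, ruled_out, center, delta):
    """d is the distance from s_m to the nearest ruled-out interval, and
    s_m is moved a fraction max(1 - d/delta, 0) of the way toward the
    center of the backup cell.  Continuous and single-valued, hence a
    Kakutani function; all names here are made up for this sketch."""
    def dist(lo, hi):                  # distance from s_m to [lo, hi]
        return max(lo - s_m, s_m - hi, 0)
    d = min((dist(lo, hi) for lo, hi in ruled_out), default=float("inf"))
    t = max(1 - d / delta, 0)
    return s_m + t * (center - s_m)

ruled_out, center, delta = [(0.0, 0.1)], 0.95, 0.01
# Inside a ruled-out cell: d = 0, so the point jumps all the way to the center.
assert kakutani_response(0.05, ruled_out, center, delta) == center
# Far from every ruled-out cell (d > delta): the point is left alone.
assert kakutani_response(0.5, ruled_out, center, delta) == 0.5
```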

Therefore, conditions 1 and 2 are fulfilled by this assignment of cells. To go back to the Minecraft cube, if you pick out some subset of the green glass cubes as ones to be prevented, all points in or really close to that subset would get mapped towards the plane of black glass cubes by player 3. Similarly, if you pick out some subset of the black glass cubes as ones to be prevented by player 2, player 2 would map all points on or really close to those cubes towards the column of purple glass cubes. Of course, the players don’t have to use this exact Kakutani function, it just proves that for every player, there is some Kakutani function that lets them fulfill their obligations.

Finally, we will show that if there are at least 2 cells remaining, there is always a pair of cells \(C_{\vec{i}}\) and \(C_{\vec{i}'}\) such that \(C_{\vec{i}}\) is minimally ranked for some player \(m\), and when \(C_{\vec{i}'}\) is fixed as a backup cell, \(C_{\vec{i}}\in D_{m}\) (which means it can be eliminated by player \(m\) in the same Kakutani way as detailed before, finishing the proof by showing that a cell always gets eliminated).

Take player \(n\), and let \(C_{\vec{i}}\) be their least-favored cell. If there exists some other cell \(C_{\vec{i}'}\) that isn’t ruled out, and \(\vec{i}_{n}\neq\vec{i}'_{n}\) (ie, the detested cell and the backup cell have different final indices), then it can be the backup cell. Because both these cells have different final indices, \(C_{\vec{i}}\in D_{n}\).

The quick visual version of this is: if you delete some blocks out of the cube in the Minecraft image (these block holes correspond to cells that aren’t ruled out yet), and the most-hated block hole has a different player-3 coordinate than some other block hole, then you can plug the other block hole with the glowing block. The resulting black plane won’t contain the most-hated block hole, and the most-hated block hole will be in the green (which means it can be safely eliminated: just adopt a Kakutani function that maps everything in or near that cell to the black plane).

But what if all the cells remaining/holes in the Minecraft cube have the same final index in \(I_{n}\)/lie in the same plane? This makes it so player \(n\) doesn’t have a backup cell that lets them eliminate their least favorite one.

Consider player \(n-1\). By the same argument as before, player \(n-1\) can eliminate their most detested cell, unless all the non-eliminated cells have both the same player \(n\) index, and the same player \(n-1\) index.

Consider player \(n-2\), and so on. Either there is some player who can get a crack at eliminating their least favorite cell if the random number generator assigns things right, or any two remaining cells share the exact same sequence of indices. That would mean these two distinct cells are the same cell, which is impossible.

by Abram Demski 70 days ago | link

Cool! I’m happy to see this written up finally. It’s been a big source of intuitions for me, so it’s good to see that the proof checks out.

A possible next step to all this is to try to specify proof-based DT agents which could play this game (or something similar) based on Löbian handshakes. (In fact, part of the original motivation was to try to bring the cooperative-oracle model closer to the Löb-based cooperation you can get in prisoner’s dilemma with visible source code.)

It’s unfortunate that you had to add the pareto-improvement condition to the cell rank. That part seems especially unlikely to drop out of a more general decision theory.

I think I see another serious complication:

Yes, not all points in \(C_{\vec{i}}\) attain the same utility, but for a sufficiently small ϵ, the cell is really small, and for any player, the utility over the cell is well-approximated by the utility attained at the middle point in the cell.

So, for any desired utility-approximation accuracy \(\delta\), you can choose \(\epsilon\) sufficiently small to achieve it. But a Pareto optimum of the set of middle points can be arbitrarily worse for some player than any Pareto optimum of the full game; i.e., taking the midpoints can hide arbitrarily large Pareto improvements.

For example, suppose that \(\delta=0.001\). A Pareto optimum of the midpoints might give the utility vector (2, 2, 3) for the three players. There could be another midpoint (100, 100, 2.9999999), very near where the true game contains a point (100, 100, 3).

So, it seems the Pareto optimum of the game on midpoints which is found by the process in the post can be arbitrarily sub-optimal for all but one player, with no guarantee that this gets better as \(\epsilon\) shrinks.


by Alex Appel 70 days ago | Abram Demski likes this | link

If you drop the Pareto-improvement condition from the cell rank, and just have “everyone sorts things by their own utility”, then you won’t necessarily get a Pareto-optimal outcome (within the set of cell center-points), but you will at least get a point where there are no strict Pareto improvements (no points that leave everyone better off).

The difference between the two is… let’s say we’ve got a 2-player 2-move game that in utility-space, makes some sort of quadrilateral. If the top and right edges join at 90 degrees, the Pareto-frontier would be the point on the corner, and the set of “no strict Pareto improvements” would be the top and the right edges.

If that corner is obtuse, then both “Pareto frontier” and “no strict Pareto improvements” agree that both line edges are within the set, and if the corner is acute, then both “Pareto frontier” and “no strict Pareto improvements” agree that only the corner is within the set. It actually isn’t much of a difference, it only manifests when the utilities for a player are exactly equal, and is easily changed by a little bit of noise.
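The 90-degree-corner case can be checked numerically on representative points of the two edges (a sketch; `dominators` and `strict_improvements` are made-up helper names):

```python
def dominators(points, p):
    """Points at least as good for every player, strictly better for some."""
    return [q for q in points
            if q != p and all(a >= b for a, b in zip(q, p))
            and any(a > b for a, b in zip(q, p))]

def strict_improvements(points, p):
    """Points strictly better for *every* player."""
    return [q for q in points if all(a > b for a, b in zip(q, p))]

# A right-angle corner: the top and right edges meet at (1, 1).
points = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
frontier = [p for p in points if not dominators(points, p)]
no_strict = [p for p in points if not strict_improvements(points, p)]
assert frontier == [(1.0, 1.0)]   # Pareto frontier: just the corner
assert no_strict == points        # no-strict-improvement set: all edge points
```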

The utility-approximation issue you pointed out seems to be pointing towards the impossibility of guaranteeing limiting to a point on the Pareto frontier (when you make the cell size smaller and smaller), precisely because of that “this set is unstable under arbitrarily small noise” issue.

But, the “set of all points that have no strict Pareto improvements by more than \(\delta\) for all players”, ie, the \(\delta\)-fuzzed version of “set of points with no strict pareto improvement”, does seem to be robust against a little bit of noise, and doesn’t require the Pareto-improvement condition on everyone’s ranking of cells.

So I’m thinking that if that’s all we can attain (because of the complication you pointed out), then it lets us drop that inelegant Pareto-improvement condition.

I’ll work on the proof that for sufficiently small cell size \(\epsilon\), you can get an outcome within \(\delta\) of the set of “no strict Pareto improvements available”.

Nice job spotting that flaw.





