Previously: Book report: Theory of Games and Economic Behavior (von Neumann & Morgenstern)
Summary: Von Neumann and Morgenstern defined the notion of stable set in order to characterize some sort of equilibrium in cooperative games. Lucas showed that stable sets don't always exist, so they're inadequate for telling us what happens in games. I propose a generalization of stable sets that always exists, in which players can have different beliefs.
Epistemic status: This is me stretching VNM's framework and groping towards cooperative open-source game theory. The novel definitions aren't that exciting, but I think they point in the right direction. I believe the proof is solid. I've only dipped my toe into the literature, so it's likely someone has done this already.
VNM stable sets
We recall some definitions from (VNM 1944): An imputation x=(xi)i is a vector of payoffs, one for each player i. A coalition is a subset of players. A game is characterized by the maximin value v(S) for each coalition S. The function v is called the characteristic function and it satisfies v(∅)=0 and v(A)+v(B)≤v(A∪B) if A∩B=∅. Imputation x dominates imputation y, or x>y, if there's a nonempty coalition S such that xi>yi for all i∈S and ∑i∈S xi ≤ v(S). A VNM stable set X is a set of imputations such that every imputation not in X is dominated by an imputation in X, and no imputation in X is dominated by an imputation in X.
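To make these definitions concrete, here is a minimal Python sketch of dominance and the two stable-set conditions. The toy three-player majority game, the finite sample of imputations, and names like is_stable_set are my own illustration, not anything from VNM.

```python
from itertools import chain, combinations

PLAYERS = (0, 1, 2)

def v(S):
    """Characteristic function of a toy majority game: any pair (or all three) can secure 1."""
    return 1.0 if len(S) >= 2 else 0.0

def coalitions(players):
    """All nonempty coalitions."""
    return chain.from_iterable(combinations(players, r) for r in range(1, len(players) + 1))

def dominates(x, y):
    """x > y: some nonempty coalition S has x_i > y_i for all i in S and sum_{i in S} x_i <= v(S)."""
    return any(
        all(x[i] > y[i] for i in S) and sum(x[i] for i in S) <= v(S)
        for S in coalitions(PLAYERS)
    )

def is_stable_set(X, imputations):
    """VNM stability, checked against a finite sample of imputations:
    internal: nothing in X dominates anything in X;
    external: everything outside X is dominated by something in X."""
    internal = not any(dominates(x, y) for x in X for y in X)
    external = all(any(dominates(x, y) for x in X) for y in imputations if y not in X)
    return internal and external

# The symmetric three-point set is a classic stable set for this game; with a finite
# sample we can of course only spot-check external stability.
X = [(0.5, 0.5, 0.0), (0.5, 0.0, 0.5), (0.0, 0.5, 0.5)]
sample = X + [(0.6, 0.4, 0.0), (1.0, 0.0, 0.0)]
print(is_stable_set(X, sample))  # True
```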
VNM called stable sets "solutions", and they were supposed to represent equilibria in some unspecified dynamics of negotiation. They didn't really spell out why stable sets correspond to equilibria, but I think I figured it out.
For the dynamics, imagine that there's an "imputation under consideration" that every player can see. A coalition of unsatisfied players can form, and replace the imputation under consideration with a dominant imputation. When this process terminates, the imputation under consideration becomes the actual payoffs.
Or if you like: the players all observe each other's actions, and if a coalition is unsatisfied with the resulting payoffs, its members take the actions corresponding to a dominant imputation instead, thus causing a contradiction.
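Here is a hypothetical sketch of those negotiation dynamics, reusing the dominates function from the sketch above; the round limit and the random choice of challenger are my own stand-ins for the unspecified details.

```python
import random

def negotiate(start, candidates, dominates, max_rounds=100, rng=random):
    """Toy dynamics: while some candidate imputation dominates the one under
    consideration, an unsatisfied coalition replaces it. The round limit stands in
    for 'when this process terminates'."""
    current = start
    for _ in range(max_rounds):
        challengers = [y for y in candidates if dominates(y, current)]
        if not challengers:
            break  # no coalition can profitably deviate: these become the actual payoffs
        current = rng.choice(challengers)
    return current

# e.g. negotiate((0.6, 0.4, 0.0), sample, dominates) with the definitions above
```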
Beliefs
Now suppose the players are uncertain about what imputation they'll end up with. Let's represent this uncertainty with a nonempty set X of "plausible" imputations. (No probabilities.) You might think it's weird to represent uncertainty without using probabilities, but (VNM 1944) did just that in chapters 8–10 for extensive-form games.
Let's suppose the players share the same set X, and that X satisfies these conditions:
(a) Belief that all players are instrumentally rational. If y>x and y is plausible, then x is not plausible.
(b) Cromwell's rule. If x is not plausible, then there exists a y>x that is plausible.
(a) says that a coalition will be motivated to move from x to a plausible dominant imputation y if there is one, and then x can't be an equilibrium. The idea of (b) is that a coalition won't bother to move from x to an implausible imputation, so unless some plausible imputation dominates x, we shouldn't rule x out.
It's easy to see that the plausible sets X that satisfy (a) and (b) are precisely the VNM stable sets: (a) says exactly that no imputation in X is dominated by another imputation in X, and (b) says exactly that every imputation outside X is dominated by an imputation in X.
(As a corollary, we don't have to assume that the plausible set is nonempty, since stable sets are necessarily nonempty.)
The Lucas game
(Lucas 1967) proved that a stable set need not exist by exhibiting a 10-player game with no stable set. The paper is short and sweet, so if you want to know exactly how the game is defined, go ahead and read it. Just be careful about the characteristic function v that Lucas defines: you should really take v′(S) := maxT⊂S v(T) as the characteristic function. (Lucas 1969) has more details, but it's not really easier to read.
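In code, that adjustment is just a monotone closure of Lucas's table. A sketch, assuming v is given as a dict keyed by frozensets (my representation, not Lucas's) with unlisted coalitions worth 0, and reading the ⊂ above inclusively:

```python
from itertools import chain, combinations

def monotone_cover(v, players):
    """v'(S) := max over subsets T of S of v(T)."""
    def subsets(S):
        return chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))
    return {
        frozenset(S): max(v.get(frozenset(T), 0.0) for T in subsets(S))
        for S in subsets(players)
    }
```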
Lucas might have started out looking for a game with three imputations that dominate each other in a cycle. You can't partition such a set into plausible and implausible subsets. But in practice the other dominance relations in the vector space interfere with this strategy.
Instead Lucas managed to construct a game where the space of imputations contains three disjoint 3-dimensional subsets E1, E3, E5 that dominate each other and aren't dominated by anything else:
Ei = {x | xj = xk = 1, 0 ≤ xi < 1, 0 ≤ x7 + x9 < 1}
where (i,j,k) is a cyclic permutation of (1,3,5). (I'm ignoring the even players 2, 4, 6, 8, 10.) The relevant part of the characteristic function is:
v({1,7,9})=v({3,7,9})=v({5,7,9})=2
(There are also even players in these coalitions who have to get a positive share of the coalition's payoff, but I'm omitting them. See the paper for the full definition.) So for every x∈E1 and y∈E3 such that y7>x7 and y9>x9, the coalition {1,7,9} is a witness to the dominance relation y>x. And the same thing is true with (1, 3) replaced by (3, 5) or (5, 1). So the dominance relation has infinite ascending chains that cycle around E1, E3, E5 while x7+x9 increases towards 1.
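Here is a sketch of just this odd-player fragment, checking one lap of the cycle. The even players and the rest of Lucas's characteristic function are omitted, so this illustrates the quoted fragment rather than the full game.

```python
CYCLE = [(1, 3, 5), (3, 5, 1), (5, 1, 3)]  # cyclic permutations (i, j, k)

def in_E(i, x):
    """Membership in E_i: x_j = x_k = 1, 0 <= x_i < 1, 0 <= x_7 + x_9 < 1."""
    _, j, k = next(p for p in CYCLE if p[0] == i)
    return x[j] == x[k] == 1 and 0 <= x[i] < 1 and 0 <= x[7] + x[9] < 1

def witnessed_dominance(y, x, coalition, v_of_coalition):
    """y > x via the given coalition: strict improvement for every member, within v."""
    return (all(y[p] > x[p] for p in coalition)
            and sum(y[p] for p in coalition) <= v_of_coalition)

# Odd-player payoffs only, with coordinates 7 and 9 creeping upward along the chain.
x = {1: 0.5, 3: 1, 5: 1, 7: 0.10, 9: 0.10}   # in E1
y = {1: 1, 3: 0.5, 5: 1, 7: 0.20, 9: 0.20}   # in E3
z = {1: 1, 3: 1, 5: 0.5, 7: 0.30, 9: 0.30}   # in E5

assert in_E(1, x) and in_E(3, y) and in_E(5, z)
assert witnessed_dominance(y, x, (1, 7, 9), 2)  # E3 dominates E1
assert witnessed_dominance(z, y, (3, 7, 9), 2)  # E5 dominates E3
# ...and a point of E1 with still larger x7, x9 would dominate z, continuing the cycle.
```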
It's easy to see that there's no stable set. Suppose some x∈E1 is plausible. Then every y∈E3 with y7>x7 and y9>x9 must be implausible. Any such y is dominated only by imputations in E5 with still larger 7- and 9-coordinates, so by (b) there must be a plausible z∈E5 with z7>x7 and z9>x9. By the same reasoning, every w∈E1 with w7>z7 and w9>z9 must be implausible, so there must be a plausible u∈E3 with u7>z7>x7 and u9>z9>x9, contradicting the implausibility of every such y. So nothing in E1 is plausible, and by the cyclic symmetry nothing in E1∪E3∪E5 is plausible. But then (b) fails: imputations in this region are dominated only from within it, so an implausible one would have no plausible dominator.
So what happens?
From the perspective of players 1, 3, 5, region E1∪E3∪E5 is a game of hot potato: One player has payoff <1, and it's in their power to swap payoffs with someone else. As the hot potato is passed around, players 7 and 9 sit back and enjoy monotonically increasing combined payoff.
What epistemic state do these players have? I can think of a few answers:
1. The dynamics allow many changes to the imputation-under-consideration, up to an unknown time limit. The players are uncertain which of E1, E3, E5 they'll end up in, so the imputations in E1∪E3∪E5 have subjective values that are less than their "face values". This changes the structure of the dominance relation, and there's a stable set based on these subjective values that excludes E1∪E3∪E5.
2. Each player is certain they won't end up holding the hot potato, which means they have a disagreement about what the time limit is.
3. The negotiation has an equilibrium in E1 because player 1 doesn't bother to counterpropose E3 because 1 believes (E3 is implausible because 3 believes (E5 is plausible because (5 believes E1 is implausible because ...)))
I'm not currently enthusiastic about answer 1 because it involves a kind of double counterfactual ("if imputation x obtained and (if x obtained then we'd have counterproposal y) then we'd have counterproposal z"), and that seems unnatural to me. There's something called a "far-sighted stable set"; maybe that's related?
Answer 2 seems more natural to me, so I'm going to allow players to have different beliefs. I'm not ready to give the players probabilistic beliefs, because that will be complicated, so I can't model answer 2. But as soon as players have different beliefs, we can model answer 3. That's what the next section is about.
Modeling disagreement
If players have different beliefs about which imputations are plausible, we have to talk about metabeliefs and counterfactuals. Let's say an individual epistemic state is a choice of "counterfactual" global epistemic state for each imputation, together with a set of "plausible" imputations. And a global epistemic state is a choice of individual epistemic state for each player. In other words, if V is the space of imputations and P is the set of players, recursively define I := G^V × 𝒫(V) and G := I^P, where 𝒫(V) is the set of subsets of V. Put differently, a player has object-level beliefs of the form "x is a plausible imputation" as well as meta-level beliefs of the form "if imputation x obtained, then counterfactually player i would have individual epistemic state I".
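As a type sketch (my class names; any concrete model has to truncate the infinite regress of meta-beliefs at some finite depth, which this leaves implicit):

```python
from dataclasses import dataclass
from typing import Dict, FrozenSet, Tuple

Imputation = Tuple[float, ...]
Player = int

@dataclass
class Individual:
    """An individual epistemic state: for each imputation, the global state that
    would (counterfactually) obtain if that imputation did, plus the set of
    imputations this player finds plausible."""
    counterfactual: Dict[Imputation, "Global"]   # the G^V part
    plausible: FrozenSet[Imputation]             # the 𝒫(V) part

@dataclass
class Global:
    """A global epistemic state: one individual state per player (the I^P part)."""
    individual: Dict[Player, Individual]
```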
Consider these axioms for global epistemic states G=(G1,…,Gn):
(1) Belief that all players are instrumentally rational. If the dominance y > x is witnessed by a coalition Z, and Gi holds that conditional on x, every member of Z believes that y is plausible, then Gi holds that x is implausible.
(2) Object-level Cromwell's rule. (Converse of 1.) If Gi holds that conditional on x, every dominance y>x witnessed by a coalition Z has some member j∈Z who believes y is implausible, then Gi holds that x is plausible.
(3) Reflection. If Gi holds that conditional on x, the global state is H, then Hi=Gi.
(4) Belief that all players are epistemically rational. Gi holds that conditional on any x, the global epistemic state satisfies (1)–(4).
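For a finite set of imputations and a finite depth of meta-beliefs, these axioms can be checked mechanically. A sketch reusing the Individual/Global classes above; witnesses(y, x) is a hypothetical helper returning the coalitions that witness the dominance y > x, and the depth cutoff stands in for the infinite meta hierarchy.

```python
def satisfies_axioms(G, imputations, witnesses, depth=2):
    """Check axioms (1)-(4) on a Global state G, down to a finite depth."""
    if depth == 0:
        return True  # stop recursing; the real condition quantifies over the whole tower
    for i, Gi in G.individual.items():
        for x in imputations:
            H = Gi.counterfactual[x]  # the global state Gi expects conditional on x
            # Is some dominance of x unanimously believed plausible by a witnessing coalition?
            unanimously_dominated = any(
                all(y in H.individual[j].plausible for j in Z)
                for y in imputations for Z in witnesses(y, x)
            )
            # Axioms (1) and (2) together say: x is implausible for i exactly in that case.
            if (x not in Gi.plausible) != unanimously_dominated:
                return False
            # Axiom (3), reflection: i's own state is unchanged inside i's counterfactuals.
            if H.individual[i] != Gi:
                return False
            # Axiom (4): the counterfactual global state satisfies the axioms too.
            if not satisfies_axioms(H, imputations, witnesses, depth - 1):
                return False
    return True
```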
Sketch proof of existence
It's straightforward to prove that a zero-sum game always has a global epistemic state satisfying (1)–(4).
First, consider the simplex whose vertices are the imputations where every player but one gets their maximin value. Every imputation outside this simplex is dominated via a singleton coalition, so it must be common knowledge that the complement of the simplex is implausible.
Next, if there are undominated imputations, it must be common knowledge that they're plausible. And it must be common knowledge that the imputations they dominate are implausible. And of the remaining imputations, the ones which are dominated only by implausible imputations must be plausible (and this is common knowledge). Continuing in this way transfinitely many times, we get a partition of the simplex into: X, the commonly-known-to-be-plausible imputations; Y, the commonly-known-to-be-implausible imputations; and Z, the remainder. Crucially, every imputation in Z is dominated by another imputation in Z.
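For a finite set of candidate imputations the transfinite iteration collapses to an ordinary fixpoint loop. A sketch, with my own names, assuming a dominates predicate like the earlier one:

```python
def classify(imputations, dominates):
    """Sort imputations into X (forced plausible), Y (forced implausible), and Z
    (the remainder, each member of which is dominated only from within Z)."""
    X, Y = set(), set()
    changed = True
    while changed:
        changed = False
        for x in imputations:
            if x in X or x in Y:
                continue
            dominators = [y for y in imputations if dominates(y, x)]
            if any(y in X for y in dominators):
                Y.add(x)   # dominated by something forced-plausible
                changed = True
            elif all(y in Y for y in dominators):
                X.add(x)   # undominated, or dominated only by forced-implausible imputations
                changed = True
    Z = [x for x in imputations if x not in X and x not in Y]
    return X, Y, Z
```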
If Z is empty, we've already described a global epistemic state and we're done. Otherwise, it's possible to complete the global epistemic state by recursively choosing the plausible sets to be X∪{z} for a varying z∈Z, starting at the object level and going up the meta hierarchy, guided by axioms (1)–(4).
Uncertain meta-beliefs
For aesthetic reasons I like players to have uncertainty about the beliefs of others, which looks like I := 𝒫(G)^V × 𝒫(V). And there are corresponding versions of axioms 1–4 as well as a meta-level Cromwell's rule. Existence follows from the existence result proved above.
Conclusion
The flaw of VNM stable sets is that they don't always exist. The flaw of these global epistemic states is that there are so many of them! In some games, any singleton set can be the plausible set of an individual epistemic state, and no doubt larger sets can be, too.
One way to summarize what we've proven is that it's possible to lock yourself into all sorts of beliefs if you believe the other players can have grossly inaccurate beliefs. What's more interesting is what sort of equilibria can exist if the players have approximately-correct beliefs. For that we need probabilities.
Another interpretation is that it's unsurprising that we get so many solutions, because we've made no assumptions about the players themselves apart from some basic rationality conditions. If we knew the source code of the players, the equilibria would be a lot more constrained.
References
William F. Lucas (1967). A game with no solution.
William F. Lucas (1969). The proof that a game may not have a solution.
John von Neumann and Oskar Morgenstern (1944). Theory of Games and Economic Behavior.
Edit 2021.01.25: Fixed proof of nonexistence in the Lucas game.