In summary: A probabilistic version of the Payor's Lemma holds under the logic proposed in the Definability of Truth in Probabilistic Logic. This gives us modal fixed-point-esque group cooperation even under probabilistic guarantees.
EDIT 10/24: I think the way this post is framed is somewhat confused and should not be taken literally. However, I do stand by some of the core intuitions here.
Background
Payor's Lemma: If ⊢□(□x→x)→x, then ⊢x.
We assume two rules of inference:
Necessitation: ⊢x⟹⊢□x,
Distributivity: ⊢□(x→y)⟹⊢□x→□y.
Proof:
1. ⊢x→(□x→x), by tautology;
2. ⊢□x→□(□x→x), by 1 via necessitation and distributivity;
3. ⊢□(□x→x)→x, by assumption;
4. ⊢□x→x, from 2 and 3 by modus ponens;
5. ⊢□(□x→x), from 4 by necessitation;
6. ⊢x, from 5 and 3 by modus ponens.
The Payor's Lemma is provable in all normal modal logics (as it can be proved in K, the weakest, because it only uses necessitation and distributivity). Its proof sidesteps the assertion of an arbitrary modal fixedpoint, does not require internal necessitation (⊢□x⟹⊢□□x), and provides the groundwork for Löbian handshake-based cooperation without Löb's theorem.
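To make the bookkeeping concrete, here is a minimal Python sketch (my own, not from any source) that replays the six-step derivation as data, checking that each rule application has the right shape. Step 4 is implemented as hypothetical syllogism, which is the propositional reasoning packed into "modus ponens" there. Formulas are nested tuples; this is a proof replay, not a theorem prover.

```python
# Minimal replay of the Payor's Lemma derivation. Formulas are nested
# tuples; each rule checks the shape of its premises and returns the
# conclusion, so a misapplied step raises an AssertionError.

def implies(a, b):
    return ("->", a, b)

def box(a):
    return ("box", a)

def necessitation(thm):
    # From ⊢ x conclude ⊢ □x.
    return box(thm)

def distribute(thm):
    # From ⊢ □(x → y) conclude ⊢ □x → □y.
    assert thm[0] == "box" and thm[1][0] == "->"
    _, (_, a, b) = thm
    return implies(box(a), box(b))

def syllogism(ab, bc):
    # From ⊢ a → b and ⊢ b → c conclude ⊢ a → c (propositional logic).
    assert ab[0] == "->" and bc[0] == "->" and ab[2] == bc[1]
    return implies(ab[1], bc[2])

def modus_ponens(imp, antecedent):
    # From ⊢ a → b and ⊢ a conclude ⊢ b.
    assert imp[0] == "->" and imp[1] == antecedent
    return imp[2]

x = "x"
s1 = implies(x, implies(box(x), x))       # 1. tautology
s2 = distribute(necessitation(s1))        # 2. necessitation + distributivity
s3 = implies(box(implies(box(x), x)), x)  # 3. assumption
s4 = syllogism(s2, s3)                    # 4. ⊢ □x → x
s5 = necessitation(s4)                    # 5. ⊢ □(□x → x)
assert modus_ponens(s3, s5) == x          # 6. ⊢ x
```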
It is known that Löb's theorem fails to hold in reflective theories of logical uncertainty. However, a proof of a probabilistic Payor's lemma has been proposed, which modifies the rules of inference necessary to be:
Necessitation: ⊢x⟹⊢□px,
Weak Distributivity: ⊢x→y⟹⊢□px→□py,
where we take □p to be an operator which returns True if the internal credence of x is greater than p and False if not. (Formalisms incoming.)
The question is then: does there exist a consistent formalism under which these rules of inference hold? The answer is yes, and it is provided by Christiano 2012.
Setup
(Regurgitation and rewording of the relevant parts of the Definability of Truth)
Let L be some language and T be a theory over that language. Assume that L is powerful enough to admit a Gödel encoding and that it contains terms which correspond to the rational numbers Q. Let ϕ1,ϕ2,… be some fixed enumeration of all sentences in L. Let ┌ϕ┐ represent the Gödel encoding of ϕ.
We are interested in the existence and behavior of a function P:L→[0,1], which assigns a real-valued probability in [0,1] to each well-formed sentence of L. We are guaranteed the coherency of P with the following assumptions:
1. For all ϕ,ψ∈L we have that P(ϕ)=P(ϕ∧ψ)+P(ϕ∧¬ψ).
2. For each tautology ϕ, we have P(ϕ)=1.
3. For each contradiction ϕ, we have P(ϕ)=0.
Note: I think that 2 & 3 are redundant (as John Baez notes), and that these axioms do not necessarily constrain P to [0,1] in and of themselves (hence the extra restriction). However, neither concern is relevant to the result.
A coherent P corresponds to a distribution μ over models of L. A coherent P which gives probability 1 to T corresponds to a distribution μ over models of T. We denote a function which generates a distribution over models of a given theory T as PT.
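As a quick illustration of this correspondence (a toy sketch of mine, not Christiano's construction): take any distribution μ over truth assignments and define P(ϕ) as the μ-mass of the models satisfying ϕ; the three coherence conditions then hold automatically.

```python
# Toy check: P defined as "mu-mass of satisfying models" is coherent.
import itertools
import random

atoms = ["A", "B"]
models = list(itertools.product([False, True], repeat=len(atoms)))
weights = [random.random() for _ in models]
mu = [w / sum(weights) for w in weights]  # arbitrary distribution over models

def P(sentence):
    # sentence: a predicate on a model, given as a dict atom -> bool
    return sum(w for m, w in zip(models, mu)
               if sentence(dict(zip(atoms, m))))

phi = lambda m: m["A"]
psi = lambda m: m["A"] != m["B"]  # an arbitrary second sentence

# Axiom 1: P(phi) = P(phi ∧ psi) + P(phi ∧ ¬psi)
assert abs(P(phi) - (P(lambda m: phi(m) and psi(m))
                     + P(lambda m: phi(m) and not psi(m)))) < 1e-9
# Axioms 2 and 3: tautologies get probability 1, contradictions get 0
assert abs(P(lambda m: m["A"] or not m["A"]) - 1.0) < 1e-9
assert P(lambda m: m["A"] and not m["A"]) == 0.0
```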
Syntactic-Probabilistic Correspondence: Observe that PT(ϕ)=1⟺T⊢ϕ. This allows us to interchange the notions of syntactic consequence and probabilistic certainty.
Now, we want P to give sane probabilities to sentences which talk about the probability P gives them. This means that we need some way of giving L the ability to talk about itself.
Consider the formula Bel. Bel takes as input the Gödel encodings of sentences. Bel(┌ϕ┐) contains arbitrarily precise information about P(ϕ). In other words, if P(ϕ)=p, then the statement Bel(┌ϕ┐)>a is True for all a<p, and the statement Bel(┌ϕ┐)>b is False for all b>p. Bel is fundamentally a part of the system, as opposed to being some metalanguage concept.
(These are identical properties to those represented in Christiano 2012 by P(┌ϕ┐). I simply choose to represent P(┌ϕ┐) with Bel(┌ϕ┐) as it (1) reduces notational uncertainty and (2) seems to be more in the spirit of Gödel's Bew for provability logic.)
Let L′ denote the language created by affixing Bel to L. Then, there exists a coherent PT for a given consistent theory T over L such that the following reflection principle is satisfied:
∀ϕ∈L′ ∀a,b∈Q: (a<PT(ϕ)<b)⟹PT(a<Bel(┌ϕ┐)<b)=1.
In other words, a<PT(ϕ)<b implies T⊢a<Bel(┌ϕ┐)<b.
Proof
(From now on, for simplicity, we use P to refer to PT and ⊢ to refer to T⊢. You can think of this as fixing some theory T and operating within it.)
Let □p(ϕ) represent the sentence Bel(┌ϕ┐)>p, for some p∈Q. We abbreviate □p(ϕ) as □pϕ. Then, we have the following:
Probabilistic Payor's Lemma: If ⊢□p(□px→x)→x, then ⊢x.
Proof as per Demski:
1. ⊢x→(□px→x), by tautology;
2. ⊢□px→□p(□px→x), by 1 via weak distributivity;
3. ⊢□p(□px→x)→x, by assumption;
4. ⊢□px→x, from 2 and 3 by modus ponens;
5. ⊢□p(□px→x), from 4 by necessitation;
6. ⊢x, from 5 and 3 by modus ponens.
Rules of Inference
Necessitation: ⊢x⟹⊢□px. If ⊢x, then P(x)=1 by syntactic-probabilistic correspondence, so by the reflection principle we have P(□px)=1, and as such ⊢□px by syntactic-probabilistic correspondence.
Weak Distributivity: ⊢x→y⟹⊢□px→□py. The proof of this is slightly more involved.
From ⊢x→y we have (via correspondence) that P(x→y)=1, so P(¬x∨y)=1. We want to prove that P(□px→□py)=1 from this, or P((Bel(┌x┐)≤p)∨(Bel(┌y┐)>p))=1. We can do casework on x. If P(x)≤p, then weak distributivity follows from vacuousness. If P(x)>p, then by coherence
P(¬x∨y)=P(x∧(¬x∨y))+P(¬x∧(¬x∨y)),
1=P(x∧y)+P(¬x),
since x∧(¬x∨y) simplifies to x∧y and ¬x∧(¬x∨y) simplifies to ¬x. As P(¬x)<1−p, we get P(x∧y)>p, and therefore P(y)≥P(x∧y)>p. Then, Bel(┌y┐)>p is True by reflection, so by correspondence it follows that ⊢□px→□py.
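A quick numeric sanity check of the second case (a toy sketch under the model-distribution view above, with an arbitrary threshold p): put all probability mass on models satisfying x→y and confirm that P(x)>p forces P(y)>p.

```python
# Sanity check: with P(x → y) = 1, P(x) > p forces P(y) > p.
import random

p = 0.8  # arbitrary threshold
# models over (x, y) consistent with x → y, i.e. excluding (True, False)
models = [(False, False), (False, True), (True, True)]
weights = [random.random() for _ in models]
mu = [w / sum(weights) for w in weights]

P_x = sum(w for (mx, _), w in zip(models, mu) if mx)
P_y = sum(w for (_, my), w in zip(models, mu) if my)
if P_x > p:
    # P(y) >= P(x ∧ y) = P(x) here, since (True, False) carries no mass
    assert P_y > p
```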
(I'm pretty sure this modal logic, following necessitation and weak distributivity, is not normal (it's weaker than K). This may have some implications? But in the 'agent' context I don't think that restricting ourselves to modal logics makes sense).
Bots
Consider agents A,B,C which return True to signify cooperation in a multi-agent Prisoner's Dilemma and False to signify defection. (Similar setup to Critch's.) Each agent has 'beliefs' PA,PB,PC:L→[0,1] representing their credences over all formal statements in their respective languages (we are assuming they share the same language: this is unnecessary).
Each agent has the ability to reason about their own 'beliefs' about the world arbitrarily precisely, and this allows them full knowledge of their utility function (if they are VNM agents, and up to the complexity of the world-states they can internally represent). Then, these agents can be modeled with Christiano's probabilistic logic! And I would argue it is natural to do so (you could easily imagine an agent having access to its own beliefs with arbitrary precision by, say, repeatedly querying its own preferences).
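For instance (a hypothetical sketch of mine, not part of the formalism): if an agent can only answer threshold queries of the form Bel(┌ϕ┐)>t, repeated queries recover its credence to arbitrary precision by binary search, which is the sense in which Bel contains arbitrarily precise information about P(ϕ).

```python
# Recovering a credence to arbitrary precision from threshold queries.
def make_threshold_oracle(credence):
    # Answers "is Bel(phi) > t?" for a fixed, hidden credence.
    return lambda t: credence > t

def approx_credence(oracle, eps=1e-6):
    lo, hi = 0.0, 1.0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if oracle(mid):
            lo = mid  # credence is above mid
        else:
            hi = mid  # credence is at or below mid
    return (lo + hi) / 2

oracle = make_threshold_oracle(0.625)  # hidden credence (assumed value)
assert abs(approx_credence(oracle) - 0.625) < 1e-5
```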
Then, if A,B,C each behave in the following manner:
⊢□a(□eE→E)→A,
⊢□b(□eE→E)→B,
⊢□c(□eE→E)→C,
where E=A∧B∧C and e=max({a,b,c}), they will cooperate by the probabilistic Payor's lemma.
Proof:
1. ⊢□a(□eE→E)∧□b(□eE→E)∧□c(□eE→E)→A∧B∧C, via conjunction;
2. ⊢□e(□eE→E)→E, as if the e-threshold is satisfied all others are as well;
3. ⊢E, by probabilistic Payor.
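The threshold step (2 above) can be seen numerically in a small sketch (all numbers hypothetical): whatever credence the agents assign to the shared sentence □eE→E, clearing the maximum threshold e clears every individual threshold.

```python
# Toy numeric view of step 2: clearing e = max(a, b, c) clears a, b, c.
thresholds = {"A": 0.6, "B": 0.7, "C": 0.9}  # a, b, c (hypothetical values)
e = max(thresholds.values())

def box_p(p, credence):
    # □p phi read as "credence in phi exceeds p"
    return credence > p

credence_in_handshake = 0.95  # assumed credence in (□eE → E)
if box_p(e, credence_in_handshake):
    # every individual threshold is also cleared, so each conjunct fires
    assert all(box_p(p, credence_in_handshake)
               for p in thresholds.values())
```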
This can be extended to arbitrarily many agents. Moreover, the valuable insight here is that cooperation is achieved when the evidence that the group cooperates exceeds each and every member's individual threshold for cooperation. This formalizes the intuitive strategy 'I will only cooperate if there are no defectors' (or perhaps 'we will only cooperate if there are no defectors').
It is important to note that any P is going to be uncomputable. However, I think modeling agents as having arbitrary access to their beliefs is in line with existing 'ideal' models (think VNM -- I suspect that this formalism closely maps to VNM agents that have access to arbitrary information about their utility function, at least in the form of preferences), and these agents play well with modal fixedpoint cooperation.
Acknowledgements
This work was done while I was a 2023 Summer Research Fellow at the Center on Long-Term Risk. Many thanks to Abram Demski, my mentor who got me started on this project, as well as Sam Eisenstat for some helpful conversations. CLR was a great place to work! Would highly recommend if you're interested in s-risk reduction.