Löb's Theorem: $\vdash \Box(\Box C \to C) \to \Box C$ (equivalently, as a rule of inference: if $\vdash \Box C \to C$, then $\vdash C$).
In the following discussion, I'll say "reality" to mean $C$, "belief" to mean $\Box C$, "reliability" to mean $\Box C \to C$ (i.e., belief is reliable when belief implies reality), and "trust" to mean $\Box(\Box C \to C)$ (belief-in-reliability).
Löb says that if you have trust, you have belief: $\Box(\Box C \to C) \to \Box C$.
Payor says that if you can prove that trust implies reality, then you have belief: if $\vdash \Box(\Box C \to C) \to C$, then $\vdash C$ (and hence, by necessitation, $\vdash \Box C$).
So, both results give conditions for belief. Indeed, both results give conditions equivalent to belief, since in both cases the inference can also be reversed: for Löb, belief gives trust, $\vdash \Box C \to \Box(\Box C \to C)$; for Payor, once $\vdash C$ we trivially get $\vdash \Box(\Box C \to C) \to C$, since a theorem is implied by anything.
Furthermore, both results relate reliability with belief, through the intermediary of trust.
Löb is usually thought of as a negative statement, that you can only have trust when you already have belief. One explanation of this is that Löb is the converse of the "trivial" case of trust, where you derive trust from simple belief: $\Box C \to \Box(\Box C \to C)$. (Recall that this is the pivotal step 2 in the proof of Payor's lemma.) Löb simply reverses this; combining the two, we know that trust is synonymous with belief: $\Box(\Box C \to C) \leftrightarrow \Box C$. "The only case where we can have trust is the trivial case: belief."
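For reference, here is the derivation behind Payor's lemma (my rendering of Critch's proof; only necessitation and distribution are needed, no Löbian axiom), with the trivial belief-to-trust move appearing as step 2:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Payor's Lemma: if trust-implies-reality is provable, then C is provable.
\begin{align*}
1.&\ \vdash C \to (\Box C \to C) && \text{(propositional tautology)}\\
2.&\ \vdash \Box C \to \Box(\Box C \to C) && \text{(from 1, by necessitation and distribution)}\\
3.&\ \vdash \Box(\Box C \to C) \to C && \text{(assumption: trust implies reality)}\\
4.&\ \vdash \Box C \to C && \text{(from 2 and 3)}\\
5.&\ \vdash \Box(\Box C \to C) && \text{(from 4, by necessitation)}\\
6.&\ \vdash C && \text{(from 3 and 5)}
\end{align*}
\end{document}
```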
But this doesn't totally dash hopes for a productive use of trust, as the negative reading of Löb might suggest. In the decision-theoretic application, we can take advantage of Löb to help ensure preferred outcomes:
For some desirable proposition $C$, we can arrange things so that $\Box C \to C$ holds (reliability), and provably so, giving $\Box(\Box C \to C)$ (trust). Applying Löb, we get $\Box C$ (belief). But we already arranged for $\Box C \to C$; so now we get $C$ out of the deal. This can allow cooperative handshakes between logic-based agents.
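Spelled out as a derivation (my own rendering of the standard handshake argument, reading $\Box$ as provability in the agents' shared proof system):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% The Loebian handshake: arranging reliability, provably, yields the good outcome C.
\begin{align*}
1.&\ \vdash \Box C \to C && \text{(arranged: reliability)}\\
2.&\ \vdash \Box(\Box C \to C) && \text{(from 1, by necessitation: trust)}\\
3.&\ \vdash \Box(\Box C \to C) \to \Box C && \text{(L\"ob's theorem)}\\
4.&\ \vdash \Box C && \text{(from 2 and 3: belief)}\\
5.&\ \vdash C && \text{(from 1 and 4: reality)}
\end{align*}
\end{document}
```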
We can turn this into a general decision procedure by searching for the best $C$ we can make real in this way. More precisely, we search for possible implications of each action, and take the action for which we can find the most desirable implication. So we search for the relative provability of $C$, in the context of the assumption that we take a given action $a$ (that is, for proofs of $a \to C$); and if we like what we see, we make that context real by taking that action.
Admittedly, this is a weird and spooky way to describe a fairly intuitive algorithm (search for actions with good consequences); the point is to try and clarify the connection to Löb as best I can.
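In more mundane terms, the intuitive algorithm looks something like the following sketch. Everything in it (the `provable` lookup, the action and outcome names, the utilities) is invented for illustration; a real implementation would replace the lookup table with a bounded proof search for statements of the form $a \to C$ over the agent's world model.

```python
# Sketch of the Lob-style procedure: for each action a, look for the most
# desirable outcome C such that "a -> C" is provable, then take the action
# whose best provable consequence is best overall.

# Toy stand-in for proof search: the set of (action -> outcome) implications
# the agent is able to prove. A real agent would run a theorem prover here.
PROVABLE_IMPLICATIONS = {
    ("cooperate", "mutual_cooperation"),
    ("defect", "lone_defection_payoff"),
}

# Desirability of each candidate outcome proposition C (also invented).
UTILITY = {
    "mutual_cooperation": 3,
    "lone_defection_payoff": 1,
}

def provable(action: str, outcome: str) -> bool:
    """Stand-in for: 'a proof of (action -> outcome) was found'."""
    return (action, outcome) in PROVABLE_IMPLICATIONS

def choose_action(actions, outcomes):
    """Return the action with the most desirable provable consequence."""
    best_action, best_value = None, float("-inf")
    for a in actions:
        provable_values = [UTILITY[c] for c in outcomes if provable(a, c)]
        if provable_values and max(provable_values) > best_value:
            best_action, best_value = a, max(provable_values)
    return best_action

if __name__ == "__main__":
    actions = ["cooperate", "defect"]
    outcomes = ["mutual_cooperation", "lone_defection_payoff"]
    print(choose_action(actions, outcomes))  # -> cooperate
```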
The big problem here is that none of the above degrades well under uncertainty. Reading Löb as "trust is synonymous with belief" gives us a hint that it can't apply to probabilistic reasoning; clearly, I can think "my belief about $C$ is very probably very accurate" without assigning a high probability to $C$. (In particular, if I assign a very low probability to $C$.)
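To make that concrete (toy numbers of my own, not anything from the setup above): take $C$ to be "this particular lottery ticket wins". Since I strongly disbelieve $C$, "my belief about $C$ is accurate" just comes down to $\lnot C$:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% High probabilistic "trust" coexisting with a low probability on C itself.
\[
  P(C) = 10^{-6},
  \qquad
  P(\text{my belief about $C$ is accurate}) = P(\lnot C) = 1 - 10^{-6} .
\]
\end{document}
```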
This results in a practical problem: Löb-based cooperation is very fragile, particularly for the general decision procedure mentioned above. Results are very sensitive to the proof strength of the participants. Intuitively, it shouldn't matter much whether I use, say, PA or PA + Con(PA) as my base-level reasoning system, if I assign high probability to the soundness of both of those systems. For Löb-based handshakes, the difference can make or break cooperation.
Payor's Lemma looks much more amenable to probabilistic generalization, so it would be interesting to formulate a Payor-based decision procedure analogously to the Löb-based decision procedure mentioned earlier.
Where Löb is applied by making $\Box C$ imply $C$ for desirable $C$, Payor's lemma is applied via connecting $\Box(\Box C \to C)$ with $C$. So where Löb suggests the strategy *make (good) beliefs imply reality*, Payor suggests the strategy *make trust (in the good) imply reality*.
Critch gave the example strategy "cooperate if you know that if everyone believed everyone was cooperating, then everyone would indeed cooperate". However, this is a fixed strategy for a known game (it requires a pre-existing definition of "cooperation"). How can we turn it into a general decision procedure which can be applied to a wide variety of decision problems?
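One way to write that fixed strategy down (my paraphrase of Critch's setup, simplified here to a single shared provability operator $\Box$): let $E$ say that everyone cooperates, and have each agent cooperate exactly when it can verify that trust in $E$ implies $E$:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% E = "everyone cooperates": the conjunction of all agents' cooperation propositions x_i.
\[
  E \;\equiv\; x_1 \wedge \dots \wedge x_n ,
  \qquad
  \vdash\; x_i \leftrightarrow \Box(\Box E \to E) \quad \text{for each agent $i$} .
\]
% Then E is equivalent to Box(Box E -> E), so in particular |- Box(Box E -> E) -> E,
% and Payor's lemma gives |- E: everyone cooperates.
\end{document}
```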
Trying to force the analogy:
| | Löb | Payor |
|---|---|---|
| statement of theorem | $\Box(\Box C \to C) \to \Box C$ | if $\vdash \Box(\Box C \to C) \to C$, then $\vdash C$ |
| condition for belief | $\Box(\Box C \to C)$ (trust) | $\vdash \Box(\Box C \to C) \to C$ (provably, trust implies reality) |
| what we want to arrange to be (provably) true, for good $C$ | $\Box C \to C$ | $\Box(\Box C \to C) \to C$ |
| condition (antecedent) of the above | $\Box C$ | $\Box(\Box C \to C)$ |
| condition the outcome on an action $a$ | $a \to C$ | $a \to C$ |
| what proofs to search for (looking for the best $C$ for any action $a$) | $a \to C$ | $a \to (\Box(\Box C \to C) \to C)$ |
When $\Box$ is interpreted as provability, these two decision procedures appear to be closely related, since $\Box(\Box C \to C) \leftrightarrow \Box C$ as I mentioned before. This might be a good sign; the Löb-based procedure works well for proof-based decision theory. Since this equivalence is itself based on Löb, however, it'll break down for the probabilistic case. (I'm not yet sure how to derive an actual suggested decision procedure for the probabilistic case.)