This post assumes some background knowledge about "common knowledge", a technical term in philosophy. See the tag page for an introduction and a list of LessWrong essays on the subject.

Epistemic effort: basically reading the SEP article on common knowledge and forming my own conclusions. I've also gained something by discussing these issues with Scott Garrabrant and Sahil Kulshrestha, although this post is far from representing their views as I've understood them.

Consider a statement like "LessWrong curated posts help to establish common knowledge". 

On the one hand, this seems obviously valid in practice. Curating a post boosts the number of people who see it, in a public way which also communicates the meta-knowledge (you know that lots of people will know about this post, and they'll also know that fact, etc).

On the other hand, this is a long way off from actually establishing common knowledge. People don't check LessWrong constantly. Lots of people will miss any given curated post.

In this post, I will argue that common knowledge never occurs in the real world.

But this leaves us with a paradoxical state of affairs. We use the concept to explain things in the real world all the time. Common knowledge is an important coordination tool, right? For example, social norms are supposed to require common knowledge in order to work.

I'll also offer some evidence that common knowledge is not as important as it is typically made out to be.

Can't Have It!

Several arguments about the impossibility of common knowledge have been made in the philosophical literature; see Common Knowledge Revisited and Uncommon Knowledge. Whether nontrivial common knowledge is realistically possible is still debated, according to SEP (Stanford Encyclopedia of Philosophy).

SEP makes a big deal about how we don't literally need to explicitly believe infinitely many levels of knowledge in order to have common knowledge, because we can have a single concept ("common knowledge" itself) which implies any level of recursive knowledge. 

However, I don't find this particularly reassuring.

I will make the following argument: Common knowledge requires absolute certainty of common knowledge. If anyone has the slightest doubt that common knowledge has been established, it hasn't. But we are never 100% certain of such things in the real world! Therefore, common knowledge is impossible.

It is often stated that common knowledge requires a public event: some event which everyone observes, and which everyone knows that everyone observes. I have approximately two questions about this:

  1. How did we physically arrange an event which we can be absolutely certain everyone observes, when quantum mechanics makes every phenomenon in our universe at least a little noisy? Any communication channel has some degree of unreliability. Even setting aside quantum mechanics, how about human psychology? How are you certain everyone was paying attention? 
  2. How did we establish common knowledge that we had done so? Does establishing new common knowledge always require some seed of pre-existing common knowledge? Doesn't this create an infinite regress?

Common Knowledge Revisited also shows that establishing common knowledge requires simultaneity. Suppose that there is a public event which everyone can see, and indeed does see, but there is some small uncertainty about when they'll see it. You might expect that common knowledge will be established anyway; but this is not the case!!

Alice sends a message at time $t$, which Bob is sure to receive within $\epsilon$ seconds. Bob actually gets the message at $t+\delta$, some time before $t+\epsilon$. At $t+\epsilon$, Alice now knows that Bob got the message. However, she doesn't know whether Bob yet knows that she knows this, since Bob will have some uncertainty (of up to $\epsilon$) about when exactly she sent it. At $t+\delta+\epsilon$, Bob knows that Alice knows Bob will have received the message, since he knows by then that Alice definitely sent the message at least $\epsilon$ ago. But Alice does not know for sure that Bob has come to know this, until $t+2\epsilon$. And so on.

Since Alice's new layers of knowledge arrive at times of the form $t+n\epsilon$, and Bob's at times of the form $t+\delta+n\epsilon$, new layers of meta-knowledge are added as time goes by, but never infinitely many layers, so common knowledge is not achieved at any finite time, no matter how much time has passed.
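To make this concrete, here is a minimal sketch (the function and variable names are my own) that lists when each successive layer of knowledge arrives, assuming the message is sent at $t=0$, the delivery bound is $\epsilon=1$ second, and the actual delay is $\delta=0.3$ seconds:

```python
def knowledge_milestones(t=0.0, eps=1.0, delta=0.3, levels=6):
    """Times at which each new layer of meta-knowledge arrives.

    Level 1: Alice knows Bob has the message    (at t + eps)
    Level 2: Bob knows level 1                  (at t + delta + eps)
    Level 3: Alice knows level 2                (at t + 2*eps)
    ...and so on, alternating; no finite time reaches infinitely many levels.
    """
    milestones = []
    for level in range(1, levels + 1):
        n = (level + 1) // 2  # how many full eps-length delays must elapse
        who = "Alice" if level % 2 == 1 else "Bob"
        when = t + n * eps if who == "Alice" else t + delta + n * eps
        milestones.append((level, who, when))
    return milestones

for level, who, when in knowledge_milestones():
    print(f"level {level}: {who} gets it at time {when:.1f}")
```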

To summarize, any uncertainty about message delivery, or even about the time of message delivery, blocks the formation of common knowledge. The real world seems too full of such uncertainty for common knowledge to be established in practice.

The Partition Assumption

I should also mention that the standard treatment of common knowledge makes some important assumptions about evidence. Evidence is what tells you which world you're in. I'll say world $w_1$ is indistinguishable from world $w_2$ iff the evidence you have at $w_1$ doesn't rule out $w_2$ -- written $w_1 \sim w_2$.

Standard definitions of common knowledge always assume the following three axioms:

  • Reflexivity: $w \sim w$ for any $w$.
  • Transitivity: $w_1 \sim w_2$ & $w_2 \sim w_3$ implies $w_1 \sim w_3$.
  • Symmetry: $w_1 \sim w_2$ implies $w_2 \sim w_1$.

Together, these three can be called the partition assumption, because they imply that indistinguishability is an equivalence relation, which partitions the set of possible worlds.

However, all three of these assumptions are realistically questionable. 

  • Reflexivity says that the real world is always indistinguishable from itself, IE, our observations are always true. In the case of logical uncertainty, this amounts to asserting that our calculations are always infallible, which seems questionable. In particular, it seems like you should doubt this in counterlogical mugging.[1] Furthermore, Löb's Theorem shows that under very broad conditions, coherent observers can't believe Reflexivity about themselves, even if from the outside we should believe it about them. 
  • Transitivity says that you can't get further by iterating the indistinguishability relation; anything you can reach that way, you can reach in one step. But this seems wrong. Consider observing the location of a cup on a desk. Your observation will have some precision associated with it; you can't distinguish between the actual location and very nearby locations. If Transitivity held, this would imply that you cannot distinguish between any two locations on the desk: we could argue that the cup could be anywhere, by making a chain of indistinguishability steps connecting any two points (see the sketch after this list).
  • Symmetry says that if, from , I think I could be in , then from , I must also think I could be in . Eliezer argues against a version of this in Against Modest Epistemology, saying that many people -- when dreaming -- cannot tell whether they are dreaming or awake; but (he argues) the awake can tell that they are awake. 
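Here's a tiny sketch of that Transitivity failure (the numbers and names are mine, purely for illustration): take "indistinguishable" to mean "within the precision of your observation". The relation is reflexive and symmetric, but chaining it connects clearly distinguishable positions, so it is not transitive:

```python
PRECISION = 0.5  # how finely you can observe the cup's position (centimeters)

def indistinguishable(x, y):
    """x ~ y: the evidence gathered at position x does not rule out position y."""
    return abs(x - y) <= PRECISION

positions = [0.0, 0.4, 0.8, 1.2]  # a chain of nearby cup positions

# Reflexivity and Symmetry hold for this relation...
assert all(indistinguishable(x, x) for x in positions)
assert all(indistinguishable(y, x) == indistinguishable(x, y)
           for x in positions for y in positions)

# ...and each adjacent pair in the chain is indistinguishable...
assert all(indistinguishable(a, b) for a, b in zip(positions, positions[1:]))

# ...but Transitivity fails: the endpoints are clearly distinguishable.
assert not indistinguishable(positions[0], positions[-1])
```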

So the partition assumption is dubious. This seems quite injurious to the concept of common knowledge, although to walk through all the consequences would require a longer post than this one.[2]

Who Needs It?

The orthodox case for the importance of common knowledge.

Two Generals

One of the commonly mentioned thought experiments is the Two Generals problem. Two generals are stationed on opposite sides of a common enemy. They need to attack at the same time in order to win; however, they haven't established a plan ahead of time.

The two generals can send messages to each other, but unfortunately, there is always a chance that messages will be lost to the enemy in transit. This prevents the formation of common knowledge, which (supposedly) stops the two generals from coordinating.

Let's call the two generals Alice and Bob. Suppose Bob receives a message from Alice stating "I propose we should attack at noon". Great, now Bob knows what the plan is! However, (supposedly) Bob worries as follows: "Alice doesn't know that I got this message. Alice will attack only if she thinks I will. So we're still in the same situation we were in before."

Bob can send a new message to Alice to confirm his receipt of the message, but this doesn't really change the situation: Bob doesn't know whether Alice will receive the message, and even if she does, she will know that Bob doesn't know she did. So (supposedly) Alice still can't attack, since (supposedly) Bob won't attack.

I've been inserting "supposedly" everywhere, because all of this can be solved by normal probabilistic reasoning.[3]

Suppose for concreteness that the payoff matrix is as follows:

              Alice Attacks    Alice Holds
Bob Attacks        +1              -2
Bob Holds          -2               0

Let's say the probability that a message gets dropped is one percent.

There exists a Nash equilibrium[4] where Alice sends Bob "I will attack at noon", and then attacks at noon herself. The best thing Bob can do is to attack at noon as well (if he receives the message), and the best thing Alice can do is to hold to her word.
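Here is a minimal sketch checking both best responses (my own illustration; I'm reading each cell of the matrix above as the payoff that both generals receive):

```python
P_DELIVERED = 0.99  # the message gets through with 99% probability

def payoff(alice_attacks, bob_attacks):
    """Common payoff from the matrix above: +1 if both attack, 0 if both hold, -2 otherwise."""
    if alice_attacks and bob_attacks:
        return 1
    if not alice_attacks and not bob_attacks:
        return 0
    return -2

# Bob's choice, given that he received "I will attack at noon" and takes it at face value:
assert payoff(True, True) > payoff(True, False)  # attacking (+1) beats holding (-2)

# Alice's choice, given that Bob attacks iff the message got through:
alice_attack = P_DELIVERED * payoff(True, True) + (1 - P_DELIVERED) * payoff(True, False)
alice_hold = P_DELIVERED * payoff(False, True) + (1 - P_DELIVERED) * payoff(False, False)
print(alice_attack, alice_hold)  # 0.97 vs -1.98: attacking is Alice's best response
assert alice_attack > alice_hold
```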

So who cares about common knowledge! 

Now, you might have the following worry: 

"Abram, you have only said that there exists a Nash equilibrium. But how do the players know that they are in that particular Nash equilibrium? Nash equilibria themselves are justified by supposing common knowledge!!"

An astute observation and a just objection. However, I would respond that, in this case, Bob knows which equilibrium they are in, because Alice has told him! Alice said "I will attack at noon." Alice can write the contents in complete honesty (and not even requiring any weird precommitments), and Bob, receiving the message, knows everything he needs to know. Alice, knowing that she sent the message and knowing Bob will probably interpret it as honest, now has a simple incentive to attack at noon.

Bob could doubt Alice's honesty or accuracy, but he has no reason to do so, since Alice has every reason to be honest and accurate.[5]

Critically, even if Bob does doubt a little, he can still rationally follow the plan. And similarly for Alice.

Electronic Messaging Coordination Problem

OK, so maybe Two Generals is too easy to solve in practice if you accept a risk-benefit tradeoff & probabilistic reasoning. 

Perhaps a different case can illustrate the importance of common knowledge for coordination?

You know how some electronic messaging services have little check marks, to verify whether messages have been received and/or read? Apparently, those can make coordination problems worse, at least in some cases. Stanford Encyclopedia of Philosophy (SEP) describes an example where message verification makes it rationally impossible to coordinate, no matter how many messages have been sent back and forth, and no matter how close to perfectly reliable the messaging system is.

According to SEP (see the paragraph immediately before section 5.1), the moral of the story is that full common knowledge really is necessary, sometimes; even very large stacks of they-know-they-know-they-know (in the form of lots of confirmation messages) are as good as no communication at all.

I'll go over this example in an appendix, because I think it's quite interesting and surprising, but on the other hand, it's a bit complex and fiddly.

But I reject the conclusion. If your theory of rationality says that you can never use electronic messages to coordinate dinner plans, specifically because your messaging system displays little check marks to indicate what's been received, your theory of rationality has messed up somewhere, I'd think. Humans do better.

Coordinating to meet over such a messaging system, and then successfully meeting, is not some kind of irrational mistake which you would avoid if you understood common knowledge better.

If this is a standard example (which it is, by virtue of being on SEP) used to argue how important common knowledge is for coordination, maybe the conclusion is flawed?[6]

Argument that common knowledge makes coordination harder in some cases. 

Imagine that Alice and Bob are each asked to name dollar amounts between $0 and $100. Both Alice and Bob will get the lowest amount named, but whoever names that lowest number will additionally get a bonus of $10. No bonus is awarded in the case of a tie.

According to traditional game theory, the only rational equilibrium is for everyone to answer $0. This is because traditional game theory assumes common knowledge of the equilibrium; if any higher answer were given, there would be an incentive to undercut it.

However, humans will typically fare much better in this game. One reason why this might be is that we lack common knowledge of the equilibrium. We can only guess what the other player might say, and name a slightly smaller number ourselves. This can easily result in both players getting significantly more than $0.
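A quick sketch of the contrast (my own illustration): against a known answer, the profitable move is always to undercut by a dollar, and iterating that expectation is what drags the "fully rational" common-knowledge answer down to $0.

```python
BONUS = 10

def my_payoff(mine, theirs):
    """Both players get the lowest amount named; a strict low-namer also gets the $10 bonus."""
    low = min(mine, theirs)
    return low + (BONUS if mine < theirs else 0)

def best_reply(theirs):
    """The answer in 0..100 that maximizes my payoff against a known opponent answer."""
    return max(range(101), key=lambda mine: my_payoff(mine, theirs))

print(best_reply(100))  # 99: undercut by one dollar
print(best_reply(50))   # 49
print(best_reply(1))    # 0
print(best_reply(0))    # 0: once the other player is at $0, nothing does better
```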

In the appendix, I will argue that this is part of what's going on in the electronic messaging coordination problem.

Summary of my case against the importance of common knowledge for coordination.

Some cases commonly used to illustrate the importance of common knowledge, such as Two Generals, are only "unsolvable without common knowledge" if you refuse to reason probabilistically and accept cost/benefit tradeoffs. 

We can find special cases where the importance of common knowledge stands up to probabilistic reasoning, but the inability of "rational" agents to coordinate in these cases seems (to me) like a problem with the notion of "rationality" used, rather than evidence that common knowledge is required for coordination.

There are also cases where common knowledge harms the ability of selfish players to coordinate.

I'd be happy to consider other example cases in the comments!

How to live without it?

Substitute belief for knowledge.

There's a concept called p-common knowledge, which is like common knowledge, except that you replace "knows" with "believes with probability at least p". Let's abbreviate "believes with probability at least p" as p-believes for convenience. Then, p-common-knowledge in a group of people refers to what everyone p-believes, everyone p-believes that everyone p-believes, etc for each level.
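To write this out a little more formally (this is just one standard way of putting it, not a quotation from any particular source): let $B_i^p\varphi$ mean that agent $i$ assigns probability at least $p$ to $\varphi$, and let $E^p\varphi$ mean that every agent in the group $p$-believes $\varphi$. Then $p$-common knowledge (more properly, common $p$-belief) of $\varphi$ is the infinite conjunction

$$C^p\varphi \;=\; E^p\varphi \,\wedge\, E^p E^p\varphi \,\wedge\, E^p E^p E^p\varphi \,\wedge\, \cdots$$

with ordinary common knowledge as the limiting case where $p=1$ and "believes with probability at least $p$" is strengthened to "knows".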

Is this more realistic than common knowledge?

If I have any doubts about whether common knowledge holds, then I can immediately conclude that it does not. p-common knowledge is not so fragile. I can have a small doubt, up to 1-p, without breaking p-common knowledge. 

This seems like a good sign. 

Consider the Two Generals solution I proposed earlier. If messages are delivered (and read and understood) with probability at least p, then so long as the message gets through in fact, p-common knowledge has been established:

  • Alice knows she will attack.
  • Alice believes with probability at least p that Bob knows this.
  • Alice also p-believes that Bob knows the above.
  • And so on.
  • Bob knows that Alice will attack.
  • Bob knows that Alice p-believes that Bob knows. 
  • And so on.

So it looks like p-common belief can easily be established, without any "public event" as is needed for common knowledge. All we need is a communication method that's at least p-reliable.

Is p-common knowledge useful for coordination?

It seems so in this example, at least. P-common belief includes some inherent doubt, but this can be overcome by cost-benefit analysis, if the benefits of coordination outweigh the risks given the uncertainty.
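For instance, with the Two Generals payoffs from earlier (reading each cell as the payoff both players receive), if Alice $p$-believes that Bob will attack, then attacking beats holding whenever

$$p\cdot(+1) + (1-p)\cdot(-2) \;>\; p\cdot(-2) + (1-p)\cdot 0, \qquad \text{i.e. } p > \tfrac{2}{5},$$

so even a fairly modest level of confidence, far short of certainty, makes coordination the rational choice for this particular payoff matrix.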

It seems possible that p-common-knowledge is a basically perfect steelman of common knowledge.

I'd like to see a more complete analysis of this. Intuitively, it feels like p-common knowledge might inherently dilute each level more and more, in comparison to full common knowledge, since we stack p-belief operators. Does this cause it to be much much less useful than common knowledge, in some situations where the deeper iterated layers of knowledge are important? On the other hand, perhaps p-common-knowledge is actually harder to realistically obtain than it appears, much like common knowledge. Are there subtle difficulties which stop us from obtaining p-common-knowledge about most things?

Relax knowledge in other directions.

Early on, I gave an example where a small uncertainty in the communication delay blocks us from forming common knowledge at any finite time. We can define t-common knowledge by replacing "knows" with "will know within a small delay t". We might then be able to establish t-common knowledge at finite time, even if we can't establish full common knowledge. SEP suggests that various relaxations of common knowledge like this will be useful in different applications.

Returning to my example at the very beginning, "LessWrong curated posts help establish common knowledge", we could define %-common knowledge within a population by replacing "everyone knows" with "X% of people know". You might not be able to rely on every self-identified LessWronger knowing about a post, but the percentage might be high enough for desirable coordination strategies to become feasible.[7] 

Take an empirical approach to coordination.

Reasoning about what people know is an indirect way to try and predict their behavior. It might be better to take a more direct approach: predicting behavior from past behavior.

According to game theory, one of the biggest factors facilitating coordination is iterated games. The standard approach to this still requires equilibrium theory, so, still implicitly rests on establishing common knowledge of behavior. As I've argued, this is unrealistic. Fortunately, iterated games allow for the empirical approach; we have time to get to know the players.

In my own estimation, this accounts for more of real-world coordination than common knowledge or p-common knowledge do. 

Appendix: Electronic Messaging Coordination

First, let me state that SEP calls this "the e-mail coordination example". However, email doesn't usually have automatic confirmation of receipt as a feature, while instant messaging services often do. So I've renamed it here.

I'm also going to change the payoff tables a bit, in order to make things easier to grasp according to my taste. 

The Coordination Problem

Alice and Bob like to eat out together, if master chef Carol is in the kitchen. Carol has an unpredictable schedule which flips between restaurants 1 and 2. On a given day, Alice and Bob can each choose:

  • Go to location 1.
  • Go to location 2.
  • Stay home.

The payoff for Alice is 0 for staying home, +1 for meeting Bob at Carol's current location, and -2 for any other outcome. Similarly for Bob: 0 for home, +1 for meeting Alice at Carol's current location, -2 for anything else.

The Messaging System

Alice gets emails from Carol about Carol's schedule, so she can tell Bob where they should meet on a given day. Alice and Bob use a special instant messaging service called InfiniCheck, which has a confirmation feature which works as follows.

The moment a message is read on the other end, a confirmation message is sent back. This is displayed as a little check mark for the original sender. So if Alice sends something to Bob, and Bob reads it, Alice will see a check mark by her message.

InfiniCheck also offers confirmations for checkmarks, so that you know whether the other person has seen a check mark. So if Alice sees the checkmark by her message, Bob will now also see a checkmark by the message, indicating to Bob that Alice knows he has read it.

But then if Bob sees this checkmark, then Alice will get a second checkmark, indicating to her that Bob knows she saw the first checkmark. 

And so on. 

If Bob and Alice both have their smartphones open, this process could continue forever, but it never does, because the network is fallible: every message has a small chance, $\epsilon$, of being dropped. So Bob and Alice see (on average) on the order of $1/\epsilon$ checkmarks next to each message.
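Here's a small simulation sketch (the names and the specific counting scheme are mine) of how many checkmarks each side ends up with when every message in the chain independently drops with probability $\epsilon$:

```python
import random

def average_checkmarks(eps, trials=100_000, seed=0):
    """Estimate the average number of checkmarks Alice and Bob each see."""
    rng = random.Random(seed)
    total_alice = total_bob = 0
    for _ in range(trials):
        # Count how many messages get delivered before the first drop.
        # Message 1 is the original plan; messages 2, 3, 4, ... are the
        # confirmations, going alternately to Alice, Bob, Alice, ...
        delivered = 0
        while rng.random() > eps:
            delivered += 1
        confirmations = max(delivered - 1, 0)
        total_alice += (confirmations + 1) // 2  # odd-numbered confirmations
        total_bob += confirmations // 2          # even-numbered confirmations
    return total_alice / trials, total_bob / trials

print(average_checkmarks(0.01))  # each comes out at roughly 1/(2*eps), i.e. around 50
```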

The Argument

Rubinstein shows that in a version of this problem, there are no equilibria in which Alice and Bob reliably meet Carol. Equilibria can only behave as if no message has been received. In other words, "rational" agents are totally unable to use this electronic messaging system to coordinate.

It's a fairly simple proof by induction:

  • If Alice sees zero checkmarks, then it's close to 50-50 between "the message dropped, so Bob didn't see it", or "Bob saw it, but the confirmation message coming back got dropped". (The first is slightly more probable than the second, since both events require an improbable message-drop, but the second also requires the first message to have gotten through, which is probable but not certain.) Given this gamble, it's better for Alice to remain home, since the risk of -2 outweighs the possible reward of +1.
  • If Bob gets the message but sees zero checkmarks, then he similarly reasons that it's roughly 50-50 between Alice seeing no checkmark (in which case we've already argued she stays home), or seeing one (in which case we haven't yet established what she will do). So, like Alice, Bob sees the risk of a -2 as worse than the possible reward, and stays home.
  • We then argue inductively that if no one would meet at $n$ checkmarks, you also shouldn't meet at $n+1$ checkmarks. The reasoning is very similar to the zero-checkmark case. If you've seen $n+1$ checkmarks, it's a near 50-50 gamble on whether the other person has seen $n$ or $n+1$ checkmarks; so meeting isn't worth the risk. (A numerical sketch of the base case appears just after this list.)
  • Therefore, no number of checkmarks can provide sufficient confirmation for Alice and Bob to meet!!
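Here's a numerical sketch of the base case (the numbers and the optimistic assumption about Bob are mine, just spelling out the gamble described in the first bullet), with a drop probability of $\epsilon = 0.01$:

```python
EPS = 0.01  # chance that any single message in the chain gets dropped

# Alice sent the message but sees zero checkmarks. Two explanations:
p_message_dropped = EPS                   # Bob never got the plan
p_confirmation_dropped = (1 - EPS) * EPS  # Bob got it, but his checkmark back to Alice was lost

total = p_message_dropped + p_confirmation_dropped
p_bob_stays_home = p_message_dropped / total  # ~0.5025: slightly more likely Bob never saw it

# Even assuming (optimistically) that Bob goes out whenever he received the plan,
# going out is a losing gamble for Alice compared to staying home (payoff 0):
ev_go = (1 - p_bob_stays_home) * 1 + p_bob_stays_home * (-2)
print(ev_go)  # about -0.51, so Alice stays home at zero checkmarks

# The inductive step repeats the same gamble one level up: if nobody goes out at n
# checkmarks, then seeing n+1 checkmarks still leaves a near 50-50 chance that the
# other person is staying home, and the -2 risk again outweighs the +1 reward.
```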

Rubinstein's version of the problem and proof is here.

Consequences

This problem seems very close to Two Generals, except we've added an arcane auto-message-confirmation system. It appears that this extra information is very harmful to our ability to coordinate! 

The Updateless Way

You might think this example argues in favor of updateless decision theory (UDT). If we simply ignore the extra information, we can solve this the way we solved Two Generals: Alice can just send Bob a message saying where she's going, and Bob can just go there. This gives both of them an expected utility very close to +1, while the "rational" equilibrium (staying home) gets 0. Ignoring the checkmarks, this solution works for essentially the same reason my two-generals solution worked.

However, simply going updateless doesn't actually solve this. The inductive proof above still works fine for a UDT agent in many cases. If UDT is optimizing its whole policy together ("UDT 1.1"), then each individual step in the inductive argument will improve its expected utility, provided the other agent is thinking in the same way. If UDT decides each policy-point individually ("UDT 1.0"), then so long as it considers its choices to be probabilistically independent of each other and of how the other agent chooses, then we also reach the same conclusion.

The Evidential Way

So it seems like the only way UDT can help us here is by EDT-like reasoning, where we think of the inductive argument as a slippery slope. Although an individual decision to stay home seems good in isolation, each such decision probabilistically increases the number of cases where the other person stays home. Perhaps this is worth it for highly distinguished numbers of checkmarks (such as 0 or 1), since these might be less correlated with decisions at other numbers; but each such decision weakens the overall chances to coordinate, so only very few such exceptions should be made.

Note that this EDTish solution is more powerful if we're also using UDT, but has some power either way.

I'm not so comfortable with this kind of solution, since different priors can easily imply different correlations, and I don't see a story where agents tend to converge to see these sorts of correlations rather than other patterns. If Alice has a mostly stable strategy for situations like this, with occasional deviations to experiment with other strategies, then she should know that changing one action does not, in fact, change her whole policy. So it seems more plausible that she sees her own actions as independent. Whether she sees Bob's actions as independent of hers depends on how well Bob foresees Alice's unreliability. If Bob can predict when Alice will flake out, then Alice would have an EDTish reason not to flake out; but otherwise, there would be no empirical reason for Alice to think this way (and there may or may not be an a priori reason, depending on her prior).

So I think the EDTish approach makes a little headway on why Alice and Bob can successfully (and rationally) meet up, but in my view, not a lot. 

Ditching Common Knowledge

The inductive argument that Bob and Alice cannot meet up critically relies on bouncing back and forth between Bob's reasoning and Alice's reasoning -- each player's decision to stay home at the $n$th step relies on the other's decision at the $(n-1)$th step. So you can see how the whole conclusion relies on common knowledge of rationality. In order to buy the whole inductive argument, Alice and Bob need to buy that each other will reason as described in each step, which depends on Alice and Bob recursively modeling each other as buying the previous steps in the argument. 

On the other hand, the main point of this post is to argue that common knowledge of this kind is not a reasonable assumption.

The situation is analogous to price-setting in a moderately competitive market. Everyone wants to undercut everyone else's prices a little. The only possible common-knowledge equilibrium, then, is for everyone to set their prices as low as possible. But that's not necessarily the outcome you should expect! If the competitors lack common knowledge of how each other thinks, they're not going to jump to the lowest price -- they merely set their prices a little lower than what they expect others to charge. 

Similarly, if Alice and Bob lack common knowledge about how they think about these problems, they need not jump to the never-meet equilibrium. Alice and Bob are only individually motivated to require one more checkmark than the other person in order to meet. If their estimates of each other are uncalibrated, they could end up meeting quite often.

This solution has a nice irony to it, since the SEP's conclusion from the example is that you need common knowledge to coordinate, whereas I'm arguing here that you need to lack common knowledge to coordinate. On the other hand, it's a fragile kind of solution, which is progressively wrecked the better the players know each other. It seems quite plausible that a moderate amount of p-common knowledge would be enough to make the inductive argument go through anyway.

So again, I think this idea captures some of what is going on, but doesn't fully resolve the situation.

Cooperative Oracles

On the other hand, if the agents reason interactively about their policies, they can first rule out the worst possibilities ("I won't go out if I know I'm going to get stood up"), then the second-worst ("I won't go out if there's a good chance I'll get stood up"), etc, all the way through ruling out staying home ("I won't stay home if there's a good chance we could have had dinner"), and finally settle on a Pareto-optimal solution.

Which solutions are Pareto optimal?

"Go if you've sent or received any message" gives Alice an expected utility of , since there's a  chance of +1, plus an  chance of -2. Bob gets to stay home in the worst case in this plan, so Bob gets  in expectation. 

This sets a baseline; anything worse than this for both Alice and Bob can't be Pareto-optimal, so can't be arrived at. For instance, this shows that Alice and Bob don't just stay home, as classical rationality would suggest.

Alice can slightly better her position by staying home if she doesn't see a checkmark. This totals approximately $1-2\epsilon$ for Alice and $1-4\epsilon$ for Bob. This is also Pareto-optimal, although if we naively compare utilities 1:1, it looks a little worse from a Utilitarian perspective.

If that were Alice's strategy, of course, Bob could slightly better his position by staying home in the zero-checkmark case as well, in which case Alice would get approximately $1-5\epsilon$ and Bob would get approximately $1-3\epsilon$, if I've done my calculations right.

But at this point, for sufficiently small $\epsilon$, we're already out of Pareto-optimal territory! Remember, the first payoff we calculated for Alice was $1-3\epsilon$. Here Alice is getting something closer to $1-5\epsilon$. Bob's first number was $1-\epsilon$, but here Bob is getting closer to $1-3\epsilon$. So they're both worse off by almost $2\epsilon$. So this option would be ruled out by either Alice or Bob, before settling on one of the Pareto optima.
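To double-check these numbers, here is a sketch (function and variable names are mine) that enumerates delivery outcomes for the first three messages -- the plan itself and the first checkmark on each side -- and computes expected utilities for the three strategy profiles discussed above:

```python
from itertools import product

def expected_utilities(alice_needs, bob_needs, eps):
    """Expected (Alice, Bob) payoffs.

    alice_needs: checkmarks Alice requires before going out (0 = go once she has sent the plan).
    bob_needs:   checkmarks Bob requires before going out (he must also have received the plan).
    """
    u_alice = u_bob = 0.0
    # Enumerate delivery of: the plan M (Alice -> Bob), checkmark C1 (back to
    # Alice), and checkmark C2 (back to Bob). Later checkmarks don't matter here.
    for m, c1, c2 in product([True, False], repeat=3):
        prob = 1.0
        for delivered in (m, c1, c2):
            prob *= (1 - eps) if delivered else eps
        c1 = c1 and m   # a confirmation only exists if the previous message arrived
        c2 = c2 and c1
        alice_goes = int(c1) >= alice_needs
        bob_goes = m and int(c2) >= bob_needs
        if alice_goes and bob_goes:
            u_alice += prob  # they meet: +1 each
            u_bob += prob
        elif alice_goes:
            u_alice -= 2 * prob  # Alice goes out alone: -2
        elif bob_goes:
            u_bob -= 2 * prob    # Bob goes out alone: -2
    return u_alice, u_bob

eps = 0.01
print(expected_utilities(0, 0, eps))  # ~ (0.97, 0.99): the 1 - 3eps / 1 - eps profile
print(expected_utilities(1, 0, eps))  # ~ (0.98, 0.96): Alice waits for one checkmark
print(expected_utilities(1, 1, eps))  # ~ (0.95, 0.97): both wait for a checkmark
```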

Of course, the kind of deliberation I'm describing requires that Alice and Bob have detailed access to each other's a priori reasoning, although it doesn't require Bob to have access to Alice's private knowledge (the email from Carol). So it may be considered unrealistic for humans, if not AIs.

And, the reasoning behind cooperative oracles is not totally selfish, although it is mostly selfish. So it's not a robust equilibrium; rational agents can do something different to get the upper hand in many cases.

  1. ^

    Counterlogical Mugging is a short name for counterfactual mugging with a logical coin. One reasonable-seeming solution to the problem is to put some probability on "I am the 'me' being simulated in Omega's head, so my mathematical calculations are actually wrong because Omega is spoofing them" -- this allows us to see some benefit to giving Omega the $100 they are asking for.

  2. ^

    It's interesting to note that we can still get Aumann's Agreement Theorem while abandoning the partition assumption (see Ignoring ignorance and agreeing to disagree, by Dov Samet). However, we still need Reflexivity and Transitivity for that result. Still, this gives some hope that we can do without the partition assumption without things getting too crazy.

  3. ^

    In the comments, rpglover64 points out that the classic problem statement requires us to solve it with no chance of failure, which is of course a common requirement in algorithm design (where this problem hails from). So, the problem is indeed impossible to solve as stated.

    The intellectual friction I'm chafing against, when I complain about the standard treatment of Two Generals, is a friction between what it means to "solve" a problem in algorithm design (where a "solution" means satisfying the goal with a 100% success rate, unless the problem states otherwise), vs "solve" a problem in decision theory (where a "solution" means a defensible story about how rational agents should deal with the situation).

    In the context of algorithm design, it's quite reasonable to say that Two Generals is impossible to solve, but can easily be approximated. In the context of decision theory, it seems absurd to call it "impossible to solve", and the meme of its impossibility gives rise to an inflated view of the importance of common knowledge. 

  4. ^

    To reason that this is a Nash equilibrium, I'm ignoring the fact that Alice and Bob could decide to send more messages to further increase the chances of coordination. The Electronic Messaging problem, coming up soon, casts doubt on whether confirmation messages back and forth are such a good idea. The possibility of confirmation messages casts doubt on whether Alice will follow through with her plan if she doesn't receive sufficient confirmation. 

    However, I think it's fair to say that Alice should send her message as many times as she can, rather than just once, to increase Bob's chances of getting the message. Bob has no particular reason to try to send messages, if he gets Alice's message. And Alice can see that this is the case, so there's still no need for spooky knowledge of which Nash equilibrium we're in.

    So the only problem remaining is that Bob could send Alice a message before he receives her message, perhaps containing a different plan of attack. 

    A possibility which I'm going to keep ignoring. ;p

  5. ^

    You could make the argument that I'm still relying on language itself, which can be analyzed as a Nash equilibrium and therefore a form of common knowledge. My solution seemingly relies on the fact that Alice and Bob both know English, know that each other know English, know that they know, etc. 

    If I had any expectations of keeping reader attention for long enough, this post would explore the question further. My provisional reply is that the reliance on common knowledge here is also an illusion. After all, in real life, we don't and can't rely on perfect common knowledge about language. 

  6. ^

    I should flag that this is an extreme claim, which might require us to abandon a lot of decision theory. More on this in the appendix.

  7. ^

    In some cases, this reduces to p-common knowledge; eg, if you're interacting with one random LessWronger, and you want to gamble on coordinating based on some specific post you've read, then X%-common knowledge implies p-common-knowledge with p=X/100 (provided they view you as a random LessWronger). We can also generalize this argument to larger groups, if we (unrealistically) assume independent sampling.

    In other cases, you might specifically care about a large enough fraction of people getting what you're trying to do; eg, if enough people go along with something, there could be a critical mass to entrench a social norm. Then you might specifically want to estimate %-common knowledge. 

Comments

It's interesting to note that we can still get Aumann's Agreement Theorem while abandoning the partition assumption (see Ignoring ignorance and agreeing to disagree, by Dov Samet). However, we still need Reflexivity and Transitivity for that result. Still, this gives some hope that we can do without the partition assumption without things getting too crazy.

I don't quite get this paragraph. Do you suggest that the failure of Aumann's disagreement theorem would be "crazy"? I know his result has become widely accepted in some circles (including, I think, LessWrong) but

a) the conclusion of the theorem is highly counterintuitive, which should make us suspicious, and

b) it relies on Aumann's own specific formalization of "common knowledge" (mentioned under "alternative accounts" in SEP) which may very well be fatally flawed and not be instantiated in rational agents, let alone in actual ones.

It has always baffled me that some people (including economists and LW style rationalists) celebrate a result which relies on the, as you argued, highly questionable, concept of common knowledge, or at least one specific formalization of it.

To be clear, rejecting Aumann's account of common knowledge would make his proof unsound (albeit still valid), but it would not solve the general "disagreement paradox", the counterintuitive conclusion that rational disagreements seem to be impossible: there are several other arguments which lead to this conclusion, and which do not rely on any notion of common knowledge. (Such as this essay by Richard Feldman, which is quite well-known in philosophy and which makes only very weak assumptions.)

I was using "crazy" to mean something like "too different from what we are familiar with", but I take your point. It's not clear we should want to preserve Aumann.

To be clear, rejecting Aumann's account of common knowledge would make his proof unsound (albeit still valid), but it would not solve the general "disagreement paradox", the counterintuitive conclusion that rational disagreements seem to be impossible: there are several other arguments which lead to this conclusion, and which do not rely on any notion of common knowledge.

Interesting, thanks for pointing this out!

The infinite autoresponse example seems like it would be solved in practice by rational ignorance: after some sufficiently small number of autoresponses (say 5) people would not want to explicitly reason about the policy implications of the specific number of autoresponses they saw, so "5+ autoresponses" would be a single category for decisionmaking purposes. In that case the induction argument fails and "both people go to the place specified in the message as long as they observe 5+ autoresponses" is a Nash equilibrium.

Of course, this assumes people haven't already accepted and internalized the logic of the induction argument, since then no further explicit reasoning would be necessary based on the observed number of autoresponses. But the induction argument presupposes that rational ignorance does not exist, so it is not valid when we add rational ignorance to our model.

so "5+ autoresponses" would be a single category for decisionmaking purposes

I agree that something in this direction could work, and plausibly captures something about how humans reason. However, I don't feel satisfied. I would want to see the idea developed as part of a larger framework of bounded rationality.

UDT gives us a version of "never be harmed by information" which is really nice, as far as it goes. In the cases which UDT helps with, we don't need to do anything tricky, where we carefully decide which information to look at -- UDT simply isn't harmed by the information, so we can think about everything from a unified perspective without hiding things from ourselves.

Unfortunately, as I've outlined in the appendix, UDT doesn't help very much in this case. We could say that UDT guarantees that there's no need for "rational ignorance" when it comes to observations (ie, no need to avoid observations), but fails to capture the "rational ignorance" of grouping events together into more coarse-grained events (eg "5+ auto responses").

So if we had something like "UDT but for coarse-graining in addition to observations", that would be really nice. Some way to deal with things such that you never wish you'd coarse-grained things.

Whereas the approach of actually coarse-graining things seems a bit doomed to fragility and arbitrariness. It seems like you have to specify some procedure for figuring out when you'd want to coarse-grain. For example, maybe you start with only one event, and iteratively decide how to add details, splitting the one event into more events. But I feel pessimistic about this. I feel similarly pessimistic about the reverse, starting with a completely fine-grained model and iteratively grouping things together. 

Of course, this assumes people haven't already accepted and internalized the logic of the induction argument, 

Fortunately, the induction argument involves both agents following along with the whole argument. If one agent doubts that the other thinks in this way, this can sort of stabilize things. It's similar to the price-undercutting dynamic, where you want to charge slightly less than competitors, not as little as possible. If market participants have common knowledge of rationality, then this does amount to charging as little as possible; but of course, the main point of the post is to cast doubt on this kind of common knowledge. Doubts about how low your competitor will be willing to go can significantly increase prices from "as low as possible". 

Similarly, the induction argument really only shows that you want to stay home in slightly more cases than the other person. This means the only common-knowledge equilibrium is to stay home; but if we abandon the common-knowledge assumption, this doesn't need to be the outcome.

(Perhaps I will edit the post to add this point.)

ironically I missed this post when you first posted it