cubefox

> What does it mean to optimize for the map to fit the territory, but not the other way around? (After all: we can improve fit between map and territory by changing either map or territory.) Maybe it's complicated, but primarily what it means is that the map is the part that's being selected in the optimization. When communicating, I'm not using my full agency to make my claims true; rather, I'm specifically selecting the claims to be true.

I don't know whether you are familiar with it, but most speech acts or writing acts are considered to have either a "word-to-world" direction of fit, e.g. statements, or a "world-to-word" direction of fit, e.g. commands. Only in the former case do agents optimize the speech act ("word") to fit the world; in the latter case they optimize the world to fit the speech act. The "fit" is truth in the case of a statement, execution in the case of a command.

There is an analogous but more basic distinction for intentional states ("propositional attitudes"), where the "intentionality" of a mental state is its aboutness. Some have a mind-to-world direction of fit, e.g. beliefs, while others have a world-to-mind direction of fit, e.g. desires or intentions. The former are satisfied when the mind is optimized to fit the world, the latter when the world is optimized to fit the mind.

(Speech acts seem to be honest only insofar as the speaker/writer holds an analogous intentional state. So someone who states that snow is white is honest only if they believe that snow is white. For lying, the speaker would, apart from being dishonest, also need a deceptive intention with the speech act, i.e. intending the listener to believe that the speaker believes that snow is white.)

So it seems that in the above paragraph you are considering only the word-to-world / mind-to-world direction of fit?

cubefox

> It's interesting to note that we can still get Aumann's Agreement Theorem while abandoning the partition assumption (see Ignoring ignorance and agreeing to disagree, by Dov Samet). However, we still need Reflexivity and Transitivity for that result. Still, this gives some hope that we can do without the partition assumption without things getting too crazy.

I don't quite get this paragraph. Are you suggesting that the failure of Aumann's agreement theorem would be "crazy"? I know his result has become widely accepted in some circles (including, I think, LessWrong), but

a) the conclusion of the theorem is highly counterintuitive, which should make us suspicious, and

b) it relies on Aumann's own specific formalization of "common knowledge" (mentioned under "alternative accounts" in the SEP), which may very well be fatally flawed and not be instantiated in rational agents, let alone in actual ones.

It has always baffled me that some people (including economists and LW-style rationalists) celebrate a result which relies on the (as you argued) highly questionable concept of common knowledge, or at least on one specific formalization of it.

To be clear, rejecting Aumann's account of common knowledge would make his proof unsound (albeit still valid), but it would not solve the general "disagreement paradox", the counterintuitive conclusion that rational disagreements seem to be impossible: there are several other arguments which lead to this conclusion and which do not rely on any notion of common knowledge. (Such as this essay by Richard Feldman, which is quite well-known in philosophy and which makes only very weak assumptions.)
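For concreteness, here is a minimal sketch (my own toy illustration, not from Aumann's or Samet's papers) of the partition model the quoted paragraph is about. It assumes a uniform common prior over four states; `meet_cell` computes the cell of the meet (the finest common coarsening of the two partitions), which is Aumann's formalization of what is commonly known:

```python
from fractions import Fraction

def posterior(partition, state, event):
    # Agent's conditional probability of `event`, given the cell of
    # their information partition containing `state` (uniform prior).
    cell = next(c for c in partition if state in c)
    return Fraction(len(cell & event), len(cell))

def meet_cell(p1, p2, state):
    # Cell of the meet (finest common coarsening) containing `state`:
    # close {state} under reachability through either partition.
    cell = {state}
    while True:
        grown = set(cell)
        for c in p1 + p2:
            if grown & c:
                grown |= c
        if grown == cell:
            return cell
        cell = grown

p1 = [{1, 2}, {3, 4}]    # agent 1's information partition
p2 = [{1, 3}, {2, 4}]    # agent 2's information partition
event = {1, 4}

m = meet_cell(p1, p2, 1)  # here the meet cell is all four states
# Posteriors are common knowledge iff constant on the meet cell:
post1 = {posterior(p1, w, event) for w in m}
post2 = {posterior(p2, w, event) for w in m}
assert len(post1) == len(post2) == 1  # constant -> common knowledge
assert post1 == post2                 # Aumann: then they must agree
print(post1.pop())  # prints 1/2
```

In this example each agent's posterior happens to be constant on the meet cell, so the posteriors are common knowledge in Aumann's sense and the theorem forces them to coincide. Note how much work the partition assumption does here; Samet's result, as quoted, shows that reflexivity and transitivity of the possibility correspondence suffice.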