All of Bunthut's Comments + Replies

Maybe I'm missing something, but it seems to me that all of this is straightforwardly justified through simple selfish Pareto improvements.

Take a look at Critch's cake-splitting example in section 3.5. Now imagine varying the utility of splitting. How high does it need to get before [red->Alice;green->Bob] is no longer a Pareto improvement over [(split)] from both players' selfish perspectives before the observation? It's 27, and that's also exactly where the decision flips when weighing Alice 0.9 and Bob 0.1 in red, and Alice 0.1 and Bob 0.9 in green.... (read more)
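
For concreteness, a minimal sketch of the comparison above. The payoff numbers are assumptions chosen to reproduce the 27 (whole cake worth 30 to whoever gets it, each player 0.9 confident in their own colour), not numbers taken from Critch's paper:

```python
WHOLE = 30.0
P_ALICE_RED = 0.9   # Alice's credence that the coin lands red
P_BOB_GREEN = 0.9   # Bob's credence that the coin lands green

# Selfish ex-ante value of the conditional policy [red -> Alice; green -> Bob]:
alice_value = P_ALICE_RED * WHOLE   # 27.0
bob_value = P_BOB_GREEN * WHOLE     # 27.0

# Largest utility-of-splitting for which the conditional policy is still a Pareto
# improvement over always splitting, judged by each player's own beliefs:
pareto_threshold = min(alice_value, bob_value)

# Decision threshold of the observation-weighted aggregate (Alice 0.9 / Bob 0.1 in red):
# giving the cake to Alice scores 0.9*30 + 0.1*0, while splitting scores 0.9*u + 0.1*u = u.
weighted_threshold = 0.9 * WHOLE + 0.1 * 0.0

print(pareto_threshold, weighted_threshold)   # 27.0 27.0 -- the two thresholds coincide
```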

2 Abram Demski
I think you are right; I was confused. A Pareto-improvement bet is necessarily one which all parties would selfishly consent to (at least not actively object to).

On reflection, I didn't quite understand this exploration business, but I think I can save a lot of it.

>You can do exploration, but the problem is that (unless you explore into non-fixed-point regions, violating epistemic constraints) your exploration can never confirm the existence of a fixed point which you didn't previously believe in.

I think the key here is in the word "confirm". It's true that unless you believe p is a fixed point, you can't just try out p and see the result. However, you can change your beliefs about p based on your results from ex... (read more)

4 Abram Demski
This is the fundamental obstacle according to me, so it's unfortunate that I haven't successfully communicated this yet. Perhaps I could suggest that you try to prove your intuition here?

I don't think the learnability issues are really a problem. I mean, if doing a handstand with a burning 100 riyal bill between your toes under the full moon is an exception to all physical laws and actually creates utopia immediately, I'll never find out either. Assuming you agree that that's not a problem, why is the scenario you illustrate one? In both cases, it's not that you can't find out; you just don't, because you stick to what you believe is the optimal action.

I don't think this would be a significant problem in practice any more than other kinds of h... (read more)

2 Abram Demski
You can do exploration, but the problem is that (unless you explore into non-fixed-point regions, violating epistemic constraints) your exploration can never confirm the existence of a fixed point which you didn't previously believe in. However, I agree that the situation is analogous to the handstand example, assuming it's true that you'd never try the handstand. My sense is that the difficulties I describe here are "just the way it is" and only count against FixDT in the sense that we'd be happier with FixDT if somehow these difficulties weren't present.

I think your idea for how to find repulsive fixed-points could work if there's a trader who can guess the location of the repulsive point exactly rather than approximately, and has the wealth to precisely enforce that belief on the market. However, the wealth of that trader will act like a martingale; there's no reliable profit to be made (even on average) by enforcing this fixed point. Therefore, such a trader will go broke eventually.

On the other hand, attractive fixed points allow profit to be made (on average) by approximately guessing their locations. Repulsive points effectively "drain willpower".
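
A toy illustration of the attractive/repulsive distinction (not FixDT itself; the map f below is made up purely to have one fixed point of each kind): iteration settles on the attractive point from almost any starting guess, while the repulsive point has to be hit exactly.

```python
def f(p):
    """Made-up belief-feedback map with fixed points at 0.2 (attractive) and 0.8 (repulsive)."""
    return min(1.0, max(0.0, p + (p - 0.2) * (p - 0.8)))

def iterate(p, steps=60):
    for _ in range(steps):
        p = f(p)
    return p

print(iterate(0.5))          # converges to the attractive point 0.2
print(iterate(0.8))          # stays only if the repulsive point is guessed *exactly*
print(iterate(0.8 + 1e-9))   # any error compounds and the iteration drifts away from 0.8
```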

This prediction seems flatly wrong: I wouldn’t bring about an outcome like that. Why do I believe that? Because I have reasonably high-fidelity access to my own policy, via imagining myself in the relevant situations.

It seems like you're confusing two things here, because the thing you would want is not knowable by introspection. What I think you're introspecting is that if you'd noticed that the-thing-you-pursued-so-far was different from what your brother actually wants, you'd do what he actually wants. But the-thing-you-pursued-so-far doesn't play the... (read more)

2 Alex Turner
I want to know whether, as a matter of falsifiable fact, I would enact good outcomes by my brother's values were I very powerful and smart. You seem to be sympathetic to the falsifiable-in-principle prediction that, no, I would not. (Is that true?) Anyways, I don't really buy this counterargument, but we can consider the following variant (from footnote 2):

"True" values: My own (which I have access to)
"Proxy" values: My brother's model of my values (I have a model of his model of my values, as part of the package deal by which I have a model of him)

I still predict that he would bring about a good future by my values. Unless you think my predictive model is wrong? I could ask him to introspect on this scenario and get evidence about what he would do?

Why is the price of the un-actualized bet constant? My argument in the OP was to suppose that PCH is the dominant hypothesis, so, mostly controls market prices.

Thinking about this in detail, it seems like what influence traders have on the market price depends on a lot more of their inner workings than just their beliefs. I was thinking in a way where each trader only had one price for the bet, below which they bought and above which they sold, no matter how many units they traded (this might contradict "continuous trading strategies" because of finite wea... (read more)
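
For reference, the continuity requirement rules out exactly this kind of single-threshold behaviour; a rough sketch of the difference, with made-up threshold and window:

```python
def step_trader(price, q=0.6):
    # "Buy below q, sell above q": a step function of the market price, discontinuous at q.
    return 1.0 if price < q else -1.0

def continuous_trader(price, q=0.6, eps=0.05):
    # Same intent, but the trade size ramps linearly from +1 to -1 over [q - eps, q + eps],
    # so the amount traded is a continuous function of the market price.
    return max(-1.0, min(1.0, (q - price) / eps))

for p in (0.50, 0.58, 0.60, 0.62, 0.70):
    print(p, step_trader(p), round(continuous_trader(p), 2))
```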

2 Abram Demski
The continuity property is really important.

So I don't see how we can be sure that PCH loses out overall. LCH has to exploit PCH -- but if LCH tries it, then we're seemingly in a situation where LCH has to sell for PCH's prices, in which case it suffers the loss I described in the OP.

So I've reread the logical induction paper for this, and I'm not sure I understand exploitation. Under 3.5, it says:

On each day, the reasoner receives 50¢ from T, but after day t, the reasoner must pay $1 every day thereafter.

So this sounds like before day t, T buys a share every day, and those shares never pay out - ot... (read more)

2 Abram Demski
Again, my view may have drifted a bit from the LI paper, but the way I think about this is that the market maker looks at the minimum amount of money a trader has "in any world" (in the sense described in my other comment). This excludes worlds which the deductive process has ruled out, so for example if A∨B has been proved, all worlds will have either A or B. So if you had a bet which would pay $10 on A, and a bet which would pay $2 on B, you're treated as if you have $2 to spend.

It's like a bookie allowing a gambler to make a bet without putting down the money because the bookie knows the gambler is "good for it" (the gambler will definitely be able to pay later, based on the bets the gambler already has, combined with the logical information we now know).

Of course, because logical bets don't necessarily ever pay out, the market maker realistically shouldn't expect that traders are necessarily "good for it". But doing so allows traders to arbitrage logically contradictory beliefs, so, it's nice for our purposes. (You could say this is a difference between an ideal prediction market and a mere betting market; a prediction market should allow arbitrage of inconsistency in this way.)
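
A minimal sketch of that "minimum in any world" accounting, with hypothetical holdings:

```python
def worst_case_value(holdings, worlds):
    """holdings: sentence -> dollars paid per share if that sentence is true.
       worlds: candidate truth assignments (sets of true sentences) not yet ruled out."""
    return min(sum(pay for s, pay in holdings.items() if s in world) for world in worlds)

# After "A or B" is proved, every remaining world contains A or B (or both):
worlds = [{"A"}, {"B"}, {"A", "B"}]
holdings = {"A": 10.0, "B": 2.0}           # a $10-if-A bet and a $2-if-B bet
print(worst_case_value(holdings, worlds))  # 2.0 -- the trader is treated as "good for" $2
```
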
2 Abram Demski
Hm. It's a bit complicated and there are several possible ways to set things up. Reading that paragraph, I'm not sure about this sentence either.

In the version I was trying to explain, where traders are "forced to sell" every morning before the day of trading begins, the reasoner would receive 50¢ from the trader every day, but would return that money next morning. Also, in the version I was describing, the reasoner is forced to set the price to $1 rather than 50¢ as soon as the deductive process proves 1+1=2. So, that morning, the reasoner has to return $1 rather than 50¢. That's where the reasoner loses money to the trader. After that, the price is $1 forever, so the trader would just be paying $1 every day and getting that $1 back the next morning.

I would then define exploitation as "the trader's total wealth (across different times) has no upper bound". (It doesn't necessarily escape to infinity -- it might oscillate up and down, but with higher and higher peaks.)

Now, the LI paper uses a different definition of exploitation, which involves how much money a trader has within a world (which basically means we imagine the deductive process decides all the sentences, and we ask how much money the trader would have; and, we consider all the different ways the deductive process could do this). This is not equivalent to my definition of exploitation in general; according to the LI paper, a trader 'exploits' the market even if its wealth is unbounded only in some very specific world (eg, where a specific sequence of in-fact-undecidable sentences gets proved).

However, I do have an unpublished proof that the two definitions of exploitation are equivalent for the logical induction algorithm and for a larger class of "reasonable" logical inductors. This is a non-trivial result, but, justifies using my definition of exploitation (which I personally find a lot more intuitive). My basic intuition for the result is: if you don't know the future, the only way to ensure y
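
A small sketch of the forced-sale accounting in this version, with a made-up day for the proof, to make the bounded-wealth point concrete:

```python
T_PROOF = 5   # hypothetical: the deductive process proves 1+1=2 on day 5

def price(day):
    return 1.0 if day >= T_PROOF else 0.5   # price jumps to $1 once 1+1=2 is proved

def morning_cash(days=10):
    cash, history = 0.0, []
    for day in range(days):
        if day > 0:
            cash += price(day)   # forced sale of yesterday's share at this morning's price
        history.append(cash)     # cash in hand each morning, before buying again
        cash -= price(day)       # buy one share at today's price
    return history

print(morning_cash())
# [0.0, 0.0, 0.0, 0.0, 0.0, 0.5, 0.5, 0.5, 0.5, 0.5]
# The trader nets 50 cents exactly once, so its wealth stays bounded: no exploitation
# under the "no upper bound on total wealth" definition.
```
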
2 Abram Demski
Now I feel like you're trying to have it both ways; earlier you raised the concern that a proposal which doesn't overtly respect logic could nonetheless learn a sort of logic internally, which could then be susceptible to Troll Bridge. I took this as a call for an explicit method of avoiding Troll Bridge, rather than merely making it possible with the right prior. But now, you seem to be complaining that a method that explicitly avoids Troll Bridge would be too restrictive?

I think there is a mistake somewhere in the chain of inference from cross→−10 to low expected value for crossing. Material implication is being conflated with counterfactual implication. A strong candidate from my perspective is the inference from ¬(A∧B) to C(A|B)=0 where C represents probabilistic/counterfactual conditional (whatever we are using to generate expectations for actions).

You seem to be arguing that being susceptible to Troll Bridge should be judged as a necessary/positive trait of a decision theory. But there are decision theories which don't have this property, such as regular CDT, or TDT (depending on the logical-causality graph). Are you saying that those are all necessarily wrong, due to this?

I'm not sure quite what you meant by this. For example, I could have a lot of prior mass on "crossing gives me +10, not crossing gives me 0". Then my +10 hypothesis would only be confirmed by experience. I could reason using counterfactuals, so that the troll bridge argument doesn't come in and ruin things. So, there is definitely a way. And being born with this prior doesn't seem like some kind of misunderstanding/delusion about the world. So it also seems natural to try and design agents which reliably learn this, if they have repeated experience with Troll Bridge.

The payoff for 2-boxing is dependent on beliefs after 1-boxing because all share prices update every market day and the "payout" for a share is essentially what you can sell it for.

If a sentence is undecidable, then you could have two traders who disagree on its value indefinitely: one would have a highest price to buy that's below the other's lowest price to sell. But then anything between those two prices could be the "market price", in the classical supply and demand sense. If you say that the "payout" of a share is what you can sell it for... well, the ... (read more)

2 Abram Demski
This sounds like doing optimality results poorly. Unfortunately, there is a lot of that (EG how the different optimality notions for CDT and EDT don't help decide between them). In particular, the "don't be a stupid frequentist" move has blinded Bayesians (although frequentists have also been blinded in a different way). Solomonoff induction has a relatively good optimality notion (that it doesn't do too much worse than any computable prediction). AIXI has a relatively poor one (you only guarantee that you take the subjectively best action according to Solomonoff induction; but this is hardly any guarantee at all in terms of reward gained, which is supposed to be the objective). (There are variants of AIXI which have other optimality guarantees, but none very compelling afaik.) An example of a less trivial optimality notion is the infrabayes idea, where if the world fits within the constraints of one of your partial hypotheses, then you will eventually learn to do at least as well (reward-wise) as that hypothesis implies you can do.
2 Abram Demski
Hmm. Well, I didn't really try to prove that 'physical causation' would persist as a hypothesis. I just tried to show that it wouldn't, and failed. If you're right, that'd be great! But here is what I am thinking:

Firstly, yes, there is a market maker. You can think of the market maker as setting the price exactly where buys and sells balance; both sides stand to win the same amount if they're correct, because that amount is just the combined amount they've spent.

Causality is a little funky because of fixed point stuff, but rather than imagining the traders hold shares for a long time, we can instead imagine that today's shares "pay out" overnight (at the next day's prices), and then traders have to re-invest if they still want to hold a position. (But this is fine, because they got paid the next day's prices, so they can afford to buy the same number of shares as they had.)

But if the two traders don't reinvest, then tomorrow's prices (and therefore their profits) are up to the whims of the rest of the market. So I don't see how we can be sure that PCH loses out overall. LCH has to exploit PCH -- but if LCH tries it, then we're seemingly in a situation where LCH has to sell for PCH's prices, in which case it suffers the loss I described in the OP. Thanks for raising the question, though! It would be very interesting if PCH actually could not maintain its position.

I have been thinking a bit more about this. I think it should roughly work like this: you have a 'conditional contract', which is like normal conditional bets, except normally a conditional bet (a|b) is made up of a conjunction bet (a&b) and a hedge on the negation of the condition (not-b); the 'conditional contract' instead gives the trader an inseparable pair of contracts (the a&b bet bound together with the not-b bet). Normally, the price of anything that's proved goes to one quickly (and zero for anything refuted), because traders are getting $1 per share (and $0 per share for what's been re
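
For concreteness, a sketch of the decomposition mentioned there: a conditional bet on (a|b) bought at price p behaves like one share of a&b plus p shares of not-b, so it pays in full if a&b, pays nothing if (not a)&b, and is refunded if b is false. (Sizing the hedge at p shares is the usual construction, not something stated in the comment.)

```python
def conditional_bet_payout(a: bool, b: bool, p: float) -> float:
    """Payout of the (a|b) package in a world with the given truth values."""
    conjunction_leg = 1.0 if (a and b) else 0.0   # one share of a&b pays $1 when both hold
    hedge_leg = p if not b else 0.0               # p shares of not-b refund the purchase price
    return conjunction_leg + hedge_leg

p = 0.7   # hypothetical purchase price of the conditional bet
for a, b in [(True, True), (False, True), (True, False), (False, False)]:
    print(a, b, conditional_bet_payout(a, b, p))
# b true:  pays 1 if a, else 0 (a genuine bet on a).
# b false: pays 0.7 either way -- the bet is "called off" at its purchase price.
```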

Because we have a “basic counterfactual” proposition for what would happen if we 1-box and what would happen if we 2-box, and both of those propositions stick around, LCH’s bets about what happens in either case both matter. This is unlike conditional bets, where if we 1-box, then bets conditional on 2-boxing disappear, refunded, as if they were never made in the first place.

I don't understand this part. Your explanation of PCDT, at least, didn't prepare me for it; it doesn't mention betting. And why is the payoff for the counterfactual-2-boxing determined b... (read more)

2 Abram Demski
Not sure how to best answer. I'm thinking of all this in an LIDT setting, so all learning occurs through traders making bets. The payoff for 2-boxing is dependent on beliefs after 1-boxing because all share prices update every market day and the "payout" for a share is essentially what you can sell it for. Similarly, if a trader buys a share of an undecidable sentence (let's say, the consistency of PA) then the only "payoff" is whatever you can sell it for later, based on future market prices, because the sentence will never get fully decided one way or the other.

My claim is: eventually, if you observe enough cases of "crossing" in similar circumstances, your expectation for "cross" should be consistent with the empirical history (rather than, say, -10 even though you've never experienced -10 for crossing). To give a different example, I'm claiming it is irrational to persist in thinking 1-boxing gets you less money in expectation, if your empirical history continues to show that it is better on average.

And I claim that if there is a persistent disagreement between counterfactuals and evidential conditionals, then the agent will in fact experimentally try crossing infinitely often, due to the value-of-information of testing the disagreement (that is, this will be the limiting behavior of reduced temporal discounting, under the assumption that the agent isn't worried about traps). So the two will indeed converge (under those assumptions).

The hope is that we can block the troll argument completely if proving B->A does not imply cf(A|B)=1, because no matter what predicate the troll uses, the inference from P to cf fails. So what we concretely need to do is give a version of counterfactual reasoning which lets cf(A|B) not equal 1 in some cases where B->A is proved.

Granted, there could be some other problematic argument. However, if my learning-theoretic ideas go through, this provides another safeguard: Troll Bridge is a case where the agent never learns the em

are the two players physically precisely the same (including environment), at least insofar as the players can tell?

In the examples I gave, yes, because that's the case where we have a guarantee of equal policy, from which people try to generalize. If we say players can see their number, then the twins in the prisoner's dilemma needn't play the same way either.

But this is one reason why correlated equilibria are, usually, a better abstraction than Nash equilibria.

The "signals" players receive for correlated equilibria are already semantic. So I'm suspicious t... (read more)

2 Abram Demski
It's not something we would naively expect, but it does further speak in favor of CE, yes? In particular, if you look at those learnability results, it turns out that the "external signal" which the agents are using to correlate their actions is the play history itself. IE, they are only using information which must be available to learning agents (granted, sufficiently forgetful learning agents might forget the history; however, I do not think the learnability results actually rely on any detailed memory of the history -- the result still holds with very simple agents who only remember a few parameters, with no explicit episodic memory (unlike, eg, tit-for-tat)).

Well, if I understand the post correctly, you're saying that these two problems are fundamentally the same problem

No. I think:

...the reasoning presented is correct in both cases, and the lesson here is for our expectations of rationality...

As outlined in the last paragraph of the post. I want to convince people that TDT-like decision theories won't give a "neat" game theory, by giving an example where they're even less neat than classical game theory.

Actually it could. 

I think you're thinking about a realistic case (same algorithm, similar environment... (read more)

1 Adam Shimi
Hum, then I'm not sure I understand in what way classical game theory is neater here? As long as the probabilistic coin flips are independent on both sides (you also mention the case where they're symmetric, but let's put that aside for the example), then you can apply the basic probabilistic algorithm for leader election: both copies flip a coin n times to get an n-bit number, which they exchange. If the numbers are different, then the copy with the smallest one says 0 and the other says 1; otherwise they flip a coin and return the answer. With this algorithm, you have probability ≥1−1/2^n of deciding different values, and so you can get as close as you want to 1 (by paying the price in more random bits). Do you have examples of problems with copies that I could look at and that you think would be useful to study?
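
A minimal sketch of the algorithm described above; both copies run the same code, and the only asymmetry comes from their independent coin flips:

```python
import random

def leader_election(n_bits, rng=random):
    """Both copies flip n coins, exchange the resulting numbers, and the smaller one says 0."""
    a = rng.getrandbits(n_bits)   # copy A's number
    b = rng.getrandbits(n_bits)   # copy B's number (independent)
    if a != b:
        return (0, 1) if a < b else (1, 0)
    # Tie (probability 1/2^n): fall back to one more independent flip each.
    return rng.getrandbits(1), rng.getrandbits(1)

trials = 100_000
successes = sum(1 for _ in range(trials) if len(set(leader_election(8))) == 2)
print(successes / trials)   # at least 1 - 1/2^8: the copies almost always break symmetry
```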

"The same" in what sense? Are you saying that what I described in the context of game theory is not surprising, or outlining a way to explain it in retrospect? 

Communication won't make a difference if you're playing with a copy.

0 Adam Shimi
Well, if I understand the post correctly, you're saying that these two problems are fundamentally the same problem, and so rationality should be able to solve them both if it can solve one. I disagree with that, because from the perspective of distributed computing (which I'm used to), these two problems are exactly the two kinds of problems that are fundamentally distinct in a distributed setting: agreement and symmetry-breaking.

Actually it could. Basically all of distributed computing assumes that every process is running the same algorithm, and you can solve symmetry-breaking in this case with communication and additional constraints on the scheduling of processes (the difficulty here is that the underlying graph is symmetric, whereas if you had some form of asymmetry (like three processes in a line, such that the one in the middle has two neighbors but the others only have one), then you can directly use that asymmetry to solve symmetry-breaking).

(By the way, you just gave me the idea that maybe I can use my knowledge of distributed computing to look at the sort of decision problems where you play with copies? Don't know if it would be useful, but that's interesting at least.)

Another problem with this is that it isn't clear how to form the hypothesis "I have control over X".

You don't. I'm using talk about control sometimes to describe what the agent is doing from the outside, but the hypotheses it believes all have a form like "The variables such and such will be as if they were set by BDT given such and such inputs".

One problem with this is that it doesn't actually rank hypotheses by which is best (in expected utility terms), just how much control is implied.

For the first setup, where it's trying to learn what it has control ov... (read more)

2 Abram Demski
Right, but then, are all other variables unchanged? Or are they influenced somehow? The obvious proposal is EDT -- assume influence goes with correlation. Another possible answer is "try all hypotheses about how things are influenced."

From my perspective, Radical Probabilism is a gateway drug.

This post seemed to be praising the virtue of returning to the lower-assumption state. So I argued that in the example given, it took more than knocking out assumptions to get the benefit.

So, while I agree, I really don't think it's cruxy. 

It wasn't meant to be. I agree that logical inductors seem to de facto implement a Virtuous Epistemic Process, with attendant properties, whether or not they understand that. I just tend to bring up any interesting-seeming thoughts that are triggered during ... (read more)

2 Abram Demski
Agreed. Simple Bayes is the hero of the story in this post, but that's more because the simple bayesian can recognize that there's something beyond.

Either way, we've made assumptions which tell us which Dutch Books are valid. We can then check what follows.

Ok. I suppose my point could then be made as "#2 type approaches aren't very useful, because they assume something that's no easier than what they provide".

I think this understates the importance of the Dutch-book idea to the actual construction of the logical induction algorithm. 

Well, you certainly know more about that than me. Where did the criterion come from in your view?

This part seems entirely addressed by logical induction, to me.

Quite p... (read more)

I wanted to separate what work is done by radicalizing probabilism in general, vs logical induction specifically. 

From my perspective, Radical Probabilism is a gateway drug. Explaining logical induction intuitively is hard. Radical Probabilism is easier to explain and motivate. It gives reason to believe that there's something interesting in the direction. But, as I've stated before, I have trouble comprehending how Jeffrey correctly predicted that there's something interesting here, without logical uncertainty as a motivation. In hindsight, I feel hi... (read more)

What is actually left of Bayesianism after Radical Probabilism? Your original post on it was partially explaining logical induction, and introduced assumptions from that in much the same way as you describe here. But without that, there doesn't seem to be a whole lot there. The idea is that all that matters is resistance to dutch books, and for a dutch book to be fair the bookie must not have an epistemic advantage over the agent. Said that way, it depends on some notion of "what the agent could have known at the time", and giving a coherent account of thi... (read more)

2 Abram Demski
Part of the problem is that I avoided getting too technical in Radical Probabilism, so I bounced back and forth between different possible versions of Radical Probabilism without too much signposting. I can distinguish at least three versions:

1. Jeffrey's version. I don't have a good source for his full picture. I get the sense that the answer to "what is left?" is "very little!" -- EG, he didn't think agents have to be able to articulate probabilities. But I am not sure of the details.
2. The simplification of Jeffrey's version, where I keep the Kolmogorov axioms (or the Jeffrey-Bolker axioms) but reject Bayesian updates.
3. Skyrms' deliberation dynamics. This is a pretty cool framework and I recommend checking it out (perhaps via his book The Dynamics of Rational Deliberation). The basic idea of its non-bayesian updates is, it's fine so long as you're "improving" (moving towards something good).
4. The version represented by logical induction.
5. The Shafer & Vovk version. I'm not really familiar with this version, but I hear it's pretty good.

(I can think of more, but I cut myself off.)

Making a broad generalization, I'm going to stick things into camp #2 above or camp #4. Theories in camp #2 have the feature that they simply assume a solid notion of "what the agent could have known at the time". This allows for a nice simple picture in which we can check Dutch Book arguments. However, it does lend itself more easily to logical omniscience, since it doesn't allow a nuanced picture of how much logical information the agent can generate. Camp #4 means we do give such a nuanced picture, such as the poly-time assumption.

Either way, we've made assumptions which tell us which Dutch Books are valid. We can then check what follows.

I think this understates the importance of the Dutch-book idea to the actual construction of the logical induction algorithm. The criterion came first, and the construction was finished soon after. So the hard part was the criteri

If you're reasoning using PA, you'll hold open the possibility that PA is inconsistent, but you won't hold open the possibility that A&¬A. You believe the world is consistent. You're just not so sure about PA.

Do you? This sounds like PA is not actually the logic you're using. Which is realistic for a human. But if PA is indeed inconsistent, and you don't have some further-out system to think in, then what is the difference to you between "PA is inconsistent" and "the world is inconsistent"? In both cases you just believe everything and its negatio... (read more)

3 Abram Demski
Maybe this is the confusion. I'm not using PA. I'm assuming (well, provisionally assuming) PA is consistent. If PA is consistent, then an agent using PA believes the world is consistent -- in the sense of assigning probability 1 to tautologies, and also assigning probability 0 to contradictions. (At least, 1 to tautologies it can recognize, and 0 to contradictions it can recognize.) Hence, I (standing outside of PA) assert that (since I think PA is probably consistent) agents who use PA don't know whether PA is consistent, but, believe the world is consistent.

If PA were inconsistent, then we need more assumptions to tell us how probabilities are assigned. EG, maybe the agent "respects logic" in the sense of assigning 0 to refutable things. Then it assigns 0 to everything. Maybe it "respects logic" in the sense of assigning 1 to provable things. Then it assigns 1 to everything. (But we can't have both. The two notions of "respect logic" are equivalent if the underlying logic is consistent, but not otherwise.)

But such an agent doesn't have much to say for itself anyway, so it's more interesting to focus on what the consistent agent has to say for itself. And I think the consistent agent very much does not "hold open the possibility" that the world is inconsistent. It actively denies this.

If I'm using PA, I can prove that ¬(A&¬A).

Sure, that's always true. But sometimes it's also true that A&¬A. So unless you believe PA is consistent, you need to hold open the possibility that the ball will both (stop and continue) and (do at most one of those). But of course you can also prove that it will do at most one of those. And so on. I'm not very confident what's right; ordinary imagination is probably just misleading here.

It seems particularly absurd that, in some sense, the reason you think that is just because you think that.

The fa... (read more)

3 Abram Demski
I think you're still just confusing levels here. If you're reasoning using PA, you'll hold open the possibility that PA is inconsistent, but you won't hold open the possibility that A&¬A. You believe the world is consistent. You're just not so sure about PA.

I'm wondering what you mean by "hold open the possibility".

* If you mean "keep some probability mass on this possibility", then I think most reasonable definitions of "keep your probabilities consistent with your logical beliefs" will forbid this.
* If you mean "hold off on fully believing things which contradict the possibility", then obviously the agent would hold off on fully believing PA itself.
* Etc for other reasonable definitions of holding open the possibility (I claim).

Here's what I imagine the agent saying in its defense:

Yes, of course I can control the consistency of PA, just like everything else can. For example, imagine that you're using PA and you see a ball rolling. And then in the next moment, you see the ball stopping and you also see the ball continuing to roll. Then obviously PA is inconsistent.

Now you might think this is dumb, because it's impossible to see that. But why do you think it's impossible? Only because it's inconsistent. But if you're using PA, you must believe PA really might be inconsistent, so you ca... (read more)

2 Abram Demski
This part, at least, I disagree with. If I'm using PA, I can prove that ¬(A&¬A). So I don't need to believe PA is consistent to believe that the ball won't stop rolling and also continue rolling.

On the other hand, I have no direct objection to believing you can control the consistency of PA by doing something else than PA says you will do. It's not a priori absurd to me. I have two objections to the line of thinking, but both are indirect.

1. It seems absurd to think that if you cross the bridge, it will definitely collapse. It seems particularly absurd that, in some sense, the reason you think that is just because you think that.
2. From a pragmatic/consequentialist perspective, thinking in this way seems to result in poor outcomes.

The first sentence of your first paragraph appears to appeal to experiment, while the first sentence of your second paragraph seems to boil down to "Classically, X causes Y if there is a significant statistical connection twixt X and Y."  

No. "Dependence" in that second sentence does not mean causation. It just means statistical dependence. The definition of dependence is important because an intervention must be statistically independent from things "before" the intervention.

None of these appear to involve intervention.

These are methods of causal inf... (read more)

Pearl's answer, from IIRC Chapter 7 of Causality, which I find 80% satisfying, is about using external knowledge about repeatability to consider a system in isolation. The same principle gets applied whenever a researcher tries to shield an experiment from outside interference.

This is actually a good illustration of what I mean. You can't shield an experiment from outside influence entirely, not even in principle, because it's you doing the shielding, and your activity is caused by the rest of the world. If you decide to only look at a part of the world, on... (read more)

0 Darmani
Causal inference has long been about how to take small assumptions about causality and turn them into big inferences about causality. It's very bad at getting causal knowledge from nothing. This has long been known.

For the first: Well, yep, that's why I said I was only 80% satisfied.

For the second: I think you'll need to give a concrete example, with edges, probabilities, and functions. I'm not seeing how to apply thinking about complexity to a type causality setting, where it's assumed you have actual probabilities on co-occurrences.

What I had in mind was increasing precision of Y.

0 Measure
I guess that makes sense. Thanks for clarifying!

X and Y are variables for events. By complexity class I mean computational complexity; I'm not sure what scaling parameter is supposed to be there?

0 Measure
Computational complexity only makes sense in terms of varying sizes of inputs. Are some Y events "bigger" than others in some way so that you can look at how the program runtime depends on that "size"?

we have an updating process which can change its mind about any particular thing; and that updating process itself is not the ground truth, but rather has beliefs (which can change) about what makes an updating process legitimate.

This should still be a strong formal theory, but one which requires weaker assumptions than usual

There seems to be a bit of a tension here. What you're outlining for most of the post still requires a formal system with assumptions within which to take the fixed point, but then that would mean that it can't change its mind about an... (read more)

4 Abram Demski
It's sort of like the difference between a programmable computer vs an arbitrary blob of matter. A programmable computer provides a rigid structure which can't be changed, but the set of assumptions imposed really is quite light. When programming language designers aim for "totally self-revising systems" (languages with more flexibility in their assumptions, such as Lisp), they don't generally attack the assumption that the hardware should be fixed. (Although occasionally they do go as far as asking for FPGAs.)

(a finite approximation of) Solomonoff Induction can be said to make "very few assumptions", because it can learn a wide variety of programs. Certainly it makes less assumptions than more special-case machine learning systems. But it also makes a lot more assumptions than the raw computer. In particular, it has no allowance for updating against the use of Bayes' Rule for evaluating which program is best.

I'm aiming for something between the Solomonoff induction and the programmable computer. It can still have a rigid learning system underlying it, but in some sense it can learn any particular way of selecting hypotheses, rather than being stuck with one.

This seems like a rather excellent question which demonstrates a high degree of understanding of the proposal. I think the answer from my not-necessarily-foundationalist but not-quite-pluralist perspective (a pluralist being someone who points to the alternative foundations proposed by different people and says "these are all tools in a well-equipped toolbox") is: The meaning of a confused concept such as "the real word for X" is not ultimately given by any rigid formula, but rather, established by long deliberation on what it can be understood to mean. However, we can understand a lot of meaning through use. Pragmatically, what "the real word for X" seems to express is that there is a correct thing to call something, usually uniquely determined, which can be discovered through investigation (EG by askin

But in a newly born child or blank AI system, how does it acquire causal models?

I see no problem assuming that you start out with a prior over causal models - we do the same for probabilistic models after all. The question is how the updating works, and if, assuming the world has a causal structure, this way of updating can identify it.

I myself think (but I haven't given it enough thought) that there might be a bridge from data to causal models through falsification. Take a list of possible causal models for a given problem and search through your d... (read more)

If Markov models are simple explanations of our observations, then what's the problem with using them?

To be clear, by total probability distribution I mean a distribution over all possible conjunctions of events. A Markov model also creates a total probability distribution, but there are multiple Markov models with the same probability distribution. Believing in a Markov model is more specific, and so if we could do the same work with just probability distributions, then Occam would seem to demand we do.
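
To illustrate the point about multiple Markov models sharing one distribution, a minimal sketch with made-up numbers: the factorizations X→Y and Y→X are different models, yet they pin down exactly the same joint distribution, so the distribution alone cannot distinguish them.

```python
from fractions import Fraction as F

# A "total probability distribution" over the conjunctions of two binary events (made-up numbers).
P_XY = {(0, 0): F(3, 10), (0, 1): F(3, 10), (1, 0): F(1, 10), (1, 1): F(3, 10)}

p_x = {x: sum(p for (xx, _), p in P_XY.items() if xx == x) for x in (0, 1)}
p_y = {y: sum(p for (_, yy), p in P_XY.items() if yy == y) for y in (0, 1)}

# Markov model 1 (X -> Y): P(X) * P(Y|X).   Markov model 2 (Y -> X): P(Y) * P(X|Y).
model_x_to_y = {(x, y): p_x[x] * (P_XY[(x, y)] / p_x[x]) for (x, y) in P_XY}
model_y_to_x = {(x, y): p_y[y] * (P_XY[(x, y)] / p_y[y]) for (x, y) in P_XY}

print(model_x_to_y == model_y_to_x == P_XY)   # True: same distribution, two distinct models
```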

The surface-level answer to your question would... (read more)

0 Godismyprior
I think Judea Pearl would answer that the do() operator is the most reductionistic explanation that is possible. The point of the do-calculus is precisely that it can't be found in the data (the difference between do(x) and "see(x)") and requires causal assumptions. Without a causal model, there is no do operator. And conversely, one cannot create a causal model from pure data alone -- the do operator is on a higher rung of "the ladder of causality" than bare probabilities.

I feel like there's a partial answer to your last question in that do-calculus is to causal reasoning what Bayes' rule is to probability. The do-calculus can be derived from probability rules and the introduction of the do() operator -- but the do() operator itself is something that cannot be explained in non-causal terms. Pearl believes we inherently use some version of do-calculus when we think about causality. These ideas are all in Pearl's The Book of Why.

But now I think your question is: where do the models come from? For researchers, the causal models they create come from background information they have of the problem they're working with. A confounder is possible between these parameters, but not those because of randomization, etc. etc. But in a newly born child or blank AI system, how does it acquire causal models? If that is explained, then we have answered your question.

I don't have a good answer. I myself think (but I haven't given it enough thought) that there might be a bridge from data to causal models through falsification. Take a list of possible causal models for a given problem and search through your data. You might not be able to prove your assumptions, but you might be able to rule causal models out, if they suppose there is a causal relation between two variables that show no correlation at all. The trouble is, you don't know whether you can rule out the correlation, or if there is a correlation which doesn't show in the data because of a confounder. It seems plausible t
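
A minimal sketch of the falsification idea above (the candidate models, numbers, and "needs a dependence" flags are all hypothetical; the confounder caveat appears as the usual faithfulness assumption):

```python
from itertools import product

def independent(joint, tol=1e-9):
    """joint: dict (x, y) -> probability. True if X and Y are (numerically) independent."""
    xs = {x for x, _ in joint}
    ys = {y for _, y in joint}
    p_x = {x: sum(joint[(x, y)] for y in ys) for x in xs}
    p_y = {y: sum(joint[(x, y)] for x in xs) for y in ys}
    return all(abs(joint[(x, y)] - p_x[x] * p_y[y]) < tol for x, y in product(xs, ys))

# Hypothetical candidate models, flagged by whether they require X and Y to be dependent:
candidates = {"X -> Y": True, "Y -> X": True, "no edge between X and Y": False}

# Observed joint over co-occurrences; here X and Y happen to be independent:
observed = {(0, 0): 0.42, (0, 1): 0.18, (1, 0): 0.28, (1, 1): 0.12}

surviving = [m for m, needs_dep in candidates.items()
             if not (needs_dep and independent(observed))]
print(surviving)  # only the no-edge model survives -- assuming no confounder cancels the correlation
```
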
One possibility is that it's able to find a useful outside view model such as "the Predict-O-Matic has a history of making negative self-fulfilling prophecies". This could lead to the Predict-O-Matic making a negative prophecy ("the Predict-O-Matic will continue to make negative prophecies which result in terrible outcomes"), but this prophecy wouldn't be selected for being self-fulfilling. And we might usefully ask the Predict-O-Matic whether the terrible self-fulfilling prophecies will continue conditional on us tak... (read more)