Epistemic status: I expect some people to say "this is obvious and trivial", and others to say "this makes no sense at all".

One fundamental difference between E.T. Jaynes’ probability textbook and most others is the emphasis on including our model in the prior information. When we write an expression like $P[X|Y]$, that’s really shorthand for $P[X|Y, M]$, where $M$ is some probabilistic model in which $X$ and $Y$ appear. In practice, I’ve found this useful for two main use-cases: model comparison, and interventions/counterfactuals in causal models. This post is mainly about the latter.

The general idea is that an intervention in a causal model (e.g. $do(X=x)$) takes in one model $M$ and returns a new model - it should really be written as $do(X=x, M)$. When we write something like $P[Y|do(X=x)]$, that’s really shorthand for $P[Y|do(X=x, M)]$.

In order to make this all less hand-wavy, we need to make the model a bit more explicit.

What’s in a Model?

The simplest way to represent a probabilistic model is as a table of possibilities - more explicitly, a list of exhaustive and mutually-exclusive logic statements. If I roll a standard die and call the outcome $X$, then I’d explicitly represent my model as $M = \{P[X=1|M] = \frac{1}{6},\ P[X=2|M] = \frac{1}{6},\ \ldots,\ P[X=6|M] = \frac{1}{6}\}$.
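
To make that concrete, here is a minimal sketch in Python. The names (`die_model`, `prob`) and the dict layout are my own illustration, not anything from the post or a standard library: the model is literally a table mapping each possibility to its probability, and a probability "calculation" is just a lookup against that table.

```python
from fractions import Fraction

# M for a standard die: six exhaustive, mutually-exclusive possibilities
# for X, each assigned probability 1/6.
die_model = {("X", outcome): Fraction(1, 6) for outcome in range(1, 7)}

def prob(statement, model):
    """P[statement | model]: retrieve the probability that the model's
    axioms assign to the statement."""
    return model[statement]

assert sum(die_model.values()) == 1            # possibilities are exhaustive
assert prob(("X", 3), die_model) == Fraction(1, 6)
```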

In Probability as Minimal Map, we saw that $P\left[X \mid P[X|D]\right] = P[X|D]$. Interpretation: I obtain some data $D$, calculate the probability $P[X|D]$, then my computer crashes and I lose the data. But as long as I still know $P[X|D]$, I should still assign the same probability to $X$. Thus: the probability of $X$, given $P[X|D]$ (but not given $D$ itself!) is just $P[X|D]$.

(Note that I left the model implicit in the previous paragraph - really we should write $P\left[X \mid P[X|D, M], M\right] = P[X|D, M]$.)

Now let’s apply that idea to the expression $P[X=3|M]$, with our die-model $M$. Our given information $M$ includes the statement $P[X=3|M] = \frac{1}{6}$, so

$$P[X=3 \mid M] = P\left[X=3 \,\middle|\, P[X=3|M] = \tfrac{1}{6}, \ldots\right] = \tfrac{1}{6}.$$

Representing models this way gives a much stronger logic-flavor to the calculations; our probability calculations are a derivation in an explicit logic. The axioms of that logic are the contents of $M$, along with the universal laws of probability (Bayes’ rule, the sum rule, etc.) and arithmetic.

Causality & Interventions

In the case of a causal model, $M$ would look something like

$$M = \left\{\, G = \text{(some DAG on } X_1, \ldots, X_n\text{)}, \quad P[X_i \mid X_{pa(i)}, M] = f_i(X_i, X_{pa(i)}) \text{ for each } i \,\right\}$$

i.e. $M$ gives a graph $G$ and an expression for the probability of each $X_i$ in terms of $X_i$’s parents $X_{pa(i)}$ in $G$. (This would be for a Bayes net; structural equations are left as an exercise to the reader.)
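
As a concrete sketch, here is one way to encode such a model in Python. The rain/sprinkler example and all the names (`causal_model`, `joint_prob`) are mine, purely for illustration; the content is just the two pieces above - a graph (each node’s parents) and a conditional probability for each node given its parents.

```python
# A toy Bayes net: a DAG plus, for each node, P[X_i | parents(X_i), M]
# as a function of the node's value and its parents' values.
causal_model = {
    "parents": {"rain": (), "sprinkler": ("rain",), "wet": ("rain", "sprinkler")},
    "cpd": {
        "rain":      lambda r: 0.2 if r else 0.8,
        "sprinkler": lambda s, r: (0.01 if s else 0.99) if r else (0.4 if s else 0.6),
        "wet":       lambda w, r, s: (0.99 if w else 0.01) if (r or s)
                                     else (0.05 if w else 0.95),
    },
}

def joint_prob(assignment, model):
    """P[X_1=x_1, ..., X_n=x_n | M]: the product of each node's conditional
    probability given its parents, read straight off the model."""
    p = 1.0
    for node, parents in model["parents"].items():
        args = [assignment[node]] + [assignment[par] for par in parents]
        p *= model["cpd"][node](*args)
    return p
```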

A $do()$ operation then works exactly like you’d expect: $do(X_i = x, M)$ returns a new model in which (sketched in code after the list):

  • The arrows into node $X_i$ in $G$ have been removed
  • $P[X_i \mid X_{pa(i)}, M]$ has been replaced with the indicator function $\mathbb{1}[X_i = x]$ (or, for continuous $X_i$, the delta function $\delta(X_i - x)$)
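
Under the toy representation above, that’s a few lines (again a sketch; `do` here is my function, operating on the dict layout I chose):

```python
def do(node, value, model):
    """do(X_i = x, M): return a new model with the arrows into X_i removed
    and its conditional probability replaced by the indicator 1[X_i = x]."""
    new_model = {"parents": dict(model["parents"]), "cpd": dict(model["cpd"])}
    new_model["parents"][node] = ()                       # cut incoming arrows
    new_model["cpd"][node] = lambda x: 1.0 if x == value else 0.0
    return new_model

# P[wet | do(sprinkler = on)]: marginalize the intervened model over rain.
intervened = do("sprinkler", True, causal_model)
p_wet = sum(joint_prob({"rain": r, "sprinkler": True, "wet": True}, intervened)
            for r in (False, True))                       # -> 0.99
```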

Counterfactuals work the same way, except they’re limited to structural models - i.e. every nondeterministic node must be a root. As long as the model satisfies that constraint, a counterfactual is exactly the same as an intervention: if we have some data $D$, then to run the counterfactual $do(X=x)$, we calculate $P[Y \mid do(X=x, M), D]$. If we do this with a non-structural model - i.e. if some nondeterministic node has parents - then we’ll find that the result is sometimes undefined: our axioms do not fully determine the probability in question.
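
Here is a rough sketch of that recipe on a tiny structural model, with all the randomness in the roots $u, v$ and everything else deterministic. The model and every name in it are invented for illustration: weight each root setting by its posterior given $D$ under $M$, then replay the surviving settings under $do(X=x, M)$.

```python
from itertools import product

# A tiny structural model: roots u, v carry all the randomness; X and Y are
# deterministic functions of their parents (X := u, Y := X xor v).
struct_model = {
    "roots": {"u": {0: 0.5, 1: 0.5}, "v": {0: 0.9, 1: 0.1}},  # P[root | M]
    "eq": {"X": lambda vals: vals["u"],
           "Y": lambda vals: vals["X"] ^ vals["v"]},
    "order": ["X", "Y"],                                      # topological order
}

def propagate(root_values, model):
    """Deterministically compute every node from a full setting of the roots."""
    vals = dict(root_values)
    for node in model["order"]:
        vals[node] = model["eq"][node](vals)
    return vals

def counterfactual(query, data, do_node, do_value, model):
    """P[query | do(X=x, M), D]: posterior over roots given D under M,
    with each surviving root setting replayed under the intervened model."""
    intervened = {**model, "eq": {**model["eq"], do_node: lambda vals: do_value}}
    roots = list(model["roots"])
    num = den = 0.0
    for setting in product(*(model["roots"][r] for r in roots)):
        root_values = dict(zip(roots, setting))
        weight = 1.0
        for r in roots:
            weight *= model["roots"][r][root_values[r]]
        if any(propagate(root_values, model)[k] != v for k, v in data.items()):
            continue                           # root setting inconsistent with D
        den += weight
        num += weight * (propagate(root_values, intervened)[query[0]] == query[1])
    return num / den

# Having observed X=1 and Y=1, Y would have been 0 had X been set to 0:
print(counterfactual(("Y", 0), {"X": 1, "Y": 1}, "X", 0, struct_model))  # 1.0
```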

Why Does This Matter?

Hopefully this all seems pretty trivial. Why belabor it?

There are a handful of practical applications where explicitly including the model is useful.

The most important of these is model comparison, especially the Bayesian approach to learning causal structure. Another application is scenarios involving a mix of different experimental interventions and observational studies.

But the main reason I’m bringing it up is that agenty things have the type signature (A -> B) -> A. In English: agenty things have some model (A -> B) which predicts the results (B) of their own actions (A). They use that model to decide what actions to perform: (A -> B) -> A.

In the context of causal models, the model (A -> B) is our causal model $M$. (A -> B) -> A means performing some computation on $M$ in order to find A - which is a lot simpler with an explicit representation of $M$.
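
As a toy illustration of that type signature (reusing `do`, `joint_prob`, and `causal_model` from the sketches above; `choose` and the utility are invented names), the agent’s model maps each candidate action to a predicted outcome, and the agent just searches over actions:

```python
def choose(actions, model, utility):
    """(A -> B) -> A: given a model from actions to predicted outcomes,
    return the action whose predicted outcome scores highest."""
    return max(actions, key=lambda action: utility(model(action)))

# The model maps a sprinkler setting to P[wet | do(sprinkler = setting)].
def predict_wet(setting):
    m = do("sprinkler", setting, causal_model)
    return sum(joint_prob({"rain": r, "sprinkler": setting, "wet": True}, m)
               for r in (False, True))

best = choose([False, True], predict_wet, utility=lambda p: p)  # -> True
```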

Of course, we could just use the usual structural equation representation without explicitly making everything a statement in some logic - but then we’d have a lot more different types of things floating around. By explicitly making everything logic statements, we unify the formulation. Statements like “counterfactuals are underdefined for Bayes nets” become statements about provability within our logic, and can themselves be proven. Also, by formulating the model in terms of logic statements, we have a single unified language for probability queries - the models $M$, $do(X=x, M)$, etc. can be represented and manipulated in the same format as any other information.
