The formulation of Bayes' rule you are most likely to see in textbooks runs as follows:

$$\mathbb{P}(H_i \mid e) = \frac{\mathbb{P}(e \mid H_i) \cdot \mathbb{P}(H_i)}{\sum_k \mathbb{P}(e \mid H_k) \cdot \mathbb{P}(H_k)}$$

Where:

- $H_i$ is the hypothesis whose posterior probability we want to know;
- $e$ is the evidence we observed;
- $\mathbb{P}(H_i)$ is the prior probability of $H_i$, and $\mathbb{P}(e \mid H_i)$ is the likelihood of seeing $e$ if $H_i$ is true;
- the sum in the denominator runs over a set of mutually exclusive and exhaustive hypotheses $H_k$ that includes $H_i$.
As a quick example, let's say there's a bathtub full of potentially biased coins. Half of the coins are fair (type 1), producing 50% heads; a third of the coins produce heads only 25% of the time (type 2); and the remaining sixth produce heads 75% of the time (type 3).
We want to know the posterior probability that a randomly drawn coin is of type 2, after flipping the coin once and seeing it produce heads once.
Let $H_1, H_2, H_3$ stand for the hypotheses that the coin is of type 1, 2, or 3 respectively, and let $e_{HEADS}$ denote the observation of heads. Then using conditional probability notation, we want to know the probability $\mathbb{P}(H_2 \mid e_{HEADS}).$
The probability form of Bayes' theorem says:

$$\mathbb{P}(H_2 \mid e_{HEADS}) = \frac{\mathbb{P}(e_{HEADS} \mid H_2) \cdot \mathbb{P}(H_2)}{\sum_k \mathbb{P}(e_{HEADS} \mid H_k) \cdot \mathbb{P}(H_k)}$$
Expanding the sum:

$$\mathbb{P}(H_2 \mid e_{HEADS}) = \frac{\mathbb{P}(e_{HEADS} \mid H_2) \cdot \mathbb{P}(H_2)}{\mathbb{P}(e_{HEADS} \mid H_1) \cdot \mathbb{P}(H_1) + \mathbb{P}(e_{HEADS} \mid H_2) \cdot \mathbb{P}(H_2) + \mathbb{P}(e_{HEADS} \mid H_3) \cdot \mathbb{P}(H_3)}$$
Computing the actual quantities:

$$\mathbb{P}(H_2 \mid e_{HEADS}) = \frac{0.25 \cdot \frac{1}{3}}{\left(0.50 \cdot \frac{1}{2}\right) + \left(0.25 \cdot \frac{1}{3}\right) + \left(0.75 \cdot \frac{1}{6}\right)} = \frac{\frac{2}{24}}{\frac{6}{24} + \frac{2}{24} + \frac{3}{24}} = \frac{2}{11} \approx 0.18$$
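If you'd rather grind through the numbers by machine, here's a minimal Python sketch of the same computation. The `priors` and `likelihoods` values are just the example numbers assumed above:

```python
# Posterior over the three coin types after observing one heads, using
# the probability form of Bayes' theorem:
#   P(H_i | e) = P(e | H_i) * P(H_i) / sum_k P(e | H_k) * P(H_k)

priors = [1/2, 1/3, 1/6]          # P(H_1), P(H_2), P(H_3)
likelihoods = [0.50, 0.25, 0.75]  # P(heads | H_k) for each coin type

# Joint probability of each hypothesis together with the evidence.
joints = [l * p for l, p in zip(likelihoods, priors)]

# Normalize by the total probability of seeing heads at all.
total = sum(joints)
posteriors = [j / total for j in joints]

print(round(posteriors[1], 4))  # P(H_2 | heads) = 2/11 ≈ 0.1818
```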
This calculation was big and messy, which is fine: the probability form of Bayes' theorem is well suited to directly grinding through the numbers, but not so good for doing things in your head.
We can think of the advice of Bayes' theorem as saying:
"Think of how much each hypothesis in contributed to our expectation of seeing the evidence , including both the likelihood of seeing if is true, and the prior probability of . The posterior of after seeing is the amount contributed to our expectation of seeing within the total expectation of seeing contributed by every hypothesis in "
Or to say it at somewhat greater length:
Imagine each hypothesis as an expert who has to distribute the probability of their predictions among all possible pieces of evidence. We can imagine this more concretely by visualizing "probability" as a lump of clay.
The total amount of clay is one kilogram (probability $1$). Each expert has been allocated a fraction of that kilogram. For example, if $\mathbb{P}(H_4) = 0.2,$ then expert 4 has been allocated 200 grams of clay.
We're playing a game with the experts to determine which one is the best predictor.
Each time we're about to make an observation $E,$ each expert has to divide up all their clay among the possible outcomes $e_1, e_2, \ldots$
After we observe that $E = e_j,$ we take away all the clay that wasn't put onto $e_j.$ And then our new belief in all the experts is the relative amount of clay that each expert has left.
So to know how much we now believe in expert $H_4$ after observing $e_2,$ say, we need to know two things: first, the amount of clay that $H_4$ put onto $e_2,$ and second, the total amount of clay that all experts (including $H_4$) put onto $e_2.$
In turn, to know that, we need to know how much clay $H_4$ started with, and what fraction of its clay $H_4$ put onto $e_2.$ And similarly, to compute the total clay on $e_2,$ we need to know how much clay each expert $H_k$ started with, and what fraction of their clay $H_k$ put onto $e_2.$
So Bayes' theorem here would say:

$$\mathbb{P}(H_4 \mid e_2) = \frac{\mathbb{P}(e_2 \mid H_4) \cdot \mathbb{P}(H_4)}{\sum_k \mathbb{P}(e_2 \mid H_k) \cdot \mathbb{P}(H_k)}$$
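To make the game mechanics concrete, here's a minimal Python sketch of a single round. The starting clay amounts and the bets are illustrative; they happen to mirror the coin example from earlier:

```python
# One round of the clay game. Clay amounts are in grams and start out
# proportional to each expert's prior probability.
clay = {"H1": 500.0, "H2": 1000.0 / 3, "H3": 1000.0 / 6}

# Before the observation, each expert divides ALL their clay among the
# possible outcomes; each expert's fractions must sum to 1.
bets = {
    "H1": {"heads": 0.50, "tails": 0.50},
    "H2": {"heads": 0.25, "tails": 0.75},
    "H3": {"heads": 0.75, "tails": 0.25},
}

observed = "heads"

# Take away all the clay that wasn't put onto the observed outcome.
clay = {expert: amount * bets[expert][observed] for expert, amount in clay.items()}

# Our new belief in each expert is their relative share of the remaining clay.
total = sum(clay.values())
for expert, amount in clay.items():
    print(expert, round(amount / total, 4))  # H2 ends at 2/11 ≈ 0.1818
```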
What are the incentives of this game of clay?
On each round, the experts who gain the most are the experts who put the most clay on the observed $e_j,$ so if you know for certain that $e_1$ is about to be observed, your incentive is to put all your clay on $e_1.$
But putting literally all your clay on $e_1$ is risky; if $e_2$ is observed instead, you lose all your clay and are out of the game. Once an expert's amount of clay goes all the way to zero, there's no way for them to recover over any number of future rounds. That hypothesis is done, dead, and removed from the game. ("Falsification," some people call that.) If you're not certain that $e_2$ is literally impossible, you'd be wiser to put at least a little clay on $e_2$ instead. That is to say: if your mind puts some probability on $e_2,$ you'd better put some clay there too!
(As it happens, if at the end of the game we score each expert by the logarithm of the amount of clay they have left, then each expert is incentivized to place clay exactly proportionally to their honest probability on each successive round.)
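To see that incentive numerically, here is a small Python sketch with an illustrative true probability $p$: among all fractions $q$ an expert might bet on heads, the expected log score is highest at $q = p$.

```python
import math

# If heads truly occurs with probability p, then an expert who bets a
# fraction q of their clay on heads (and 1 - q on tails) has expected
# log score:  p * log(q) + (1 - p) * log(1 - q).
def expected_log_score(p, q):
    return p * math.log(q) + (1 - p) * math.log(1 - q)

p = 0.7  # illustrative true probability of heads
for q in (0.5, 0.6, 0.7, 0.8, 0.9):
    print(f"q = {q}: {expected_log_score(p, q):.4f}")
# The expectation peaks at q = p = 0.7: honest betting maximizes log score.
```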
It's an important part of the game that we make the experts put down their clay in advance. If we let the experts put down their clay afterwards, they might be tempted to cheat by putting down all their clay on whichever $e_j$ had actually been observed. But since we make the experts put down their clay in advance, they have to divide up their clay among the possible outcomes: to give more clay to $e_1,$ that clay has to be taken away from some other outcome, like $e_2.$ To put a very high probability on $e_1$ and gain a lot of relative credibility if $e_1$ is observed, an expert has to stick their neck out and risk losing a lot of credibility if some other outcome like $e_2$ happens instead. If we force the experts to make advance predictions, that is!
We can also derive from this game that the question "does evidence $e$ support hypothesis $H$?" depends on how well $H$ predicted $e$ compared to the competition. It's not enough for $H$ to predict $e$ well if every other hypothesis also predicted $e$ well; your amazing new theory of physics gets no points for predicting that the sky is blue. $H$ only goes up in probability when it predicts $e$ better than the alternatives. And that means we have to ask what the alternative hypotheses predicted, even if we think those hypotheses are false.
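One way to see this numerically (the priors and likelihoods here are purely illustrative): if every hypothesis assigns the evidence the same likelihood, the posterior comes out identical to the prior.

```python
priors = [0.5, 0.3, 0.2]
likelihoods = [0.9, 0.9, 0.9]  # every hypothesis predicts e equally well

joints = [l * p for l, p in zip(likelihoods, priors)]
total = sum(joints)
print([round(j / total, 3) for j in joints])
# [0.5, 0.3, 0.2] -- the common factor of 0.9 cancels, so nothing updates.
```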
If you get in a car accident, and don't want to relinquish the hypothesis that you're a great driver, then you can find all sorts of reasons ("the road was slippery! my car freaked out!") why $\mathbb{P}(e_{crash} \mid H_{gooddriver})$ is not too low. But $\mathbb{P}(e_{crash} \mid H_{baddriver})$ is also part of the update equation, and the "bad driver" hypothesis better predicts the evidence. Thus, your first impulse, when deciding how to update your beliefs in the face of a car accident, should not be "But my preferred hypothesis allows for this evidence!" It should instead be "Points to the 'bad driver' hypothesis for predicting this evidence better than the alternatives!" (And remember, you're allowed to increase $\mathbb{P}(H_{baddriver})$ a little bit, while still thinking that it's less than $50\%$ probable.)
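Here's a sketch of that update with illustrative numbers: starting from a 10% prior on "bad driver," evidence that the bad-driver hypothesis predicts twice as well moves the posterior up, but not past 50%.

```python
# Illustrative numbers only.
p_bad = 0.10               # prior P(bad driver)
p_crash_given_bad = 0.04   # likelihood of a crash if you're a bad driver
p_crash_given_good = 0.02  # ...if you're a good driver (half as likely)

numerator = p_crash_given_bad * p_bad
denominator = numerator + p_crash_given_good * (1 - p_bad)
print(round(numerator / denominator, 3))  # ≈ 0.182: up from 0.10, still < 50%
```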
The proof of Bayes' theorem follows from the definition of conditional probability:

$$\mathbb{P}(H_i \mid e) = \frac{\mathbb{P}(e \wedge H_i)}{\mathbb{P}(e)}$$
And from the law of marginal probability:

$$\mathbb{P}(e) = \sum_k \mathbb{P}(e \wedge H_k)$$
Therefore:

$$\mathbb{P}(H_i \mid e) = \frac{\mathbb{P}(e \wedge H_i)}{\mathbb{P}(e)} = \frac{\mathbb{P}(e \mid H_i) \cdot \mathbb{P}(H_i)}{\sum_k \mathbb{P}(e \mid H_k) \cdot \mathbb{P}(H_k)}$$

where each joint probability $\mathbb{P}(e \wedge H_k)$ has been rewritten as $\mathbb{P}(e \mid H_k) \cdot \mathbb{P}(H_k)$ by the definition of conditional probability.
QED.