There is a lot more to say about the perspective that isn't relaxed to continuous random variables. In particular, the problem of finding the maximum entropy joint distribution that agrees with particular pairwise distributions is closely related to Markov Random Fields and the Ising model. (The relaxation to continuous random variables is a Gaussian Markov Random Field.) It is easily seen that this maximum entropy joint distribution must have the form $\Pr(x_1,\dots,x_n)=\frac{1}{Z}\exp\!\left(\sum_i \theta_i x_i+\sum_{i<j}\theta_{ij}x_i x_j\right)$ (with $x_i\in\{0,1\}$ the truth values), where $Z$ is the normalizing constant, or partition function. This is an appealing distribution to use, and easy to do conditioning on and to add new variables to. Computing relative entropy reduces to finding bivariate marginals and to computing $Z$, and computing marginals reduces to computing $Z$, which is intractable in general[^istrail], though easy if the Markov graph (ie the graph with edges wherever $\theta_{ij}\neq 0$) is a forest.
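To make the cost concrete, here is a minimal brute-force sketch (my own illustration, not from any of the cited papers; the names `theta` and `Theta` are placeholders for the $\theta_i$ and $\theta_{ij}$ above). It evaluates $Z$ and a bivariate marginal by summing over all $2^n$ truth assignments, which is exactly the step that becomes intractable as $n$ grows.

```python
import itertools
import numpy as np

def log_potential(x, theta, Theta):
    """Unnormalized log-probability of a 0/1 assignment x under the
    pairwise (Ising-form) maximum entropy distribution."""
    x = np.asarray(x, dtype=float)
    return float(theta @ x + x @ np.triu(Theta, 1) @ x)

def partition_function(theta, Theta):
    """Brute-force Z: a sum over all 2^n truth assignments (exponential cost)."""
    n = len(theta)
    return sum(np.exp(log_potential(x, theta, Theta))
               for x in itertools.product([0, 1], repeat=n))

def pairwise_marginal(i, j, theta, Theta):
    """Pr(x_i = 1 and x_j = 1), again by brute-force enumeration."""
    n = len(theta)
    Z = partition_function(theta, Theta)
    numerator = sum(np.exp(log_potential(x, theta, Theta))
                    for x in itertools.product([0, 1], repeat=n)
                    if x[i] == 1 and x[j] == 1)
    return numerator / Z

# Example: 3 binary variables with small fields and couplings.
theta = np.array([0.2, -0.1, 0.0])
Theta = np.array([[0.0, 0.5, 0.0],
                  [0.5, 0.0, 0.3],
                  [0.0, 0.3, 0.0]])
print(partition_function(theta, Theta), pairwise_marginal(0, 1, theta, Theta))
```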
There have been many approaches to this problem (Wainwright and Jordan[^wainwright] is a good survey), but the main ways to extend applicability beyond forests have been:
- decompose components of the graph as "junction trees", ie trees whose nodes are overlapping clusters of nodes from the original graph; this permits exact computation with cost exponential in the cluster sizes, ie in the treewidth[^pearl]
- make use of clever combinatorial work coming out of statistical mechanics to do exact computation on "outerplanar" graphs, or on general graphs with cost exponential in the (outer-)graph genus[^schraudolph]
- find nodes such that conditioning on those nodes greatly simplifies the graph (eg makes it singly connected), and sum over their possible values explicitly (this has cost exponential in the number of nodes being conditioned on); a minimal sketch of this approach follows the list
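As that sketch, here is a toy construction of my own (all parameter names are invented for illustration): a hub variable coupled to every node of a chain creates many loops, but clamping the hub to each of its two values leaves a plain chain whose partition function is a cheap dynamic program, and summing the two clamped terms recovers $Z$ exactly.

```python
import itertools
import numpy as np

def chain_log_Z(fields, J_chain):
    """log partition function of a binary chain with node fields and
    couplings J_chain[i] between nodes i and i+1, by a forward DP."""
    alpha = np.array([1.0, np.exp(fields[0])])   # alpha[x] sums over prefixes ending in value x
    for i in range(1, len(fields)):
        new = np.zeros(2)
        for x in (0, 1):
            new[x] = sum(alpha[prev] * np.exp(fields[i] * x + J_chain[i - 1] * prev * x)
                         for prev in (0, 1))
        alpha = new
    return np.log(alpha.sum())

def log_Z_by_conditioning(theta_hub, theta_chain, J_hub, J_chain):
    """Sum explicitly over the two values of the hub; each term is a tractable chain."""
    total = 0.0
    for s in (0, 1):
        fields = theta_chain + s * J_hub   # clamping the hub to s just shifts the chain fields
        total += np.exp(theta_hub * s + chain_log_Z(fields, J_chain))
    return np.log(total)

def log_Z_brute_force(theta_hub, theta_chain, J_hub, J_chain):
    """Check against direct enumeration (exponential in the total number of nodes)."""
    m = len(theta_chain)
    Z = 0.0
    for x in itertools.product([0, 1], repeat=m + 1):
        s, xs = x[0], np.array(x[1:])
        score = theta_hub * s + theta_chain @ xs + s * (J_hub @ xs)
        score += sum(J_chain[i] * xs[i] * xs[i + 1] for i in range(m - 1))
        Z += np.exp(score)
    return np.log(Z)

m = 6
rng = np.random.default_rng(0)
theta_hub, theta_chain = 0.3, rng.normal(size=m)
J_hub, J_chain = rng.normal(size=m), rng.normal(size=m - 1)
print(log_Z_by_conditioning(theta_hub, theta_chain, J_hub, J_chain))
print(log_Z_brute_force(theta_hub, theta_chain, J_hub, J_chain))   # should match
```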
A newer class of models, called sum-product networks[^poon], generalizes these and other models by writing the total joint probability as a positive polynomial in the variables and requiring only that this polynomial can be simplified to an expression that takes a tractable number of additions and multiplications to evaluate. This allows easy computation of marginals, conditionals, and KL divergence, though it will likely be necessary to do some approximate simplification every so often (otherwise the complexity may accumulate, even with a fixed maximum number of sentences being considered at a time).
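As a toy illustration of the idea (mine, not an example from Poon and Domingos; the structure and weights are arbitrary), here is a two-variable sum-product network evaluated as a polynomial in indicator variables. Joint probabilities, marginals (set both indicators of a variable to 1), and hence conditionals all come from the same cheap evaluation.

```python
def leaf(p_true):
    """Univariate distribution over one Boolean variable, as a function of its
    two indicator values (ind_true, ind_false)."""
    return lambda ind_true, ind_false: p_true * ind_true + (1 - p_true) * ind_false

# Two product components: a mixture of two fully factorized distributions over (A, B).
A1, B1 = leaf(0.9), leaf(0.7)
A2, B2 = leaf(0.2), leaf(0.4)

def spn(a_t, a_f, b_t, b_f):
    """Root sum node with weights 0.6 and 0.4 over two product nodes."""
    return 0.6 * A1(a_t, a_f) * B1(b_t, b_f) + 0.4 * A2(a_t, a_f) * B2(b_t, b_f)

joint_A_and_B = spn(1, 0, 1, 0)   # Pr(A and B)
marginal_A    = spn(1, 0, 1, 1)   # Pr(A): marginalize B by setting both of its indicators to 1
normalization = spn(1, 1, 1, 1)   # 1.0 for a normalized SPN
print(joint_A_and_B, marginal_A, normalization, joint_A_and_B / marginal_A)  # last value is Pr(B | A)
```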
However, if we want to stay close to the context of the Non-Omniscience paper, we can do approximate calculations of the partition function on the complete graph. In particular, the Bethe partition function[^weller] has been widely used in practice, and while it's not log-convex the way the true partition function $Z$ is, it's often a better approximation to $Z$ than well-known convex approximations such as TRW.
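For what it's worth, here is a rough sketch of how the Bethe approximation is typically computed (my own summary of the standard loopy belief propagation recipe, not anything specific to Weller's paper): run sum-product message passing, form node and edge beliefs, and plug them into the Bethe free energy, whose negative approximates $\log Z$.

```python
import itertools
import numpy as np

# Pairwise binary MRF: p(x) proportional to prod_i psi_i(x_i) * prod_{(i,j)} psi_ij(x_i, x_j).
# unary[i] is a length-2 array; pair[(i, j)] is a 2x2 array with i < j.

def loopy_bp(n, unary, pair, iters=200):
    """Sum-product loopy BP; returns node beliefs, edge beliefs, and the neighbor lists."""
    edges = list(pair.keys())
    nbrs = {i: [] for i in range(n)}
    for (i, j) in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    m = {(i, j): np.ones(2) / 2 for (i, j) in edges}       # directed messages, uniform init
    m.update({(j, i): np.ones(2) / 2 for (i, j) in edges})
    for _ in range(iters):
        new = {}
        for (i, j) in m:
            psi = pair[(i, j)] if (i, j) in pair else pair[(j, i)].T   # psi indexed [x_i, x_j]
            incoming = np.ones(2)
            for k in nbrs[i]:
                if k != j:
                    incoming *= m[(k, i)]
            msg = psi.T @ (unary[i] * incoming)
            new[(i, j)] = msg / msg.sum()
        m = new
    b = {}
    for i in range(n):
        bi = unary[i].copy()
        for k in nbrs[i]:
            bi *= m[(k, i)]
        b[i] = bi / bi.sum()
    B = {}
    for (i, j) in edges:
        inc_i = unary[i] * np.prod([m[(k, i)] for k in nbrs[i] if k != j], axis=0)
        inc_j = unary[j] * np.prod([m[(k, j)] for k in nbrs[j] if k != i], axis=0)
        bij = pair[(i, j)] * np.outer(inc_i, inc_j)
        B[(i, j)] = bij / bij.sum()
    return b, B, nbrs

def bethe_log_Z(n, unary, pair):
    """Bethe approximation to log Z from BP beliefs (Yedidia-Freeman-Weiss form)."""
    b, B, nbrs = loopy_bp(n, unary, pair)
    F = 0.0
    for (i, j), bij in B.items():
        psi = pair[(i, j)]
        F += np.sum(bij * (np.log(bij) - np.log(psi)
                           - np.log(unary[i])[:, None] - np.log(unary[j])[None, :]))
    for i in range(n):
        F -= (len(nbrs[i]) - 1) * np.sum(b[i] * (np.log(b[i]) - np.log(unary[i])))
    return -F

def exact_log_Z(n, unary, pair):
    Z = 0.0
    for x in itertools.product([0, 1], repeat=n):
        p = np.prod([unary[i][x[i]] for i in range(n)])
        for (i, j), psi in pair.items():
            p *= psi[x[i], x[j]]
        Z += p
    return np.log(Z)

# A 4-cycle: the smallest graph where the Bethe value is only an approximation.
n = 4
unary = {i: np.array([1.0, np.exp(0.3 * (-1) ** i)]) for i in range(n)}
pair = {(0, 1): np.exp(0.5 * np.eye(2)), (1, 2): np.exp(0.5 * np.eye(2)),
        (2, 3): np.exp(0.5 * np.eye(2)), (0, 3): np.exp(0.5 * np.eye(2))}
print("Bethe:", bethe_log_Z(n, unary, pair), "exact:", exact_log_Z(n, unary, pair))
```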
[^istrail]: Istrail, Sorin. "Statistical mechanics, three-dimensionality and NP-completeness: I. Universality of intractability for the partition function of the Ising model across non-planar surfaces." In Proceedings of the thirty-second annual ACM symposium on Theory of computing, pp. 87-96. ACM, 2000.
[^weller]: Weller, Adrian. "Bethe and Related Pairwise Entropy Approximations."
[^pearl]: Pearl, Judea. "Probabilistic reasoning in intelligent systems: Networks of plausible inference." Morgan Kaufmann, 1988.
[^schraudolph]: Schraudolph, Nicol N., and Dmitry Kamenetsky. "Efficient Exact Inference in Planar Ising Models." arXiv preprint arXiv:0810.4401 (2008).
[^wainwright]: Wainwright, Martin J., and Michael I. Jordan. "Graphical models, exponential families, and variational inference." Foundations and Trends® in Machine Learning 1, no. 1-2 (2008): 1-305.
[^poon]: Poon, Hoifung, and Pedro Domingos. "Sum-product networks: A new deep architecture." In Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on, pp. 689-690. IEEE, 2011.
An easy way to get rid of the probabilities-outside-[0,1] problem in the continuous relaxation is to constrain the "conditional"/updated distribution to have all its coordinate means in $[0,1]$ (which is a convex constraint; it's equivalent to the linear inequalities $0\le\mathbb{E}(1_{\varphi_i})\le 1$ for each $i$), and then minimize KL-divergence accordingly.
The two obvious flaws are that the result of updating becomes ordering-dependent (though this may not be a problem in practice), and that the updated distribution will sometimes have pairwise moments $\mathbb{E}(1_{\varphi_i}1_{\varphi_j})$ that couldn't come from any genuine distribution over truth-values, and it's not clear how to interpret that.
Actually, on further thought, I think the best thing to use here is a log-bilinear distribution over the space of truth-assignments. For these, it is easy to efficiently compute exact normalizing constants, conditional distributions, marginal distributions, and KL divergences; there is no impedance mismatch. KL divergence minimization here is still a convex minimization (in the natural parametrization of the exponential family).
The only shortcoming is that 0 is not a probability, so it won't let you eg say that $\Pr(\varphi_i)=0$ for some sentence; but this can be remedied using a real or hyperreal approximation.
These results from my conversations with Charlie Steiner at the May 29-31 MIRI Workshop on Logical Uncertainty will primarily be of interest to people who've read section 2.4 of Paul Christiano's Non-Omniscience paper.
If we write a reasoner that keeps track of probabilities of a collection of sentences $\varphi_1,\dots,\varphi_n$ (that grows and shrinks as the reasoner explores), we need some way of tracking known relationships between the sentences. One way of doing this is to store the pairwise probability distributions, ie not only $\Pr(\varphi_i)$ for all $i$ but also $\Pr(\varphi_i\wedge\varphi_j)$ for all $i,j$.
If we do this, a natural question to ask is: how can we update this data structure if we learn that eg $\varphi_1$ is true?
We'll refer to the updated probabilities as $\Pr(\cdot\mid\varphi_1)$.
It's fairly reasonable for us to want to set $\Pr(\varphi_i\mid\varphi_1):=\Pr(\varphi_i\wedge\varphi_1)/\Pr(\varphi_1)$; however, it's less clear what values to assign to $\Pr(\varphi_i\wedge\varphi_j\mid\varphi_1)$, because we haven't stored $\Pr(\varphi_i\wedge\varphi_j\wedge\varphi_1)$.
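Here is a minimal sketch of the data structure and of the uncontroversial half of the update; the class and method names are hypothetical, just for this note. The open question in the text is what the new pairwise table $\Pr(\varphi_i\wedge\varphi_j\mid\varphi_1)$ should be.

```python
# Hypothetical minimal store for the data structure described above: single
# probabilities Pr(phi_i) plus pairwise probabilities Pr(phi_i and phi_j).
class PairwiseStore:
    def __init__(self, singles, pairs):
        self.p = dict(singles)    # {i: Pr(phi_i)}
        self.pp = dict(pairs)     # {(i, j): Pr(phi_i and phi_j)} with i < j

    def pair(self, i, j):
        if i == j:
            return self.p[i]
        return self.pp[(min(i, j), max(i, j))]

    def update_singles_on(self, k):
        """The uncontroversial half of conditioning on phi_k:
        Pr(phi_i | phi_k) = Pr(phi_i and phi_k) / Pr(phi_k).
        What the new pairwise table should be is exactly the question in the text."""
        pk = self.p[k]
        return {i: self.pair(i, k) / pk for i in self.p}

# Example with three sentences.
store = PairwiseStore({1: 0.5, 2: 0.4, 3: 0.6},
                      {(1, 2): 0.3, (1, 3): 0.35, (2, 3): 0.2})
print(store.update_singles_on(1))   # {1: 1.0, 2: 0.6, 3: 0.7}
```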
One option would be to find the maximum entropy distribution over truth assignments to $\varphi_1,\dots,\varphi_n$ under the constraint that the stored pairwise distributions are correct. This seems intractable for large $n$; however, in the spirit of locality, we could restrict our attention to the joint truth value distribution of $\varphi_1,\varphi_i,\varphi_j$. Maximizing its entropy is simple (it boils down to either convex optimization or solving a cubic), and yields a plausible candidate for $\Pr(\varphi_i\wedge\varphi_j\wedge\varphi_1)$ that we can derive $\Pr(\varphi_i\wedge\varphi_j\mid\varphi_1)$ from. I'm not sure what global properties this has, for example whether it yields a positive semidefinite matrix $(\Pr(\varphi_i\wedge\varphi_j))_{i,j}$.
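A sketch of this local step, treating it as a small convex program over the 8 truth assignments to $\varphi_1,\varphi_i,\varphi_j$ (the numeric inputs are arbitrary but mutually consistent; one could equally well solve the cubic mentioned above):

```python
import numpy as np
from scipy.optimize import minimize

# Maximize the entropy of the joint distribution over the 8 truth assignments
# to (phi_1, phi_i, phi_j), subject to matching the stored single and pairwise
# probabilities; then read off the candidate Pr(phi_1 and phi_i and phi_j).
assignments = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

p1, pi, pj = 0.5, 0.4, 0.6
p1i, p1j, pij = 0.3, 0.35, 0.2

def neg_entropy(q):
    q = np.clip(q, 1e-12, 1.0)
    return float(np.sum(q * np.log(q)))

constraints = [
    {"type": "eq", "fun": lambda q: q.sum() - 1.0},
    {"type": "eq", "fun": lambda q: sum(q[k] for k, a in enumerate(assignments) if a[0]) - p1},
    {"type": "eq", "fun": lambda q: sum(q[k] for k, a in enumerate(assignments) if a[1]) - pi},
    {"type": "eq", "fun": lambda q: sum(q[k] for k, a in enumerate(assignments) if a[2]) - pj},
    {"type": "eq", "fun": lambda q: sum(q[k] for k, a in enumerate(assignments) if a[0] and a[1]) - p1i},
    {"type": "eq", "fun": lambda q: sum(q[k] for k, a in enumerate(assignments) if a[0] and a[2]) - p1j},
    {"type": "eq", "fun": lambda q: sum(q[k] for k, a in enumerate(assignments) if a[1] and a[2]) - pij},
]

res = minimize(neg_entropy, np.full(8, 1 / 8), bounds=[(0, 1)] * 8,
               constraints=constraints, method="SLSQP")
q = res.x
p_all_three = sum(q[k] for k, a in enumerate(assignments) if all(a))
print("candidate Pr(phi_1 and phi_i and phi_j) =", p_all_three)
print("candidate Pr(phi_i and phi_j | phi_1)   =", p_all_three / p1)
```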
A different option, as noted in section 2.4.2, is to observe that the matrix $(\Pr(\varphi_i\wedge\varphi_j))_{i,j}$ must be positive semidefinite under any joint distribution for the truth values. This means we can consider a zero-mean multivariate normal distribution with this matrix as its covariance; then there's a closed-form expression for the Kullback-Leibler divergence of two such distributions, and this can be used to define a sort of conditional distribution, as is done in section 2.4.3.
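For reference, the closed-form Gaussian KL divergence used here is easy to compute directly; a small sketch of the standard formula (nothing specific to the paper):

```python
import numpy as np

def gaussian_kl(mu0, Sigma0, mu1, Sigma1):
    """KL( N(mu0, Sigma0) || N(mu1, Sigma1) ) for d-dimensional Gaussians:
    0.5 * [ tr(Sigma1^{-1} Sigma0) + (mu1-mu0)^T Sigma1^{-1} (mu1-mu0)
            - d + ln(det Sigma1 / det Sigma0) ]."""
    d = len(mu0)
    Sigma1_inv = np.linalg.inv(Sigma1)
    diff = np.asarray(mu1) - np.asarray(mu0)
    _, logdet0 = np.linalg.slogdet(Sigma0)
    _, logdet1 = np.linalg.slogdet(Sigma1)
    return 0.5 * (np.trace(Sigma1_inv @ Sigma0) + diff @ Sigma1_inv @ diff
                  - d + logdet1 - logdet0)

# For the zero-mean construction in the text, both means are 0 and the
# covariances are the stored matrices (Pr(phi_i and phi_j))_{i,j}.
A = np.array([[0.5, 0.3], [0.3, 0.4]])
B = np.array([[0.5, 0.2], [0.2, 0.4]])
print(gaussian_kl(np.zeros(2), A, np.zeros(2), B))
```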
However, as the paper remarks, this isn't a very familiar way of defining these updated probabilities. For example, it lacks the desirable property that $\Pr(\varphi_i\mid\varphi_1)=\Pr(\varphi_i\wedge\varphi_1)/\Pr(\varphi_1)$.
Fortunately, there is a natural construction that combines these ideas: namely, if we consider the maximum-entropy distribution for the truth assignment vector $(1_{\varphi_1},\dots,1_{\varphi_n})$ with the given second moments $\mathbb{E}(1_{\varphi_i}1_{\varphi_j})$, but relax the requirement that their values be in $\{0,1\}$, then we find a multivariate normal distribution $\mathcal{N}\big((\Pr(\varphi_i))_i,\,(\Pr(\varphi_i\wedge\varphi_j)-\Pr(\varphi_i)\Pr(\varphi_j))_{i,j}\big)$. If we wish to update this distribution after observing $\varphi_1$ by finding the candidate distribution for $(1_{\varphi_1},\dots,1_{\varphi_n})\mid\varphi_1$ of highest relative entropy with $\Pr(1_{\varphi_1}=1\mid\varphi_1)=1$, as proposed in the paper, then we will get the multivariate normal conditional distribution $$\mathcal{N}\left(\left(\frac{\Pr(\varphi_1\wedge\varphi_i)}{\Pr(\varphi_1)}\right)_i,\;\left(\Pr(\varphi_i\wedge\varphi_j)-\Pr(\varphi_i)\Pr(\varphi_j)-\frac{\bigl(\Pr(\varphi_1\wedge\varphi_i)-\Pr(\varphi_1)\Pr(\varphi_i)\bigr)\bigl(\Pr(\varphi_1\wedge\varphi_j)-\Pr(\varphi_1)\Pr(\varphi_j)\bigr)}{\Pr(\varphi_1)-\Pr(\varphi_1)^2}\right)_{i,j}\right).$$
Note that this generally has $\operatorname{Var}(1_{\varphi_i}\mid\varphi_1)\neq\mathbb{E}(1_{\varphi_i}\mid\varphi_1)\bigl(1-\mathbb{E}(1_{\varphi_i}\mid\varphi_1)\bigr)$, which is a mismatch; this is related to the fact that a conditional variance in a multivariate normal is never higher than the marginal variance, which is an undesirable feature for a distribution over truth-values.
This is also related to other undesirable features; for example, if we condition on more than one sentence, we can arrive at conditional probabilities outside of $[0,1]$. (For example, if 3 sentences have $\Pr(\varphi_1)=\Pr(\varphi_2)=\Pr(\varphi_3)=\frac{1}{3}$ and $\Pr(\varphi_1\wedge\varphi_2)=\Pr(\varphi_1\wedge\varphi_3)=\Pr(\varphi_2\wedge\varphi_3)=\varepsilon$, then this yields $\Pr(\varphi_3\mid\varphi_1,\varphi_2)=\frac{-1+15\varepsilon}{1+9\varepsilon}\approx -1$; this makes sense because this prior is very confident that $1_{\varphi_1}+1_{\varphi_2}+1_{\varphi_3}\approx 1$, with standard deviation $\sqrt{6\varepsilon}$.)
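Here is a small sketch reproducing that worked example by ordinary conditioning of the relaxed multivariate normal (the conditioning function is the standard Gaussian formula; the numbers are the ones from the parenthetical above):

```python
import numpy as np

def condition_gaussian(mu, Sigma, obs_idx, obs_val):
    """Condition N(mu, Sigma) on x[obs_idx] = obs_val (standard Gaussian
    conditioning; here it plays the role of the relaxed update on sentences)."""
    obs_idx = list(obs_idx)
    rest = [i for i in range(len(mu)) if i not in obs_idx]
    S_oo = Sigma[np.ix_(obs_idx, obs_idx)]
    S_ro = Sigma[np.ix_(rest, obs_idx)]
    S_rr = Sigma[np.ix_(rest, rest)]
    K = S_ro @ np.linalg.inv(S_oo)
    mu_cond = mu[rest] + K @ (np.asarray(obs_val) - mu[obs_idx])
    Sigma_cond = S_rr - K @ S_ro.T
    return mu_cond, Sigma_cond

# The worked example from the text: Pr(phi_i) = 1/3, Pr(phi_i and phi_j) = eps.
eps = 1e-3
p = np.full(3, 1 / 3)
pairwise = np.full((3, 3), eps)
np.fill_diagonal(pairwise, p)                 # Pr(phi_i and phi_i) = Pr(phi_i)
Sigma = pairwise - np.outer(p, p)             # covariance of (1_phi1, 1_phi2, 1_phi3)

mu_cond, _ = condition_gaussian(p, Sigma, obs_idx=[0, 1], obs_val=[1.0, 1.0])
print(mu_cond[0], (-1 + 15 * eps) / (1 + 9 * eps))   # both approx -0.976
```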
Intermediate relaxations that lack these particular shortcomings are possible, such as the ones that restrict the relaxed $1_{\varphi_1},\dots,1_{\varphi_n}$ to the sphere $\sum_i (2x_i-1)^2 = n$ or the ball $\sum_i (2x_i-1)^2 \le n$. Then the maximum entropy distribution, similarly to a multivariate normal distribution, has quadratic log-density, though the Hessian of the quadratic may have nonnegative eigenvalues (unlike in the normal case). In the spherical case, this is known as a Fisher-Bingham distribution.
Both of these relaxations seem difficult to work with, eg to compute normalizing constants for; furthermore, I don't think the analogous updating process will share the desirable property that $\Pr(\varphi_i\mid\varphi_1)=\Pr(\varphi_i\wedge\varphi_1)/\Pr(\varphi_1)$. However, the fact that these distributions allow updating by relaxed conditioning, keep (fully conditioned) truth-values between 0 and 1, and have reasonable (at least, possibly-increasing) behavior for conditional variances makes them seem potentially appealing.