Intelligent Agent Foundations Forum
A tractable, interpretable formulation of approximate conditioning for pairwise-specified probability distributions over truth values
discussion post by Janos Kramar 906 days ago | Victoria Krakovna and Patrick LaVictoire like this | 2 comments

These results from my conversations with Charlie Steiner at the May 29-31 MIRI Workshop on Logical Uncertainty will primarily be of interest to people who’ve read section 2.4 of Paul Christiano’s Non-Omniscience paper.

If we write a reasoner that keeps track of probabilities of a collection of sentences \(\varphi_1,\dots,\varphi_n\) (that grows and shrinks as the reasoner explores), we need some way of tracking known relationships between the sentences. One way of doing this is to store the pairwise probability distributions, ie not only \(\Pr(\varphi_i)\) for all \(i\) but also \(\Pr(\varphi_i\wedge\varphi_j)\) for all \(i,j\).

If we do this, a natural question to ask is: how can we update this data structure if we learn that eg \(\varphi_1\) is true?

We’ll refer to the updated probabilities as \(\Pr(\cdot|\varphi_1)\).

It’s fairly reasonable for us to want to set \(\Pr(\varphi_i|\varphi_1):=\Pr(\varphi_i\wedge\varphi_1)/\Pr(\varphi_1)\); however, it’s less clear what values to assign to \(\Pr(\varphi_i\wedge\varphi_j|\varphi_1)\), because we haven’t stored \(\Pr(\varphi_i\wedge\varphi_j\wedge\varphi_1)\).

One option would be to find the maximum entropy distribution over truth assignments to \(\varphi_1,\dots,\varphi_n\) under the constraint that the stored pairwise distributions are correct. This seems intractable for large \(n\); however, in the spirit of locality, we could restrict our attention to the joint truth value distribution of \(\varphi_1,\varphi_i,\varphi_j\). Maximizing its entropy is simple (it boils down to either convex optimization or solving a cubic), and yields a plausible candidate for \(\Pr(\varphi_i\wedge\varphi_j\wedge\varphi_1)\) that we can derive \(\Pr(\varphi_i\wedge\varphi_j|\varphi_1)\) from. I’m not sure what global properties this has, for example whether it yields a positive semidefinite matrix \((\Pr(\varphi_i\wedge\varphi_j))_{i,j}\).
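To make this local step concrete, here is a minimal sketch of the triple-wise computation, using a generic convex solver rather than the cubic shortcut mentioned above; the function names and the use of SciPy are illustrative assumptions, not part of the original proposal.

```python
import itertools
import numpy as np
from scipy.optimize import minimize

def conditional_pair_prob(p1, pi, pj, p1i, p1j, pij):
    """Return Pr(phi_i & phi_j | phi_1) via the local max-entropy joint of (phi_1, phi_i, phi_j)."""
    atoms = list(itertools.product([0, 1], repeat=3))        # truth assignments (x1, xi, xj)

    def expect(p, f):
        return sum(pk * f(x) for pk, x in zip(p, atoms))

    def neg_entropy(p):
        q = np.clip(p, 1e-12, 1.0)
        return float(np.sum(q * np.log(q)))                  # minimizing this maximizes entropy

    constraints = [                                           # match the stored single and pairwise probabilities
        {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
        {"type": "eq", "fun": lambda p: expect(p, lambda x: x[0]) - p1},
        {"type": "eq", "fun": lambda p: expect(p, lambda x: x[1]) - pi},
        {"type": "eq", "fun": lambda p: expect(p, lambda x: x[2]) - pj},
        {"type": "eq", "fun": lambda p: expect(p, lambda x: x[0] * x[1]) - p1i},
        {"type": "eq", "fun": lambda p: expect(p, lambda x: x[0] * x[2]) - p1j},
        {"type": "eq", "fun": lambda p: expect(p, lambda x: x[1] * x[2]) - pij},
    ]
    res = minimize(neg_entropy, np.full(8, 1 / 8), method="SLSQP",
                   bounds=[(0.0, 1.0)] * 8, constraints=constraints)
    p_triple = expect(res.x, lambda x: x[0] * x[1] * x[2])    # candidate Pr(phi_1 & phi_i & phi_j)
    return p_triple / p1

# Example with Pr = 1/2 for each sentence and 0.3 for each pair:
print(conditional_pair_prob(0.5, 0.5, 0.5, 0.3, 0.3, 0.3))
```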

A different option, as noted in section 2.4.2, is to observe that the matrix \((\Pr(\varphi_i\wedge\varphi_j))_{i,j}\) must be positive semidefinite under any joint distribution for the truth values. This means we can consider a zero-mean multivariate normal distribution with this matrix as its covariance; then there’s a closed-form expression for the Kullback-Leibler divergence of two such distributions, and this can be used to define a sort of conditional distribution, as is done in section 2.4.3.
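For reference, the closed-form KL divergence between two multivariate normals that this relies on is just the standard one; a small helper with illustrative names might look like this (for the zero-mean construction of section 2.4.2, set both means to zero).

```python
import numpy as np

def gaussian_kl(mu0, sigma0, mu1, sigma1):
    """KL( N(mu0, sigma0) || N(mu1, sigma1) ), the standard closed form."""
    mu0, mu1 = np.asarray(mu0, dtype=float), np.asarray(mu1, dtype=float)
    k = len(mu0)
    diff = mu1 - mu0
    sigma1_inv = np.linalg.inv(sigma1)
    _, logdet0 = np.linalg.slogdet(sigma0)   # slogdet for numerical stability
    _, logdet1 = np.linalg.slogdet(sigma1)
    return 0.5 * (np.trace(sigma1_inv @ sigma0)
                  + diff @ sigma1_inv @ diff
                  - k
                  + logdet1 - logdet0)
```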

However, as the paper remarks, this isn’t a very familiar way of defining these updated probabilities. For example, it lacks the desirable property that \(\Pr(\varphi_i|\varphi_1)=\Pr(\varphi_i\wedge\varphi_1)/\Pr(\varphi_1)\).

Fortunately, there is a natural construction that combines these ideas: namely, if we consider the maximum-entropy distribution for the truth assignment vector \((1_{\varphi_1},\dots,1_{\varphi_n})\) with the given second moments \(\operatorname{E}(1_{\varphi_i}1_{\varphi_j})\), but relax the requirement that their values be in \(\{0,1\}\), then we find a multivariate normal distribution \[\mathcal{N}\left(\left(\Pr(\varphi_i)\right)_i,\left(\Pr(\varphi_i\wedge\varphi_j)-\Pr(\varphi_i)\Pr(\varphi_j)\right)_{i,j}\right)\mbox{.}\] If we wish to update this distribution after observing \(\varphi_1\) by finding the candidate distribution \((1_{\varphi_1},\dots,1_{\varphi_n}|\varphi_1)\) of highest relative entropy with \(\Pr(1_{\varphi_1}=1|\varphi_1)=1\), as proposed in the paper, then we will get the multivariate normal conditional distribution \[\mathcal{N}\left(\left(\Pr(\varphi_1\wedge\varphi_i)/\Pr(\varphi_1)\right)_i,\left(\Pr(\varphi_i\wedge\varphi_j)-\Pr(\varphi_i)\Pr(\varphi_j)-\frac{(\Pr(\varphi_1\wedge\varphi_i)-\Pr(\varphi_1)\Pr(\varphi_i))(\Pr(\varphi_1\wedge\varphi_j)-\Pr(\varphi_1)\Pr(\varphi_j))}{\Pr(\varphi_1)-\Pr(\varphi_1)^2}\right)_{ij}\right)\mbox{.}\]
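A short sketch of this relaxed conditioning step, with illustrative names: it builds the Gaussian surrogate from the stored probabilities and conditions its first coordinate on the value 1, reproducing the mean \(\Pr(\varphi_1\wedge\varphi_i)/\Pr(\varphi_1)\) and the rank-one covariance correction displayed above.

```python
import numpy as np

def condition_on_phi1(p, p_and):
    """p[i] = Pr(phi_i); p_and[i][j] = Pr(phi_i & phi_j), with p_and[i][i] = p[i]."""
    mu = np.asarray(p, dtype=float)
    sigma = np.asarray(p_and, dtype=float) - np.outer(mu, mu)   # covariance of the indicators
    s11 = sigma[0, 0]                                           # Pr(phi_1) - Pr(phi_1)^2
    s1 = sigma[:, 0]                                            # covariances with 1_{phi_1}
    mu_cond = mu + s1 * (1.0 - mu[0]) / s11                     # Gaussian conditioning on x_1 = 1
    sigma_cond = sigma - np.outer(s1, s1) / s11
    return mu_cond[1:], sigma_cond[1:, 1:]

# Example: two sentences with Pr(phi_1) = Pr(phi_2) = 0.5 and Pr(phi_1 & phi_2) = 0.3.
mu_c, sigma_c = condition_on_phi1([0.5, 0.5], [[0.5, 0.3], [0.3, 0.5]])
print(mu_c)      # [0.6]  = Pr(phi_1 & phi_2) / Pr(phi_1)
print(sigma_c)   # [[0.24]]: conditional variance 0.25 - 0.05**2 / 0.25
```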

Note that this generally has \(\operatorname{Var}\left(1_{\varphi_i}\middle|\varphi_1\right)\not=\operatorname{E}\left(1_{\varphi_i}\middle|\varphi_1\right)\left(1-\operatorname{E}\left(1_{\varphi_i}\middle|\varphi_1\right)\right)\), a mismatch with the identity that any genuine \(\{0,1\}\)-valued indicator must satisfy; this is related to the fact that a conditional variance in a multivariate normal is never higher than the marginal variance, which is an undesirable feature for a distribution over truth-values.

This is also related to other undesirable features; for example, if we condition on more than one sentence, we can arrive at conditional probabilities outside of \([0,1]\). (For example, if 3 sentences have \(\Pr(\varphi_1)=\Pr(\varphi_2)=\Pr(\varphi_3)=\frac{1}{3}\) and \(\Pr(\varphi_1\wedge\varphi_2)=\Pr(\varphi_1\wedge\varphi_3)=\Pr(\varphi_2\wedge\varphi_3)=\varepsilon\), then this yields \(\Pr(\varphi_3|\varphi_1,\varphi_2)=\frac{-1+15\varepsilon}{1+9\varepsilon}\approx -1\) for small \(\varepsilon\); this makes sense because this prior is very confident that \(1_{\varphi_1}+1_{\varphi_2}+1_{\varphi_3}\approx 1\), with standard deviation \(\sqrt{6\varepsilon}\).)
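A quick numeric check of that example, conditioning the relaxed Gaussian on \(1_{\varphi_1}=1_{\varphi_2}=1\); the script and the choice of \(\varepsilon\) are purely illustrative.

```python
import numpy as np

eps = 0.01
mu = np.full(3, 1 / 3)
p_and = np.full((3, 3), eps)
np.fill_diagonal(p_and, 1 / 3)
sigma = p_and - np.outer(mu, mu)             # covariance of the relaxed indicators

obs, rest = [0, 1], [2]                      # condition on 1_{phi_1} = 1_{phi_2} = 1
A = sigma[np.ix_(obs, obs)]
B = sigma[np.ix_(rest, obs)]
mu_cond = mu[rest] + B @ np.linalg.solve(A, np.ones(2) - mu[obs])

print(mu_cond[0])                            # approximately -0.78
print((-1 + 15 * eps) / (1 + 9 * eps))       # the closed form above gives the same value
```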

Intermediate relaxations that lack these particular shortcomings are possible, such as ones that restrict the relaxed values \(x_i\) of \(1_{\varphi_1},\dots,1_{\varphi_n}\) to the sphere \(\sum_i (2x_i-1)^2=n\) or the ball \(\sum_i (2x_i-1)^2\leq n\). Then the maximum entropy distribution, similarly to a multivariate normal distribution, has quadratic log-density, though the Hessian of the quadratic may have nonnegative eigenvalues (unlike in the normal case). In the spherical case, this is known as a Fisher-Bingham distribution.

Both of these relaxations seem difficult to work with, eg to compute normalizing constants for; furthermore I don’t think the analogous updating process will share the desirable property that \(\Pr(\varphi_i|\varphi_1)=\Pr(\varphi_i\wedge\varphi_1)/\Pr(\varphi_1)\). However, the fact that these distributions allow updating by relaxed conditioning, keep (fully conditioned) truth-values between 0 and 1, and have reasonable (at least, possibly-increasing) behavior for conditional variances, makes them seem potentially appealing.



by Janos Kramar 897 days ago | Jessica Taylor likes this | link

An easy way to get rid of the probabilities-outside-[0,1] problem in the continuous relaxation is to constrain the “conditional”/updated distribution to have \(\operatorname{Var}\left(1_{\varphi_i}\middle|\dots\right)\leq \operatorname{E}\left(1_{\varphi_i}\middle|\dots\right)\left(1-\operatorname{E}\left(1_{\varphi_i}\middle|\dots\right)\right)\) (which is a convex constraint; it’s equivalent to \(\operatorname{Var}\left(1_{\varphi_i}\middle|\dots\right)+\left(\operatorname{E}\left(1_{\varphi_i}\middle|\dots\right)-\frac{1}{2}\right)^2\leq \frac{1}{4}\)), and then minimize KL-divergence accordingly.

The two obvious flaws are that the result of updating becomes ordering-dependent (though this may not be a problem in practice), and that the updated distribution will sometimes have \(\operatorname{Var}\left(1_{\varphi_i}\middle|\dots\right)< \operatorname{E}\left(1_{\varphi_i}\middle|\dots\right)\left(1-\operatorname{E}\left(1_{\varphi_i}\middle|\dots\right)\right)\), and it’s not clear how to interpret that.


by Janos Kramar 888 days ago | Patrick LaVictoire likes this | link

There is a lot more to say about the perspective that isn’t relaxed to continuous random variables. In particular, the problem of finding the maximum entropy joint distribution that agrees with particular pairwise distributions is closely related to Markov Random Fields and the Ising model. (The relaxation to continuous random variables is a Gaussian Markov Random Field.) It is easily seen that this maximum entropy joint distribution must have the form \(\log \Pr(1_{\varphi_1},\dots,1_{\varphi_n})=\sum_{i<j}\theta_{ij}1_{\varphi_i\wedge\varphi_j}+\sum_i \theta_i 1_{\varphi_i}-\log Z\), where \(Z\) is the normalizing constant, or partition function. This is an appealing distribution to use, and easy to do conditioning on and to add new variables to. Computing relative entropy reduces to finding bivariate marginals and to computing \(Z\), and computing marginals reduces to computing \(Z\), which is intractable in general [1], though easy if the Markov graph (ie the graph with edges \(ij\) for \(\theta_{ij}\not=0\)) is a forest.
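For concreteness, here is a brute-force sketch of this family (exponential in \(n\), so only usable for small collections of sentences, which is exactly the intractability just mentioned; the parameter names are illustrative).

```python
import itertools
import numpy as np

def partition_function(theta_single, theta_pair):
    """theta_single: length-n vector; theta_pair: symmetric n x n matrix with zero diagonal."""
    theta_single = np.asarray(theta_single, dtype=float)
    theta_pair = np.asarray(theta_pair, dtype=float)
    n = len(theta_single)
    Z = 0.0
    for x in itertools.product([0, 1], repeat=n):
        x = np.array(x, dtype=float)
        Z += np.exp(theta_single @ x + 0.5 * x @ theta_pair @ x)   # 0.5: each pair counted twice
    return Z

def marginal(event, theta_single, theta_pair):
    """Pr(event), where event maps indices to required truth values, eg {0: 1, 2: 1}."""
    theta_single = np.asarray(theta_single, dtype=float)
    theta_pair = np.asarray(theta_pair, dtype=float)
    n = len(theta_single)
    total = 0.0
    for x in itertools.product([0, 1], repeat=n):
        if all(x[i] == v for i, v in event.items()):
            xa = np.array(x, dtype=float)
            total += np.exp(theta_single @ xa + 0.5 * xa @ theta_pair @ xa)
    return total / partition_function(theta_single, theta_pair)

# Conditioning on phi_1 is just a ratio of such sums:
# Pr(phi_i & phi_j | phi_1) = marginal({0: 1, i: 1, j: 1}, ...) / marginal({0: 1}, ...).
```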

There have been many approaches to this problem (Wainwright and Jordan [2] is a good survey), but the main ways to extend applicability beyond forests have been:

  • decompose components of the graph as “junction trees”, ie trees whose nodes are overlapping clusters of nodes from the original graph; this permits exact computation with cost exponential in the cluster sizes, ie in the treewidth [3]

  • make use of clever combinatorial work coming out of statistical mechanics to do exact computation on “outerplanar” graphs, or on general graphs with cost exponential in the (outer-)graph genus [4]

  • find nodes such that conditioning on those nodes greatly simplifies the graph (eg makes it singly connected), and sum over their possible values explicitly (this has cost exponential in the number of nodes being conditioned on)

A newer class of models, called sum-product networks [5], generalizes these and other models by writing the total joint probability as a positive polynomial \(\sum_{x_1,\dots,x_n=0}^1\Pr(1_{\varphi_1}=x_1,\dots,1_{\varphi_n}=x_n)1_{\varphi_1}^{x_1}1_{\bar\varphi_1}^{1-x_1}\dots 1_{\varphi_n}^{x_n}1_{\bar\varphi_n}^{1-x_n}\) in the variables \(1_{\varphi_1},1_{\bar\varphi_1},\dots,1_{\varphi_n},1_{\bar\varphi_n}\) (which evaluates to 1 when every indicator is set to 1), and requiring only that this polynomial be simplifiable to an expression requiring a tractable number of additions and multiplications to evaluate. This allows easy computation of marginals, conditionals, and KL divergence, though it will likely be necessary to do some approximate simplification every so often (otherwise the complexity may accumulate, even with a fixed maximum number of sentences being considered at a time).
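As a toy illustration of that evaluation trick (the structure and weights below are invented, not taken from the cited paper): setting both indicators of a sentence to 1 marginalizes it out, so marginals and conditionals each cost one or two evaluations of the polynomial.

```python
def spn(ind1, nind1, ind2, nind2):
    # Two product nodes over univariate distributions of phi_1 and phi_2,
    # combined by a sum node with weights 0.5 / 0.5 (all numbers invented).
    comp_a = (0.7 * ind1 + 0.3 * nind1) * (0.2 * ind2 + 0.8 * nind2)
    comp_b = (0.1 * ind1 + 0.9 * nind1) * (0.9 * ind2 + 0.1 * nind2)
    return 0.5 * comp_a + 0.5 * comp_b

print(spn(1, 0, 0, 1))                     # joint Pr(phi_1, not phi_2)
print(spn(1, 0, 1, 1))                     # marginal Pr(phi_1): both phi_2 indicators set to 1
print(spn(1, 0, 1, 0) / spn(1, 0, 1, 1))   # conditional Pr(phi_2 | phi_1)
```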

However, if we want to stay close to the context of the Non-Omniscience paper, we can do approximate calculations of the partition function on the complete graph; in particular, the Bethe partition function [6] has been widely used in practice, and while it’s not log-convex like \(Z\) is, it’s often a better approximation to the partition function than well-known convex approximations such as TRW (tree-reweighted).
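To give a sense of what using the Bethe approximation involves, here is a rough loopy-belief-propagation sketch with standard textbook updates; it is not code from the cited papers, the names are illustrative, and on a dense graph the damped updates are not guaranteed to converge.

```python
import numpy as np

def bethe_log_z(theta_single, theta_pair, iters=500, damping=0.5):
    n = len(theta_single)
    psi_i = np.array([[1.0, np.exp(t)] for t in theta_single])        # unary factors, psi_i[i][x_i]
    psi = {(i, j): np.array([[1.0, 1.0], [1.0, np.exp(theta_pair[i][j])]])
           for i in range(n) for j in range(i + 1, n)}                # pairwise factors, indexed [x_i][x_j]

    def pair(i, j):
        return psi[(i, j)] if i < j else psi[(j, i)].T

    msgs = {(i, j): np.array([0.5, 0.5]) for i in range(n) for j in range(n) if i != j}

    def cavity(i, j):
        """Unary factor at i times all incoming messages except the one from j."""
        c = psi_i[i].copy()
        for k in range(n):
            if k != i and k != j:
                c = c * msgs[(k, i)]
        return c

    for _ in range(iters):
        new = {}
        for (i, j), old in msgs.items():
            m = pair(i, j).T @ cavity(i, j)                           # sum over x_i
            m = m / m.sum()
            new[(i, j)] = damping * old + (1 - damping) * m           # damped synchronous update
        msgs = new

    # Bethe free energy F at the resulting beliefs; log Z_Bethe = -F.
    eps, F = 1e-12, 0.0
    for i in range(n):
        b = cavity(i, i)                                              # node belief: all incoming messages
        b = b / b.sum()
        F -= np.sum(b * np.log(psi_i[i]))                             # unary average energy
        F -= (n - 2) * np.sum(b * np.log(b + eps))                    # (degree - 1) single-node entropy term
    for (i, j), p in psi.items():
        b = p * np.outer(cavity(i, j), cavity(j, i))                  # pairwise belief, unnormalized
        b = b / b.sum()
        F += np.sum(b * (np.log(b + eps) - np.log(p)))                # pairwise energy and entropy
    return -F

# Usage sketch (made-up parameters); for small n this can be compared with exact enumeration.
theta_i = [0.3, -0.2, 0.5]
theta_ij = [[0.0, 0.4, -0.1], [0.4, 0.0, 0.2], [-0.1, 0.2, 0.0]]
print(bethe_log_z(theta_i, theta_ij))
```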


  1. Istrail, Sorin. “Statistical mechanics, three-dimensionality and NP-completeness: I. Universality of intractability for the partition function of the Ising model across non-planar surfaces.” In Proceedings of the thirty-second annual ACM symposium on Theory of computing, pp. 87-96. ACM, 2000.

  2. Wainwright, Martin J., and Michael I. Jordan. “Graphical models, exponential families, and variational inference.” Foundations and Trends® in Machine Learning 1, no. 1-2 (2008): 1-305.

  3. Pearl, Judea. “Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference.” (1988).

  4. Schraudolph, Nicol N., and Dmitry Kamenetsky. “Efficient Exact Inference in Planar Ising Models.” arXiv preprint arXiv:0810.4401 (2008).

  5. Poon, Hoifung, and Pedro Domingos. “Sum-product networks: A new deep architecture.” In Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on, pp. 689-690. IEEE, 2011.

  6. Weller, Adrian. “Bethe and Related Pairwise Entropy Approximations.”
