Intelligent Agent Foundations Forum
by Janos Kramar 1491 days ago | Patrick LaVictoire likes this

There is a lot more to say about the perspective that isn’t relaxed to continuous random variables. In particular, the problem of finding the maximum-entropy joint distribution that agrees with given pairwise distributions is closely related to Markov Random Fields and the Ising model. (The relaxation to continuous random variables is a Gaussian Markov Random Field.) It is easily seen that this maximum-entropy joint distribution must have the form \(\log \Pr(1_{\varphi_1},\dots,1_{\varphi_n})=\sum_{i<j}\theta_{ij}1_{\varphi_i\wedge\varphi_j}+\sum_i \theta_i 1_{\varphi_i}-\log Z\), where \(\log Z\) is the normalizing constant, or partition function. This is an appealing distribution to use: it is easy to condition on and to add new variables to. Computing relative entropy reduces to finding bivariate marginals and computing \(Z\), and computing marginals reduces to computing \(Z\), which is intractable in general [1], though easy if the Markov graph (ie the graph with edges \(ij\) for \(\theta_{ij}\neq 0\)) is a forest.
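For concreteness, here is a minimal brute-force sketch of this pairwise distribution (function and parameter names are invented for illustration). It is exponential in \(n\), so it is only a sanity check for small instances, not a practical method:

```python
import itertools
import math

def partition_function(theta_pair, theta_single, n):
    """Z = sum over all 2^n binary assignments of exp(sum of theta terms)."""
    Z = 0.0
    for x in itertools.product([0, 1], repeat=n):
        energy = sum(theta_single[i] * x[i] for i in range(n))
        energy += sum(theta_pair.get((i, j), 0.0) * x[i] * x[j]
                      for i in range(n) for j in range(i + 1, n))
        Z += math.exp(energy)
    return Z

def marginal(theta_pair, theta_single, n, i):
    """P(x_i = 1) as a ratio of a restricted sum to the full partition sum."""
    num = 0.0
    Z = 0.0
    for x in itertools.product([0, 1], repeat=n):
        energy = sum(theta_single[k] * x[k] for k in range(n))
        energy += sum(theta_pair.get((a, b), 0.0) * x[a] * x[b]
                      for a in range(n) for b in range(a + 1, n))
        w = math.exp(energy)
        Z += w
        if x[i] == 1:
            num += w
    return num / Z
```

With all \(\theta\) zero this gives \(Z=2^n\) and uniform marginals, matching the maximum-entropy distribution with no constraints.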

There have been many approaches to this problem (Wainwright and Jordan [2] is a good survey), but the main ways to extend applicability beyond forests have been:

  • decompose components of the graph into “junction trees”, ie trees whose nodes are overlapping clusters of nodes from the original graph; this permits exact computation with cost exponential in the cluster sizes, ie in the treewidth [3]

  • make use of clever combinatorial work coming out of statistical mechanics to do exact computation on “outerplanar” graphs, or on general graphs with cost exponential in the (outer-)graph genus [4]

  • find nodes such that conditioning on those nodes greatly simplifies the graph (eg makes it singly connected), and sum over their possible values explicitly (this has cost exponential in the number of nodes being conditioned on)
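As a rough sketch of the third approach (with invented names and couplings): on a binary cycle, conditioning on one node breaks the cycle into a chain, whose partition sum is then computed by dynamic programming, so the total cost is linear in the cycle length times the \(2^1\) conditioned values.

```python
import math

def chain_Z(pair, single):
    """Partition sum of a binary chain by dynamic programming.
    pair[k] couples node k and k+1; single[k] is node k's field."""
    msg = [math.exp(single[0] * v) for v in (0, 1)]
    for k in range(1, len(single)):
        msg = [sum(msg[u] * math.exp(pair[k - 1] * u * v + single[k] * v)
                   for u in (0, 1))
               for v in (0, 1)]
    return sum(msg)

def cycle_Z_by_conditioning(pair_cycle, single_cycle):
    """Partition sum of a binary cycle: sum over the conditioned node's
    values, absorbing its couplings into its neighbors' fields."""
    n = len(single_cycle)
    Z = 0.0
    for x0 in (0, 1):
        single = list(single_cycle[1:])
        single[0] += pair_cycle[0] * x0        # edge (0, 1)
        single[-1] += pair_cycle[n - 1] * x0   # edge (n-1, 0)
        pair = pair_cycle[1:n - 1]             # remaining chain edges
        Z += math.exp(single_cycle[0] * x0) * chain_Z(pair, single)
    return Z
```

Conditioning on \(k\) nodes generalizes this with an outer sum over \(2^k\) assignments, as the bullet above describes.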

A newer class of models, called sum-product networks [5], generalizes these and other models by writing the total joint probability as a positive polynomial \(1=\sum_{x_1,\dots,x_n=0}^1\Pr(1_{\varphi_1}=x_1,\dots,1_{\varphi_n}=x_n)1_{\varphi_1}^{x_1}1_{\bar\varphi_1}^{1-x_1}\dots 1_{\varphi_n}^{x_n}1_{\bar\varphi_n}^{1-x_n}\) in the variables \(1_{\varphi_1},1_{\bar\varphi_1},\dots,1_{\varphi_n},1_{\bar\varphi_n}\) and requiring only that this polynomial be simplifiable to an expression requiring a tractable number of additions and multiplications to evaluate. This allows easy computation of marginals, conditionals, and KL divergence, though it will likely be necessary to do some approximate simplification every so often (otherwise the complexity may accumulate, even with a fixed maximum number of sentences being considered at a time).
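To illustrate with a toy example (the structure and weights below are made up, not from any cited model): a two-component mixture of product distributions over two binary variables, written as a sum of products of indicator leaves. Marginalizing a variable amounts to setting both of its indicators to 1 before evaluating.

```python
def spn(ind):
    """Evaluate a tiny sum-product network. ind maps (name, value)
    pairs to indicator values; Bernoulli parameters are arbitrary."""
    p1 = 0.8 * ind[('x1', 1)] + 0.2 * ind[('x1', 0)]
    p2 = 0.3 * ind[('x2', 1)] + 0.7 * ind[('x2', 0)]
    q1 = 0.1 * ind[('x1', 1)] + 0.9 * ind[('x1', 0)]
    q2 = 0.6 * ind[('x2', 1)] + 0.4 * ind[('x2', 0)]
    return 0.5 * (p1 * p2) + 0.5 * (q1 * q2)  # weighted sum of products

def prob(x1=None, x2=None):
    """Marginal/joint query: unobserved variables get both indicators = 1."""
    ind = {('x1', 0): 1, ('x1', 1): 1, ('x2', 0): 1, ('x2', 1): 1}
    if x1 is not None:
        ind[('x1', 0)], ind[('x1', 1)] = (1, 0) if x1 == 0 else (0, 1)
    if x2 is not None:
        ind[('x2', 0)], ind[('x2', 1)] = (1, 0) if x2 == 0 else (0, 1)
    return spn(ind)
```

Conditionals follow as ratios of such evaluations, e.g. \(\Pr(x_2{=}1\mid x_1{=}1)=\)`prob(x1=1, x2=1) / prob(x1=1)`, each costing one pass over the network.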

However, if we want to stay close to the context of the Non-Omniscience paper, we can do approximate calculations of the partition function on the complete graph. In particular, the Bethe partition function [6] has been widely used in practice, and while it’s not log-convex like \(Z\) is, it’s often a better approximation to the partition function than well-known convex approximations such as TRW.
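A rough sketch of the Bethe estimate via loopy belief propagation (all names invented; synchronous updates, no damping). On a tree the beliefs are exact and the Bethe log-partition recovers \(\log Z\); on loopy graphs it is the approximation discussed above:

```python
import math

def bethe_log_Z(n, edges, theta_pair, theta_single, iters=50):
    """Loopy BP on a pairwise binary model, returning the Bethe
    log-partition estimate: expected theta-terms plus Bethe entropy."""
    msg = {(i, j): [0.5, 0.5] for (a, b) in edges for (i, j) in [(a, b), (b, a)]}
    nbrs = {i: [] for i in range(n)}
    for (a, b) in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    for _ in range(iters):
        new = {}
        for (i, j) in msg:
            t = theta_pair[(min(i, j), max(i, j))]
            out = []
            for xj in (0, 1):
                s = 0.0
                for xi in (0, 1):
                    w = math.exp(theta_single[i] * xi + t * xi * xj)
                    for k in nbrs[i]:
                        if k != j:
                            w *= msg[(k, i)][xi]
                    s += w
                out.append(s)
            z = out[0] + out[1]
            new[(i, j)] = [out[0] / z, out[1] / z]
        msg = new

    def b1(i):  # singleton belief at node i
        b = []
        for xi in (0, 1):
            w = math.exp(theta_single[i] * xi)
            for k in nbrs[i]:
                w *= msg[(k, i)][xi]
            b.append(w)
        z = b[0] + b[1]
        return [v / z for v in b]

    def b2(i, j):  # pairwise belief on edge (i, j)
        t = theta_pair[(min(i, j), max(i, j))]
        b, z = {}, 0.0
        for xi in (0, 1):
            for xj in (0, 1):
                w = math.exp(theta_single[i] * xi + theta_single[j] * xj
                             + t * xi * xj)
                for k in nbrs[i]:
                    if k != j:
                        w *= msg[(k, i)][xi]
                for l in nbrs[j]:
                    if l != i:
                        w *= msg[(l, j)][xj]
                b[(xi, xj)] = w
                z += w
        return {key: v / z for key, v in b.items()}

    # ln Z_Bethe = E_b[theta terms] + sum_edges H(b_ij) - sum_i (d_i - 1) H(b_i)
    avg_energy, entropy = 0.0, 0.0
    for (a, b_) in edges:
        t = theta_pair[(min(a, b_), max(a, b_))]
        for (xi, xj), p in b2(a, b_).items():
            if p > 0:
                avg_energy += p * t * xi * xj
                entropy -= p * math.log(p)
    for i in range(n):
        d = len(nbrs[i])
        for xi, p in enumerate(b1(i)):
            if p > 0:
                avg_energy += p * theta_single[i] * xi
                entropy += (d - 1) * p * math.log(p)
    return avg_energy + entropy
```

The counting-number correction \((d_i-1)\) on the singleton entropies is what distinguishes the Bethe entropy from a naive sum of local entropies, and is exact precisely on forests.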


  1. Istrail, Sorin. “Statistical mechanics, three-dimensionality and NP-completeness: I. Universality of intractability for the partition function of the Ising model across non-planar surfaces.” In Proceedings of the thirty-second annual ACM symposium on Theory of computing, pp. 87-96. ACM, 2000.

  2. Wainwright, Martin J., and Michael I. Jordan. “Graphical models, exponential families, and variational inference.” Foundations and Trends® in Machine Learning 1, no. 1-2 (2008): 1-305.

  3. Pearl, Judea. “Probabilistic reasoning in intelligent systems: Networks of plausible inference.” (1988).

  4. Schraudolph, Nicol N., and Dmitry Kamenetsky. “Efficient Exact Inference in Planar Ising Models.” arXiv preprint arXiv:0810.4401 (2008).

  5. Poon, Hoifung, and Pedro Domingos. “Sum-product networks: A new deep architecture.” In Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on, pp. 689-690. IEEE, 2011.

  6. Weller, Adrian. “Bethe and Related Pairwise Entropy Approximations.”


