Intelligent Agent Foundations Forum
by Janos Kramar 1500 days ago | Jessica Taylor likes this

An easy way to get rid of the probabilities-outside-[0,1] problem in the continuous relaxation is to constrain the “conditional”/updated distribution to satisfy \(\operatorname{Var}\left(1_{\varphi_i}\middle|\dots\right)\leq \operatorname{E}\left(1_{\varphi_i}\middle|\dots\right)\left(1-\operatorname{E}\left(1_{\varphi_i}\middle|\dots\right)\right)\), and then minimize KL divergence subject to that constraint. Since \(\operatorname{Var}\left(1_{\varphi_i}\middle|\dots\right)\geq 0\), this forces \(\operatorname{E}\left(1_{\varphi_i}\middle|\dots\right)\in[0,1]\). The constraint is convex; it's equivalent to \(\operatorname{Var}\left(1_{\varphi_i}\middle|\dots\right)+\left(\operatorname{E}\left(1_{\varphi_i}\middle|\dots\right)-\frac{1}{2}\right)^2\leq \frac{1}{4}\).
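To spell out the convexity (writing \(X_i\) for the relaxed indicator \(1_{\varphi_i}\), with all moments taken under the updated distribution):

\[
\operatorname{Var}(X_i)\leq \operatorname{E}(X_i)\left(1-\operatorname{E}(X_i)\right)
\;\Longleftrightarrow\;
\operatorname{E}(X_i^2)-\operatorname{E}(X_i)^2\leq \operatorname{E}(X_i)-\operatorname{E}(X_i)^2
\;\Longleftrightarrow\;
\operatorname{E}(X_i^2-X_i)\leq 0,
\]

and \(\operatorname{E}(X_i^2-X_i)\) is linear in the underlying distribution, so each sentence contributes a half-space constraint and the feasible set is an intersection of half-spaces.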

The two obvious flaws are that the result of updating becomes ordering-dependent (though this may not be a problem in practice), and that the updated distribution will sometimes satisfy the constraint strictly, with \(\operatorname{Var}\left(1_{\varphi_i}\middle|\dots\right)< \operatorname{E}\left(1_{\varphi_i}\middle|\dots\right)\left(1-\operatorname{E}\left(1_{\varphi_i}\middle|\dots\right)\right)\); it's not clear how to interpret that, since a genuinely \(\{0,1\}\)-valued indicator would satisfy it with equality.
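For concreteness, here is a minimal sketch of one such update as a convex program. This is not the original post's machinery: the finite outcome space, the toy relaxed-indicator values, and the cvxpy formulation are all illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n_sentences, n_outcomes = 3, 8

# Hypothetical real-valued relaxations X[i, w] of the indicators 1_{phi_i}(w);
# unlike true indicators, they may take values outside [0, 1].
X = rng.uniform(-0.3, 1.3, size=(n_sentences, n_outcomes))
X[:, 0] = rng.uniform(0.1, 0.9, size=n_sentences)  # one all-in-[0,1] outcome keeps the problem feasible
prior = rng.dirichlet(np.ones(n_outcomes))  # current distribution over outcomes

q = cp.Variable(n_outcomes, nonneg=True)  # the updated distribution
constraints = [cp.sum(q) == 1]
# Var(X_i) <= E(X_i)(1 - E(X_i)), in its linear form E[X_i^2 - X_i] <= 0:
# one half-space per sentence. (A real update would also add whatever
# conditioning constraints the relaxation itself imposes.)
constraints += [(X[i] ** 2 - X[i]) @ q <= 0 for i in range(n_sentences)]

# Minimize KL(q || prior); cp.rel_entr(a, b) is a*log(a/b), jointly convex.
cp.Problem(cp.Minimize(cp.sum(cp.rel_entr(q, prior))), constraints).solve()

print("updated expectations:", X @ q.value)  # each now lies in [0, 1]
```

Each constraint is linear in \(q\), so a single update is one convex program; the ordering dependence above shows up because conditioning on observations one at a time composes such KL projections, and successive projections onto different constraint sets need not commute.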


