Intelligent Agent Foundations Forum
Conditioning on Conditionals
post by Scott Garrabrant 461 days ago | Abram Demski likes this

(From conversations with Sam, Abram, Tsvi, Marcello, and Ashwin Sah) A basic EDT agent starts with a prior, updates on a bunch of observations, and then has a choice between various actions. It conditions on each possible action it could take, and takes the action for which this conditional leads to the highest expected utility. An updateless (but non-policy-selection) EDT agent has a problem here. It wants to not update on the observations, but it wants to condition on the fact that it takes a specific action given its observations. It is not obvious what this conditional should look like. In this post, I argue for a particular way to interpret conditioning on this conditional (of taking a specific action given a specific observation).

Formally, we have a set of observations \(O\), a set of actions \(A\), and a set of possible utilities \(U\). The agent has a prior distribution \(P\) over \(O\times A\times U\). For simplicity, we assume that \(U\) takes one of two possible values, 0 and 1. Our original EDT agent, upon seeing \(O=o\), takes the action \[\mbox{argmax}_a\ P(U=1|O=o \wedge A=a).\]
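As a minimal concrete sketch of this rule, here is the basic EDT argmax over a toy prior. The specific numbers and the names `prior` and `edt_action` are illustrative assumptions of mine, not anything from the post:

```python
from collections import defaultdict

# Toy prior over (observation, action, utility) triples; the numbers are
# made up for illustration and sum to 1.
prior = {
    ("o1", "a1", 1): 0.20, ("o1", "a1", 0): 0.05,
    ("o1", "a2", 1): 0.10, ("o1", "a2", 0): 0.15,
    ("o2", "a1", 1): 0.05, ("o2", "a1", 0): 0.20,
    ("o2", "a2", 1): 0.15, ("o2", "a2", 0): 0.10,
}

def edt_action(prior, o):
    """Return argmax_a P(U=1 | O=o, A=a)."""
    mass = defaultdict(float)   # joint mass P(O=o, A=a)
    good = defaultdict(float)   # joint mass P(O=o, A=a, U=1)
    for (obs, act, u), p in prior.items():
        if obs == o:
            mass[act] += p
            if u == 1:
                good[act] += p
    return max(mass, key=lambda a: good[a] / mass[a])

print(edt_action(prior, "o1"))  # a1, since P(U=1|o1,a1)=0.8 > P(U=1|o1,a2)=0.4
```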

Similarly, we would like our updateless EDT agent, upon seeing \(O=o\), to take the action

\[\mbox{argmax}_a\ P(U=1|(A=a|O=o)),\]

but it is not clear what the “conditional distribution” \(Q(X)=P(X|(A=a|O=o))\) should look like. We are not conditioning on a standard event. We will define \(Q\) as the distribution with minimal KL divergence from \(P\) subject to the constraint that \(Q(A=a|O=o)=1.\) However, there is still more than one way to define this: first, because KL divergence is not symmetric, and second, because for one direction the KL divergence will be infinite, so we need to take a limit, and it matters how we take this limit.

  1. \(Q\) minimizing \(D_{KL}(Q||P)\) subject to \(Q(A=a|O=o)=1.\)
  2. Limit as \(x\) goes to 1 of \(Q\) minimizing \(D_{KL}(P||Q)\) subject to \(Q(A=a|O=o)=x.\)
  3. Limit as \(x\) goes to 0 of \(Q\) minimizing \(D_{KL}(P||Q)\) subject to \(Q(O=o\wedge A\neq a)=x.\)

Note that all of the above methods are natural generalizations of conditioning: each reduces to the normal Bayesian update when the thing conditioned on is an actual event.

Here, recall that \[D_{KL}(P||Q)=\sum_i P(i)\log\frac{P(i)}{Q(i)}.\]

As a simple example, assume that \(A\) and \(O\) are independently uniform between two options. If we condition on \(A=a_1|O=o_1\), it seems most natural to preserve the fact that the probability of \(o_1\) is \(1/2\); our choice of action should not change the probability of \(o_1\). However, both definitions 1 and 3 give \(Q(O=o_1)=1/3\). We are in effect ruling out the world in which \(O=o_1\) and \(A=a_2\), and distributing our probability mass equally among the other 3 worlds.

Option 2, however, updates to \(Q(O=o_1\wedge A=a_1)=1/2\) while keeping \(Q(O=o_2\wedge A=a_1)\) and \(Q(O=o_2\wedge A=a_2)\) at \(1/4\).
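These numbers can be checked numerically. The sketch below is my own crude verification, not code from the post: options 1 and 3, which in this example both amount to conditioning on the event \(\neg(O=o_1\wedge A=a_2)\) (renormalizing the surviving worlds), are computed directly, and option 2 is approximated at \(x=0.999\) by a grid search over candidate distributions \(Q\):

```python
import math

# Uniform prior over the four worlds (O, A) in {o1, o2} x {a1, a2}.
P = {("o1", "a1"): 0.25, ("o1", "a2"): 0.25,
     ("o2", "a1"): 0.25, ("o2", "a2"): 0.25}
worlds = list(P)

def kl(p, q):
    """D_KL(p || q) = sum_i p(i) log(p(i)/q(i))."""
    return sum(p[w] * math.log(p[w] / q[w]) for w in worlds if p[w] > 0)

# Options 1 and 3 (in the limit): rule out the world (o1, a2) and
# renormalize, i.e. condition on the event O=o1 -> A=a1.
bad = ("o1", "a2")
Q13 = {w: (0.0 if w == bad else P[w] / (1 - P[bad])) for w in worlds}

def option2(x, steps=1000):
    """Minimize D_KL(P||Q) subject to Q(A=a1|O=o1)=x, by grid search.
    Parametrize Q by s = Q(O=o1) and t = Q(A=a1|O=o2)."""
    best, best_q = float("inf"), None
    for i in range(1, steps):
        s = i / steps
        for j in range(1, steps, 10):  # coarser grid in t
            t = j / steps
            q = {("o1", "a1"): x * s, ("o1", "a2"): (1 - x) * s,
                 ("o2", "a1"): t * (1 - s), ("o2", "a2"): (1 - t) * (1 - s)}
            d = kl(P, q)
            if d < best:
                best, best_q = d, q
    return best_q

Q2 = option2(x=0.999)
print(Q13[("o1", "a1")])  # 1/3: options 1 and 3 shrink the probability of o1
print(Q2[("o1", "a1")])   # close to 1/2: option 2 keeps Q(O=o1) = 1/2
```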

Philosophically, I think that option 1 minimizes the KL divergence in the wrong direction (usually the free parameter that you are optimizing over should be the second term in the KL divergence), while option 2 correctly conditions on the conditional \((a|o)\), and option 3 conditions on the implication \(o\rightarrow a\).

Thus, I will interpret \(P(-|(A=a|O=o))\) to be calculated via method 2. I may also want to use \(P(-|O=o\rightarrow A=a)\), which would be calculated via method 3 (or equivalently just updated on as the event \(O=o\rightarrow A=a\)).

Conditioning in this way causes the agent to believe that it can never change the probability of its observation, which seems bad for situations like transparent Newcomb. However, it appears you might be able to get past this by having multiple different instances of the agent (in this case one real and one in Omega’s head), and doing conditioning on the conditional for all instances simultaneously. This way, one instance can change the probability of the observation for the other instance. However, this seems unsatisfying since it requires knowing where all copies of yourself are in advance.


