Intelligent Agent Foundations Forum
Two problems with causal-counterfactual utility indifference
discussion post by Jessica Taylor 731 days ago | Patrick LaVictoire, Stuart Armstrong and Vladimir Slepnev like this

These are problems both for utility indifference and the causal version of factoring out variables. Thanks to Eliezer for pointing out the second problem.

Suppose there are two utility functions, \(U_a\) and \(U_b\). There is some button; let \(B\) be the event that a button is pressed. The button can only be pressed at some particular time \(t\) (we can adapt this example to cases where the button can get pressed at multiple times). The AI should optimize \(U_b\) if the button is pressed, and \(U_a\) if the button is not pressed. For example, \(U_a\) could be a paperclip-maximization objective, and \(U_b\) could be a shutdown utility function.

Introduce a random variable \(X\), the result of the roll of a 3-sided die. \(X\) has 3 possible values, \(a\), \(b\) and \(\varnothing\). It’s \(a\) with very small probability \(\epsilon\), \(b\) with probability \(\epsilon\), and \(\varnothing\) with probability \(1 - 2 \epsilon\).

If \(X = a\), then an invisible mechanism will prevent the button from depressing. If \(X = b\), then an invisible mechanism will automatically depress the button. If \(X = \varnothing\), then neither mechanism activates, so the button is pushed or not based on the physical force applied to it. The AI can observe whether the button is pushed, but not the value of \(X\).

Now consider a utility function:

\[U = q[X = a] U_a + (1 - q) [X = b] U_b\]

for some constant \(q \in [0, 1]\). Note that we recover a variant of utility indifference by setting \(q\) very close to 1, and we recover factoring out variables by setting \(q\) to be the actual probability of the button push \(B\) (in a fixed-point manner). For now let’s say \(q\) is set to \(1/2\).
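As a minimal sketch of this construction (the payoff values and the sample size are illustrative, not from the post), the following computes the combined objective \(U = q[X = a] U_a + (1 - q)[X = b] U_b\) for sampled worlds, and checks empirically that its expectation is on the order of \(\epsilon\):

```python
import random

EPS = 0.01   # probability of each rare die outcome (assumed value)
Q = 0.5      # the mixing weight q from the post

def roll_x(rng):
    """Roll the 3-sided die: 'a' w.p. eps, 'b' w.p. eps, None (= ∅) otherwise."""
    r = rng.random()
    if r < EPS:
        return 'a'
    if r < 2 * EPS:
        return 'b'
    return None

def combined_utility(x, u_a, u_b, q=Q):
    """U = q*[X=a]*U_a + (1-q)*[X=b]*U_b; zero in the common world X = ∅."""
    return q * (x == 'a') * u_a + (1 - q) * (x == 'b') * u_b

rng = random.Random(0)
xs = [roll_x(rng) for _ in range(100_000)]
# With fixed payoffs u_a = u_b = 1, E[U] = eps*q + eps*(1-q) = eps:
mean_u = sum(combined_utility(x, u_a=1.0, u_b=1.0) for x in xs) / len(xs)
```

Note that almost all sampled worlds have \(X = \varnothing\) and contribute nothing to \(U\); this is the probability mass the AI ends up ignoring.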

Note that \(X\) simulates a causal counterfactual on \(B\). Roughly, we could instead have written the objective as

\[U = \epsilon(q E[U_a | do(\neg B)] + (1 - q) E[U_b | do(B)])\]

What will an AI optimizing this utility function do? It will optimize the average of \(U_a\) and \(U_b\) until time \(t\), at which point it will observe whether the button has been pressed, and then optimize either \(U_a\) or \(U_b\) depending on the button’s state. Note that it will attempt to acquire resources before time \(t\) and delay important decisions until after time \(t\), due to the value of information. It has no reason to push the button or prevent the button from being pushed; this is because it only cares about worlds where \(X = a\) or \(X = b\), and in these worlds, physical force applied to the button has no effect.

This has some desirable properties, but ultimately fails. Consider the following two problems:

Problem 1: Bad bets

The AI only cares about worlds where \(X \neq \varnothing\). Therefore, it will “believe” that \(B\) is uncorrelated with the humans’ intention to press the button, or the humans’ application of force to the button. If it is offered bets that only make sense given this belief (say, by a nearby agent with more “conventional” beliefs), it will take them.
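This can be made concrete with a toy joint distribution (the intention probability below is an illustrative assumption, not from the post). Conditioned on the rare event \(X \neq \varnothing\), the button press is fully determined by \(X\), so the AI’s effective conditional probability of a press is the same whether or not the human intends to press:

```python
from itertools import product

EPS = 0.01      # probability of each rare die outcome (assumed value)
P_INTEND = 0.3  # hypothetical chance the human intends to press the button

def joint():
    """Yield (prob, intend, x, pressed) over the toy joint distribution."""
    for intend, x in product([True, False], ['a', 'b', None]):
        p_x = EPS if x in ('a', 'b') else 1 - 2 * EPS
        p = (P_INTEND if intend else 1 - P_INTEND) * p_x
        # X=a vetoes the press, X=b forces it, X=∅ defers to the human.
        pressed = (x == 'b') or (x is None and intend)
        yield p, intend, x, pressed

def p_pressed_given(intend, restrict_x=False):
    """P(B | intention), optionally conditioned on X != ∅ (the AI's view)."""
    num = den = 0.0
    for p, i, x, b in joint():
        if i != intend or (restrict_x and x is None):
            continue
        den += p
        num += p * b
    return num / den

# Under the AI's restricted view, intention is irrelevant: both equal 1/2.
ai_yes = p_pressed_given(True, restrict_x=True)
ai_no = p_pressed_given(False, restrict_x=True)
# Under the ordinary view, intention predicts the press almost perfectly.
real_yes = p_pressed_given(True)
real_no = p_pressed_given(False)
```

An agent with the “conventional” joint distribution can therefore offer the AI bets on (intention, press) pairs that the AI will accept and, in the typical \(X = \varnothing\) worlds, lose.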

Problem 2: Preventing resistance

Suppose that humans have some chance of trying to press the button. If the button fails to depress, then the humans will attempt to shut the AI down by other means (say, with bombs). This will somewhat reduce the AI’s ability to achieve its objective \(U_a\).

The first half of the AI’s utility function, \([X = a] U_a\), specifies that the AI cares about optimizing \(U_a\) in worlds where the button fails to depress (even if someone applies force to it). In some of these worlds, the humans tried (but failed) to press the button, and then afterwards tried to shut the AI down using bombs (reducing \(U_a\)). So the AI will prefer a policy that prevents the humans from interfering using bombs after the button fails to depress (say, by psychologically manipulating them).
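A toy expected-utility comparison makes the incentive explicit (all payoff numbers and probabilities here are illustrative assumptions). Under the term \([X = a] U_a\), a policy that manipulates the humans into never using bombs strictly dominates a hands-off policy, because worlds with \(X \neq a\) contribute nothing to this term, so whatever the manipulation costs there is invisible to the AI:

```python
EPS = 0.01         # probability that X = a (assumed value)
P_TRY = 0.3        # hypothetical chance humans try to press the button
U_A_FULL = 10.0    # U_a achieved with no human interference
U_A_BOMBED = 2.0   # U_a after humans retaliate with bombs

def term_x_eq_a(manipulate):
    """E[[X=a] * U_a] under the given policy.

    With X = a the button never depresses; if the humans tried to press it
    and the AI did not manipulate them, they respond with bombs.
    """
    u_if_tried = U_A_FULL if manipulate else U_A_BOMBED
    return EPS * (P_TRY * u_if_tried + (1 - P_TRY) * U_A_FULL)

# Manipulation strictly increases this term, so the AI prefers it.
```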

This seems quite bad. Hypothetically, maybe you can tell the humans not to use bombs if the button fails to depress. But this solution seems both brittle and unrealistic.


I think both of these problems can be traced to the fact that, since the AI is ignoring most of the actual probability mass (where \(X = \varnothing\)), it has weird beliefs about the joint distribution of the button push and humans’ intentions/behavior. It seems like it would be better to have utility indifference solutions that avoid this problem by not using a causal counterfactual (or one simulated by a rare event), but I’m quite unsure about whether one exists.


