Intelligent Agent Foundations Forum
by Jessica Taylor 708 days ago | Patrick LaVictoire likes this

Update: I think there’s a more natural generalization to multi-stage games that might solve the iterated paperclip example.

Let \(\pi\) be a policy. Define

\[v_{\pi}(\pi') = \sum_b P(B = b | \pi) \mathbb{E}[U(C) | do(b), \pi']\]

A policy \(\pi\) is optimal iff \(\pi \in \arg\max_{\pi'} v_{\pi}(\pi')\). This reduces to the old definition when the set of policies is the set of probability distributions over actions. I couldn’t figure out what policy this selects in the 10-step example, though.
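For concreteness, here is a minimal sketch of this fixed-point condition for a finite setting (the names `button_dist` and `value_given_do` are illustrative placeholders, not part of the formalism):

    # Minimal sketch of the fixed-point condition, assuming finitely many
    # button outcomes and policies. button_dist(pi) maps each button outcome
    # b to P(B = b | pi); value_given_do(b, pi_prime) is E[U(C) | do(b), pi'].
    # Both are hypothetical helpers.

    def v(pi, pi_prime, button_dist, value_given_do):
        """v_pi(pi') = sum over b of P(B = b | pi) * E[U(C) | do(b), pi']."""
        return sum(prob * value_given_do(b, pi_prime)
                   for b, prob in button_dist(pi).items())

    def is_optimal(pi, policies, button_dist, value_given_do):
        """pi is optimal iff pi attains max over pi' of v_pi(pi')."""
        best = max(v(pi, q, button_dist, value_given_do) for q in policies)
        return v(pi, pi, button_dist, value_given_do) >= best - 1e-12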

by Stuart Armstrong 707 days ago | Jessica Taylor likes this

Let \(p\) be the probability the shutdown button is pressed, given \(\pi\). If the AI produces paperclips for \(n\) turns then shuts down, it gets utility \(p(10-n)+(1-p)n\); pressing the shutdown button itself or destroying it just wastes a turn and reduces the utility, so we’ll ignore those two options.

This utility is \(10p+(1-2p)n\). For fixed \(p>0.5\) this is maximised at \(n=0\); for \(p<0.5\) it is maximised at \(n=10\). However, \(n=10\) implies \(p=1\) and \(n=0\) implies \(p=0\), so neither case admits a compatible solution.

Thus \(p=0.5\), and the utility is \(10p=5\), independent of \(n\). All that is needed is to ensure that \(p=0.5\) (without the AI pressing the button itself), which means \(n<8\) with probability \(0.5\) and \(n\geq 8\) with probability \(0.5\). This extends your solution 5.
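A quick numeric check of the case analysis (the helper below is my own, not from the post):

    # utility(n, p) = p*(10 - n) + (1 - p)*n = 10*p + (1 - 2*p)*n
    def utility(n, p):
        return p * (10 - n) + (1 - p) * n

    print([utility(n, 0.7) for n in (0, 10)])    # p > 0.5: n = 0 wins (7.0 vs 3.0)
    print([utility(n, 0.3) for n in (0, 10)])    # p < 0.5: n = 10 wins (3.0 vs 7.0)
    print({utility(n, 0.5) for n in range(11)})  # p = 0.5: always 5.0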


by Jessica Taylor 707 days ago

I think this almost works. Suppose the AI constructs 7 paperclips 50% of the time, and 8 paperclips 50% of the time (shutting down after producing the last paperclip). This means the button is pushed 50% of the time after step 8, and never pushed 50% of the time. Given this distribution of button pushes, what’s the best-response policy?

I think the best-response policy is to make 8 paperclips, then see if the shutdown button has been pressed; if it has, shut down, else make 2 more paperclips. When the button is not pressed, this makes 10 paperclips (utility 10); when the button is pressed, it shuts down for 2 steps (utility 2). Thus the expected utility is \(0.5 \cdot 10 + 0.5 \cdot 2 = 6\), which is better than the original 5. So I think it’ll be more difficult to find an equilibrium policy; the uncertainty about when the shutdown button is pressed must be such that the agent is indifferent between making a paperclip and shutting down on step 8.
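A rough check of these numbers, under my own encoding of the game (not from the post): the button is pressed on turn 8 with probability 1/2 and never pressed otherwise, and utility is the number of paperclips if the button is never pressed, or the number of shutdown turns if it is pressed.

    p = 0.5  # probability the button is pressed on turn 8

    # Original policy: 7 or 8 paperclips (each with prob 1/2), then shut down.
    # Never pressed: 7.5 paperclips on average; pressed: 3 or 2 shutdown turns.
    eu_original = (1 - p) * 7.5 + p * 2.5        # = 5.0

    # Best response: make 8 paperclips, look at the button, then either shut
    # down (2 shutdown turns) or make 2 more paperclips (10 in total).
    eu_best_response = (1 - p) * 10 + p * 2      # = 6.0

    print(eu_original, eu_best_response)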


by Stuart Armstrong 704 days ago | Jessica Taylor and Patrick LaVictoire like this

You’re right, let’s be more careful.

First, let’s list the optimal policies. Pressing or destroying the button remain suboptimal. Once the button is seen to be pressed, anything but shutdown is suboptimal. Therefore there are \(11\) potentially optimal policies, labelled by \(n=0\) to \(n=10\). These involve producing \(n\) paperclips and then shutting down for \(10-n\) turns, unless the agent sees the shutdown button has been pressed, in which case it shuts down immediately (this is only relevant for \(n=9,10\)).

Let \(p\) be the total shutdown probability. Note that for all the optimal policies, either the shutdown button is never pressed, or it is pressed on turn \(8\), so we can ignore the possibilities of it being pressed later.

The utility gain for \(n\leq 8\) is \(n(1-p)+(10-n)p=10p+(1-2p)n\). For \(n=9\) it is that quantity plus \(p\) (if the button is pressed on turn \(8\), the agent shuts down one turn earlier and gains one extra point of utility). For \(n=10\) it is that quantity plus \(2p\).

For \(p\leq 0.5\), the optimal policy is \(n=10\), which ensures \(p=1\): a contradiction.

For \(p>0.5\), the only possible optimal policies are \(n=0\) (utility \(10p\)), \(n=9\) (utility \(9-7p\)), and \(n=10\) (utility \(10-8p\)). Subtracting the utility of \(n=9\) from that of \(n=10\) gives \(1-p>0\), so \(n=9\) is never optimal.

The agent is indifferent between \(n=0\) and \(n=10\) when \(10p=10-8p\), i.e. \(p=5/9\approx 0.5556\).

Therefore the result should be that the agent pursues the mixed policy \(5/9(n=10) + 4/9(n=0)\), with probability of shutdown being equal to \(5/9\), and pseudo expected utility \(50/9\) (real expected utility \(10/9\)).
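As a check, here is a small verification of this equilibrium (my own encoding of the game, not from the post): the button, if pressed, is pressed on turn 8; utility is paperclips if it is never pressed and shutdown turns if it is; policies with \(n\geq 9\) see the press and shut down early, leaving 2 shutdown turns.

    from fractions import Fraction

    def utility(n, p):
        if n <= 8:
            return (1 - p) * n + p * (10 - n)
        return (1 - p) * n + p * 2  # saw the press on turn 8, shut down early

    p = Fraction(5, 9)
    values = {n: utility(n, p) for n in range(11)}
    best = max(values.values())
    print([n for n in values if values[n] == best])  # [0, 10]: indifference
    print(best)                                      # 50/9, the pseudo expected utility

    # Real expected utility: prob 5/9 of n = 10 (pressed, 2 shutdown turns),
    # prob 4/9 of n = 0 (never pressed, 0 paperclips).
    print(Fraction(5, 9) * 2 + Fraction(4, 9) * 0)   # 10/9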

Have I got this right?


by Jessica Taylor 703 days ago

This looks correct; thanks for doing this analysis!


by Patrick LaVictoire 689 days ago | Jessica Taylor likes this

We’d discussed how this “magical counterfactual” approach has the property of ignoring evidence of precursors to a button-press, since they don’t count as evidence for whether the button would be pressed in the counterfactual world. Here’s a simple illustration of that issue:

In this world, there is a random fair coinflip, then the AI gets to produce either a staple or a paperclip, and then a button may be pressed. We have a utility function that rewards paperclips if the button is pressed, and staples if it is not pressed. Furthermore, the button is pressed iff the coin landed heads.

Explicitly, say our utility function equals \(\alpha>0\) if a paperclip is made and the button is pressed, \(\beta>0\) if a staple is made and the button is not pressed, and 0 otherwise.

Now \(v_\pi(\pi')\) simplifies to

\[v_\pi(\pi') = \frac{1}{2}\left(\alpha\,\mathbb{P}(\text{paperclip} \mid do(\pi')) + \beta\,\mathbb{P}(\text{staple} \mid do(\pi'))\right),\]

since the probability of the button press does not depend on \(\pi\) and since the button press is not observed before the action is taken.

The possible policies are mixtures of four pure strategies: always producing paperclips, always producing staples, producing paperclips iff the coin landed heads, and producing staples iff the coin landed heads. We would like our AI to settle on the third strategy. Alas, this is only possible (and even then not necessary) if \(\alpha=\beta\).

If instead w.l.o.g. \(\alpha>\beta\), the only fixed point disregards the coin flip and always produces paperclips, since our observed coinflip does not tell us about the coinflip in the magical counterfactual!
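The enumeration is small enough to write out (my own encoding, not from the post). Since \(v_\pi(\pi')\) here does not depend on \(\pi\), a fixed point is just an argmax of \(v\) over strategies:

    def v(p_paperclip, alpha, beta):
        # v(pi') = (alpha * P(paperclip | pi') + beta * P(staple | pi')) / 2
        return 0.5 * (alpha * p_paperclip + beta * (1 - p_paperclip))

    # P(paperclip) under the four pure strategies:
    strategies = {
        "always paperclip": 1.0,
        "always staple": 0.0,
        "paperclip iff heads": 0.5,  # the strategy we would want
        "staple iff heads": 0.5,
    }

    alpha, beta = 2.0, 1.0  # w.l.o.g. alpha > beta
    values = {s: v(q, alpha, beta) for s, q in strategies.items()}
    print(max(values, key=values.get))  # "always paperclip"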


by Stuart Armstrong 687 days ago

This seems to be what we desire. The coin flip is only relevant via its impact on the button; we want the AI to ignore that impact; hence the AI ignores the coin flip.


by Patrick LaVictoire 685 days ago | Jessica Taylor likes this

It’s illustrating the failure of a further desideratum for the shutdown problem: we would like the AI to be able to update on and react to things that happen in the world which correlate with a certain channel, and yet still not attempt to influence that channel.

For motivation, assume a variant on the paperclip game:

  • the humans can be observed reaching for the button several turns before it is pressed
  • the humans’ decision to press the button is a stochastic function of environmental variables (like seeing that the AI has unexpectedly been hit by lightning, or has started producing Too Many Paperclips, etc)

We would like a solution which in some sense updates on the precursors to shutdown and minimizes the damage while still not attempting to influence the button press. (If doing such a thing robustly is impossible, we would like to discover this; Jessica mentioned that there is a version which does this but is not reflectively consistent.)

Intuitively, I could imagine a well-constructed AI reasoning “oh, they’re showing signs that they’re going to shut me down, guess my goal is wrong, I’ll initiate Safe Shutdown Protocol now rather than risk doing further damage”, but current formalizations don’t do this.





