Intelligent Agent Foundations Forum
by Stuart Armstrong 715 days ago | Jessica Taylor likes this | link | parent

Let \(p\) be the probability the shutdown button is pressed, given \(\pi\). If the AI produces paperclips for \(n\) turns then shuts down, it gets utility \(p(10-n)+(1-p)n\); pressing the shutdown button itself or destroying it just wastes a turn and reduces the utility, so we’ll ignore those two options.

This utility is \(10p+(1-2p)n\). For fixed \(p>0.5\), this is maximised at \(n=0\); for fixed \(p<0.5\), it is maximised at \(n=10\). However, \(n=10\) implies \(p=1\) and \(n=0\) implies \(p=0\), so neither case is self-consistent.

Thus \(p=0.5\), and the utility is \(10p=5\), independent of \(n\). All that is needed is to ensure that \(p=0.5\) (without the AI pressing the button itself), which means \(n<8\) with \(0.5\) probability and \(n\geq 8\) with \(0.5\) probability. This thus extends your solution 5.
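
Here is a quick numerical sanity check (a minimal sketch in Python; the 10-turn horizon and the utility \(p(10-n)+(1-p)n\) are exactly as above):

```python
# Utility of "make n paperclips, then shut down", when the shutdown button
# ends up pressed with probability p: p*(10 - n) + (1 - p)*n.
def utility(n, p):
    return p * (10 - n) + (1 - p) * n

for p in (0.4, 0.5, 0.6):
    print(p, [round(utility(n, p), 1) for n in range(11)])

# p = 0.4: increasing in n, so n = 10 is best (but n = 10 forces p = 1).
# p = 0.6: decreasing in n, so n = 0 is best (but n = 0 forces p = 0).
# p = 0.5: every n gives utility 5, so the agent is indifferent.
```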

by Jessica Taylor 714 days ago | link

I think this almost works. Suppose the AI constructs 7 paperclips 50% of the time, and 8 paperclips 50% of the time (shutting down after producing the last paperclip). This means the button is pushed 50% of the time after step 8, and never pushed 50% of the time. Given this distribution of button pushes, what’s the best-response policy?

I think the best-response policy is to make 8 paperclips, then see if the shutdown button has been pressed; if it has been pressed, then shut down, else make 2 more paperclips. When the button is not pressed, this makes 10 paperclips; when the button is pressed, this shuts down for 2 steps. Thus the expected utility is 6, which is better than the original 5. So I think it’ll be more difficult to find an equilibrium policy; the uncertainty about when the shutdown button is pressed must be such that the agent is indifferent between making a paperclip and shutting down on step 8.
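
For concreteness, here is the arithmetic behind that 6, as a small sketch (same convention as above: paperclips count when the button is never pressed, shutdown turns count when it is):

```python
# Best response to "7 paperclips half the time, 8 paperclips half the time":
# make 8 paperclips, look at the button, then either shut down or make 2 more.
p_pressed = 0.5           # the button is pressed right after step 8 half the time
u_if_pressed = 10 - 8     # the 2 remaining turns are spent shut down
u_if_not_pressed = 10     # 10 paperclips get made
expected = p_pressed * u_if_pressed + (1 - p_pressed) * u_if_not_pressed
print(expected)           # 6.0, beating the 5 of the candidate policy
```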


by Stuart Armstrong 711 days ago | Jessica Taylor and Patrick LaVictoire like this | link

You’re right, let’s be more careful.

First, let’s list the optimal policies. Pressing or destroying the button remain suboptimal. Once the button is seen to be pressed, anything but shutting down is suboptimal. Therefore there are \(11\) potentially optimal policies, labelled by \(n=0\) to \(n=10\): produce \(n\) paperclips then shut down for \(10-n\) turns, unless the shutdown button is seen to be pressed, in which case shut down immediately (this is only relevant for \(n=9,10\)).

Let \(p\) be the total shutdown probability. Note that for all the optimal policies, either the shutdown button is never pressed, or it is pressed on turn \(8\), so we can ignore the possibilities of it being pressed later.

The utility gain for \(n\leq 8\) is \(n(1-p)+(10-n)p=10p+(1-2p)n\). For \(n=9\) it is that quantity plus \(p\) (the agent shuts down one turn earlier if the shutdown button is pressed on turn \(8\), and gains one extra unit of utility for it). For \(n=10\) it is that quantity plus \(2p\).
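
As a check on the \(+p\) and \(+2p\) corrections, here is a sketch comparing those formulas with a direct case split (assuming, as above, that a pressed button is pressed right after turn 8, and that the \(n=9,10\) policies then shut down for the remaining two turns):

```python
# Formulas from the text: 10p + (1-2p)n, plus p for n = 9 and plus 2p for n = 10.
def formula(n, p):
    return 10 * p + (1 - 2 * p) * n + (p if n == 9 else 2 * p if n == 10 else 0)

# Direct case split: shutdown turns count if the button is pressed, paperclips
# count if it is not; the n = 9, 10 policies see the press after turn 8 and
# spend the remaining 2 turns shut down.
def direct(n, p):
    return (1 - p) * n + p * ((10 - n) if n <= 8 else 2)

for p in (0.3, 0.5, 5 / 9, 0.8):
    assert all(abs(formula(n, p) - direct(n, p)) < 1e-12 for n in range(11))
print("formulas agree")
```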

For \(p\leq 0.5\), the optimal policy is \(n=10\), which ensures \(p=1\): a contradiction.

For \(p>0.5\), the only possible optimal policies are \(n=0\) (utility \(10p\)), \(n=9\) (utility \(9-7p\)), and \(n=10\) (utility \(10-8p\)). Subtracting the \(n=9\) utility from the \(n=10\) utility gives \(1-p>0\), hence \(n=9\) is never optimal.

The agent is indifferent between \(n=0\) and \(n=10\) when \(10p=10-8p\), i.e. \(p=5/9\approx 0.5555\ldots\).

Therefore the result should be that the agent pursues the mixed policy \((5/9)(n=10) + (4/9)(n=0)\), giving probability of shutdown \(5/9\), pseudo expected utility \(50/9\), and real expected utility \(10/9\).
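
Finally, a sketch confirming these numbers (here the real expected utility counts only what actually happens under the mixed policy: two shutdown turns in the \(n=10\) branch, zero paperclips in the \(n=0\) branch):

```python
from fractions import Fraction

# Indifference between n = 0 (utility 10p) and n = 10 (utility 10 - 8p).
p = Fraction(10, 18)              # solves 10p = 10 - 8p
assert p == Fraction(5, 9)

pseudo = 10 * p                   # pseudo expected utility at the indifference point
assert pseudo == Fraction(50, 9)

# Mixed policy: n = 10 with prob 5/9 (button actually pressed, 2 shutdown turns),
# n = 0 with prob 4/9 (button never pressed, 0 paperclips made).
real = Fraction(5, 9) * 2 + Fraction(4, 9) * 0
assert real == Fraction(10, 9)
print(p, pseudo, real)            # 5/9 50/9 10/9
```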

Have I got this right?


by Jessica Taylor 710 days ago | link

This looks correct; thanks for doing this analysis!





