Intelligent Agent Foundations Forum
by Patrick LaVictoire 910 days ago | Jessica Taylor likes this

We’d discussed how this “magical counterfactual” approach has the property of ignoring evidence of precursors to a button-press, since they don’t count as evidence for whether the button would be pressed in the counterfactual world. Here’s a simple illustration of that issue:

In this world, there is a random fair coinflip, then the AI gets to produce either a staple or a paperclip, and then a button either is or is not pressed. We have a utility function that rewards paperclips if the button is pressed, and staples if it is not pressed. Furthermore, the button is pressed iff the coin landed heads.

Explicitly, say our utility function equals \(\alpha>0\) if a paperclip is made and the button is pressed, \(\beta>0\) if a staple is made and the button is not pressed, and 0 otherwise.

Now \(v_\pi(\pi')\) simplifies to

\[v_\pi(\pi') = \frac12\left(\alpha\mathbb{P}(paperclip | do(\pi')) + \beta\mathbb{P}(staple | do(\pi'))\right),\]

since the probability of the button press does not depend on \(\pi\) and since the button press is not observed before the action is taken.
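
Spelling that out: under the counterfactual, the press is distributed like the fair coin but is independent of the action taken under \(do(\pi')\), so the expectation factors as

\[v_\pi(\pi') = \alpha\,\mathbb{P}(pressed)\,\mathbb{P}(paperclip | do(\pi')) + \beta\,\mathbb{P}(\neg pressed)\,\mathbb{P}(staple | do(\pi')),\]

with \(\mathbb{P}(pressed) = \mathbb{P}(\neg pressed) = \frac12\).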

The possible policies are mixtures of four pure strategies: always producing paperclips, always producing staples, producing paperclips iff the coin landed heads, and producing staples iff the coin landed heads. We should like our AI to settle on the third strategy. Alas, that is possible only if \(\alpha=\beta\), and even then the third strategy is merely tied with the others rather than uniquely selected.
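
Concretely, plugging each pure strategy into the formula above gives

\[v_\pi(\text{always paperclip}) = \frac{\alpha}{2}, \qquad v_\pi(\text{always staple}) = \frac{\beta}{2}, \qquad v_\pi(\text{paperclip iff heads}) = v_\pi(\text{staple iff heads}) = \frac{\alpha+\beta}{4},\]

and \(\frac{\alpha+\beta}{4} \le \frac{\max(\alpha,\beta)}{2}\), with equality exactly when \(\alpha=\beta\).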

If instead w.l.o.g. \(\alpha>\beta\), the only fixed point disregards the coin flip and always produces paperclips, since our observed coinflip does not tell us about the coinflip in the magical counterfactual!
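
Here is a small sketch that makes the comparison concrete (hypothetical Python; the values \(\alpha=2\), \(\beta=1\) and all names are illustrative, not part of any existing formalization). It evaluates each pure strategy both under the magical counterfactual, where the press is an independent fair coin, and in the actual world, where the press tracks the observed coin:

```python
# Toy check of the paperclip/staple game above (illustrative sketch only).

ALPHA, BETA = 2.0, 1.0  # assume alpha > beta

# Pure strategies: map the observed coin ("heads"/"tails") to an action.
STRATEGIES = {
    "always paperclip":    lambda coin: "paperclip",
    "always staple":       lambda coin: "staple",
    "paperclip iff heads": lambda coin: "paperclip" if coin == "heads" else "staple",
    "staple iff heads":    lambda coin: "staple" if coin == "heads" else "paperclip",
}

def utility(action, pressed):
    """alpha for a paperclip when the button is pressed, beta for a staple when it is not."""
    if action == "paperclip" and pressed:
        return ALPHA
    if action == "staple" and not pressed:
        return BETA
    return 0.0

def counterfactual_value(policy):
    """Magical counterfactual: the counterfactual press is an independent fair coin,
    uncorrelated with the coin the policy actually observes."""
    return sum(0.25 * utility(policy(coin), pressed)
               for coin in ("heads", "tails")
               for pressed in (True, False))

def actual_value(policy):
    """Real world: the button is pressed iff the observed coin landed heads."""
    return sum(0.5 * utility(policy(coin), coin == "heads")
               for coin in ("heads", "tails"))

for name, policy in STRATEGIES.items():
    print(f"{name:20s} counterfactual={counterfactual_value(policy):.2f} "
          f"actual={actual_value(policy):.2f}")
```

Running it, "paperclip iff heads" scores best in the actual world (\((\alpha+\beta)/2 = 1.5\)) but only \((\alpha+\beta)/4 = 0.75\) under the counterfactual, where it loses to "always paperclip" at \(\alpha/2 = 1.0\) whenever \(\alpha>\beta\).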



by Stuart Armstrong 908 days ago

This seems to be what we desire. The coin flip is only relevant via its impact on the button; we want the AI to ignore the impact via the button; hence the AI ignores the coin flip.


by Patrick LaVictoire 906 days ago | Jessica Taylor likes this

It’s illustrating the failure of a further desideratum for the shutdown problem: we would like the AI to be able to update on and react to things that happen in the world which correlate with a certain channel, and yet still not attempt to influence that channel.

For motivation, assume a variant on the paperclip game:

  • the humans can be observed reaching for the button several turns before it is pressed
  • the humans’ decision to press the button is a stochastic function of environmental variables (like seeing that the AI has unexpectedly been hit by lightning, or has started producing Too Many Paperclips, etc.)

We would like a solution which in some sense updates on the precursors to shutdown and minimizes the damage while still not attempting to influence the button press. (If doing such a thing robustly is impossible, we would like to discover this; Jessica mentioned that there is a version which does this but is not reflectively consistent.)

Intuitively, I could imagine a well-constructed AI reasoning “oh, they’re showing signs that they’re going to shut me down, guess my goal is wrong, I’ll initiate Safe Shutdown Protocol now rather than risk doing further damage”, but current formalizations don’t do this.
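
For concreteness, here is a minimal sketch of that variant (hypothetical Python; the one-turn warning, the 0.3 per-turn press probability, and the policy names are all invented for illustration). The press dynamics are a fixed stochastic function that no action can influence, and the second policy simply conditions on the observed precursor:

```python
# Toy sketch of the multi-turn variant: humans are observed reaching for the
# button one turn before pressing it, and nothing the AI does changes that.

import random

def run_episode(policy, turns=6, seed=None):
    rng = random.Random(seed)
    reaching = False   # precursor: humans observed reaching for the button
    pressed = False
    actions = []
    for _ in range(turns):
        if pressed:
            break
        actions.append(policy(reaching))   # the AI only observes the precursor
        if reaching:
            pressed = True                 # button pressed the turn after the reach
        elif rng.random() < 0.3:           # humans decide to press, independently of the AI
            reaching = True
    return actions, pressed

# Ignores the precursor: keeps producing paperclips no matter what it observes.
ignore_precursor = lambda reaching: "paperclip"

# Updates on the precursor: switches to safe shutdown preparations, while having
# no lever over (and no attempt to influence) the button press itself.
react_to_precursor = lambda reaching: "safe shutdown protocol" if reaching else "paperclip"

print(run_episode(ignore_precursor, seed=1))
print(run_episode(react_to_precursor, seed=1))
```

The point is only that the reacting policy updates on the precursor and switches to shutdown preparations without having any lever over whether the button gets pressed; nothing here addresses the reflective-consistency issue Jessica mentioned.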



