Intelligent Agent Foundations Forum
A simple model of the Löbstacle
post by Patrick LaVictoire 1108 days ago | Abram Demski and Jessica Taylor like this

The idea of the Löbstacle is that basic trust in yourself and your successors is necessary but tricky: necessary, because naively modeling your successor’s decisions cannot rule out them making a bad decision, unless they are in some sense less intelligent than you; tricky, because the strongest patches of this problem lead to inconsistency, and weaker patches can lead to indefinite procrastination (because you always trust your successors to do the thing you are now putting off). (For a less handwavy explanation, see the technical agenda document on Vingean reflection.)

It is difficult to specify the circumstances under which this kind of self-trust succeeds or fails. Here is one simple example in which it can succeed, but for rather fragile reasons.


We will consider a sequential decision problem, where an agent’s payoff can depend on the actions of “later” agents in “later” universes. Even in the case where the identities of the later agents are known in advance and correspond closely to the current agent, the current agent can have difficulties in trusting the later ones.

Each universe is defined with respect to the output of the current agent, and to the output of the next universe on the next agent:

  • def \(U_n(a)\):
    • if \(a = \texttt{`Quit'}\): return 0
    • else if \(a = \texttt{`Explode'}\): return -10
    • else return \(2^{-n}+U_{n+1}(A_{n+1}())\)
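For concreteness, here is a toy Python rendition of this universe. This is only a sketch: the chain of agents is potentially infinite, so a depth cap is used, and the parameter agent is a hypothetical stand-in for the sequence \(A_n\) rather than anything defined in this post.

    def universe(n, action, agent, depth=50):
        # Toy version of U_n. agent(m) stands in for A_m() and must return
        # one of 'Continue', 'Quit', or 'Explode'.
        if action == 'Quit':
            return 0.0
        if action == 'Explode':
            return -10.0
        # 'Continue': collect 2^{-n}, plus the payoff of the next universe
        # evaluated on the next agent's action.
        if depth == 0:
            return 2.0 ** -n  # truncate the (possibly infinite) chain
        return 2.0 ** -n + universe(n + 1, agent(n + 1), agent, depth - 1)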

Note that if \(A_n\)’s successors continue for a while but then explode, then \(A_n\) should quit rather than continue; but if none of \(A_n\)’s successors explode, then \(A_n\) should continue. So ideally, a sensible sequence of agents \(A_n\) should continue indefinitely in this problem.
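Concretely, if the agents continue from step \(n\) through step \(m-1\) and \(A_m\) then explodes, \(A_n\)'s payoff from continuing is \(\sum_{k=n}^{m-1}2^{-k}-10=2^{1-n}-2^{1-m}-10<0\), which is worse than the \(0\) it gets by quitting; if the chain instead quits at step \(m\), the payoff is \(2^{1-n}-2^{1-m}>0\); and continuing forever is worth \(2^{1-n}>0\).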

We will consider a sequence of agents inspired by the descending search order version of modal UDT:

  • def \(A_n()\):
    • for (a, u) in \(\{\texttt{`Continue',`Quit',`Explode'}\}\times\textsf{Utilities}\), sorted in decreasing order of \(u\):
      • if \(\textsf{PA}\vdash U_n(a)\geq u\): return \(a\)
    • if still here: return some default action

Here, \(\textsf{Utilities}\) is a finite set of possible utilities, and we may choose it differently depending on \(n\).
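Here is a corresponding Python sketch of the proof-search loop. The step that cannot actually be implemented, searching for a PA proof of \(U_n(a)\geq u\), is replaced by a hypothetical oracle provable(n, a, u) supplied by the caller, so this shows only the control flow of the descending search, not the real agents.

    from itertools import product

    ACTIONS = ['Continue', 'Quit', 'Explode']

    def make_agent(utilities, provable, default='Quit'):
        # utilities(n) returns the finite list of candidate utilities for step n;
        # provable(n, a, u) is a hypothetical stand-in for "PA proves U_n(a) >= u".
        def agent(n):
            pairs = sorted(product(ACTIONS, utilities(n)), key=lambda p: -p[1])
            # Python's sort is stable, so ties in utility keep the
            # Continue / Quit / Explode order from ACTIONS.
            for a, u in pairs:
                if provable(n, a, u):
                    return a
            return default  # no proof search succeeded
        return agent

    # Example: a maximally cautious oracle that certifies only the two payoffs
    # it can read off directly; the resulting agents quit at every step.
    cautious = make_agent(lambda n: [0.0, -10.0],
                          lambda n, a, u: (a == 'Quit' and u <= 0)
                          or (a == 'Explode' and u <= -10))
    print(cautious(0))  # 'Quit'

Whether the agents continue hinges on what the proof search can certify about \(\texttt{`Continue'}\), which is what the claims below are about.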

In the language of the Vingean reflection paper, we have replaced the suggester-verifier architecture with a known order of proof searches (which may or may not succeed). We are also using a sequence of agents that use the same formal system, rather than a descending chain of formal systems in which each can prove the next one consistent. So in order to achieve self-trust, these agents will need to find some Löbian shortcuts.

Let’s start with a failure of self-trust:

Claim 1: If \(A_n\) ever considers any utility \(-9<u\leq 0\), and \(A_{n+1}\) first considers \((\texttt{`Explode'}, u')\) for some \(u'>-10\), then \(A_n\) will quit, even though \(A_{n+1}\) will not explode.

This is straightforward: since the inconsistency of Peano Arithmetic would imply that \(A_{n+1}\) succeeds at its first proof check, and since PA cannot prove its own consistency, PA cannot prove that \(A_{n+1}()\neq\texttt{`Explode'}\), and so it cannot prove that \(A_n()=\texttt{`Continue'}\to U_n()> -9\) (writing \(U_n()\) for \(U_n(A_n())\)). So if \(A_n\) considers any \(-9<u\leq 0\), it will find a proof that quitting satisfices before it can find any other proof.
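Spelled out in provability-logic terms, the key steps are (a sketch, with \(\mathrm{Con}(\textsf{PA})\) denoting the consistency of PA):

  • If PA were inconsistent it would prove everything, in particular \(U_{n+1}(\texttt{`Explode'})\geq u'\) for the first pair \(A_{n+1}\) checks, so \(A_{n+1}\) would return \(\texttt{`Explode'}\). This implication can itself be formalized in PA, giving \(\textsf{PA}\vdash A_{n+1}()\neq\texttt{`Explode'}\to\mathrm{Con}(\textsf{PA})\).
  • By Gödel's second incompleteness theorem, \(\textsf{PA}\nvdash\mathrm{Con}(\textsf{PA})\), and hence \(\textsf{PA}\nvdash A_{n+1}()\neq\texttt{`Explode'}\).
  • An exploding successor would make \(U_n(\texttt{`Continue'})=2^{-n}-10\leq -9\), so PA cannot certify any utility above \(-9\) for continuing; it can trivially certify \(U_n(\texttt{`Quit'})=0\geq u\) for any \(u\leq 0\), so the first proof search to succeed is one for quitting.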

Now, for a special case of successful self-trust:

Claim 2: If the agents are such that PA proves that for all \(n\), \(A_n\) first considers \((\texttt{`Continue'}, u)\) for some \(u< 2^{1-n}\), then \(A_n\) will indeed continue for all \(n\).

This follows from a Löbian argument: if we assume that PA proves \(\forall n \; A_n()=\texttt{`Continue'}\), then it follows that PA proves that for all \(n\), \(A_n()=\texttt{`Continue'}\to U_n()\geq u\) for any \(u< 2^{1-n}\), and since this is always the first counterfactual \(A_n\) considers, the success of this proof search implies that for all \(n\), \(A_n\) indeed continues. Since a proof of that assertion therefore implies the assertion, and this implication is itself provable in PA, Löb's theorem tells us the assertion really is provable.
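Schematically (a sketch, writing \(\square\) for provability in PA and \(P\) for the sentence \(\forall n\; A_n()=\texttt{`Continue'}\)):

  • Reasoning inside PA from the assumption \(\square P\): if provably every agent continues, then provably \(U_n()\geq u\) for every \(u<2^{1-n}\) (the partial sums \(2^{-n}+2^{-n-1}+\cdots\) eventually exceed any such \(u\)), so each \(A_n\)'s first proof search, the one for \((\texttt{`Continue'}, u)\), succeeds and \(A_n\) returns \(\texttt{`Continue'}\). That is, \(\textsf{PA}\vdash\square P\to P\).
  • Löb's theorem then gives \(\textsf{PA}\vdash P\).
  • Since PA proves \(P\), it also proves \(A_n()=\texttt{`Continue'}\to U_n()\geq u\) for each \(n\), so each agent's first proof search really does succeed and every \(A_n\) actually continues.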

This example can be weakened somewhat: it's OK for \(A_{n+1}\) to first consider \((\texttt{`Quit'}, u)\) and secondly \((\texttt{`Continue'}, u)\) if \(0<u\leq 2^{-n}\), since then the analogous proof goes through. (If PA were inconsistent, the successor would quit, so \(A_n\) would get the \(\texttt{`Quit'}\) payoff at the next step rather than continuing the chain.) But if any \(\texttt{`Explode'}\) action must be considered before a correct counterfactual about \(\texttt{`Continue'}\), then self-trust will fail. So it doesn't seem that such Löbian cycles are a robust foundation for Vingean reflection.

(Thanks to Benja for working this out with me.)


