Intelligent Agent Foundations Forum
by Ryan Carey 651 days ago

This result features in the paper by Piccione and Rubinstein that introduced the absent-minded driver problem [1].

Philosophers like decision theories that self-ratify, and this is indeed a powerful self-ratification principle.

This self-ratification principle does, however, rely on SIA probabilities computed under the current policy. We have shown that, conditioning on your current policy, you will want to continue with your current policy; i.e., the policy is a Nash equilibrium. There can be Nash equilibria for other policies \(\pi'\), however. The UDT policy will by definition equal or beat these from the ex ante point of view, but others can achieve higher expected utility conditional on the initial observation, i.e. a higher \(\sum_s SIA_{\pi'}(s|o)Q_{\pi'}(s,a)\). This apparent paradox is discussed in [2] [3], and seems to reduce to a disagreement over counterfactual mugging.
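The equilibrium property above can be checked numerically on the absent-minded driver itself. The following sketch is my own illustrative code (not from [1]); it uses the standard payoffs 0 (exit at the first intersection X), 4 (continue at X, exit at the second intersection Y), and 1 (continue at both), and verifies that the ex ante optimal policy (continue with probability 2/3) self-ratifies: conditional on the SIA probabilities it induces, no deviation is strictly preferred.

```python
def ex_ante_eu(p):
    """Planning-stage expected utility of 'continue with probability p'."""
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1  # = 4p - 3p^2

def sia_weights(p):
    """SIA probabilities of currently being at X or at Y.

    X is always reached once; Y is reached with probability p,
    so the SIA weights are proportional to (1, p)."""
    return 1 / (1 + p), p / (1 + p)

def conditional_eu(p, q):
    """Expected utility of deviating to 'continue with probability q'
    at the current (indistinguishable) intersection, while assuming
    the original policy p is played at the other intersection."""
    px, py = sia_weights(p)
    q_cont_x = (1 - p) * 4 + p * 1   # value of continuing at X, then following p
    eu_x = q * q_cont_x + (1 - q) * 0
    eu_y = q * 1 + (1 - q) * 4
    return px * eu_x + py * eu_y

# Ex ante: EU(p) = 4p - 3p^2 is maximized at p* = 2/3, giving EU = 4/3.
p_star = 2 / 3
print(ex_ante_eu(p_star))

# Self-ratification: at p*, conditional_eu(p_star, q) is constant in q,
# so the driver is indifferent among deviations -- a Nash equilibrium
# of the one-player game against their own later self.
vals = [conditional_eu(p_star, q / 10) for q in range(11)]
print(max(vals) - min(vals))
```

Note that the conditional expected utility at equilibrium (8/5) differs from the ex ante value (4/3); this gap between the two evaluation standpoints is exactly the kind of discrepancy discussed in [2] [3].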

So why do we prefer the UDT solution over solutions that are more optimal locally, and that also locally self-ratify? Obviously we want to avoid resorting to circular reasoning (i.e. "it gets the best utility ex ante"). I think there are some okay reasons:

  1. it is reflectively stable (i.e. it will not self-modify and will not hide future evidence);
  2. it makes sense assuming modal realism or the many-worlds interpretation (on which we deem it parochial to focus on any reference frame other than an equal weighting across the whole wavefunction/universe);
  3. it makes sense if we assume that self-location somehow does not
  4. it's simpler (the utility function is given weighting 1 across all worlds). In principle, UDT can also include the locally optimal
  5. it transfers better to scenarios without randomization, as in Nate Soares and Ben Levinstein's forthcoming paper [4].

I imagine there are more good arguments that I don’t yet know.

  1. Piccione, Michele, and Ariel Rubinstein. "On the interpretation of decision problems with imperfect recall." Games and Economic Behavior 20.1 (1997): 3-24, p. 19.
  2. Schwarz, Wolfgang. “Lost memories and useless coins: revisiting the absentminded driver.” Synthese 192.9 (2015): 3011-3036.
  3. http://lesswrong.com/lw/3dy/has_anyone_solved_psykoshs_nonanthropic_problem/
  4. Soares, Nate, and Ben Levinstein. "Cheating Death in Damascus." Forthcoming.
