Intelligent Agent Foundations Forum
Two Types of Updatelessness
discussion post by Abram Demski 219 days ago

Just a small note which I’m not sure has been mentioned anywhere else:

It seems like there are two different classes of “updateless reasoning”.

In problems like Agent Simulates Predictor, switching to updateless reasoning is better for you in the very situation you find yourself in. The gains accrue to you. You objectively achieve higher expected value, at the point of decision, by making the decision from the perspective of yourself long ago rather than doing what seems higher EV from the current perspective.

In problems like counterfactual mugging, the gains do not accrue to the agent at the point of making the decision. The increase in expected value goes to other possible selves, which the decision-point self does not even believe in any more. The claim of higher EV is quite subjective; it depends entirely on one’s prior.
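The contrast between the two perspectives in counterfactual mugging can be made concrete with a small expected-value calculation. This is a minimal sketch, not from the post itself: it assumes the standard illustrative setup where Omega flips a fair coin, rewards you $10,000 on heads iff it predicts you would pay $100 on tails, and you find yourself facing the tails request.

```python
# Hedged sketch of the counterfactual mugging arithmetic.
# The payoffs ($100 cost, $10,000 reward, fair coin) are conventional
# illustrative numbers, assumed here rather than taken from the post.

def prior_ev(pay_on_tails: bool, p_heads: float = 0.5,
             reward: float = 10_000, cost: float = 100) -> float:
    """EV of a policy computed from the prior, before the coin is seen.
    Omega rewards you on heads iff it predicts you pay on tails."""
    heads_payoff = reward if pay_on_tails else 0
    tails_payoff = -cost if pay_on_tails else 0
    return p_heads * heads_payoff + (1 - p_heads) * tails_payoff

def updated_ev(pay_now: bool, cost: float = 100) -> float:
    """EV after observing tails: the heads world has been updated away."""
    return -cost if pay_now else 0.0

# From the prior, the paying policy is better (4950 vs 0)...
assert prior_ev(True) > prior_ev(False)
# ...but at the decision point, paying is strictly worse (-100 vs 0).
assert updated_ev(True) < updated_ev(False)
```

The gap between the two functions is exactly the "mixed-upside" structure: the gain from paying lives entirely in the heads world, which the tails-observing self no longer believes in, and the size of that gain depends on the prior probability assigned to heads.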

For lack of better terms, I’ll call the first type all-upside updatelessness; the second type is mixed-upside.

It is quite possible to construct decision theories which get all-upside updateless reasoning without getting mixed-upside. Asymptotic decision theory was one such proposal.

On the other hand, it seems unlikely that any natural proposal would get the mixed-upside cases without also getting the all-upside cases. Policy selection, for example, automatically gets both types (to the limited extent that it enables updateless reasoning at all).
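Why policy selection picks up both types can be seen from its shape: it evaluates whole policies by their expected value under the prior, before any observation, so any problem where the prior-EV-best policy differs from the updated-EV-best action is handled the same way. The sketch below is an illustrative toy, not the actual policy selection proposal; the scenario names and payoffs are assumptions, reusing the conventional counterfactual mugging numbers.

```python
# Hedged toy sketch of policy selection: choose the policy that maximizes
# expected value under the prior, over all possible worlds at once.
# `scenarios` pairs each world's prior probability with a payoff function
# of the chosen policy; all names and numbers here are illustrative.

def select_policy(policies, scenarios):
    """Return the policy with the highest prior expected value."""
    def prior_ev(policy):
        return sum(p * payoff(policy) for p, payoff in scenarios)
    return max(policies, key=prior_ev)

# Counterfactual mugging as two equally likely worlds:
policies = ["pay", "refuse"]
scenarios = [
    # heads world: rewarded iff the policy pays on tails
    (0.5, lambda pol: 10_000 if pol == "pay" else 0),
    # tails world: asked for the $100
    (0.5, lambda pol: -100 if pol == "pay" else 0),
]
assert select_policy(policies, scenarios) == "pay"
```

Because the selection happens once from the prior perspective, the same mechanism covers all-upside problems for free; nothing in it distinguishes whether the gains land at the decision point or in other possible worlds.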

Nonetheless, I find it plausible that one wants two different mechanisms to get the two different kinds. It seems to me that one can handle all-upside cases in a more objective way, getting good overall guarantees. Mixed-upside cases, on the other hand, require more messiness and compromise, as in the policy selection proposal. So, it could be beneficial to combine a mechanism which does perfectly for all-upside cases with a mechanism that provides some weaker guarantee for mixed-upside.
