Intelligent Agent Foundations Forum
by Jessica Taylor 767 days ago

It seems like this won’t happen with the value learning method that seems most natural to me (and consistent with IRL/CIRL): have the true utility function, the definition of chocolate, etc. be “historical” facts that are not in the AI’s future. In this case, there is no incentive to manipulate the definition of chocolate, since, according to the AI’s model, that definition has already been decided.

So I’m curious about what model you’re using; it seems like in your model, it is natural to place the definition of chocolate in the AI’s future.
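The incentive difference between the two models can be illustrated with a toy sketch (hypothetical, not from the thread; the action names and the two expected-utility functions are illustrative assumptions): when the definition is a historical fact, manipulation earns nothing extra under the agent's model, while a definition placed in the agent's future rewards manipulation.

```python
import random

def eu_historical(action, trials=20000):
    """Toy model: the definition was fixed before the action, so the agent's
    model samples it from its prior and the action cannot influence it."""
    score = 0
    for _ in range(trials):
        definition = random.choice(["likes_A", "likes_B"])  # already decided
        # both actions end up producing A; "manipulating" changes nothing
        produced = "A"
        score += (definition == f"likes_{produced}")
    return score / trials

def eu_future(action, trials=20000):
    """Toy model: the definition is determined after (and by) the action, so
    manipulating it pays off."""
    score = 0
    for _ in range(trials):
        if action == "manipulate_to_A":
            definition = "likes_A"  # the agent itself fixes the definition
        else:
            definition = random.choice(["likes_A", "likes_B"])
        produced = "A"
        score += (definition == f"likes_{produced}")
    return score / trials
```

Under the historical model, `eu_historical("manipulate_to_A")` and `eu_historical("produce_A")` agree (about 0.5), so manipulation buys nothing; under the future model, `eu_future("manipulate_to_A")` is 1.0, so the agent is incentivized to rewrite the definition.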



by Paul Christiano 765 days ago

I think the other natural approach is to simply make decisions based on the current estimated preferences, but to learn instrumental preferences of the user (including desire for the agent to learn more), as described here. Of course this also doesn’t have the problem from the OP.
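A minimal sketch of "make decisions based on the current estimated preferences" (hypothetical; the `choose`/`update` helpers and the doubling likelihood are illustrative assumptions, not anything from the linked post): the agent keeps a posterior over what the user prefers, acts on the current best estimate, and revises the estimate as feedback comes in.

```python
def choose(posterior):
    # Act on the current estimate: pick the outcome the user most
    # probably prefers. posterior: dict outcome -> probability.
    return max(posterior, key=posterior.get)

def update(posterior, feedback):
    # Simple Bayes update: feedback "liked_X" doubles the weight on X,
    # then the distribution is renormalized.
    weighted = {k: v * (2.0 if feedback == f"liked_{k}" else 1.0)
                for k, v in posterior.items()}
    z = sum(weighted.values())
    return {k: v / z for k, v in weighted.items()}
```

For example, starting from a uniform posterior `{"A": 0.5, "B": 0.5}`, observing `"liked_A"` shifts it to `{"A": 2/3, "B": 1/3}` and the agent acts on `"A"`. Instrumental preferences, such as the user wanting the agent to gather more evidence before acting, would show up as additional terms in what the agent optimizes at each step.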


by Jessica Taylor 764 days ago

Yeah, this seems like the most natural way to deal with things like “chocolate” that aren’t yet well-defined. In this case, the instrumental preferences themselves will be treated as historical facts (it’s assumed that they’re already well-defined enough to be learned).


by Stuart Armstrong 763 days ago

> have the true utility function, definition of chocolate, etc be “historical” facts that are not in the AI’s future.

The whole point of stratification (which is a kind of counterfactual reasoning) is to achieve this. Most value learning suggestions that I’ve seen do not.


by Paul Christiano 763 days ago

> Most value learning suggestions that I’ve seen do not.

What are you thinking of here? Could you point to an example?



