Intelligent Agent Foundations Forum
by Paul Christiano 557 days ago | link | parent

I don’t know what you really want, even in mundane circumstances. Nevertheless, it’s easy to talk about a motivational state in which I try my best to help you get what you want, and this would be sufficient to avert catastrophe. This would remain true if you were an alien with whom I share no cognitive machinery.

An example I often give is that a supervised learner is basically trying to do what I want, while usually being very weak. It may generalize catastrophically to unseen situations (which is a key problem), and it may not be very competent, but on the training distribution it’s not going to kill me except by incompetence.
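A toy sketch of the point above (not part of the original discussion, and not anyone's proposed system): a "supervised policy" that simply memorizes its training pairs does what we want on the training distribution, but falls back on an arbitrary heuristic off-distribution, where its behavior can be badly unintended. The `train`/`policy` names and the fallback rule are illustrative assumptions.

```python
def train(examples):
    """'Learn' by memorizing (state -> action) pairs, standing in for a
    weak supervised learner that fits the training distribution well."""
    table = dict(examples)

    def policy(state):
        if state in table:
            # On-distribution: reproduces the demonstrated behavior.
            return table[state]
        # Off-distribution: generalizes by an arbitrary heuristic
        # (here, the longest known action), which may be unintended.
        return max(table.values(), key=len)

    return policy

training_data = [("greet", "say hello"), ("farewell", "say goodbye")]
policy = train(training_data)

print(policy("greet"))   # on-distribution: as intended
print(policy("launch"))  # unseen input: the heuristic picks something unintended
```

The failure here comes entirely from the learner's inductive bias on unseen inputs, not from any flaw in the training labels, which is the distinction drawn in the replies below.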



by Stuart Armstrong 557 days ago | link

> It may generalize catastrophically to unseen situations (which is a key problem)

That probably summarises my whole objection ^_^


by Paul Christiano 556 days ago | link

But this could happen even if you train your agent using the “correct” reward function. And conversely, if we take as given an AI that can robustly maximize a given reward function, then it seems like my schemes don’t have this generalization problem.

So it seems like this isn’t a problem with the reward function, it’s just the general problem of doing robust/reliable ML. It seems like that can be cleanly factored out of the kind of reward engineering I’m discussing in the ALBA post. Does that seem right?

(It could certainly be the case that robust/reliable ML is the real meat of aligning model-free RL systems. Indeed, I think that’s a more common view in the ML community. Or, it could be the case that any ML system will fail to generalize in some catastrophic way, in which case the remedy is to make less use of learning.)
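A minimal numerical sketch of the claim that this failure is independent of the reward function (my illustration, not from the original thread): we fit a line to exactly-correct target values of y = x² on x ∈ [0, 1] using closed-form least squares. The fit looks fine on the training distribution yet is wildly wrong at x = 10, so the problem is robustness/generalization, not the targets.

```python
# Training distribution: x in [0, 1], with exactly-correct targets y = x^2.
xs = [i / 10 for i in range(11)]
ys = [x * x for x in xs]

# Ordinary least-squares line fit, computed in closed form.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# Worst-case error on the training distribution vs. error far off it.
train_err = max(abs((slope * x + intercept) - x * x) for x in xs)
test_err = abs((slope * 10 + intercept) - 10 * 10)

print(train_err)  # small: the learner looks fine on-distribution
print(test_err)   # large: the same learner is far off at x = 10
```

Nothing about the target signal was wrong anywhere; the model class simply cannot extrapolate, which is the "robust/reliable ML" problem being factored out above.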


by Stuart Armstrong 554 days ago | link

> It seems like that can be cleanly factored out of the kind of reward engineering I’m discussing in the ALBA post. Does that seem right?

That doesn’t seem right to me. If there isn’t a problem with the reward function, then ALBA seems unnecessarily complicated. Conversely, if there is a problem, we might be able to use something like ALBA to try and fix it (this is why I was more positive about it in practice).



