Intelligent Agent Foundations Forum
by Stuart Armstrong 814 days ago | link | parent

(“conservation of expected evidence,” like Bayesian updating). In the case of value learning with a causal model, this just requires the values to not be causally downstream of the AI’s actions, e.g. for them to be fixed before the first action of the agent. This is usually what people assume.

Yes, I’ve posted on that. But getting values that are not causally downstream of the AI’s actions is not easy (and most things that people have proposed for value learning violate this assumption; I’m pretty sure approval-based methods do as well). Stratification is one way you can get this.

So I’m not avoiding the Bayesian approach because I want more options, but because I haven’t seen a decent Bayesian approach proposed.
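
To make the quoted property concrete, here is a minimal sketch (a two-value toy model with made-up probabilities, assumed purely for illustration): when the latent value is fixed before the agent acts and the observation depends only on that value, the agent's expected posterior equals its prior, so no action can be expected to shift its beliefs about the value.

```python
# Toy check of "conservation of expected evidence": the latent value V is fixed
# before the agent acts, and the observation O depends only on V (never on the
# agent's action).  All numbers below are made up for illustration.

P_V1 = 0.3                       # prior P(V = 1)
P_O1_GIVEN_V = {1: 0.9, 0: 0.2}  # P(O = 1 | V)

def posterior_v1(o):
    """Bayesian posterior P(V = 1 | O = o)."""
    p_o_v1 = P_O1_GIVEN_V[1] if o == 1 else 1 - P_O1_GIVEN_V[1]
    p_o_v0 = P_O1_GIVEN_V[0] if o == 1 else 1 - P_O1_GIVEN_V[0]
    return p_o_v1 * P_V1 / (p_o_v1 * P_V1 + p_o_v0 * (1 - P_V1))

# Average the posterior over the agent's own predictive distribution for O.
p_o1 = P_O1_GIVEN_V[1] * P_V1 + P_O1_GIVEN_V[0] * (1 - P_V1)
expected_posterior = p_o1 * posterior_v1(1) + (1 - p_o1) * posterior_v1(0)

print(expected_posterior)  # 0.3 -- equals the prior: the agent cannot expect
print(P_V1)                # any of its own updates to move its beliefs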



by Paul Christiano 813 days ago | link

In order to do value learning we need to specify what the AI is supposed to infer from some observations. The usual approach is to specify how the observations depend on the human’s preferences, and then have the AI do Bayesian updating. If we are already in the business of explicitly specifying a causal model that links latent preferences to observations, we will presumably specify a model where latent preferences are upstream of observations and not downstream of the AI’s actions.
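
A minimal sketch of that recipe, with an illustrative likelihood (the two reward hypotheses, the prior, and the softmax "noisy rationality" choice model below are assumptions made up for this sketch, not anyone's actual proposal); the latent preferences sit upstream of the observation and are not influenced by any AI action:

```python
import math

# (1) Specify P(observation | preferences); (2) do a Bayesian update.

options = ["tea", "coffee"]
reward_hypotheses = {
    "likes_tea":    {"tea": 1.0, "coffee": 0.0},
    "likes_coffee": {"tea": 0.0, "coffee": 1.0},
}
prior = {"likes_tea": 0.5, "likes_coffee": 0.5}
BETA = 2.0  # rationality parameter of the assumed choice model

def p_choice(choice, reward):
    """P(human picks `choice` | reward), under the assumed Boltzmann model."""
    weights = {o: math.exp(BETA * reward[o]) for o in options}
    return weights[choice] / sum(weights.values())

def posterior(choice):
    """Posterior over reward hypotheses after observing the human's choice."""
    unnorm = {h: prior[h] * p_choice(choice, r) for h, r in reward_hypotheses.items()}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

print(posterior("coffee"))  # belief shifts toward "likes_coffee"
```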

At some points it seems like you are expressing concerns about model misspecification, but I don’t see how this would cause the problem either.

For example, suppose that I incorrectly specify a model where the human is perfectly reliable, such that if at any time they say they like death, then they really do. And suppose that the AI can easily intervene to cause the human to say they like death. You seem to imply that the AI would take the action to cause the human to say they like death, if death is easier to achieve. But I don’t yet see why this would happen.

If the AI updates from the human saying that they like death, then it’s because the AI doesn’t recognize the impact of its own actions on the human’s utterances. And if the AI doesn’t recognize the impact of its own actions on the human’s utterances, then it won’t bother to change its actions in order to influence the human’s utterances.

I don’t see any in-between regime where the AI will engage in this kind of manipulation, even if the model is completely misspecified. That is, I literally cannot construct any Bayesian agent that exhibits this behavior.
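
The first horn of that dichotomy can be seen in a toy model (all numbers below are assumed, purely for illustration): if the AI's model says the utterance depends only on fixed preferences and not on its action, then every action induces the same predictive distribution over utterances, so a planner using that model has nothing to gain, belief-wise, from "manipulative" actions.

```python
# The AI's model: the human's utterance U depends only on fixed preferences V,
# never on the AI's action A, i.e. P(U | V, A) = P(U | V).

prior = {"values_life": 0.9, "values_death": 0.1}
p_says_likes_death = {"values_life": 0.05, "values_death": 0.95}  # P(U = "likes death" | V)

def predicted_p_utterance(action):
    """AI's predicted P(U = "likes death"); `action` is deliberately unused,
    because in this model utterances are not downstream of actions."""
    return sum(prior[v] * p_says_likes_death[v] for v in prior)

for action in ["ask_normally", "intervene_on_the_human"]:
    print(action, predicted_p_utterance(action))  # identical for both actions
```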

It seems like the only way it can appear is if we either (1) directly specify how the AI ought to update on observations, rather than specifying a model, or (2) specify a model in which the user’s preferences are causally downstream of the AI’s actions. But neither of those seems like something we would do.

because I haven’t seen a decent Bayesian approach proposed.

In some sense I agree with this. Specifying a model of how observations relate to preferences is very difficult! But both IRL and your writing seem to take as given such a model, and people who work on IRL in fact believe that we’ll be able to construct good-enough models. So if you are objecting to this leg of the proposal, that would be a much more direct criticism of IRL on its own terms. (And this is what I meant by saying “give up on the Bayesian approach.”)

For example, if you assume “Anything humans say about their preferences is true,” that’s basically giving up on the Bayesian approach as usually imagined (which would be to directly specify a model that relates preferences to utterances, and then to update on utterances) and replacing it with an ad-hoc algorithm for making inferences from human utterances (namely, accept them at face value). In the usual Bayesian setting, “humans are perfectly reliable” corresponds to believing that human utterances correctly track (fixed) human preferences, i.e. believing that it is impossible to influence those utterances.


by Stuart Armstrong 812 days ago | link

For example, if you assume “Anything humans say about their preferences is true,” that’s basically giving up on the Bayesian approach as usually imagined

More formally, what I mean by that is “assume humans are perfectly rational, and fit a reward/utility function given those assumptions”. This is a perfectly Bayesian approach, and will always produce an (over-complicated) utility function that fits the observed behaviour.
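
A toy illustration of why (the environment and the observed behaviour below are made up, and the reward is deliberately degenerate): under the "humans are perfectly rational" assumption, the reward that simply pays 1 for doing exactly what was observed fits any observed behaviour perfectly.

```python
# Any behaviour can be rationalized if the human is assumed perfectly rational.

observed_behaviour = {"kitchen": "make_tea", "office": "write_report"}

def fitted_reward(state, action):
    """An (over-complicated, degenerate) reward that rationalizes the data."""
    return 1.0 if observed_behaviour.get(state) == action else 0.0

def perfectly_rational_policy(state, actions):
    """A perfectly rational human just maximizes the fitted reward."""
    return max(actions, key=lambda a: fitted_reward(state, a))

# The fitted model reproduces the observed behaviour exactly.
print(perfectly_rational_policy("kitchen", ["make_coffee", "make_tea"]))   # make_tea
print(perfectly_rational_policy("office", ["write_report", "slack_off"]))  # write_report
```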

In the usual Bayesian setting, “humans are perfectly reliable” corresponds to believing that human utterances correctly track (fixed) human preferences, i.e. believing that it is impossible to influence those utterances.

Yes and no. Under the assumption that humans are perfectly reliable, influencing human preferences or utterances is impossible within the model. But this leads to behaviour that looks like influencing human utterances when viewed under other, more realistic assumptions.

E.g. if you threaten a human with a gun and ask them to report that they are maximally happy, a sensible model of human preferences will say they are lying. But the “humans are rational” model will simply conclude that humans really like being threatened in this way.
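
With made-up numbers (a sketch, purely illustrative), the contrast looks like this:

```python
# Two likelihood models for the observation "human reports being maximally
# happy while threatened with a gun":
#   sensible:            a threatened human will report happiness almost
#                        regardless of their true preferences (coerced report)
#   perfectly_reliable:  the report always reflects true preferences,
#                        even under threat

prior_enjoys_threat = 0.01  # P(human genuinely likes being threatened)

# P(reports "maximally happy" | enjoys?, threatened), under each model:
models = {
    "sensible":           {"enjoys": 0.99, "does_not": 0.95},
    "perfectly_reliable": {"enjoys": 1.00, "does_not": 0.00},
}

for name, lik in models.items():
    num = lik["enjoys"] * prior_enjoys_threat
    den = num + lik["does_not"] * (1 - prior_enjoys_threat)
    print(name, round(num / den, 3))
# sensible:            ~0.01  (the report is nearly uninformative; the prior stands)
# perfectly_reliable:   1.0   ("they really like being threatened in this way")
```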



