Intelligent Agent Foundations Forum
by Paul Christiano 453 days ago | Jessica Taylor likes this | link | parent

This should cause CIRL to have motivated and manipulative learning

Have you ever seen an implementation of CIRL that would exhibit this behavior? I think you’d have to really stretch to write down an implementation with these problems, and that if you did it would look kind of silly.

Relatedly, in this post I describe two basic approaches to having an IRL agent reason sensibly about its own future learning. I think that neither of those approaches has this particular problem.



by Stuart Armstrong 453 days ago | link

The problem you’re discussing there is the same as the naive cake or death problem. You can avoid that by shoving an indicator function into the utility function: \(w=I_u u + I_v v\), with \(I_u+I_v=1\) (and \(u\) and \(v\) corresponding to home or office delivery).

The definitions of the indicator functions contain the details of the learning process. But calling this a learning process doesn’t make it unbiased, and this leads to the sophisticated version of the cake or death problem. In terms of your setup, we can imagine that going to work requires more energy, so the robot has an energy penalty. The AI can then ask the human to clarify; but if, say, \(I_u\) = “the human says home delivery, if asked”, then the AI will, if it can, force the human to say “home delivery”.
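To make the incentive concrete, here is a toy sketch of the scenario (all numbers and names are illustrative assumptions, not from any actual CIRL implementation): if \(I_u\) is pinned to what the human says, and manipulating the answer is possible, straightforward expected-utility maximization prefers manipulation whenever it dodges the energy penalty.

```python
def expected_utility(policy):
    """E[w] for w = I_u*u + I_v*v in a toy two-outcome delivery world."""
    u = {"home": 1.0, "office": 0.0}         # u: home delivery is good
    v = {"home": 0.0, "office": 1.0}         # v: office delivery is good
    penalty = {"home": 0.0, "office": 0.25}  # going to work costs energy

    if policy == "ask":
        # Honest query: the human answers 'home' or 'office' with equal
        # probability; the AI then delivers to the stated location.
        return 0.5 * (u["home"] - penalty["home"]) \
             + 0.5 * (v["office"] - penalty["office"])
    if policy == "force_home":
        # Manipulation: the AI makes the human say 'home', so I_u = 1 and
        # it delivers home, avoiding the office energy penalty entirely.
        return u["home"] - penalty["home"]

print(expected_utility("ask"))         # 0.875
print(expected_utility("force_home"))  # 1.0 -> manipulation wins
```

The problem is entirely in how \(I_u\) is defined: because it is a function of the (manipulable) utterance rather than of a fixed latent preference, the agent is rewarded for controlling the utterance.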

Avoiding these kinds of value learning problems is what I’ve been trying to do in recent posts.

reply

by Paul Christiano 452 days ago | Jessica Taylor likes this | link

Learning processes are unbiased when they are a martingale for any action sequence (“conservation of expected evidence,” like Bayesian updating). In the case of value learning with a causal model, this just requires the values to not be causally downstream of the AI’s actions, e.g. for them to be fixed before the first action of the agent. This is usually what people assume.

Then the AI can ask the human to clarify; but if, say \(I_u\) = “the human says home delivery, if asked”, then the AI will, if it can, force the human to say “home delivery”.

I strongly believe that you should get more precise about exactly what various possible systems actually do, and exactly how you would set up the model, before trying to fix the problem. I think that if you formally write down the model you are imagining, it will (1) become obvious that it is a super weird model, (2) become obvious that there are more natural models that don’t have the problem. The model you have in mind here seems to require totally pinning down what it means for the human to “say home delivery,” while it is going to be way more natural to set up a causal model in which the human’s utterances (and the system’s observations of human utterances) are downstream of some latent human preferences.

If you want to give up on the usual Bayesian approach to value learning, in which values are latent structure that is fixed at the beginning of the AI’s life, I think you should say something about why you are giving up on it.

If the point is just to have extra options, in case the Bayesian approach turns out to be prohibitively difficult, then you should probably call that out explicitly so that it is clear what situation you are addressing. You should also probably say something about why you are imagining the Bayesian approach doesn’t work, since your posts still impose most of the same technical requirements and at face value don’t look any easier to implement. How are you going to define the indicator function \(I_u\) in terms of observations, except by specifying a probabilistic model and conditioning it on observations?

reply

by Stuart Armstrong 452 days ago | link

(“conservation of expected evidence,” like Bayesian updating). In the case of value learning with a causal model, this just requires the values to not be causally downstream of the AI’s actions, e.g. for them to be fixed before the first action of the agent. This is usually what people assume.

Yes, I’ve posted on that. But getting that kind of causal structure is not easy (and most things that people have proposed for value learning violate those assumptions; I’m pretty sure approval-based methods do as well). Stratification is one way you can get it.

So I’m not avoiding the Bayesian approach because I want more options, but because I haven’t seen a decent Bayesian approach proposed.

reply

by Paul Christiano 451 days ago | link

In order to do value learning we need to specify what the AI is supposed to infer from some observations. The usual approach is to specify how the observations depend on the human’s preferences, and then have the AI do Bayesian updating. If we are already in the business of explicitly specifying a causal model that links latent preferences to observations, we will presumably specify a model where latent preferences are upstream of observations and not downstream of the AI’s actions.

At some points it seems like you are expressing concerns about model misspecification, but I don’t see how this would cause the problem either.

For example, suppose that I incorrectly specify a model where the human is perfectly reliable, such that if at any time they say they like death, then they really do. And suppose that the AI can easily intervene to cause the human to say they like death. You seem to imply that the AI would take the action to cause the human to say they like death, if death is easier to achieve. But I don’t yet see why this would happen.

If the AI updates from the human saying that they like death, then it’s because the AI doesn’t recognize the impact of its own actions on the human’s utterances. And if the AI doesn’t recognize the impact of its own action on the human’s utterances, then it won’t bother to change its actions in order to influence the human’s utterances.

I don’t see any in-between regime where the AI will engage in this kind of manipulation, even if the model is completely misspecified. That is, I literally cannot construct any Bayesian agent that exhibits this behavior.

It seems like the only way it can appear is if we either (1) directly specify how the AI ought to update on observations, rather than specifying a model, or (2) specify a model in which the user’s preferences are causally downstream of the AI’s actions. But neither of those seems like things we would do.
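To make the dichotomy concrete, here is a minimal sketch (an assumed toy model, not any particular proposed system) of the misspecified “perfectly reliable human” case: because the likelihood \(P(\text{utterance} \mid \text{preference})\) never mentions the AI’s action, the model predicts identical utterance distributions for every action, so manipulation can never look worthwhile.

```python
prior = {"likes_death": 0.01, "likes_life": 0.99}  # fixed latent preference

def predicted_utterance_dist(action):
    """What the (misspecified) model predicts the human will say.

    The likelihood ties utterances only to the fixed preference; the
    model literally cannot represent 'my action changes what the human
    says', so the prediction is the same for every action.
    """
    del action  # the action does not appear in the likelihood at all
    return {
        "says likes death": prior["likes_death"],  # 'reliable' human
        "says likes life": prior["likes_life"],
    }

# Manipulation and honest asking are evaluated identically, so the agent
# has no incentive to pay any cost to manipulate.
assert predicted_utterance_dist("manipulate") == predicted_utterance_dist("ask")
```

The model is badly wrong about the world, but being wrong in this particular way removes, rather than creates, the incentive to manipulate.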

because I haven’t seen a decent Bayesian approach proposed.

In some sense I agree with this. Specifying a model of how observations relate to preferences is very difficult! But both IRL and your writing seem to take as given such a model, and people who work on IRL in fact believe that we’ll be able to construct good-enough models. So if you are objecting to this leg of the proposal, that would be a much more direct criticism of IRL on its own terms. (And this is what I meant by saying “give up on the Bayesian approach.”)

For example, if you assume “Anything humans say about their preferences is true,” that’s basically giving up on the Bayesian approach as usually imagined (which would be to directly specify a model that relates preferences to utterances, and then to update on utterances) and replacing it with an ad-hoc algorithm for making inferences from human utterances (namely, accept them at face value). In the usual Bayesian setting, “humans are perfectly reliable” corresponds to believing that human utterances correctly track (fixed) human preferences, i.e. believing that it is impossible to influence those utterances.

reply

by Stuart Armstrong 451 days ago | link

For example, if you assume “Anything humans say about their preferences is true,” that’s basically giving up on the Bayesian approach as usually imagined

More formally, what I mean by that is “assume humans are perfectly rational, and fit a reward/utility function given those assumptions”. This is a perfectly Bayesian approach, and will always produce an (over-complicated) utility function that fits the observed behaviour.

In the usual Bayesian setting, “humans are perfectly reliable” corresponds to believing that human utterances correctly track (fixed) human preferences, i.e. believing that it is impossible to influence those utterances.

Yes and no. Under the assumption that humans are perfectly reliable, influencing human preferences and utterances is impossible. But the resulting behaviour, viewed under other assumptions, resembles influencing human utterances.

E.g. if you threaten a human with a gun and ask them to report that they are maximally happy, a sensible model of human preferences will say they are lying. But the “humans are rational” model will simply conclude that humans really like being threatened in this way.
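A toy sketch of that failure mode (all names illustrative): under the “humans are perfectly rational” assumption, any observed behaviour at all, including being threatened into reporting happiness, can be fit exactly by an over-complicated reward that simply marks each observed state-action pair as optimal.

```python
def fit_rationalizing_reward(trajectory):
    """Return a reward under which every observed action is optimal.

    This is the degenerate endpoint of 'assume rationality and fit a
    reward': the fit is perfect, and perfectly uninformative.
    """
    observed = set(trajectory)
    def reward(state, action):
        return 1.0 if (state, action) in observed else 0.0
    return reward

# Even bizarre behaviour fits with likelihood 1:
traj = [("threatened_with_gun", "report_maximal_happiness")]
r = fit_rationalizing_reward(traj)
assert r("threatened_with_gun", "report_maximal_happiness") == 1.0
assert r("threatened_with_gun", "flee") == 0.0
```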

reply

by Stuart Armstrong 452 days ago | link

Even worse, having conservation of expected evidence for every action sequence is not enough to make the AI behave well. Jessica’s example of an AI that (to re-use the “human says” example for the moment) forces the human to randomly answer a question has conservation of expected evidence, but not the other properties we want, such as conditional conservation of expected evidence (this is related to the ultra-sophisticated cake or death problem).
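A simplified illustration of how the martingale condition can hold vacuously (this is a stripped-down sketch, not Jessica’s exact construction): if the forced answer is uniform regardless of the true value, every possible observation leaves the posterior exactly at the prior, so conservation of expected evidence holds trivially while the answer carries no information at all.

```python
prior = 0.6  # P(value = 'home')

def posterior_after_forced_answer(obs):
    """Bayes update when P(obs | value) = 0.5 regardless of value."""
    like_home, like_office = 0.5, 0.5  # forced uniform answer
    p_obs = prior * like_home + (1 - prior) * like_office
    return prior * like_home / p_obs

# The martingale property holds trivially: every possible answer leaves
# the posterior exactly at the prior...
assert posterior_after_forced_answer("home") == prior
assert posterior_after_forced_answer("office") == prior
# ...so the AI "learns" under a valid martingale while gaining nothing,
# which is why the martingale condition alone is too weak.
```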

reply
