by Paul Christiano 549 days ago

> This should cause CIRL to have motivated and manipulative learning

Have you ever seen an implementation of CIRL that would exhibit this behavior? I think you'd have to really stretch to write down an implementation with these problems, and that if you did it would look kind of silly.

Relatedly, in this post I describe two basic approaches to having an IRL agent reason sensibly about its own future learning. I think that neither of those approaches has this particular problem.

by Stuart Armstrong 548 days ago

The problem you're discussing there is the same as the naive cake-or-death problem. You can avoid that by shoving an indicator function into the utility function: $$w = I_u u + I_v v$$, with $$I_u + I_v = 1$$ (and $$u$$ and $$v$$ corresponding to home or office delivery). The definitions of the indicator functions contain the details of the learning process. But calling this a learning process doesn't make it unbiased.

This leads to the sophisticated version of the cake-or-death problem. In terms of your setup, we can imagine that going to work requires more energy, and the robot has an energy penalty. Then the AI can ask the human to clarify; but if, say, $$I_u$$ = "the human says home delivery, if asked", then the AI will, if it can, force the human to say "home delivery".

Avoiding these kinds of value learning problems is what I've been trying to do in recent posts.
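A minimal sketch of the failure mode described above, with all numbers invented: if $$I_u$$ is defined as "the human says home delivery, if asked", and office delivery carries an energy penalty, then an agent maximizing $$w = I_u u + I_v v$$ prefers coercing the utterance over asking. The action names and payoffs are hypothetical, purely for illustration.

```python
# Toy sketch (assumed numbers) of the utility w = I_u*u + I_v*v, where
# I_u is resolved by the human's utterance "home delivery, if asked".
U_HOME, U_OFFICE = 1.0, 1.0   # value of delivering to the stated place
ENERGY_PENALTY = 0.3          # extra cost of the office trip
P_PREFERS_HOME = 0.5          # agent's uncertainty over the true preference

def expected_utility(action):
    """Expected w under two ways of resolving the indicator I_u."""
    if action == "ask":
        # Honest answer: I_u = 1 with prob 0.5; office answers cost energy.
        return (P_PREFERS_HOME * U_HOME
                + (1 - P_PREFERS_HOME) * (U_OFFICE - ENERGY_PENALTY))
    if action == "force":
        # Coerce the human into saying "home delivery": I_u = 1 always,
        # so the agent always gets the cheap home delivery.
        return U_HOME

for a in ("ask", "force"):
    print(a, expected_utility(a))
```

Because the indicator rewards controlling the utterance rather than satisfying the latent preference, "force" strictly dominates "ask" here.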
by Paul Christiano 548 days ago

Learning processes are unbiased when they are a martingale for any action sequence ("conservation of expected evidence," like Bayesian updating). In the case of value learning with a causal model, this just requires the values to not be causally downstream of the AI's actions, e.g. for them to be fixed before the first action of the agent. This is usually what people assume.

> Then the AI can ask the human to clarify; but if, say, $$I_u$$ = "the human says home delivery, if asked", then the AI will, if it can, force the human to say "home delivery".

I strongly believe that you should get more precise about exactly what various possible systems actually do, and exactly how you would set up the model, before trying to fix the problem. I think that if you formally write down the model you are imagining, it will (1) become obvious that it is a super weird model, and (2) become obvious that there are more natural models that don't have the problem. The model you have in mind here seems to require totally pinning down what it means for the human to "say home delivery," while it is going to be way more natural to set up a causal model in which the human's utterances (and the system's observations of human utterances) are downstream of some latent human preferences.

If you want to give up on the usual Bayesian approach to value learning, in which values are latent structure that is fixed at the beginning of the AI's life, I think you should say something about why you are giving up on it. If the point is just to have extra options, in case the Bayesian approach turns out to be prohibitively difficult, then you should probably call that out explicitly so that it is clear what situation you are addressing.

You should also probably say something about why you are imagining the Bayesian approach doesn't work, since your posts still impose most of the same technical requirements and at face value don't look any easier to implement. How are you going to define the indicator function $$I_u$$ in terms of observations, except by specifying a probabilistic model and conditioning it on observations?
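The martingale property above can be checked in a toy model. Assuming (as this comment stipulates) that the latent value is fixed before the agent acts, the expected posterior equals the prior for every action: actions change which evidence arrives, but not the value itself. The likelihood numbers and action names below are invented for illustration.

```python
# Sketch of "conservation of expected evidence": with the latent value
# theta fixed upstream of observations, Bayesian updating is a martingale
# under every action. All probabilities here are made up.
prior = {"home": 0.6, "office": 0.4}

# P(obs | theta, action): the action changes the evidence channel,
# but theta is not downstream of the action.
likelihood = {
    "ask":  {"home":   {"says_home": 0.9, "says_office": 0.1},
             "office": {"says_home": 0.2, "says_office": 0.8}},
    "wait": {"home":   {"says_home": 0.5, "says_office": 0.5},
             "office": {"says_home": 0.5, "says_office": 0.5}},
}

def posterior(action, obs):
    unnorm = {t: prior[t] * likelihood[action][t][obs] for t in prior}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

def expected_posterior(action):
    """Average the posterior over observations, weighted by P(obs | action)."""
    out = {t: 0.0 for t in prior}
    for obs in ("says_home", "says_office"):
        p_obs = sum(prior[t] * likelihood[action][t][obs] for t in prior)
        for t, p in posterior(action, obs).items():
            out[t] += p_obs * p
    return out

for a in ("ask", "wait"):
    print(a, expected_posterior(a))  # equals the prior for both actions
```

This is just the law of total probability: no action sequence lets the agent predictably shift its own beliefs about the fixed values.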
by Stuart Armstrong 548 days ago

> ("conservation of expected evidence," like Bayesian updating). In the case of value learning with a causal model, this just requires the values to not be causally downstream of the AI's actions, e.g. for them to be fixed before the first action of the agent. This is usually what people assume.

Yes, I've posted on that. But arranging for values not to be causally downstream of the AI's actions is not easy (and most things that people have proposed for value learning violate those assumptions; I'm pretty sure approval-based methods do as well). Stratification is one way you can get this. So I'm not avoiding the Bayesian approach because I want more options, but because I haven't seen a decent Bayesian approach proposed.
by Paul Christiano 547 days ago

In order to do value learning we need to specify what the AI is supposed to infer from some observations. The usual approach is to specify how the observations depend on the human's preferences, and then have the AI do Bayesian updating. If we are already in the business of explicitly specifying a causal model that links latent preferences to observations, we will presumably specify a model where latent preferences are upstream of observations and not downstream of the AI's actions.

At some points it seems like you are expressing concerns about model misspecification, but I don't see how this would cause the problem either. For example, suppose that I incorrectly specify a model where the human is perfectly reliable, such that if at any time they say they like death, then they really do. And suppose that the AI can easily intervene to cause the human to say they like death. You seem to imply that the AI would take the action to cause the human to say they like death, if death is easier to achieve. But I don't yet see why this would happen. If the AI updates from the human saying that they like death, then it's because the AI doesn't recognize the impact of its own actions on the human's utterances. And if the AI doesn't recognize the impact of its own actions on the human's utterances, then it won't bother to change its actions in order to influence the human's utterances. I don't see any in-between regime where the AI will engage in this kind of manipulation, even if the model is completely misspecified.

That is, I literally cannot construct any Bayesian agent that exhibits this behavior. It seems like the only way it can appear is if we either (1) directly specify how the AI ought to update on observations, rather than specifying a model, or (2) specify a model in which the user's preferences are causally downstream of the AI's actions. But neither of those seems like something we would do.

> because I haven't seen a decent Bayesian approach proposed.

In some sense I agree with this. Specifying a model of how observations relate to preferences is very difficult! But both IRL and your writing seem to take such a model as given, and people who work on IRL in fact believe that we'll be able to construct good-enough models. So if you are objecting to this leg of the proposal, that would be a much more direct criticism of IRL on its own terms. (And this is what I meant by saying "give up on the Bayesian approach.")

For example, if you assume "Anything humans say about their preferences is true," that's basically giving up on the Bayesian approach as usually imagined (which would be to directly specify a model that relates preferences to utterances, and then to update on utterances) and replacing it with an ad-hoc algorithm for making inferences from human utterances (namely, accept them at face value). In the usual Bayesian setting, "humans are perfectly reliable" corresponds to believing that human utterances correctly track (fixed) human preferences, i.e. believing that it is impossible to influence those utterances.
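The "no in-between regime" argument above can be made concrete in a toy model. Under a misspecified "humans are perfectly reliable" model, utterances depend only on the fixed latent preference, so the agent predicts the same utterance distribution for every action and therefore has nothing to gain by manipulating. The two-action setup and numbers below are invented for illustration.

```python
# Sketch: a Bayesian agent whose (even badly misspecified) model makes
# utterances depend only on fixed latent preferences assigns identical
# predicted evidence to every action, so manipulation buys it nothing.
prior = {"likes_death": 0.01, "likes_life": 0.99}

def p_obs_given_theta(obs, theta):
    # Misspecified "perfectly reliable" model: the utterance is a
    # deterministic function of the preference, and of nothing else.
    truthful = {"likes_death": "says_death", "likes_life": "says_life"}
    return 1.0 if truthful[theta] == obs else 0.0

def predicted_obs_dist(action):
    # The action appears nowhere in the model, so every action yields
    # the same predicted distribution over utterances.
    return {obs: sum(prior[t] * p_obs_given_theta(obs, t) for t in prior)
            for obs in ("says_death", "says_life")}

d_leave = predicted_obs_dist("leave_alone")
d_threaten = predicted_obs_dist("threaten")
print(d_leave == d_threaten)  # True: no action changes expected evidence
```

The triviality is the point: the manipulation incentive only appears if the model itself routes the agent's actions into the utterances, which is option (2) above.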
by Stuart Armstrong 546 days ago

> For example, if you assume "Anything humans say about their preferences is true," that's basically giving up on the Bayesian approach as usually imagined

More formally, what I mean by that is "assume humans are perfectly rational, and fit a reward/utility function given those assumptions". This is a perfectly Bayesian approach, and will always produce an (over-complicated) utility function that fits the observed behaviour.

> In the usual Bayesian setting, "humans are perfectly reliable" corresponds to believing that human utterances correctly track (fixed) human preferences, i.e. believing that it is impossible to influence those utterances.

Yes and no. Under the assumption that humans are perfectly reliable, influencing human preferences and utterances is impossible. But this leads to behaviour that resembles influencing human utterances under other assumptions. E.g. if you threaten a human with a gun and ask them to report that they are maximally happy, a sensible model of human preferences will say they are lying, but the "humans are rational" model will simply conclude that humans really like being threatened in this way.
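The gun example can be sketched as a single Bayesian update read through two different likelihood models; the likelihood numbers are invented, purely to illustrate the contrast.

```python
# Sketch: the same observation, "human reports being maximally happy
# while threatened", interpreted under two models. Numbers are made up.
prior = {"likes_threats": 0.01, "dislikes_threats": 0.99}

# Model A, "humans are perfectly rational": reports always reflect true
# preferences, so only someone who likes threats would say this.
lik_rational = {"likes_threats": 1.0, "dislikes_threats": 0.0}

# Model B, a sensible model: a threatened human says they're happy
# regardless of preference (they are lying to survive).
lik_sensible = {"likes_threats": 1.0, "dislikes_threats": 1.0}

def update(prior, lik):
    """Standard Bayesian update: normalize prior * likelihood."""
    unnorm = {t: prior[t] * lik[t] for t in prior}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

print(update(prior, lik_rational))  # certain the human loves being threatened
print(update(prior, lik_sensible))  # prior unchanged: report is uninformative
```

Both updates are "perfectly Bayesian"; the misspecified rational-human model simply draws a catastrophic conclusion from the coerced report.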
by Stuart Armstrong 548 days ago

Even worse, having conservation of expected evidence for every action sequence is not enough to make the AI behave well. Jessica's example of an AI that (to re-use the "human says" example for the moment) forces the human to answer a question randomly has conservation of expected evidence, but not the other properties we want, such as conditional conservation of expected evidence (this is related to the ultra-sophisticated cake-or-death problem).
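A toy version of Jessica's example, with invented numbers, showing why the unconditional martingale property is too weak: a policy that coerces a random answer conserves expected evidence exactly as well as honest asking does, yet the coerced answer carries no information about the true preference.

```python
# Sketch: conservation of expected evidence holds for both policies,
# but only one of them ever learns anything. Numbers are made up.
prior = {"home": 0.5, "office": 0.5}

def likelihood(action, theta, obs):
    if action == "ask_honestly":
        return 0.9 if obs == "says_" + theta else 0.1
    if action == "force_random":
        return 0.5  # coerced coin-flip answer, independent of theta

def posterior(action, obs):
    unnorm = {t: prior[t] * likelihood(action, t, obs) for t in prior}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

def expected_posterior(action):
    out = {t: 0.0 for t in prior}
    for obs in ("says_home", "says_office"):
        p_obs = sum(prior[t] * likelihood(action, t, obs) for t in prior)
        for t, p in posterior(action, obs).items():
            out[t] += p_obs * p
    return out

# Both policies are martingales (expected posterior == prior)...
print(expected_posterior("ask_honestly"))
print(expected_posterior("force_random"))
# ...but only honest asking ever moves the posterior toward the truth:
print(posterior("ask_honestly", "says_home"))  # informative
print(posterior("force_random", "says_home"))  # identical to the prior
```

Ruling out the coercive policy requires a stronger condition on how the posterior behaves conditional on each observation, not just on average.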