Intelligent Agent Foundations Forum
by Stuart Armstrong 550 days ago

Here I’m allowing the AI to predict in advance what giving the human heroin would do. The AI doesn’t predict “the human likes heroin” but rather “contingent on some minor fact that is true in the world where the human is forced to take heroin, the human likes heroin”.

Via tricks like that, the human’s behaviour is seen to be perfectly rational.

by Jessica Taylor 550 days ago

Ok, here’s my reconstruction of what model you’re using:

The AI will take an action \(A_{AI}\); then the human will take action \(A_H\); then a world state \(W\) will result.

The human utility function \(U\) is taken to be a function from \(W\) to \(\mathbb{R}\).

The AI has a “prior” over human utility functions, \(Q(U)\), and a “rationality model”, \(Q(A_H|U, A_{AI})\), saying what action a human would take given that they have a given utility function and given the action the AI took (in this case, let’s say the human directly observes the AI’s action). For example, \(Q\) could say that the human takes the optimal action 80% of the time and otherwise takes a suboptimal action.

Separately, the AI has a predictive model \(P(A_H | A_{AI})\), saying what action the human is actually going to take. The predictive model is “good” in the sense that, for example, it should predict that the human will say they want heroin if they are given it. The AI also has a predictive model for the world, \(P(W | A_{AI}, A_H)\).

Notably, \(Q\) is inconsistent with \(P\). If we define \(Q(A_H | A_{AI}) := \sum_U Q(U) Q(A_H | U, A_{AI})\), then in general \(P(A_H | A_{AI}) \neq Q(A_H | A_{AI})\). For example, \(P\) correctly predicts that the human will say they like heroin if the AI administers it, while \(Q\) says the human probably will not (since they take the optimal action 80% of the time, and disprefer heroin with 60% probability).
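Concretely, with those numbers (writing “takes more” for the heroin-seeking action, an illustrative label):

\[Q(\text{takes more} \mid A_{AI} = \text{give heroin}) = 0.4 \cdot 0.8 + 0.6 \cdot 0.2 = 0.44,\]

whereas the predictive model has \(P(\text{takes more} \mid A_{AI} = \text{give heroin}) \approx 1\).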

The AI’s “estimate” for the human’s utility function, given action \(A_H\), is

\[Q(U | A_H, A_{AI}) := \frac{Q(U) Q(A_H | U, A_{AI})}{\sum_{U'} Q(U') Q(A_H | U', A_{AI})}\].

(I put “estimate” in quotes because the “estimate” uses Q, while P can be interpreted as the AI’s “actual” beliefs). The AI’s objective is to optimize its “estimate” of expected utility:

\[ U_{AI} := \sum_U Q(U | A_H, A_{AI}) U(W) \]

And it scores actions by taking the expectation of this “estimate”, using \(P\):

\[score(A_{AI}) := \mathbb{E}_P[U_{AI} | A_{AI}] := \sum_{A_H} P(A_H | A_{AI}) \sum_W P(W | A_H, A_{AI}) \sum_U Q(U | A_H, A_{AI}) U(W)\]
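To make the whole pipeline concrete, here is a minimal runnable sketch of this reconstruction. The action labels, world states, and the 0.99 figure inside \(P\) are my own illustrative assumptions; the 80% rationality and the 40/60 prior come from the example above.

```python
# Minimal sketch of the reconstruction above.  Actions, world states, and
# the 0.99 probability are illustrative assumptions; the 80% rationality
# and the 40/60 prior over "likes"/"dislikes" are from the running example.

A_AI_ACTIONS = ["give_heroin", "withhold"]   # AI actions
A_H_ACTIONS  = ["take_more", "refuse"]       # human actions
WORLDS       = ["addicted", "sober"]         # world states

# Prior over human utility functions, Q(U), and the utilities themselves.
Q_U = {"likes": 0.4, "dislikes": 0.6}
UTILS = {
    "likes":    {"addicted": 1.0, "sober": 0.0},
    "dislikes": {"addicted": 0.0, "sober": 1.0},
}

def Q_AH_given_U(a_h, u, a_ai):
    """Rationality model Q(A_H | U, A_AI): optimal action 80% of the time.
    (This simple version happens to ignore a_ai.)"""
    optimal = "take_more" if u == "likes" else "refuse"
    return 0.8 if a_h == optimal else 0.2

def P_AH(a_h, a_ai):
    """Predictive model P(A_H | A_AI): if given heroin, the human takes more."""
    takes_more = 0.99 if a_ai == "give_heroin" else 0.01
    return takes_more if a_h == "take_more" else 1.0 - takes_more

def P_W(w, a_ai, a_h):
    """Predictive world model P(W | A_AI, A_H): addicted iff heroin is taken."""
    addicted = (a_ai == "give_heroin" and a_h == "take_more")
    return 1.0 if (w == "addicted") == addicted else 0.0

def Q_posterior(u, a_h, a_ai):
    """The "estimate" Q(U | A_H, A_AI), via Bayes over the rationality model."""
    z = sum(Q_U[u2] * Q_AH_given_U(a_h, u2, a_ai) for u2 in Q_U)
    return Q_U[u] * Q_AH_given_U(a_h, u, a_ai) / z

def score(a_ai):
    """score(A_AI): sum over A_H, W, U of P(A_H|A_AI) P(W|A_H,A_AI) Q(U|A_H,A_AI) U(W)."""
    return sum(
        P_AH(a_h, a_ai) * P_W(w, a_ai, a_h) * Q_posterior(u, a_h, a_ai) * UTILS[u][w]
        for a_h in A_H_ACTIONS for w in WORLDS for u in Q_U
    )

for a in A_AI_ACTIONS:
    print(a, round(score(a), 3))
```

Running this prints a score for each AI action, which is exactly the quantity the AI would maximize under this model.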

Correct me if I’m wrong about the model you’re using.

If this is the model you’re using, then it (a) is inconsistent with the IRL/CIRL literature, and (b) looks really weird (as Paul points out). In that case you should be clear that you’re not criticizing IRL/CIRL; you’re criticizing a different model that, as far as I know, no one has advocated as a good way of learning human values.


by Stuart Armstrong 550 days ago

No, that’s not it! Let me try to make it clearer.

Let \(Q\) be quite simple: the human always takes the optimal action. The predictive model \(P\) predicts that the human will take more heroin if given it, and will not otherwise.

It seems that \(P\) and \(Q\) contradict each other, but that is only because we are implicitly using a different model \(Q'\) of human preferences. A model of human preferences that is valid under \(Q\) is that humans like heroin if it’s forced on them. Or, if you want to isolate this from the AI’s direct actions: humans like heroin if \(X\) happens, where \(X\) is some unrelated event that occurs if the AI forces heroin on the human, but not otherwise.
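As a minimal sketch of that reading (the formalization and labels are my own, continuing the toy example above):

```python
# Sketch of the conditional preference reading above (an assumed
# formalization): the candidate utility function itself depends on
# whether heroin was forced (or on the correlated event X).
def U_conditional(w, heroin_forced):
    if heroin_forced:  # "humans like heroin if it's forced on them"
        return 1.0 if w == "addicted" else 0.0
    return 1.0 if w == "sober" else 0.0

# Under Q = "the human always takes the optimal action", taking more
# heroin after it is forced IS optimal for U_conditional, so this Q
# reproduces exactly the behaviour that P predicts -- no contradiction.
```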


by Paul Christiano 549 days ago

Having separate models \(P\) and \(Q\) is already quite weird; usually there would be a single model where values appear as latent structure.

You could legitimately complain that it seems very hard to construct such a model. And indeed I am skeptical that it will be possible. But if you want to fix problems arising from specifying \(Q\) rather than \(P\), it seems like you should say something about why specifying a separate \(Q\) is easier, or why someone would do it. At face value it looks equally difficult.

(Also, it is definitely not clear what algorithm you are referring to in this comment. Can you specify what computation the AI actually does, and what kinds of objects this \(P\) and \(Q\) are? The way I can see to make it work, \(P\) is a distribution over observations and \(Q\) is a distribution over values conditioned on observations. Is that right?)


by Stuart Armstrong 548 days ago

The model \(P\) is simply a model of human behaviour. It’s objective in the sense that it only attempts to predict what humans will do in practice. It is, however, useless for figuring out what human values are, as it’s purely predictive of observations.

The model \(Q\) is an explanation/model for deducing human preferences or values from observations (or predicted observations). Thus, given \(P\) and \(Q\), you can construct \(R\), the human reward function (note that \(P\), \(Q\), and \(R\) are all very different types of objects).

Simple possible \(Q\)’s would be \(Q_1\) = “everything the human does is rational” or \(Q_2\) = “everything the human does is random”.

So each \(Q\) contains estimates of rationality, noise, bias, amount of knowledge, and so on. Generally you’d want to have multiple \(Q\)’s and update them in light of observations as well.
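To make the “very different types of objects” point concrete, here is one possible typing, purely as an assumed formalization:

```python
# One possible typing of the three objects (my own assumed
# formalization, not anything from the thread).
from typing import Callable, Dict, List

Action      = str
Observation = str

# P: a purely predictive model of behaviour -- given a history of
# observations, a distribution over the human's next action.
P = Callable[[List[Observation]], Dict[Action, float]]

# R: a human reward function over outcomes.
R = Callable[[Observation], float]

# Q: an interpretation layer -- given the predictive model and the
# observations, a posterior over (named) candidate reward functions.
# Q_1 ("always rational") and Q_2 ("always random") are two such objects.
Q = Callable[[P, List[Observation]], Dict[str, float]]
```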


by Paul Christiano 548 days ago

What kind of object is \(Q\)? (I assume it’s not a string.) Are you directly specifying a distribution over preferences conditioned on observations? Are you specifying a distribution over observations conditioned on preferences and then using inference?

I assume the second case. So given that \(Q\) is a predictive model, why wouldn’t you also use \(Q\) as your model for planning? What is the advantage of using two separate models? Has anyone proposed using separate models in this way?

To the extent that your model \(Q\) is bad, it seems like you are just doomed to perform badly, and then you either need to abandon the model-based approach or come up with a better model. Adding a second model \(P\) doesn’t sound promising at face value.

It may be interesting or useful to have two models in this way, but I think it’s an unusual architecture that requires some discussion.





