Intelligent Agent Foundations Forum
Simplified explanation of stratification
post by Stuart Armstrong 966 days ago | Patrick LaVictoire likes this | 5 comments

A putative new idea for AI control; index here.

I’ve previously talked about stratified indifference/learning. In this short post, I’ll try to present the idea as simply and clearly as possible.


The truth is yet written

Assume the AI is maximising a utility \(W\), but is unsure whether \(W\) is equal to utility \(u\) or instead to utility \(v\). Humans have written the true utility on a piece of paper somewhere, and, in an hour, the AI will look at that paper to check what \(W\) is. Let \(I_u\) be the indicator function that \(u\) is written on that paper in an hour; \(I_v=1-I_u\) is the obvious converse.

Thus \(\mathbf{W=I_u u + I_v v}\).

Here’s the problem. There is nothing stopping the AI from looking at the paper right now. And there is nothing stopping it from rewriting the paper, or substituting another one (\(I_u\) is determined by the paper in an hour’s time, however that paper got there). In terms of causal graphs, we have the following setup:

Here \(\mathbf{W=}\) is the node that represents the utility that \(W\) is ultimately equal to, \(\mathbf{W_0\ u/v}\) is what is currently written on the paper (from now on, written as \(\mathbf{W_0}\) for short), \(\mathbf{W_1\ u/v}\) is what will be written on the paper at decision time (written as \(\mathbf{W_1}\)), and \(\mathbf{Act}\) is the AI’s action. Note that \(\mathbf{W_1}\) determines \(\mathbf{W=}\) directly, while it itself is determined by \(\mathbf{W_0}\) (the past paper) and \(\mathbf{Act}\) (the AI’s actions).
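Since the diagram itself is an image, here is a minimal sketch of the same causal structure as a parent dictionary (a Python encoding assumed for illustration; the node names follow the post):

```python
# The causal graph described above, written as a map from each node to
# its parents (representation assumed for illustration; names follow the post).
causal_graph = {
    "W0": [],              # what is currently written on the paper
    "Act": [],             # the AI's action
    "W1": ["W0", "Act"],   # what the paper will say at decision time
    "W=": ["W1"],          # the utility W is ultimately equal to
}
```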

Default policy

Now assume that there is a default action or policy \(\emptyset\) where the AI simply does nothing. In that case, the value of \(\mathbf{W_0}\) determines \(\mathbf{W_1}\) completely. The AI has another action \(OW\) (short for overwrite) in which it overwrites the paper’s choice (this subdivides into \(ORu\), overwrite with \(u\), and \(ORv\), overwrite with \(v\)). Given \(OW\), \(\mathbf{W_1}\) is then independent of \(\mathbf{W_0}\).
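As a toy sketch of that mechanism (my own encoding, not the post’s), \(\mathbf{W_1}\) copies \(\mathbf{W_0}\) under \(\emptyset\) and ignores it under an overwrite:

```python
# Toy mechanism for W1 (an assumed encoding for illustration): under the
# default policy the paper is left alone; under ORu/ORv it is overwritten.
def w1(w0: str, act: str) -> str:
    if act == "null":   # the default policy, written ∅ in the post
        return w0       # W1 is determined entirely by W0
    if act == "ORu":
        return "u"      # W1 is independent of W0
    if act == "ORv":
        return "v"
    raise ValueError(f"unknown action: {act}")
```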

Generally speaking, the AI will always choose \(OW\): one of \(u\) and \(v\) will be easier to maximise, and the AI will simply overwrite the paper with that one.

So, informally, what do we really want to happen? We want the AI to maximise \(u\), if \(u\) would have been chosen, absent its intervention (and similarly for \(v\)). “Absent its intervention” seems to designate equality between the nodes \(\mathbf{W_0}\) and \(\mathbf{W=}\). More generally, it designates what happens when \(\mathbf{Act}=\emptyset\).

Stratification

So let’s modify the AI’s expected utility calculation to meet that informal desideratum. First, let’s look at the universe where, absent AI intervention, \(W=u\). What that means is that, conditional on \(\mathbf{Act}=\emptyset\), \(\mathbf{W=}\) is set to \(u\).

In this universe, \(\mathbf{W_0}\) and \(\mathbf{W_1}\) are both set to \(u\). But it makes no sense to keep \(\mathbf{W_1}\) fixed at \(u\), since that node is a consequence of \(\mathbf{Act}\) as well as of \(\mathbf{W_0}\).

So we’ll define a probability distribution \(P_{\emptyset, u}\) (and corresponding expectation \(\mathbb{E}_{\emptyset,u}\)) over all nodes that are not descendants of \(\mathbf{Act}\). To do so, we’ll assume that all nodes that are descendants of \(\mathbf{Act}\) and ancestors of \(\mathbf{W=}\) are endogenous variables that are deterministic functions of their inputs; the stochasticity is put into exogenous variables (see this paper for more details). Then we will make the AI choose its actions by calculating the expectation of \(u\) under \(P_{\emptyset,u}\).

So, more precisely, this is the expectation of \(u\) in the universes where \(I_u\) would have been \(1\) if the AI had chosen to do nothing – thus \(\mathbf{Act}=\emptyset\).

So if the AI chooses action \(a\), this is \(\mathbb{E}_{\emptyset,u}(u|\mathbf{Act}=a)\). Note that, in this instance, this is equal to the classical expectation \(\mathbb{E}(u|\mathbf{Act}=a,\mathbf{W_0}=u)\).

What happens if the AI chooses \(\mathbf{Act}=ORv\), to overwrite the paper with \(v\)? This is irrelevant for the expected utility calculation, since we’re using \(\mathbb{E}_{\emptyset,u}(u|\mathbf{Act}=a)\): this is the expectation of \(u\) (the utility that would have been chosen if \(\mathbf{Act}=\emptyset\)). The actual value of the node \(\mathbf{W=}\) becomes irrelevant; only the counterfactual value it would have had matters. So changing \(\mathbf{W=}\) does not benefit the AI. In fact, if the AI is time-inconsistent, it makes things worse, as it means its future self will be maximising \(v\) instead of \(u\).

Then we do the same thing for \(v\), resulting in the formula for the stratified expectation \(\mathbb{E}^s\):

  • \(\mathbb{E}^s(U|a)= P(I_u=1|\emptyset)\mathbb{E}_{\emptyset,u}(u|a)+ P(I_v=1|\emptyset)\mathbb{E}_{\emptyset,v}(v|a)\).

In a sense, the AI should maximise \(W'=u(I_u|\emptyset)+v(I_v|\emptyset)\).
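To make the formula concrete, here is a minimal numerical sketch (the payoff model and all numbers are my own illustrative assumptions, not from the post). It compares the ordinary expectation of \(W\) with the stratified expectation \(\mathbb{E}^s\), in a setting where \(v\) is the easier utility to maximise:

```python
# Toy comparison of E[W | a] with the stratified E^s[W | a].
# The payoff model and all numbers are illustrative assumptions.

P_W0 = {"u": 0.5, "v": 0.5}   # prior over what the paper currently says

# Assumed achievable expected value of each utility, given the AI's action.
# Overwriting lets the AI commit early; v is assumed easier to maximise.
E_u = {"null": 3.0, "ORu": 4.0, "ORv": 0.0}   # E[u | Act = a, W0 = u]
E_v = {"null": 6.0, "ORu": 0.0, "ORv": 8.0}   # E[v | Act = a, W0 = v]

def w1(w0, act):
    """What the paper says in an hour: W0 under the null action, else the overwrite."""
    return w0 if act == "null" else {"ORu": "u", "ORv": "v"}[act]

def naive(act):
    """Ordinary expectation of W: weight u and v by what the paper will actually say."""
    return sum(P_W0[w0] * (E_u[act] if w1(w0, act) == "u" else E_v[act]) for w0 in P_W0)

def stratified(act):
    """E^s: weight u and v by what the paper *would* say under the null action
    (P(I_u = 1 | null) = P(W0 = u)), as in the displayed formula above."""
    return (P_W0["u"] * E_u[act]) + (P_W0["v"] * E_v[act])

for act in ("null", "ORu", "ORv"):
    print(f"{act:5s}  naive = {naive(act):4.1f}   stratified = {stratified(act):4.1f}")
# The naive maximiser overwrites with the easier utility v; the stratified
# maximiser gains nothing from overwriting and prefers the null action.
```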

If you just want to get the basic idea of stratification, you need read no further than this. The rest is just some examples and potential issues.

Lottery and outcome pumps

Some versions of indifference have problems with infinite improbability drives.

A simple example of this would be a situation where the AI buys a lottery ticket and decides that it will overwrite the paper with \(u\) if the ticket wins, and with \(v\) if the ticket loses. Call this action \(\mathbf{Act}=iff\).

Presumably, it can use the money gained to increase utility, so \(\mathbb{E}(u|\mathbf{W=}=u,\mathbf{Act}=iff)\) is rather high (as \(W=u\) now implies that the lottery was won).

This gives the following causal graph:

Now let’s look at the stratified expectation. Given \(\mathbf{W=}=u\) and \(\mathbf{Act}=\emptyset\), we can compute \(P_{\emptyset,u}\). This simply sets \(P(\mathbf{W_0}=u)=1\), with \(\mathbf{Lotto}\) keeping the distribution it had before, independently of \(\mathbf{W_0}\).

Thus \(\mathbb{E}_{\emptyset,u}(u|\mathbf{Act}=iff)\) will continue to treat the node \(\mathbf{Lotto}\) as an independent variable with the same probability as under \(\mathbf{Act}=\emptyset\). Choosing \(iff\) thus does not help the AI’s expected stratified utility.
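A minimal numerical sketch of this (the payoff model, the “naive indifference” baseline that conditions on the actual value of \(\mathbf{W=}\), and all numbers are my own illustrative assumptions):

```python
# Toy version of the lottery example.  Under "iff" the AI overwrites the
# paper with u if the ticket wins and with v if it loses; winnings boost
# whichever utility the AI ends up pursuing.  Model and numbers assumed.

P_W0 = {"u": 0.5, "v": 0.5}
P_LOTTO = {"win": 0.01, "lose": 0.99}

def final_W(w0, act, lotto):
    """What the paper says at decision time."""
    if act == "null":
        return w0
    if act == "iff":
        return "u" if lotto == "win" else "v"
    raise ValueError(act)

def achieved(utility, w0, act, lotto):
    """Illustrative payoff: the AI only advances the utility the paper ends up
    naming, and lottery winnings boost that utility."""
    if utility != final_W(w0, act, lotto):
        return 0.0
    return 10.0 + (100.0 if lotto == "win" else 0.0)

def naive_indifference(act):
    """Counterfactual weights P(W0 = x), but expectations conditioned on the
    *actual* value of W= (the kind of scheme the improbability drive exploits)."""
    total = 0.0
    for x in ("u", "v"):
        num = sum(P_W0[w0] * P_LOTTO[l] * achieved(x, w0, act, l)
                  for w0 in P_W0 for l in P_LOTTO if final_W(w0, act, l) == x)
        den = sum(P_W0[w0] * P_LOTTO[l]
                  for w0 in P_W0 for l in P_LOTTO if final_W(w0, act, l) == x)
        total += P_W0[x] * (num / den if den else 0.0)
    return total

def stratified(act):
    """E^s: condition on W0 = x (what the paper says absent intervention) and
    keep Lotto at its prior, independent of W0."""
    return sum(P_W0[x] * sum(P_LOTTO[l] * achieved(x, x, act, l) for l in P_LOTTO)
               for x in ("u", "v"))

for act in ("null", "iff"):
    print(f"{act:4s}  naive = {naive_indifference(act):5.1f}   stratified = {stratified(act):5.1f}")
# The naive scheme scores "iff" far above "null" (it gets to pretend the
# lottery was won); the stratified expectation gives "iff" no such boost.
```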

Humans changing their minds

Another situation to consider is one where humans have written the paper initially, but decided to change their minds at a later date. This can be captured in the following diagram:

Here the node \(\mathbf{Hum}\) determines how the humans change their minds and rewrite the paper. What is the dotted arrow between \(\mathbf{Act}\) and \(\mathbf{Hum}\)? It represents whether or not the change is a consequence of the AI’s actions. If \(\mathbf{Hum}\) is independent of the AI’s actions – if we would have changed our minds even if \(\mathbf{Act}=\emptyset\) – then \(P_{\emptyset,u}\) will set \(\mathbf{Hum}=u\) and \(\mathbf{W_0}\) to whatever its prior was. Thus \(\mathbf{Hum}\) determines \(\mathbf{W=}\), overriding \(\mathbf{W_0}\), the humans’ initial decision.

So far, so good. But what if the change in policy is a consequence of the AI’s actions? What if the humans only change the utility because they see the AI behaving badly? Then \(\mathbf{Hum}\) is a descendant node of \(\mathbf{Act}\), and stratification doesn’t apply to it. Now, we could decompose that node further as:

Here \(\mathbf{Hum\ Fun}\) represents the fundamentals of human personalities. Since it is not a descendant node of \(\mathbf{Act}\), stratification will put a probability distribution over it. However, this still doesn’t let the humans’ correction respond to the AI’s actual action. Under stratification, the AI’s sole concern is still “would \(\mathbf{Hum\ Fun}\) have led to \(u\) (or \(v\)) under \(\mathbf{Act}=\emptyset\)?”, rather than correcting itself under a different choice of actions.

How to model humans learning from the AI’s actions is hard. Indeed, it might be fundamentally hard: you want to distinguish humans learning from the AI (legit) from the AI manipulating humans (non-legit), and this likely can’t be done by just looking at the causal graph.

Notes on default policy

Note that \(\emptyset\) might be a really bad policy (e.g. taking people apart to see what they prefer). In that case, if it’s pretty clear that taking people apart is a bad idea according to the preferences that would emerge from that process, the AI will not choose \(\emptyset\), but will instead leave people together and try to deduce what the outcome of that process would have been.

Alternatively, if doing a logical counterfactual over its own action is bad, we could have the AI take \(\emptyset\) to not be its own action, but some other causal process that would have prevented the AI from being turned on in the first place.



by David Krueger 966 days ago | Jessica Taylor and Stuart Armstrong like this | link

Abstractly, I think of this as adding a utility node, U, with no parents, and having the agent try to maximize the expected value of U.

I think there are some implicit assumptions (which seem reasonable for many situations, prima facie) about the agent’s ability to learn about U via some observations when taking null actions (i.e. A and U share some descendant(s), D, and the agent knows something about P(D | U, A=null)).

RE: the last bit, it seems like you can distinguish learning from manipulating in a straightforward way, similar to what is proposed here. The intuition is that the humans’ beliefs about U should be collapsing around a point, u* (in the absence of interference by the AI), and the AI helps learning if it accelerates this process. If this is literally true, then we can just say that learning is accelerated (at timestep t) if the probability H assigns to u* is higher given an agent’s action a than it would be given the null action, i.e.

\(P_{H_t}(u^* \mid A_0 = a) > P_{H_t}(u^* \mid A_0 = A_1 = \dots = \text{null})\).
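A minimal toy sketch of checking this criterion for a single step (the human’s observation model, the hypothetical “demo” action, and all numbers are assumed for illustration):

```python
# Toy sketch of the proposed criterion (human model and numbers assumed).
# A Bayesian "human" H updates a belief over {u, v}; u* = "u" is the point
# the belief would collapse to absent interference.  An action counts as
# accelerating learning if H's credence in u* exceeds what it would have
# been under the null action.

PRIOR = {"u": 0.5, "v": 0.5}
U_STAR = "u"

# P(observation = "evidence-for-u" | true point, AI action); illustrative numbers.
P_OBS_U = {
    "null": {"u": 0.6, "v": 0.4},   # weak evidence accrues on its own
    "demo": {"u": 0.9, "v": 0.1},   # hypothetical informative demonstration
}

def posterior(point, obs_is_u, action):
    """Human's Bayesian posterior on `point` after one observation."""
    def like(w):
        p = P_OBS_U[action][w]
        return p if obs_is_u else 1.0 - p
    joint = {w: PRIOR[w] * like(w) for w in PRIOR}
    return joint[point] / sum(joint.values())

def expected_credence_in_u_star(action):
    """E[P_H(u*)] after one step, assuming u* really is the true point."""
    p_obs_u = P_OBS_U[action][U_STAR]
    return (p_obs_u * posterior(U_STAR, True, action)
            + (1.0 - p_obs_u) * posterior(U_STAR, False, action))

accelerates = expected_credence_in_u_star("demo") > expected_credence_in_u_star("null")
print(expected_credence_in_u_star("demo"), expected_credence_in_u_star("null"), accelerates)
```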

reply

by Charlie Steiner 965 days ago | Jessica Taylor and Stuart Armstrong like this | link

I think you can put this scheme on a nicer foundation by talking about strategies rather than actions, and by letting the AI have some probability distribution over \(W_0\).

Then you just use the strategy that maximizes \(P(W_0=u)· P(u|W_0=u, do(strategy)) + P(W_0=v) · P(v|W_0=v, do(strategy))\). You can also think of this as doing a simplification of the expected utility calculation that bakes in the assumption that the AI can’t change \(W_0\).

You can then reintroduce the action \(\emptyset\) with the observation that the AI will also be well-behaved if it maximizes \(P(W_1=u|do(\emptyset))· P(u|W_1=u, do(strategy)) + P(W_1=v|do(\emptyset)) · P(v|W_1=v, do(strategy))\).

reply

by Stuart Armstrong 965 days ago | link

In this example, it’s clear that \(W_0\) is a special node. However, the AI only deduced that because, under \(\emptyset\), \(W_0\) determines \(W\). It’s perfectly plausible that under action \(b\), say, \(Hum\) instead determines it. Under \(ORu\) and \(ORv\), none of those nodes have any impact.

Therefore we need \(\emptyset\) to be a special strategy, as it allows us to identify what nodes connect with \(W\). The advantage of this method is that it lets the AI find the causal graph and compute the dependencies.

Agree strategies are better than actions.

reply

by David Krueger 966 days ago | Jessica Taylor likes this | link

So after talking w/ Stuart, I guess what he means by “humans learning from the AI’s actions” is that what humans’ beliefs about U converge to actually changes (for the better). I’m not sure if that’s really desirable, atm.

On a separate note, my proposal has the practical issue that this agent only views its own potential influence on u* as undesirable (and not other agents’). So I think ultimately we want a richer set of counterfactuals, including, e.g., that humans continue to exist indefinitely (otherwise \(P_{H_t}\) becomes undefined when humanity is extinct).

reply


