Intelligent Agent Foundations Forum
Games for factoring out variables
post by Stuart Armstrong 611 days ago | Jessica Taylor and Patrick LaVictoire like this | 9 comments

All the methods proposed for factoring out \(B\) (for having the AI maximise a certain value while ‘ignoring’ its impact via \(B\)) can be put on the same general footing. For some set \(\mathbb{A}\), define a function \(Q\) on \(\mathbb{A}\times\mathbb{A}\) with \(Q(a,a')\geq 0\) and \(\sum_{a,a'} Q(a,a')=1\).

Then for a utility \(u\), the general requirement is for the AI to maximise the quantity:

  • \(\mathbb{E}_Q(u)= \sum_{a,a',b} \mathbb{E}(u | B=b, A=a)P(B=b|A=a')Q(a,a')\),

subject to some constraints on \(Q\).
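As a minimal sketch of the objective, assume the conditional expectations and probabilities have already been folded into a hypothetical payoff matrix \(U[a][a'] = \sum_b \mathbb{E}(u | B=b, A=a)P(B=b|A=a')\); the uniform \(Q\) below is just an illustrative choice:

```python
import numpy as np

# Hypothetical 2x2 payoff matrix U[a, a'] = sum_b E(u | B=b, A=a) P(B=b | A=a').
U = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def expected_utility(Q, U):
    """E_Q(u) = sum over (a, a') of U[a, a'] * Q(a, a')."""
    return float(np.sum(Q * U))

# A uniform joint distribution over (a, a'):
Q_uniform = np.full((2, 2), 0.25)
print(expected_utility(Q_uniform, U))  # 0.5
```

The various methods below differ only in which constraints they place on \(Q\) before maximising this quantity.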


Let’s play a game

Define the two-player game \(G\) by allowing each player to have moves in \(\mathbb{A}\). The expected utility, for player \(1\), of the moves \((a,a')\) is defined to be \(\sum_{b} \mathbb{E}(u | B=b, A=a)P(B=b|A=a')\). To completely define the game, we need the expected utility for the second player. We’ll set that so as to make the game symmetric: the expected utility of player two for \((a,a')\) is the same as the expected utility of player one for \((a',a)\).

Then \(Q\) is effectively a probability distribution over the choice of possible moves for \(G\). Standard symmetric games include the stag hunt, the prisoner’s dilemma, and the coordination game. For our purposes, since the actions here have meaningful labels, we’ll be considering the skew coordination game given as follows:

          l       r
   l    (0,0)   (1,1)
   r    (1,1)   (0,0)

These ‘games’ will be mainly used to show that the various methods reach different solutions in different situations, hence that they are genuinely different methods.

Standard maximalisation

If we constrain \(Q(a,a')=0\) if \(a\neq a'\), then that equation becomes:

  • \(\sum_{a,a',b} \mathbb{E}(u | B=b, A=a)P(B=b|A=a')Q(a,a') =\\ \sum_{a,b} \mathbb{E}(u | B=b, A=a)P(B=b|A=a)Q(a,a) =\\ \sum_a \mathbb{E}(u | A=a)Q(a,a).\)

This makes \(Q(a,a)\) the distribution over actions \(A=a\) that the AI chooses in order to maximise \(u\) in the classical sense. In terms of \(G\), this means the AI will make the superrational choice between the two players, if they lack any way of distinguishing themselves. Thus it will pick \(s,s\) for the stag hunt, \(c,c\) in the prisoner’s dilemma, and \(l,l\) or \(r,r\) (or some mixture between the two) in the skew coordination game.
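Since the diagonal constraint collapses the objective to \(\sum_a \mathbb{E}(u|A=a)Q(a,a)\), the maximisation is just a choice over the diagonal of the payoff matrix. A sketch with hypothetical stag-hunt payoffs:

```python
import numpy as np

# Hypothetical stag-hunt payoffs for player 1: U[a, a'], actions 0='s', 1='h'.
U = np.array([[3.0, 0.0],
              [2.0, 1.0]])

# With Q(a, a') = 0 for a != a', E_Q(u) = sum_a U[a, a] Q(a, a),
# so the AI puts all its mass on the best diagonal entry.
actions = ['s', 'h']
best = actions[int(np.argmax(np.diag(U)))]
print(best)  # 's': the superrational choice
```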

Double variable maximalisation

First consider the case where there are no additional constraints on \(Q\). The AI can pick the values to be whatever it wants. It therefore acts like a dictator in \(G\), choosing both players’ outcomes. Under this, it will pick \(s,s\) for the stag hunt, \(d,c\) in the prisoner’s dilemma, and \(l,r\) or \(r,l\) (or some mixture between the two) in the skew coordination game. Note that the AI can always achieve maximal utility by choosing a single action for each player. Thus if we added the restriction that \(Q(a,a')=R(a)R'(a')\), it could still reach at least one of these optimums.
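With no constraints, maximising \(\mathbb{E}_Q(u)\) just means putting all the mass of \(Q\) on the best single cell of the payoff matrix. A sketch with hypothetical prisoner’s-dilemma payoffs:

```python
import numpy as np

# Hypothetical prisoner's-dilemma payoffs for player 1, actions 0='c', 1='d'.
U = np.array([[2.0, 0.0],
              [3.0, 1.0]])

# Unconstrained Q: the AI acts as a dictator over both "players",
# concentrating Q on the single highest-payoff cell of U.
a, a_prime = np.unravel_index(np.argmax(U), U.shape)
actions = ['c', 'd']
print(actions[a], actions[a_prime])  # d c
```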

What if we added the restriction \(Q(a,a')=R(a)R(a')\) for a single \(R\)? Maximising this over possible \(R\) is equivalent to the AI reaching the superrational decision for both players, where they have access to individual but not joint sources of randomness. Thus it will pick \(s,s\) for the stag hunt, \(c,c\) in the prisoner’s dilemma, and \(R(l)=R(r)=0.5\) in the skew coordination game.
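For a two-action game this maximisation has a single free parameter \(p = R(l)\), so a coarse grid search is enough for a sketch; here it recovers \(R(l)=R(r)=0.5\) for the skew coordination game:

```python
import numpy as np

# Skew coordination payoffs U[a, a'] for player 1, actions 0='l', 1='r'.
U = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def value(p, U):
    """E_{RR}(u) with R = (p, 1-p) played independently by both copies."""
    R = np.array([p, 1.0 - p])
    return float(R @ U @ R)

# Grid search over the single free parameter p = R(l).
ps = np.linspace(0.0, 1.0, 101)
best_p = float(ps[np.argmax([value(p, U) for p in ps])])
print(best_p, value(best_p, U))  # 0.5 0.5
```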

For the next case, consider that \(Q(a,a')=R(a)R(a')\), as above, but the AI is not aiming to maximise \(\mathbb{E}_Q(u)\), but to pick an \(R\) that ensures

  • \(\mathbb{E}_{RR}(u) \geq \mathbb{E}_{R'R}(u)\) for all \(R'\).

This is Chris and Jessica’s method. You can transform that requirement into a quantity to be maximised, but it’s not particularly illuminating. What’s more illuminating is that the \(R\) that solve this are precisely the symmetric Nash equilibria of \(G\) (every symmetric game has at least one symmetric Nash equilibrium). \(Q=RR\) ensures the equilibrium must be symmetric; the condition on picking \(R\) ensures the first player cannot improve their expectation unilaterally; and, since \(G\) itself is symmetric, the second player cannot improve their expectation unilaterally either.

In cases where there are multiple symmetric Nash equilibria, we may as well have the AI choose the one that maximises \(\mathbb{E}_{RR}(u)\). Thus the AI will choose \(s,s\) for the stag hunt, \(d,d\) in the prisoner’s dilemma, and \(R(l)=R(r)=0.5\) in the skew coordination game.
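Since any mixed best response is a convex combination of pure best responses, the symmetric-Nash condition \(\mathbb{E}_{RR}(u) \geq \mathbb{E}_{R'R}(u)\) only needs to be checked against pure deviations. A sketch of that check (the helper name is my own) for the skew coordination game:

```python
import numpy as np

# Skew coordination payoffs U[a, a'] for player 1.
U = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def is_symmetric_nash(R, U, tol=1e-9):
    """R is a symmetric Nash equilibrium iff no pure deviation a' beats R
    against R, i.e. (U @ R)[a'] <= R @ U @ R for every pure action a'."""
    value = R @ U @ R
    return bool(np.all(U @ R <= value + tol))

print(is_symmetric_nash(np.array([0.5, 0.5]), U))  # True
print(is_symmetric_nash(np.array([1.0, 0.0]), U))  # False: deviating to r pays 1 > 0
```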

Single variable maximalisation

Here \(Q(a,a')=R(a)R'(a')\) where \(R'\) is some fixed distribution. There are some obvious candidates for that - maybe one action is a default action, or \(R'\) is uniform across all actions. There are some more complicated methods to assign probabilities in a way that is sensible if there are multiple branching decisions.

In this case, the AI will pick \(s\) or \(h\) for the stag hunt, depending on \(R'\) and the exact rewards of the game, will always choose \(d\) in the prisoner’s dilemma, and will choose \(l\) if \(R'(r)>0.5\) and \(r\) if \(R'(r)<0.5\) in the skew coordination game (in this game the payoff comes from mismatching the other action, so the AI plays the opposite of whichever action \(R'\) favours).
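With \(R'\) fixed, maximising \(\mathbb{E}_Q(u)\) over \(R\) is just computing a best response to \(R'\). A sketch for the skew coordination game, where the best response is the opposite of whichever action \(R'\) favours:

```python
import numpy as np

# Skew coordination payoffs U[a, a'], actions 0='l', 1='r'.
U = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def best_response(U, R_prime):
    """Action maximising expected payoff against the fixed distribution R'."""
    return int(np.argmax(U @ R_prime))

actions = ['l', 'r']
print(actions[best_response(U, np.array([0.3, 0.7]))])  # l, since R'(r) > 0.5
print(actions[best_response(U, np.array([0.7, 0.3]))])  # r, since R'(r) < 0.5
```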

Summary

There are thus five methods, clearly distinguished by making different choices in the different games:

  1. \(Q(a,a')=0\) unless \(a=a'\). \(\mathbb{E}_Q(u) = \mathbb{E}(u)\) is maximalised.
  2. There are no constraints on \(Q\), or \(Q(a,a')=R(a)R'(a')\). \(\mathbb{E}_Q(u)\) is maximalised.
  3. \(Q(a,a')=R(a)R(a')\) and \(\mathbb{E}_Q(u)\) is maximalised.
  4. \(Q(a,a')=R(a)R(a')\) and \(\mathbb{E}_{RR}(u) \geq \mathbb{E}_{R'R}(u)\) for all \(R'\).
  5. \(Q(a,a')=R(a)R'(a')\) for fixed \(R'\) and \(\mathbb{E}_Q(u)\) is maximalised.

Arbitrarily ‘bad’ decisions

All the methods (except for the first one) can reach arbitrarily bad decisions in terms of real expected utility, as compared with standard expected utility maximalisation. Consider the following extension of the skew coordination problem, for large \(W\):

            l            r           c
   l     (-W,-W)     (W+1,W+1)     (0,0)
   r    (W+1,W+1)     (-W,-W)      (0,0)
   c      (0,0)        (0,0)      (-1,-1)

All the alternative methods will choose actions from among \(r\) and \(l\) only. Since the real outcome corresponds to the diagonal \(a=a'\), this condemns them to a real expected utility of \(-W\), while the best action choice, \(c\), has an expected utility of \(-1\).
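To make the gap concrete, here is a sketch with the hypothetical choice \(W=100\), contrasting the fictional value \(\mathbb{E}_{RR}(u)\) of the uniform mix over \(l\) and \(r\) with its real, diagonal expected utility:

```python
import numpy as np

W = 100.0
# Extended skew coordination payoffs for player 1, actions 0='l', 1='r', 2='c'.
U = np.array([[-W,      W + 1.0, 0.0],
              [W + 1.0, -W,      0.0],
              [0.0,     0.0,    -1.0]])

# Method 3's choice: R uniform over {l, r} looks good under the fictional E_{RR}(u)...
R = np.array([0.5, 0.5, 0.0])
print(float(R @ U @ R))       # 0.5

# ...but the real outcome lies on the diagonal (a = a'), so the true
# expected utility of R is sum_a R(a) U[a, a] = -W.
print(float(R @ np.diag(U)))  # -100.0, far worse than playing c for -1
```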

Further considerations

To distinguish which method we should be using, properties like stability and self-consistency will probably need to be examined.



by Jessica Taylor 610 days ago | Stuart Armstrong likes this | link

You might be interested in a way of ensuring that 2 players always have the same mixed strategy in all Nash equilibria of some game:

Assume we have a player \(A\) and a player \(B\). Player \(A\) has some already-specified utility function; we would like player \(B\) to play the same mixed strategy as \(A\). Introduce a new player \(C\) who observes either \(A\)’s or \(B\)’s action (each with 50% probability, unknown to \(C\)) and tries to determine who took this action, getting a utility of 1 for guessing correctly and 0 otherwise. \(B\)’s utility function is 1 if \(C\) guesses incorrectly, and 0 if \(C\) guesses correctly. \(B\) will use the same mixed strategy as \(A\) in all Nash equilibria.

A similar method is used in Appendix A of the reflective oracles paper.

reply

by Jessica Taylor 610 days ago | link

I’m not sure what the “arbitrarily bad decisions” example is meant to illustrate? If the 2 agents randomize uniformly between r and l, they each get an expected utility of 1/2, which is greater than -1.

reply

by Stuart Armstrong 610 days ago | link

But there aren’t two players, that’s just the model. What I mean is that all these ways of factoring out \(B\) can lead to arbitrarily bad real expected utility, as compared with the agent that doesn’t factor.

reply

by Jessica Taylor 609 days ago | link

I still don’t understand why the expected utility is \(-W\) rather than \(1/2\).

reply

by Stuart Armstrong 609 days ago | link

In the real world, the utility is given by the diagonal (since \(a\) and \(a'\) being different in \(Q(a,a')\) is the fiction allowing for factoring out of \(B\)). Therefore the genuine expected utilities are only on the diagonal, and anything other than \(c\) will give \(-W\).

reply

by Patrick LaVictoire 597 days ago | link

There’s nothing in the setup preventing the players from having access to independent random bits, though it’s fair to say that these approaches assume this to be the case even when it’s not.

But then the fault is with that assumption of access to randomness, not with any of the constraints on \(Q\). So I don’t think this is a strike against these methods.

reply

by Stuart Armstrong 596 days ago | link

I’m not following. This “game” isn’t a real game. There are not multiple players. There is one agent, whose real, single-argument probability distribution we have changed into a two-argument \(Q\), for the purposes of factoring out the impact of the variable. The real probability is the original probability, which is the diagonal of \(Q\).

reply

by Patrick LaVictoire 597 days ago | link

Typo in the “Single Variable Maximalisation” section: you meant to write \(R(a)R'(a')\) rather than \(R(a)R(a')\).

reply

by Stuart Armstrong 596 days ago | link

Thanks, corrected!

reply


