discussion post by Stuart Armstrong 791 days ago

A putative new idea for AI control; index here.

The description I gave here of how to balance and negotiate between different utility functions is somewhat incomplete, as this post will show. Here I'll give more detail on possible decision algorithms the agents could run.

Here, two agents $$A_1$$ and $$A_2$$, who maximise $$u_1$$ and $$u_2$$, exist with probabilities $$q_1$$ and $$q_2$$, not necessarily independent. The joint probability that both agents exist is $$r_{12}$$.

## Waiting souls

In this perspective, we imagine that each utility function is represented by a disembodied entity that negotiates the terms of acausal trade before the universe begins and before any agents exist. This is the Rawlsian veil of ignorance, which I said we'd ignore in the original post, with justifications presented here.

Now, however, we need to consider it, as a gateway to the case we want. How would the agents balance each other’s utilities?

One possibility is that the agents assign equal weight to both utilities, in which case both will be maximising $$u_1+u_2$$. But this poses a continuity problem as the probability of either agent declines towards $$0$$: a utility would retain full weight even as its agent became almost certain not to exist. So the best option seems to be to have the agents agree to maximise $$q_1u_1+q_2u_2$$.

Then, in the situation presented in the previous post, each agent would increase the other's utility until the marginal cost of doing so reached $$q_2/q_1$$ (for agent $$A_1$$) or $$q_1/q_2$$ (for agent $$A_2$$).

There is no “double decrease” in this situation.
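This stopping rule can be sketched numerically. Below, $$A_1$$ can grant $$u_2$$ at an increasing marginal cost; the quadratic cost function and the specific probabilities are illustrative assumptions, not from the post:

```python
import numpy as np

# Hypothetical setup: A_1 can transfer x units of utility to A_2 at
# cost C(x) = x**2 / 2 to itself, so the marginal cost of the x-th
# unit of u_2 is C'(x) = x (increasing, as the post assumes).
q1, q2 = 0.8, 0.2  # illustrative existence probabilities

def joint_value(x):
    # A_1's view of the agreed objective q1*u1 + q2*u2:
    # it gives up C(x) of u_1 to grant x of u_2.
    return q1 * (-x**2 / 2) + q2 * x

xs = np.linspace(0, 1, 10001)
x_star = xs[np.argmax(joint_value(xs))]

# The post's rule: stop when the marginal cost C'(x) = x reaches q2/q1.
print(x_star, q2 / q1)  # both ≈ 0.25
```

Note that as $$q_2 \to 0$$ the threshold $$q_2/q_1$$ also goes to $$0$$, which is the continuity property the weighting was chosen for.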

## Existing souls

Here we restrict ourselves to agents that actually exist. So if, for instance, $$r_{12}=0$$ (the two agents cannot both exist), then agent $$A_1$$, should they exist, will have no compunction about ignoring $$u_2$$ entirely.

One way of modelling this is to go back to the “waiting souls” formalism, but replace $$u_i$$ with $$u'_i= u_i I_{i}$$, where $$I_{i}$$ is the indicator variable for whether agent $$A_i$$ exists at any point in the universe. Thus each utility counts, for any maximiser, only if the agent that prefers it actually exists.

There is no longer a continuity issue with $$u_1'+u_2'$$ as the probabilities $$q_i$$ tend to zero, since a low $$q_i$$ means that changes to $$u_i'$$ become smaller and smaller in expectation.

So, when maximising $$u_1'+u_2'$$, the agent $$A_1$$ will consider that an increase to $$u_2$$ produces an expected increase to $$u_2'$$ only $$r_{12}/q_1$$ times as large (while increases to $$u_1$$ and $$u_1'$$ are identical from its perspective, since the agent exists). Thus it will increase $$u_2$$ until the marginal cost of further increases is $$r_{12}/q_1$$; similarly, $$A_2$$ will increase $$u_1$$ until the marginal cost of further increases is $$r_{12}/q_2$$.
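The $$r_{12}/q_1$$ factor is just the conditional probability $$P(I_2=1 \mid I_1=1)$$, which can be checked with a minimal Monte Carlo sketch (the specific probabilities below are illustrative assumptions, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
q1, q2, r12 = 0.5, 0.4, 0.3  # illustrative marginals and joint probability

# Sample joint existence of (A_1, A_2) via the four cell probabilities.
p11 = r12              # both exist
p10 = q1 - r12         # only A_1 exists
p01 = q2 - r12         # only A_2 exists
p00 = 1 - p11 - p10 - p01
cells = rng.choice(4, size=1_000_000, p=[p11, p10, p01, p00])
I1 = np.isin(cells, [0, 1])   # A_1 exists in cells 0 and 1
I2 = np.isin(cells, [0, 2])   # A_2 exists in cells 0 and 2

# Conditional on A_1 existing, a unit increase in u_2 raises
# u_2' = u_2 * I_2 by P(I_2 = 1 | I_1 = 1) = r12 / q1 in expectation.
print(I2[I1].mean(), r12 / q1)  # both ≈ 0.6
```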

Setting $$q_1=q_2=q$$ and $$r_{12}=q^2$$ reproduces the situation of this post. This acausal trade is subject to double decrease.

Alternatively, maximising $$q_1u_1'+q_2u_2'$$ means agent $$A_1$$ will increase $$u_2$$ until the marginal cost of doing so is $$r_{12}q_2/(q_1)^2$$ (and conversely for $$A_2$$, until $$r_{12}q_1/(q_2)^2$$). This is also subject to a double decrease, and improves the relative position of those agents most likely to exist.
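The two thresholds can be compared directly. The formulas are from the post; the specific asymmetric numbers below are an illustrative assumption:

```python
# Marginal-cost thresholds at which A_1 stops increasing u_2
# under the two objectives discussed above.
def threshold_unweighted(q1, q2, r12):
    """A_1's stopping marginal cost when maximising u_1' + u_2'."""
    return r12 / q1

def threshold_weighted(q1, q2, r12):
    """A_1's stopping marginal cost when maximising q_1*u_1' + q_2*u_2'."""
    return r12 * q2 / q1**2

# Symmetric case from the post: q_1 = q_2 = q, r_12 = q^2.
q = 0.5
print(threshold_unweighted(q, q, q**2))  # q = 0.5: the double decrease (< 1)
print(threshold_weighted(q, q, q**2))    # also q = 0.5

# Asymmetric illustration (assumed numbers): the weighted objective makes
# the likelier agent A_1 concede less, improving its relative position.
print(threshold_unweighted(0.8, 0.2, 0.16))  # ≈ 0.2
print(threshold_weighted(0.8, 0.2, 0.16))    # ≈ 0.05
```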

Some agents may decide to join an acausal trade network only if there is something to gain for them - an actual gain once they look at the agents or potential agents in the network. This will exacerbate any double decrease, because agents who would previously have been willing to maximise some mix of $$u_1$$ and $$u_2$$, even when maximising that mix went against their own utility, will no longer be willing to trade.

These agents therefore treat the “no trade” position as a default disagreement point.

## Other options

Of course, there are many ways of reaching a trade deal, and they will give quite different results – especially when agents that use different types of fairness criteria attempt to reach a deal. In general, any extra difficulty will decrease the size of the trading network.
