discussion post by Stuart Armstrong 282 days ago

A putative new idea for AI control; index here.

The description I gave here of how to balance and negotiate between different utility functions is a bit incomplete, as this post will show. Here I'll give more details on possible decision algorithms the agents could run.

Here, two agents $$A_1$$ and $$A_2$$, who maximise $$u_1$$ and $$u_2$$, exist with probabilities $$q_1$$ and $$q_2$$, not necessarily independent. The joint probability that both agents exist is $$r_{12}$$.

## Waiting souls

In this perspective, we imagine that each utility function is represented by a disembodied entity that negotiates the terms of acausal trade before the universe begins and before it is settled which agents will exist. This is the Rawlsian veil of ignorance, which I said we'd ignore in the original post, with justifications presented here.

Now, however, we need to consider it, as a gateway to the case we want. How would the agents balance each other’s utilities?

One possibility is that the agents assign equal weight to both utilities, in which case they will both be maximising $$u_1+u_2$$. But this poses a continuity problem as the probability of either agent declines towards $$0$$: an agent that almost certainly never exists would still command full weight. So the best option seems to be to have the agents agree to maximise $$q_1u_1+q_2u_2$$.

Then, in the situation presented in the previous post, each agent would increase the other's utility until the marginal cost of doing so increased to $$q_2/q_1$$ (for agent $$A_1$$) or $$q_1/q_2$$ (for agent $$A_2$$).

There is no “double decrease” in this situation.
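The stopping rule above can be sketched numerically. This is a minimal illustration, not anything from the post: I assume a hypothetical linearly rising marginal cost curve for $$A_1$$ helping $$A_2$$, and the function name `optimal_transfer` is my own.

```python
import numpy as np

# Hypothetical setup: A1 can raise u_2 by spending its own resources,
# with the marginal cost in u_1 of one more unit of u_2 rising
# linearly, cost(x) = x (an illustrative choice, not from the post).
def optimal_transfer(q1, q2, marginal_cost=lambda x: x, x_max=10.0):
    """A1 maximising q1*u1 + q2*u2 keeps raising u_2 until the
    marginal cost of doing so reaches q2/q1."""
    threshold = q2 / q1
    xs = np.linspace(0.0, x_max, 10001)
    # keep trading while marginal cost is at or below the threshold
    keep = xs[marginal_cost(xs) <= threshold]
    return keep[-1] if len(keep) else 0.0

# Equal existence probabilities: trade up to marginal cost 1.
print(optimal_transfer(0.5, 0.5))   # ≈ 1.0
# A2 is half as likely to exist: A1 stops at marginal cost 0.5.
print(optimal_transfer(0.5, 0.25))  # ≈ 0.5
```

With equal probabilities the threshold is $$1$$, so there is no discount on the other's utility at all, matching the "no double decrease" observation.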

## Existing souls

Here we restrict ourselves to agents that actually exist. So, if for instance $$r_{12}=0$$ (the two agents cannot both exist) then agent $$A_1$$, should they exist, will have no compunction about not maximising $$u_2$$ in any way.

One way of modelling this is to go back to the "waiting souls" formalism, but replace $$u_i$$ with $$u'_i= u_i I_{i}$$, where $$I_{i}$$ is the indicator variable for whether agent $$A_i$$ exists at any point in the universe. Thus a utility only counts, for anyone maximising it, in worlds where the agent that prefers it actually exists.

There is no longer a continuity issue with $$u_1'+u_2'$$ as the probabilities $$q_i$$ tend to zero, since a low $$q_i$$ means that changes to $$u_i'$$ become smaller and smaller in expectation.

So, when maximising $$u_1'+u_2'$$, the agent $$A_1$$ will consider that increases to $$u_2'$$ have an expected effect that is $$r_{12}/q_1$$ times as large as the corresponding increases to $$u_2$$ (while increases to $$u_1$$ and $$u_1'$$ are identical from its perspective, since $$A_1$$ knows it exists). Thus it will increase $$u_2$$ until the marginal cost of further increases is $$r_{12}/q_1$$; similarly, $$A_2$$ will increase $$u_1$$ until the marginal cost of further increases is $$r_{12}/q_2$$.
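The factor $$r_{12}/q_1$$ is just $$P(A_2 \text{ exists} \mid A_1 \text{ exists})$$, which a quick Monte Carlo check can confirm. The particular numbers ($$q_1=0.5$$, $$q_2=0.4$$, $$r_{12}=0.1$$) are an illustrative joint distribution of my own choosing, not from the post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative existence probabilities (not from the post):
q1, q2, r12 = 0.5, 0.4, 0.1   # r12 = P(both agents exist)

# Sample correlated existence indicators with these marginals by
# partitioning [0,1): both exist, only A1, only A2, neither.
n = 1_000_000
u = rng.random(n)
both    = u < r12
only_a1 = (u >= r12) & (u < q1)             # A1 exists, A2 doesn't
only_a2 = (u >= q1) & (u < q1 + q2 - r12)   # A2 exists, A1 doesn't
i1 = both | only_a1   # indicator I_1
i2 = both | only_a2   # indicator I_2

# From A1's perspective (conditioning on its own existence), a unit
# increase in u_2 raises u_2' = u_2 * I_2 only in worlds where A2
# also exists, i.e. with probability P(I_2 | I_1) = r12 / q1.
effect_on_u2_prime = i2[i1].mean()
print(effect_on_u2_prime)   # ≈ r12 / q1 = 0.2
```

So with these numbers, $$A_1$$ would discount $$u_2$$ by a factor of $$5$$ relative to its own utility.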

Setting $$q_1=q_2=q$$ and $$r_{12}=q^2$$ reproduces the situation of this post. This acausal trade is subject to double decrease.

Alternatively, maximising $$q_1u_1'+q_2u_2'$$ means agent $$A_1$$ will increase $$u_2$$ till the marginal cost of doing so is $$r_{12}q_2/(q_1)^2$$ (and conversely for $$A_2$$ to $$r_{12}q_1/(q_2)^2$$). This is also subject to a double decrease, and improves the relative position of those agents most likely to exist.

Some agents may decide to join an acausal trade network only if there is something to gain for them: an actual gain once they look at the agents or potential agents in the network. This will exacerbate any double decrease, because agents who would previously have been willing to maximise some mix of $$u_1$$ and $$u_2$$, even when maximising that mix went against their own utility, will no longer be willing to trade.

These agents therefore treat the “no trade” position as a default disagreement point.

## Other options

Of course, there are many ways of reaching a trade deal, and they will give quite different results – especially when agents that use different types of fairness criteria attempt to reach a deal. In general, any extra difficulty will decrease the size of the trading network.
