Intelligent Agent Foundations Forum
Acausal trade: double decrease
discussion post by Stuart Armstrong 796 days ago | 2 comments

A putative new idea for AI control; index here.

Other posts in the series: Introduction, Double decrease, Pre-existence deals, Full decision algorithms, Breaking acausal trade, Trade in different types of utility functions, Being unusual, and Summary.

A reminder that we won’t be looking at any “utilities I might have had before I knew who I was” scenarios.

This post illustrates a point about acausal trade: weakening the acausal trade network(s) in any way tends to reduce acausal trade more than linearly, as the traders cut back further on their trading. Conversely, strengthening the network(s) tends to increase trade more than linearly.


How to weaken the network

How could the acausal trade network be weakened? In potentially many ways. Greater uncertainty about the existence or the utilities of other agents, for instance. More agents who might defect from the trade, who don’t have the right utility function, or with whom you can’t reach a deal because of negotiation breakdown.

Basically, anything that lowers the expected number of agents acausally trading with you - and that also causes those agents to similarly lower their expectation of the number of agents trading with them.

Illustration

Take the case where \(N=2\), so there are only two possible agents, you (\(A_1\)) and one other (\(A_2\)), with utilities \(u_1\) and \(u_2\) respectively. Both agents are sure to exist, so \(q_1=q_2=1\).

Trade can’t happen unless there is some gain from trade - if it costs you more (in terms of \(u_1\)) to increase \(u_2\) than the gain in \(u_1\) that the other agent is capable of giving you in exchange, then no trade can happen.

So suppose you can increase \(u_2\) quite easily initially, but it gets harder and harder as you increase it more. Specifically, if you’ve already increased \(u_2\) by \(x\), then the marginal cost to you of increasing \(u_2\) further is \(x\).

So the marginal cost is linear in \(x\); cost, here, always refers to the decrease in \(u_1\) needed to pay for the increase in \(u_2\).

Assume the other agent is in exactly the same situation, mirrored.

We’re assuming that the negotiations divide the gains from trade equally, and that you and the other agent have full knowledge of these facts and use a functional decision theory.

Then the rational acausal decision is for both of you to increase the utility of the other agent by \(1\), paying \(\int_0^1 x \, dx = 1/2\) utility each, and hence each gaining \(1/2\) utility in total.
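To spell out where the stopping point of \(1\) comes from (a small sketch of the marginal reasoning, filling in a step left implicit above): under functional decision theory the other agent mirrors your choice, so if you boost their utility by \(x\), they boost yours by \(x\) too, and

\[ \text{net gain}(x) = x - \int_0^x t \, dt = x - \frac{x^2}{2}, \qquad \frac{d}{dx}\left(x - \frac{x^2}{2}\right) = 1 - x = 0 \;\Rightarrow\; x^* = 1, \]

giving a net gain of \(1 - 1/2 = 1/2\) each, as stated.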

But now imagine that the probability of each agent existing is \(q=q_1=q_2\), and that \(q\) is not necessarily \(1\). You know you yourself exist, so put the probability of the other agent existing at \(q\) (note that this argument is robust to different types of anthropic reasoning, as it’s the change that happens when \(q\) varies that’s important).

Then the rational thing for both of you is to increase the other’s utility until the marginal cost of doing so reaches \(q\). Thus each agent increases the other’s utility by \(q\), at a cost of \(\int_0^q x \, dx = q^2/2\). With probability \(q\), the other agent exists and will thus give you \(q\) utility. Thus the expected gain for each of you is \(q \cdot q - q^2/2 = q^2/2\).
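As a quick numerical sanity check (an illustrative sketch of my own, not from the original post), the snippet below maximizes the expected gain \(qx - x^2/2\) over the amount \(x\) by which you boost the other agent’s utility, and recovers the optimum \(x^* = q\) with expected gain \(q^2/2\):

```python
import numpy as np

def expected_gain(x, q):
    """Expected gain in your own utility u_1 when you boost u_2 by x.

    The other agent exists with probability q and, by the symmetric FDT
    reasoning above, boosts u_1 by the same amount x when they exist;
    your cost is the integral of the marginal cost t from 0 to x, i.e. x**2 / 2.
    """
    return q * x - x**2 / 2

for q in (1.0, 0.8, 0.5, 0.2):
    xs = np.linspace(0.0, 1.0, 100_001)
    gains = expected_gain(xs, q)
    x_star = xs[np.argmax(gains)]
    print(f"q = {q}: optimal boost ~ {x_star:.3f} (predicted {q}), "
          f"expected gain ~ {gains.max():.4f} (predicted {q**2 / 2:.4f})")
```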

The fact that this is quadratic in \(q\) rather than linear is the “double decrease” effect: as the expected size of the network goes down, the expected return for participation goes down as well, causing those in it to decrease their own participation, until an equilibrium is reached at a lower level.



by Owen Cotton-Barratt 791 days ago | link

I think the double decrease effect kicks in with uncertainty, but not with confident expectation of a smaller network.


by Stuart Armstrong 788 days ago | link

I think it does do the double decrease for the known smaller network.

Take three agents \(A_1\), \(A_2\), and \(A_3\), with utilities \(u_1\), \(u_2\), and \(u_3\). Assume the indexes \(i\), \(j\), and \(k\) are always distinct.

For each \(A_i\), they can boost \(u_j\) at the cost described above in terms of \(u_i\).

What I haven’t really specified is the three-way synergy - can \(A_i\) boost \(u_j+u_k\) more efficiently than simply boosting \(u_j\) and \(u_k\) independently? In general yes (the two utilities \(u_j\) and \(u_k\) are synergistic with each other, after all), but let’s first assume there is zero three-way synergy.

Then each agent \(A_i\) will sacrifice \(1/2+1/2=1\) in \(u_i\) to boost \(u_j\) and \(u_k\) each by \(1\). Overall, each utility function goes up by \(1+1-1=1\). This scales linearly with the size of the trade network each agent sees (excluding themselves): if there were two agents total, each utility would go up by \(1/2\), as in the top post example. And if there were \(n+1\) agents, each utility would go up by \(n/2\).
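A tiny sketch of that linear scaling (illustrative only, under the zero-synergy assumption of this comment): with \(n\) trading partners, each agent pays \(n \times 1/2\) and receives \(n \times 1\), for a net gain of \(n/2\).

```python
def net_gain_no_synergy(n_partners):
    """Net change in an agent's own utility with zero higher-order synergy:
    pay 1/2 to boost each partner's utility by 1, and receive a boost of 1
    from each partner in return.
    """
    return 1.0 * n_partners - 0.5 * n_partners

print([net_gain_no_synergy(n) for n in (1, 2, 5)])  # [0.5, 1.0, 2.5] - linear in n
```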

However, if there are any three-way, four-way,…, or \(n\)-way synergies, then the trade network is more efficient than that. So there is a double decrease (or double increase, from the other perspective), as long as there are higher-order synergies between the utilities.



