Intelligent Agent Foundations Forum
Acausal trade: double decrease
discussion post by Stuart Armstrong 193 days ago | 2 comments

A putative new idea for AI control; index here.

Other posts in the series: Introduction, Double decrease, Pre-existence deals, Full decision algorithms, Breaking acausal trade, Trade in different types of utility functions, Being unusual, and Summary.

A reminder that we won’t be looking at any “utilities I might have had before I knew who I was” scenarios.

This post illustrates a point about acausal trade: weakening the acausal trade network(s) in any way tends to reduce acausal trade more than linearly, as the traders cut back further on their trading; the converse holds for strengthening the network(s).


How to weaken the network

How could the acausal trade network be weakened? In potentially many ways: greater uncertainty about the existence or the utilities of other agents, for instance; more agents who might defect from the trade, who don’t have the right utility function, or with whom you can’t reach a deal because of negotiation breakdown.

Basically, anything that lowers the expected number of agents acausally trading with you - and that also causes those agents to have a similarly lowered expectation of the number of agents trading with them.

Illustration

Take the case where \(N=2\), so there are only two possible agents, you (\(A_1\)) and one other (\(A_2\)), with utilities \(u_1\) and \(u_2\) respectively. Both agents are sure to exist, so \(q_1=q_2=1\).

Trade can’t happen unless there is some gain from trade: if it costs you more (in terms of \(u_1\)) to increase \(u_2\) than the gain in \(u_1\) that the other agent is capable of giving you in exchange, then no trade can happen.

So suppose you can increase \(u_2\) quite easily initially, but it gets harder and harder as you increase it more. Specifically, if you’ve already increased \(u_2\) by \(x\), then the marginal cost of increasing \(u_2\) further is \(x\).

So the marginal cost is linear in \(x\); cost, here, always refers to the decrease in \(u_1\) needed to pay for the increase in \(u_2\).
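Spelling this assumption out: increasing \(u_2\) by a total of \(X\) costs, in terms of \(u_1\),

\[ \int_0^X x \, dx = \frac{X^2}{2}, \]

which is the quantity used in the calculations below.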

Assume the other agent is in exactly the same situation, mirrored.

Assume that negotiations divide the gains from trade equally, and that you and the other agent have full knowledge of these facts and use a functional decision theory.

Then the rational acausal decision is for both of you to increase the other’s utility by \(1\), paying \(\int_0^1 x \, dx = 1/2\) utility each, and hence each gaining \(1 - 1/2 = 1/2\) utility in total.

But now imagine that the probability of each agent existing is \(q=q_1=q_2\), where \(q\) is not necessarily \(1\). You know that you yourself exist, so you put the probability of the other agent existing at \(q\) (note that the argument is robust to different types of anthropic reasoning, as it’s the change that happens when \(q\) varies that’s important).

Then the rational thing for both of you is to increase the other’s utility until the marginal cost of doing so reaches \(q\). Thus each agent increases the other’s utility by \(q\), at a cost of \(\int_0^q x \, dx = q^2/2\). With probability \(q\), the other agent exists and will thus give you \(q\) utility. Thus the expected gain for each of you is \(q \cdot q - q^2/2 = q^2/2\).
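As a sanity check, here is a minimal numerical sketch in Python (the function name and the grid search are illustrative, not from the post) that searches over the common transfer level \(t\) and confirms that the optimum is \(t=q\), with expected gain \(q^2/2\):

```python
import numpy as np

def expected_gain(t, q):
    # Under functional decision theory both agents choose the same
    # transfer level t. You pay the integral of x dx from 0 to t,
    # i.e. t^2/2, for sure; with probability q the mirrored other
    # agent exists and gives you t in return.
    return q * t - t**2 / 2

for q in (1.0, 0.8, 0.5, 0.2):
    ts = np.linspace(0.0, 1.0, 10001)   # candidate transfer levels
    gains = expected_gain(ts, q)
    best = ts[np.argmax(gains)]
    print(f"q={q:.1f}: best transfer = {best:.2f}, "
          f"expected gain = {gains.max():.4f} (q^2/2 = {q * q / 2:.4f})")
```

At \(q=1\) this recovers the certain-existence case above: transfer \(1\), expected gain \(1/2\).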

The fact that this is quadratic in \(q\) rather than linear is the “double decrease” effect: as the expected size of the network goes down, the expected return for participation goes down as well, causing those in it to decrease their own participation, until an equilibrium is reached at a lower level.



by Owen Cotton-Barratt 188 days ago

I think the double decrease effect kicks in with uncertainty, but not with a confident expectation of a smaller network.


by Stuart Armstrong 185 days ago

I think it does do the double decrease for the known smaller network.

Take three agents \(A_1\), \(A_2\), and \(A_3\), with utilities \(u_1\), \(u_2\), and \(u_3\). Assume the indices \(i\), \(j\), and \(k\) are always distinct.

Each \(A_i\) can boost \(u_j\) at the cost described above, paid in terms of \(u_i\).

What I haven’t really specified is the three-way synergy - can \(A_i\) boost \(u_j+u_k\) more efficiently than simply boosting \(u_j\) and \(u_k\) independently? In general yes (the two utilities \(u_j\) and \(u_k\) are synergistic with each other, after all), but let’s first assume there is zero three-way synergy.

Then each agent \(A_i\) will sacrifice \(1/2+1/2=1\) in \(u_i\) to boost \(u_j\) and \(u_k\) each by \(1\). Overall, each utility function goes up by \(1+1-1=1\). This scales linearly with the size of the trade network each agent sees (excluding themselves): if there were two agents total, each utility would go up by \(1/2\), as in the top post example. And if there were \(n+1\) agents, each utility would go up by \(n/2\).
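A minimal sketch of this zero-synergy scaling (Python; the function name is just illustrative):

```python
def zero_synergy_gain(n_others):
    # With zero higher-order synergy, each agent pays 1/2 per trading
    # partner (the integral of x dx from 0 to 1) and receives a boost
    # of 1 from each partner, for a net gain of n_others / 2.
    return 1.0 * n_others - 0.5 * n_others

for n_others in (1, 2, 10):
    print(f"{n_others + 1} agents: "
          f"each utility goes up by {zero_synergy_gain(n_others):.1f}")
```

With one other agent this reproduces the \(1/2\) gain of the two-agent example above.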

However, if there are any three-way, four-way,…, or \(n\)-way synergies, then the trade network is more efficient than that. So there is a double decrease (or double increase, from the other perspective), as long as there are higher-order synergies between the utilities.
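One toy way to see this (purely illustrative; the discount model and function names below are assumptions, not from the post): give each additional jointly-boosted partner a synergy discount \(s\) on its \(1/2\) cost, and the per-agent gain grows faster than linearly in the size of the network:

```python
def cost_to_boost_partners(k, s):
    # Toy model (an assumption, not from the post): boosting one partner
    # by 1 costs 1/2; each extra partner boosted jointly gets a synergy
    # discount, so the marginal cost of the i-th partner is
    # (1/2) * (1 - s) ** (i - 1).
    return sum(0.5 * (1 - s) ** (i - 1) for i in range(1, k + 1))

def net_gain_per_agent(n_others, s):
    # Each agent receives 1 from each of its n_others partners and pays
    # the (possibly discounted) joint cost of boosting all of them.
    return n_others - cost_to_boost_partners(n_others, s)

for n in (1, 2, 5, 10):
    print(f"{n + 1} agents: gain with s=0: {net_gain_per_agent(n, 0.0):.2f}, "
          f"with s=0.2: {net_gain_per_agent(n, 0.2):.2f}")
```

With \(s=0\) this recovers the linear \(n/2\) scaling; with any \(s>0\), gains are convex in network size, so shrinking the network costs the remaining traders more than linearly.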



