Intelligent Agent Foundations Forum
Acausal trade: double decrease
discussion post by Stuart Armstrong 347 days ago | 2 comments

A putative new idea for AI control; index here.

Other posts in the series: Introduction, Double decrease, Pre-existence deals, Full decision algorithms, Breaking acausal trade, Trade in different types of utility functions, Being unusual, and Summary.

A reminder that we won’t be looking at any “utilities I might have had before I knew who I was” scenarios.

This post illustrates a point about acausal trade: weakening the acausal trade network(s) in any way tends to reduce acausal trade more than linearly, as the traders cut back further on their trading. Conversely, strengthening the network(s) increases trade more than linearly.


How to weaken the network

How could the acausal trade network be weakened? In potentially many ways. Greater uncertainty about the existence or the utilities of other agents, for instance. More agents who might defect from the trade, not have the right utility function, or with whom you can’t reach a deal because of negotiation breakdown.

Basically, anything that lowers the expected number of agents acausally trading with you - and that also causes those agents to similarly lower their expectation of the number of agents trading with them.

Illustration

Take the case where \(N=2\), so there are only two possible agents, you (\(A_1\)) and one other (\(A_2\)), with utilities \(u_1\) and \(u_2\) respectively. Both agents are sure to exist, so \(q_1=q_2=1\).

Trade can’t happen unless there is some gain from trade: if it costs you more (in terms of \(u_1\)) to increase \(u_2\) than the gain in \(u_1\) that the other agent is capable of giving you in exchange, then there is no trade that can happen.

So suppose you can increase \(u_2\) quite easily at first, but that it gets harder and harder as you increase it more. Specifically, if you’ve already increased \(u_2\) by \(x\), then it costs you, marginally, \(x\) to increase \(u_2\) further.

So the marginal cost is linear in \(x\), and the total cost of increasing \(u_2\) by \(X\) is \(\int_0^X x \, dx = X^2/2\). Cost, here, always refers to the decrease in \(u_1\) needed to pay for the increase in \(u_2\).

Assume the other agent is in exactly the same situation, mirrored.

Assume that negotiations divide the gains from trade equally, and that you and the other agent have full knowledge of these facts and use a functional decision theory.

Then the rational acausal decision is for both of you to increase the other’s utility by \(1\), paying \(\int_0^1 x \, dx = 1/2\) utility each, and hence each gaining \(1/2\) utility in total.
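
To spell out the optimization behind this (a restatement of the calculation above, nothing new): each agent effectively chooses the common transfer size \(x\) to maximize its own net gain

\[ x - \int_0^x t \, dt = x - \frac{x^2}{2}, \]

which peaks where the marginal cost \(x\) equals the marginal benefit \(1\), i.e. at \(x = 1\), for a net gain of \(1/2\).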

But now imagine that the probability of each agent existing is \(q=q_1=q_2\), and that \(q\) is not necessarily \(1\). You know you yourself exist, so put the probability of the other agent existing at \(q\) (note that this argument is robust to different types of anthropic reasoning, as it’s the change that happens when \(q\) varies that’s important).

Then the rational thing for both of you is to increase the other’s utility until the marginal cost of doing so reaches \(q\). Thus each agent increases the other’s utility by \(q\), at a cost of \(\int_0^q x \, dx = q^2/2\). With probability \(q\), the other agent exists and will thus give you \(q\) utility, so the expected gain for each of you is \(q \cdot q - q^2/2 = q^2/2\).
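
Here is a minimal numerical sketch of that calculation (my own illustration in Python; the function name and the grid search are mine, not the post’s):

```python
# Each agent chooses how much utility x to give the other, at total
# cost x^2/2 to itself. Under functional decision theory the mirrored
# agent chooses the same x and exists with probability q, so the
# expected net gain is q*x - x^2/2.

import numpy as np

def best_transfer(q, grid=np.linspace(0.0, 1.0, 10001)):
    """Return (optimal transfer x, expected net gain) for existence probability q."""
    gains = q * grid - grid**2 / 2
    i = int(np.argmax(gains))
    return grid[i], gains[i]

for q in (1.0, 0.8, 0.5, 0.25):
    x, gain = best_transfer(q)
    print(f"q={q:.2f}: transfer={x:.2f}, expected gain={gain:.4f} (q^2/2 = {q*q/2:.4f})")
```

The optimal transfer comes out as \(q\) and the expected gain as \(q^2/2\), matching the integral above.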

The fact that this is quadratic in \(q\) rather than linear is the “double decrease” effect: as the expected size of the network goes down, the expected return for participation goes down as well, causing those in it to decrease their own participation, until an equilibrium is reached at a lower level.



by Owen Cotton-Barratt 341 days ago

I think the double decrease effect kicks in with uncertainty, but not with confident expectation of a smaller network.


by Stuart Armstrong 339 days ago

I think it does do the double decrease for the known smaller network.

Take three agents \(A_1\), \(A_2\), and \(A_3\), with utilities \(u_1\), \(u_2\), and \(u_3\). Assume the indexes \(i\), \(j\), and \(k\) are always distinct.

Each \(A_i\) can boost \(u_j\) at the cost described above, paid in terms of \(u_i\).

What I haven’t really specified is the three-way synergy - can \(A_i\) boost \(u_j+u_k\) more efficiently than by simply boosting \(u_j\) and \(u_k\) independently? In general yes (the two utilities \(u_j\) and \(u_k\) are synergistic with each other, after all), but let’s first assume there is zero three-way synergy.

Then each agent \(A_i\) will sacrifice \(1/2+1/2=1\) in \(u_i\) to boost \(u_j\) and \(u_k\) each by \(1\). Overall, each utility function goes up by \(1+1-1=1\). This scales linearly with the size of the trade network each agent sees (excluding themselves): if there were two agents total, each utility would go up by \(1/2\), as in the top post example. And if there were \(n+1\) agents, each utility would go up by \(n/2\).
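
As a quick check of this arithmetic (a sketch of my own, assuming the zero-synergy model with all agents known to exist):

```python
# Each agent pays x^2/2 = 1/2 per counterparty to raise that
# counterparty's utility by the mirrored optimum x = 1.

def net_gain_per_agent(n_others):
    """Net utility change for an agent trading with n_others counterparties."""
    received = n_others * 1.0   # each counterparty raises your utility by 1
    paid = n_others * 0.5       # you pay 1/2 for each counterparty you help
    return received - paid      # = n_others / 2

print(net_gain_per_agent(1))   # 0.5: the two-agent case in the top post
print(net_gain_per_agent(2))   # 1.0: the three-agent case here
print(net_gain_per_agent(5))   # 2.5: n + 1 = 6 agents
```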

However, if there are any three-way, four-way,…, or \(n\)-way synergies, then the trade network is more efficient than that. So there is a double decrease (or double increase, from the other perspective), as long as there are higher-order synergies between the utilities.
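
As a toy illustration of that last point (my own sketch; the synergy parameter \(s\) is made up for illustration and is not from the comment): suppose boosting all \(n\) counterparties jointly by \(x\) each costs \((1-s) \cdot nx^2/2\) rather than \(nx^2/2\), for some synergy fraction \(s \in [0,1)\).

```python
# With synergy fraction s (a made-up parameter), the marginal cost
# (1-s)*x meets the marginal benefit 1 at x = 1/(1-s), so per-agent
# gains exceed the zero-synergy baseline of n_others / 2.

def gain_per_agent(n_others, s):
    """Net gain at the optimal transfer x = 1/(1-s)."""
    x = 1.0 / (1.0 - s)
    received = n_others * x
    paid = (1.0 - s) * n_others * x**2 / 2
    return received - paid      # = n_others / (2 * (1 - s))

print(gain_per_agent(2, 0.0))   # 1.0: the zero-synergy case above
print(gain_per_agent(2, 0.25))  # ~1.33: the same network, made more valuable by synergy
```

If the effective synergy \(s\) itself grows with the number of agents (more utilities can be boosted jointly in a bigger network), the per-agent gain grows faster than linearly in network size, and removing agents undoes that extra efficiency.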



