Intelligent Agent Foundations Forum
by Owen Cotton-Barratt 127 days ago

I think the double decrease effect kicks in with uncertainty, but not with confident expectation of a smaller network.



by Stuart Armstrong 125 days ago

I think the double decrease does kick in for a known smaller network.

Take three agents \(A_1\), \(A_2\), and \(A_3\), with utilities \(u_1\), \(u_2\), and \(u_3\). Assume the indices \(i\), \(j\), and \(k\) are always distinct.

Each agent \(A_i\) can boost \(u_j\), paying the cost described above in terms of \(u_i\): a sacrifice of \(1/2\) in \(u_i\) per \(1\) of boost.

What I haven’t really specified is the three-way synergy: can \(A_i\) boost \(u_j+u_k\) more efficiently than simply boosting \(u_j\) and \(u_k\) independently? In general, yes (the two utilities \(u_j\) and \(u_k\) are synergistic with each other, after all), but let’s first assume there is zero three-way synergy.

Then each agent \(A_i\) sacrifices \(1/2+1/2=1\) in \(u_i\) to boost \(u_j\) and \(u_k\) by \(1\) each. Overall, each utility function goes up by \(1+1-1=1\): it gains \(1\) from each of the other two agents and loses \(1\) to its own agent’s sacrifices. This scales linearly with the size of the trade network each agent sees (excluding themselves): with two agents in total, each utility goes up by \(1/2\), as in the top post’s example, and with \(n+1\) agents, each utility goes up by \(n/2\).
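A quick check of that arithmetic (a minimal sketch, assuming, as in the top post, that boosting another agent’s utility by \(1\) costs \(1/2\) of one’s own, with no higher-order synergies):

```python
# Net gain per utility in a trade network of n_agents agents, assuming
# a unit boost to another agent's utility costs 1/2 of one's own,
# with no higher-order synergies.

def net_gain_per_utility(n_agents: int) -> float:
    n = n_agents - 1    # trade partners each agent sees (excluding themselves)
    gain = n * 1.0      # +1 from each of the n other agents
    cost = n * 0.5      # -1/2 paid for each partner boosted
    return gain - cost  # = n / 2

assert net_gain_per_utility(2) == 0.5   # two agents: +1/2 each, as in the top post
assert net_gain_per_utility(3) == 1.0   # three agents: +1 each
assert net_gain_per_utility(11) == 5.0  # n = 10: +n/2 = 5
```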

However, if there are any three-way, four-way,…, or \(n\)-way synergies, then the trade network is more efficient than that. So there is a double decrease (or double increase, from the other perspective), as long as there are higher-order synergies between the utilities.
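To illustrate the three-agent case with purely hypothetical numbers: suppose a three-way synergy lets \(A_i\) buy the joint boost of \(u_j+u_k\) for some cost \(c<1\), rather than \(1/2+1/2=1\).

```python
# Hypothetical illustration: if the joint boost of u_j and u_k costs
# c < 1 (three-way synergy) rather than 1/2 + 1/2 = 1, each utility's
# net gain in the three-agent network beats the no-synergy figure.

def net_gain_three_agents(joint_cost: float) -> float:
    gain = 2.0                # +1 from each of the two other agents
    return gain - joint_cost  # the joint cost is paid once per agent

print(net_gain_three_agents(1.0))  # 1.0 -- zero three-way synergy, as above
print(net_gain_three_agents(0.8))  # 1.2 -- synergy makes the network more efficient
```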
