post by Stuart Armstrong

A putative new idea for AI control; index here.

I’ve never really understood acausal trade. So in a short series of posts, I’ll attempt to analyse the concept sufficiently that I can grasp it - and hopefully so others can grasp it as well.

# The simplest model

There are $$N$$ different rooms, with potential agents in them. The agents’ existence is governed by a distribution $$Q$$, whose marginal probabilities $$q_i$$ give the probability that agent $$A_i$$ exists in room $$i$$. That agent has a utility function $$u_i$$, which they are motivated to maximise.
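As a concrete toy instance (a minimal sketch; the specific numbers and the independence of rooms are my own illustrative assumptions, not part of the model):

```python
import random

# Illustrative marginals q_i for N = 3 rooms (my numbers, purely for example).
q = [0.9, 0.5, 0.1]

def sample_world(q):
    """Draw one world from Q: entry i is True iff agent A_i exists.

    Treats rooms as independent, a simplifying assumption; in general
    Q could correlate the existence of different agents.
    """
    return [random.random() < q_i for q_i in q]

print(sample_world(q))  # e.g. [True, True, False]
```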

The agents will never meet, never interact in any way, won’t even be sure of each other’s existence, may not know $$N$$, and may have uncertainty over the values of the other $$u_j$$’s. Each agent acts only in their own room. They may choose to diminish $$u_i$$ in order to increase one or more other $$u_j$$ with $$i\neq j$$; this is what allows the possibility of trade.
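To see why sacrificing $$u_i$$ for $$u_j$$ can still pay, here is a minimal two-room sketch (a sketch only; the cost of 1, benefit of 2, and existence probability are my illustrative assumptions): each cooperating agent pays a cost in its own room to confer a larger benefit on the other.

```python
def expected_u_i(cooperate_i, cooperate_j, q_j, cost=1.0, benefit=2.0):
    """Expected utility u_i for agent A_i (who exists), facing possible partner A_j.

    cooperate_i: A_i pays `cost` out of u_i to add `benefit` to u_j.
    cooperate_j: A_j does the symmetric thing, if it exists.
    q_j: A_i's probability that A_j exists.
    """
    u = 0.0
    if cooperate_i:
        u -= cost              # paid in A_i's own room, unconditionally
    if cooperate_j:
        u += q_j * benefit     # received only in worlds where A_j exists
    return u

q_j = 0.9
print(expected_u_i(False, False, q_j))  # 0.0  : no trade
print(expected_u_i(True,  True,  q_j))  # 0.8  : mutual trade, net gain
print(expected_u_i(True,  False, q_j))  # -1.0 : one-sided sacrifice
```

Mutual cooperation beats no trade whenever $$q_j \cdot \text{benefit} > \text{cost}$$; what licenses each agent to expect the other to reciprocate, given that they can never interact, is the acausal question this series examines.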

# Infinities, utility weights, negotiations, trade before existence

There are a number of things I won’t be considering here. First of all, infinities. Acausal trade would take place in the actual universe, which is likely infinite, and it’s not at all clear how to rank infinitely many causally disconnected world-pieces. So I’ll avoid that entirely, assuming $$N$$ is finite (though possibly large).

There’s also the thorny issue of how to weigh and compare different utility functions, and of how to negotiate the division of the gains from trade.

I’ll ignore all these issues, and treat the $$u_i$$ as functions from states of the world to real numbers: individual representatives of utility functions, not equivalence classes of equivalent functions. And the bargaining will be a straight one-for-one increase and decrease: a fair deal is one where $$u_i$$ and $$u_j$$ get the same benefit – as measured by $$u_i$$ and $$u_j$$ respectively.
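As a toy illustration of that fairness criterion (the exchange rates are assumptions of mine, purely for the arithmetic): suppose $$A_i$$ can sacrifice $$x$$ of $$u_i$$ to add $$2x$$ to $$u_j$$, and $$A_j$$ can sacrifice $$y$$ of $$u_j$$ to add $$2y$$ to $$u_i$$. Then $$u_i$$ gains $$2y - x$$ and $$u_j$$ gains $$2x - y$$, and the fair deal equalises these: $$2y - x = 2x - y$$, so $$x = y$$, with each side netting $$+x$$.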

I’ll also ignore the possibility of trade before existence, or Rawlsian veils of ignorance. If you are a $$u_i$$ maximiser, but you could have been a $$u_j$$ maximiser had things been different, then you have no responsibility to increase $$u_j$$. Similarly, if there are $$u_j$$ maximisers out there, you have no responsibility to maximise $$u_j$$ unless you get some $$u_i$$ increase in exchange. See this post for more on that.

Changing that last assumption could radically alter the nature of acausal trade - potentially reducing it to simply maximising a universal prior utility function. See this post for more on that behaviour.
