Acausal trade: trade barriers
discussion post by Stuart Armstrong 67 days ago

A putative new idea for AI control; index here.

Other posts in the series: Introduction, Double decrease, Pre-existence deals, Full decision algorithms, Breaking acausal trade, Trade in different types of utility functions, Being unusual, and Summary.

In a previous post, I discussed how one might convince an agent not to engage in acausal trade.

The idea was to reward the agent only for extra utility \(U\) that accrued because the agent was turned on (by a stochastic event \(X\)). Since causally disconnected agents couldn’t observe \(X\), they would “offer” the same “deals” whether or not the agent was turned on.

So the agent might be able to get a tremendous boost in utility from an acausal deal, but that boost would happen in the \(X\) world as well as the \(\neg X\) world, so the agent wouldn’t count that boost as a benefit.
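
A minimal sketch of this cancellation (all names and numbers are hypothetical, purely for illustration): the agent is rewarded only for the difference between its expected utility in the \(X\) world and in the \(\neg X\) world, so any boost that appears in both worlds, such as an acausal deal, contributes nothing.

```python
def extra_utility(expected_U_given_X, expected_U_given_not_X):
    """The quantity the agent is rewarded for: utility that accrues because X happened."""
    return expected_U_given_X - expected_U_given_not_X

baseline_X, baseline_not_X = 10.0, 4.0
acausal_boost = 100.0  # boost offered by causally disconnected traders, present in both worlds

print(extra_utility(baseline_X, baseline_not_X))                                   # 6.0
print(extra_utility(baseline_X + acausal_boost, baseline_not_X + acausal_boost))   # still 6.0
```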

That was effective as far as it went, but there was one kind of situation it didn’t deal with: what if the agent was simulated? Then the event \(X\) would be within the simulation, the simulating ‘lords of the Matrix’ would be causally connected with the agent, and so the agent would take their preferences into account when acting.

That in itself is still not a problem; but what if the agent had uncertainty about its own location? It might be in the “real” world, or it might be a simulation made by other entities, causally disconnected from the “real” world. Then if the agent acted on that uncertainty, it would in effect be doing a form of acausal trade.

Grounding the world

There is no costless solution: any such solution must rule out the agent acting as if it were in a simulation, which means that we incur a real cost if we actually are in a simulation.

But if we’re willing to pay that cost, then one way of reducing the problem is to ground \(U\) in our understanding of physics. So instead of \(U\)=“human flourishing”, we have:

  • \(U\)=“human flourishing in a universe that roughly follows the known laws of physics, that will last at least this many trillion years, and that has these restrictions on how fast information moves and how causality works”.

The idea is that a simulation that detailed would be indistinguishable from the real world (and the simulated humans therein would be real moral subjects).
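
As a rough illustration of such a grounded utility (the world attributes and threshold below are stand-ins, not the actual formalisation), \(U\) only counts human flourishing in worlds satisfying the physical conditions listed above:

```python
MIN_LIFETIME_TRILLION_YEARS = 1.0  # placeholder threshold

def grounded_U(world):
    physically_grounded = (
        world.follows_known_physics                   # roughly the known laws of physics
        and world.lifetime_trillion_years >= MIN_LIFETIME_TRILLION_YEARS
        and world.respects_information_speed_limits   # restrictions on how fast information moves
        and world.respects_causality                  # and on how causality works
    )
    return world.human_flourishing if physically_grounded else 0.0
```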

Graded miracles

Of course, if \(U\) hits a constant when the laws of physics are violated, then the agent will ignore all “miracles”, no matter how convincing. The booming voice of god coming from every electron in the universe would be interpreted as just an unlikely quantum fluke.

We might not want our agent to be so incompetent in those worlds. So one solution would be to multiply \(U\) by \(r(w)\), where \(r\) is a measure of how “realistic” world \(w\) is. For very plausible worlds, \(r\approx 1\). For miraculous or clearly simulated worlds, \(r\) is much lower.

Thus the agent would be capable of functioning in those worlds, once it had accumulated enough evidence that it was in one, but it would not expect ahead of time to be in a miraculous world (to avoid Pascal’s muggings, it helps if \(U\) is bounded, with a bound that is reasonably easy to approach - e.g. getting to within \(10\%\) of the maximum \(U\) is not hard).
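
A sketch under assumed definitions (the realism measure here is hypothetical): the agent optimises \(r(w)\cdot U(w)\), where \(r(w)\) is near \(1\) for plausible worlds and much lower for miraculous or clearly simulated ones, and \(U\) is bounded so that huge promised payoffs in implausible worlds cannot dominate.

```python
import math

U_MAX = 1.0  # bound on U, chosen so that getting close to it is not hard

def realism(evidence_of_miracles):
    """r(w): decays as the world accumulates evidence of miracles or simulation."""
    return math.exp(-evidence_of_miracles)

def effective_U(raw_U, evidence_of_miracles):
    return realism(evidence_of_miracles) * min(raw_U, U_MAX)

print(effective_U(0.9, 0.0))   # plausible world: r ~ 1, utility counts nearly in full
print(effective_U(0.9, 20.0))  # miraculous world: heavily discounted
```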

If the agent has no ability to modify \(r(w)\) through its own actions, this is equivalent to modifying the prior probabilities of the various simulations versus realistic worlds. We should be careful to ensure that the total probability of all realistic worlds is much higher than that of all the simulated worlds, and that generic events do not cause the ratio between them to change much.
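
To see the equivalence concretely, here is a toy calculation (the probabilities, weights and utilities are purely illustrative): multiplying \(U\) by a fixed \(r(w)\) gives the same expected utility as reweighting the prior \(p(w)\) by \(r(w)\), up to a normalising constant that does not depend on the agent’s actions.

```python
worlds = ["realistic", "simulated"]
p = {"realistic": 0.9, "simulated": 0.1}    # hypothetical prior over worlds
r = {"realistic": 1.0, "simulated": 0.01}   # realism weights
U = {"realistic": 5.0, "simulated": 7.0}    # utilities in each world

lhs = sum(p[w] * r[w] * U[w] for w in worlds)    # E_p[r(w) * U(w)]

Z = sum(p[w] * r[w] for w in worlds)             # normalising constant
q = {w: p[w] * r[w] / Z for w in worlds}         # reweighted "prior"
rhs = Z * sum(q[w] * U[w] for w in worlds)       # Z * E_q[U(w)]

assert abs(lhs - rhs) < 1e-12                    # the two viewpoints agree
```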


