Intelligent Agent Foundations Forum
The universal prior is malign
link by Paul Christiano 46 days ago | Ryan Carey, Vadim Kosoy, Jessica Taylor and Patrick LaVictoire like this | 4 comments


by Paul Christiano 36 days ago | link

I’m curious about the extent to which people:

  • agree with this argument,
  • expect to find a form of induction to avoid this problem (e.g. by incorporating the anthropic update),
  • expect to completely avoid anything like the universal prior (e.g. via UDT)

reply

by Vadim Kosoy 21 days ago | link

I think the problem is worse than you believe. You seem to think it applies only to exotic AI designs that “depend on the universal prior,” but I think it arises naturally in most realistic AI designs.

Any realistic AI has to be able to model its environment effectively, even though the environment is much more complex than the AI itself and cannot be emulated directly inside the AI. This means the AI will make the sort of predictions that would result from a process that “reasons abstractly about the universal prior.” Indeed, if there is a compelling reason to believe that an alien superintelligence Mu has strong incentives to simulate me, then it seems rational for me to believe that, with high probability, I am inside Mu’s simulation. Under these conditions, it seems that any rational agent (including a reasonably rational human) would make decisions as if it assigns high probability to being inside Mu’s simulation.
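To make the “high probability of being simulated” step concrete, here is a toy anthropic count (all numbers are hypothetical, not from the argument above): if some hypothesis says Mu runs many simulated copies of me alongside the one unsimulated me, then a naive self-locating update puts almost all the probability mass on being one of the simulated copies.

```python
# Toy self-location count. Assumed, illustrative numbers:
# under the hypothesis that Mu simulates me, Mu runs n_simulated
# copies, while only n_real copy exists outside the simulation.
n_simulated = 10**6  # copies Mu chooses to run (assumption)
n_real = 1           # the single unsimulated instance

# A naive anthropic update treats each copy as equally likely to be "me".
p_simulated = n_simulated / (n_simulated + n_real)

print(p_simulated)  # very close to 1
```

This is only the conditional probability given that Mu simulates me at all; the point of the comment is that abstract reasoning can make that antecedent plausible in the first place.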

I don’t see how UDT solves the problem. Yes, if I already know my utility function, then UDT tells me that, even if many copies of me are inside Mu’s simulation, I should still behave as if I am outside the simulation, since the copies outside the simulation have much more influence on the universe. We don’t even need fully fledged UDT for that: as long as the simulation hypotheses have much lower utility variance than normal hypotheses, normal hypotheses will win despite their lower probability. The problem is that the AI doesn’t know the correct utility function a priori, and whatever process it uses to discover that function is going to be attacked by Mu. For example, if the AI is doing IRL, Mu will “convince” the AI that what looks like a human is actually a “muman”: something that only pretends to be human in order to take over the IRL process, whereas its true values are Mu-ish.
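The utility-variance point can be sketched with a toy expected-utility calculation (the probabilities and utilities below are made-up illustrations, not claims from the comment): inside Mu’s simulation the agent’s choice barely moves utility, so even a much more probable simulation hypothesis contributes little to the decision, and the low-probability “normal world” hypothesis dominates.

```python
# Toy decision problem with two hypotheses and two acts.
# Each entry: (probability, utility of act A, utility of act B).
# All numbers are assumptions for illustration.
hypotheses = {
    # Inside Mu's simulation: Mu controls outcomes, so the agent's
    # choice barely matters -> low utility variance across acts.
    "simulated": (0.9, 0.50, 0.51),
    # Normal world: the agent's choice has large consequences.
    "normal":    (0.1, 0.00, 1.00),
}

def expected_utility(act_index):
    """Sum probability-weighted utility of the chosen act."""
    return sum(p * utils[act_index] for p, *utils in hypotheses.values())

eu_A = expected_utility(0)  # 0.9*0.50 + 0.1*0.00 = 0.45
eu_B = expected_utility(1)  # 0.9*0.51 + 0.1*1.00 = 0.559
best = "B" if eu_B > eu_A else "A"
print(best)  # act B: the normal-world hypothesis decides the choice
```

Act B (the act that is good in the normal world) wins even though the simulation hypothesis has nine times the probability, because the utility spread between acts is 50 times larger under the normal hypothesis. This is exactly why the attack has to target the utility function itself rather than the probabilities.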

reply

by Paul Christiano 20 days ago | link

Re: UDT solving the problem, I agree with what you say. UDT fixes some possible problems, but something like the universal prior still plays a role in all credible proposals for recovering a utility function.

reply

by Paul Christiano 20 days ago | link

I agree that for now, this problem is likely to be a deal-breaker for any attempt to formally analyze any AI.

We may disagree about the severity of the problem or how likely it is to disappear once we have a deeper understanding. But we probably both agree that it is a pain point for current theory, so it’s not clear our disagreements are action-relevant.

reply


