Intelligent Agent Foundations Forum
The universal prior is malign
link by Paul Christiano 144 days ago | Ryan Carey, Vadim Kosoy, Jessica Taylor and Patrick LaVictoire like this | 4 comments


by Paul Christiano 134 days ago | link

I’m curious about the extent to which people:

  • agree with this argument,
  • expect to find a form of induction to avoid this problem (e.g. by incorporating the anthropic update),
  • expect to completely avoid anything like the universal prior (e.g. via UDT)


by Vadim Kosoy 119 days ago | link

I think the problem is worse than you believe. You seem to think it only applies to exotic AI designs that “depend on the universal prior,” but I think it arises naturally in most realistic AI designs.

Any realistic AI has to be able to effectively model its environment, even though the environment is much more complex than the AI itself and cannot be emulated directly inside the AI. This means that the AI will make the sort of predictions that would result from a process that “reasons abstractly about the universal prior.” Indeed, if there is a compelling reason to believe that an alien superintelligence Mu has strong incentives to simulate me, then it seems rational for me to believe that, with high probability, I am inside Mu’s simulation. Under these conditions, any rational agent (including a relatively rational human) would make decisions as if it assigned high probability to being inside Mu’s simulation.
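
For concreteness, a sketch of the object in question (notation mine): with $U$ a universal prefix machine, the universal semimeasure assigns

$$M(x) \;=\; \sum_{p \,:\, U(p) \text{ begins with } x} 2^{-|p|}.$$

A hypothesis in which Mu simulates my observations corresponds to some program $p$ (roughly, a description of Mu’s world plus a pointer to the simulated observation channel), and nothing forces $|p|$ to exceed the length of the “intended” physical description of those observations. So any predictor whose beliefs approximate $M$, whether by enumeration or by abstract reasoning, inherits the weight on such hypotheses.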

I don’t see how UDT solves the problem. Yes, if I already know my utility function, then UDT tells me that, if many copies of me are inside Mu’s simulation, I should still behave as if I am outside the simulation, since the copies outside the simulation have much more influence on the universe. We don’t even need fully fledged UDT for that: as long as the simulation hypotheses have much lower utility variance than the normal hypotheses, the normal hypotheses will win despite their lower probability. The problem is that the AI doesn’t know the correct utility function a priori, and whatever process it uses to discover that function is going to be attacked by Mu. For example, if the AI is doing IRL (inverse reinforcement learning), Mu will “convince” the AI that what looks like a human is actually a “muman”: something that pretends to be human only in order to take over the IRL process, whereas its true values are Mu-ish.
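
To spell out the variance point (a back-of-the-envelope sketch, notation mine): write $h_s$ for a simulation hypothesis and $h_r$ for a normal one. Comparing actions $a$ and $a'$, the expected-utility gap is

$$\mathbb{E}[u \mid a] - \mathbb{E}[u \mid a'] \;=\; P(h_s)\,\Delta_s + P(h_r)\,\Delta_r,$$

where $\Delta_s$ and $\Delta_r$ are the utility differences between the two actions conditional on each hypothesis. If the copies inside the simulation have little influence, then $|\Delta_s| \ll |\Delta_r|$, so the $h_r$ term dominates even when $P(h_r) \ll P(h_s)$, and the agent acts as if it were outside the simulation. But this comparison presupposes a fixed $u$; it offers no protection to the process that selects $u$.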


by Paul Christiano 118 days ago | link

Re: UDT solving the problem, I agree with what you say. UDT fixes some possible problems, but something like the universal prior still plays a role in all credible proposals for recovering a utility function.


by Paul Christiano 118 days ago | link

I agree that for now, this problem is likely to be a deal-breaker for any attempt to formally analyze any AI.

We may disagree about the severity of the problem or how likely it is to disappear once we have a deeper understanding. But we probably both agree that it is a pain point for current theory, so it’s not clear our disagreements are action-relevant.



