Intelligent Agent Foundations Forum
by Wei Dai 987 days ago | Ryan Carey, Abram Demski and Scott Garrabrant like this

This does seem to be the “obvious” next step in the UDT approach. I proposed something similar, which I called “UDT2”, in a 2011 post to the “decision theory workshop” mailing list, and others have made similar proposals.

But there is a problem with having to choose how much time/computing resources to give to the initial decision process. If you give it too little, its logical probabilities might be very noisy and you could end up with a terrible decision; if you give it too much, it could update on too many logical facts and lose on acausal bargaining problems. With multiple AI builders, UDT2 seems to imply a costly arms race in which each builder has an incentive to give their initial decision process less time than would otherwise be optimal, so that their AI can commit faster (and hopefully have its commitments logically updated upon by the other AIs) while also avoiding updating on the other AIs’ commitments.

I’d like to avoid this but don’t know how. I’m also sympathetic to Nesov’s sentiment (shared by others, such as Gary Drescher) that maybe there is a better approach to the problems UDT is trying to solve, but I don’t know what that would be either.



by Scott Garrabrant 987 days ago

So my plan is to “solve” the problem of choosing how much time to give it by making that a parameter (namely, which stage of a logical inductor to use), and then trying to get results of the form: if we set the parameter sufficiently high, and we only consider the output on sufficiently far-out problems, then we can prove that it does well.

It does not solve the problem, but it might let us analyze what we would get if we did solve the problem.
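
To make the shape of this concrete, a minimal toy sketch of “commit using the stage-N beliefs, then score only far-out problems” might look like the following. Nothing here is a real logical inductor: beliefs_at_stage, initial_decision_process, and the noise model are all hypothetical stand-ins, invented only to show where the stage parameter enters.

    # Toy sketch only: beliefs_at_stage is a hypothetical stand-in whose
    # estimates get less noisy as the stage grows, mimicking the
    # "more time -> better logical probabilities" trade-off.
    import random

    def beliefs_at_stage(stage, sentence):
        """Hypothetical credence in `sentence` after `stage` steps of
        reasoning; the noise term shrinks as the stage parameter grows."""
        ground_truth = hash(sentence) % 2              # pretend 0/1 truth value
        noise = random.gauss(0, 1.0 / (1 + stage))
        return min(1.0, max(0.0, ground_truth + noise))

    def initial_decision_process(stage, problems):
        """Commit once, up front, to an action for every problem, using
        only the credences available at the chosen stage."""
        return {p: ("act" if beliefs_at_stage(stage, p) > 0.5 else "refrain")
                for p in problems}

    # The kind of statement described above: fix the stage high enough,
    # then only score the committed policy on sufficiently late problems.
    problems = [f"problem_{i}" for i in range(1000)]
    policy = initial_decision_process(stage=10**6, problems=problems)
    far_out_score = sum(policy[p] == ("act" if hash(p) % 2 else "refrain")
                        for p in problems[900:]) / 100

In this toy all problems are statistically alike, so restricting the score to the tail does no real work; it is only there to mirror the structure of the claim (set the stage parameter sufficiently high, then evaluate only on sufficiently far-out problems).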



