Intelligent Agent Foundations Forum
Asymptotic Logical Uncertainty: Iterated Resource Bounded Solomonoff Induction
post by Scott Garrabrant 1429 days ago | Sam Eisenstat, Abram Demski and Jessica Taylor like this | discuss

This post is part of the Asymptotic Logical Uncertainty series. In this post, I present a possible fix to the strategy of using resource bounded Solomonoff induction for asymptotic logical uncertainty. This algorithm came out of conversations with Benja Fallenstein when I visited MIRI this week.

Warning: I do not have any positive results about this algorithm yet. I just suspect that it has a nontrivial chance of having some good properties, so I wanted to share it in case anyone else wants to think about it.

This algorithm takes in a Turing machine \(E\) which outputs an infinite string of bits, and an integer \(n\). The algorithm must “quickly” output a bit, and the probability that it outputs 1 represents the “probability” that the \(n\)th bit output by \(E\) is a 1.

IRBSI(E, n)
    w = ""
    run E for n steps to get the first k bits of its output
    for i = 1 to n:
        sample a random Turing machine M
        for j = 1 to k:
            run M for 2^(n+j-i) steps on input the first j-1 bits output by E
            if M does not output the j-th bit output by E:
                discard M and go back to the sampling step
        run M for 2^n steps on input w
        if M outputs a 1:
            append a 1 to w
        else:
            append a 0 to w
    return the last bit of w
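As a concreteness check, the structure of the algorithm can be sketched in Python. This is a toy model, not the algorithm itself: the names (`irbsi`, `machine_pool`) are mine, “sample a random Turing machine” is replaced by drawing from a fixed pool of predictor functions, and the \(2^{n+j-i}\) and \(2^n\) step bounds are not modeled at all.

```python
import random

def irbsi(e_bits, n, machine_pool, rng):
    """Toy sketch of IRBSI.

    e_bits:       the first k bits output by E (precomputed here; the real
                  algorithm obtains them by running E for n steps).
    machine_pool: stand-in for "sample a random Turing machine" -- a list of
                  candidate predictors mapping a bit prefix to a bit.  The
                  2^(n+j-i) and 2^n resource bounds are NOT modeled.
    Returns the sampled bit for position n.
    """
    k = len(e_bits)
    w = []
    for i in range(n):
        # Screening loop: resample until M reproduces every one of E's
        # first k bits from the bits preceding it.
        while True:
            m = rng.choice(machine_pool)
            if all(m(e_bits[:j]) == e_bits[j] for j in range(k)):
                break
        # Use the surviving M's conditional prediction to extend w by one bit.
        w.append(1 if m(w) == 1 else 0)
    return w[-1]
```

For example, if \(E\) outputs all ones and the pool contains an always-one predictor, the screening loop rejects every other machine, so the algorithm outputs 1.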

The runtime of this algorithm is \(O(n2^n)\), provided that \(E\) is sufficiently slow.

The rough justification behind this algorithm is that the Solomonoff-induction-inspired approach is good at producing accurate probabilities for bits conditioned on all the previous bits. The problems come from the algorithm applying those conditional probabilities to past bits that were necessarily computed with fewer resources.

This algorithm uses Solomonoff induction only to get conditional probabilities, and it combines these conditional probabilities only with other conditional probabilities computed with the exact same \(2^n\) amount of resources. Therefore it does not run into problems from trying to stay consistent with its past, dumber self.
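Concretely, under the assumption that the machine sampled at each stage \(i\) is drawn independently, the distribution of the returned bit is a chain of same-budget conditionals:

\[
\Pr[\mathrm{IRBSI}(E,n)=1] \;=\; \sum_{\substack{w\in\{0,1\}^n \\ w_n=1}} \; \prod_{i=1}^{n} \Pr_i\!\left[w_i \mid w_1\cdots w_{i-1}\right],
\]

where \(\Pr_i\) denotes the probability, over machines \(M\) sampled at stage \(i\) and surviving the screening loop, that running \(M\) for \(2^n\) steps on input \(w_1\cdots w_{i-1}\) outputs \(w_i\). Every factor in the product is computed at the same \(2^n\) budget, which is the point of the construction.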

I do not know how good this algorithm will end up being, but I suspect that it gets simple things like this correct.

It may be that, to get good results from this algorithm, you have to use tricks like those we used with the original Solomonoff approach: sampling the Turing machines in a way that is further biased towards simple machines, or allowing the machines to run longer at the cost of sacrificing some of their probability mass of being sampled.


