Intelligent Agent Foundations Forum
Asymptotic Logical Uncertainty: Introduction
post by Scott Garrabrant

In this post, I will introduce a new way of thinking about logical uncertainty. The main goal of logical uncertainty is to learn how to assign probabilities to logical sentences which have not yet been proven true or false.

One common approach is to change the question: assume logical omniscience, and only try to assign probabilities to sentences that are independent of your axioms (in the hope that this gives insight into the original problem). Another approach is to limit yourself to a finite set of sentences or deductive rules, and assume logical omniscience within that set. Yet another approach is to try to define and understand logical counterfactuals, so that you can assign probabilities to inconsistent counterfactual worlds.

One thing all three of these approaches have in common is that they retain (a limited form of) logical omniscience. This makes a lot of sense: we want a system that not only assigns decent probabilities, but whose behavior we can formally prove is decent. By giving the system a type of logical omniscience, you make it predictable, which allows you to prove things about it.

However, there is another way to make it possible to prove things about a logical uncertainty system. We can take a program which assigns probabilities to sentences, and let it run forever. We can then ask about whether or not the system eventually gives good probabilities.

At first, it seems like this approach cannot work for logical uncertainty. Any machine which searches through all possible proofs will eventually give a good probability (1 or 0) to any provable or disprovable sentence. To counter this, as we give the machine more and more time to think, we have to ask it harder and harder questions.

We therefore have to analyze the machine’s behavior not on individual sentences, but on infinite sequences of sentences. For example, instead of asking whether or not the machine quickly assigns probability \(\frac{1}{10}\) to the \((3\uparrow\uparrow\uparrow\uparrow 3)^{\text{rd}}\) digit of \(\pi\) being a \(5\), we look at the sequence:

\(a_n:=\) the probability the machine assigns at timestep \(2^n\) to the \((n\uparrow\uparrow\uparrow\uparrow n)^{\text{th}}\) digit of \(\pi\) being \(5\),

and ask whether or not this sequence converges to \(\frac{1}{10}\).
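The evaluation scheme above can be sketched in code. This is a minimal illustration, not anything from the paper: `toy_machine` is a hypothetical stand-in predictor (a real one would search for proofs; these digit questions are intractable, so it falls back to the base rate), and the convergence check is a crude empirical test over finitely many terms, whereas the actual criterion is convergence of the infinite sequence.

```python
def toy_machine(sentence: str, timesteps: int) -> float:
    """Hypothetical stand-in machine: finding no proof of `sentence`
    within `timesteps` steps, it falls back to the base rate 1/10
    for "digit d of pi is 5"."""
    # A real machine would spend `timesteps` searching for proofs here.
    return 1 / 10

def a(n: int) -> float:
    """a_n: the probability the machine assigns at timestep 2**n to
    "the (n^^^^n)-th digit of pi is 5" (sentence encoded as a string)."""
    sentence = f"digit({n}^^^^{n}, pi) = 5"
    return toy_machine(sentence, 2 ** n)

def converges_to(seq, target, tol=1e-9, tail=20):
    """Crude empirical check: the last `tail` terms lie within tol."""
    return all(abs(x - target) < tol for x in seq[-tail:])

sequence = [a(n) for n in range(1, 40)]
print(converges_to(sequence, 1 / 10))  # True for this trivial machine
```

The point of the setup is visible even in this trivial example: each question is paired with a time budget that grows with its difficulty, so no fixed proof search can settle the whole sequence, and success is judged by the limiting behavior of \(a_n\) rather than by any single answer.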

Here is the list of posts in this sequence:

  1. Introduction
  2. The Benford Test
  3. Solomonoff Induction Inspired Approach
  4. Irreducible Patterns
  5. A Benford Learner
  6. Passing the Benford Test
  7. Connection to Random Logical Extensions
  8. Concrete Failure of the Solomonoff Approach
  9. Uniform Coherence
  10. Uniform Coherence 2
  11. A Modification to the Demski Prior
  12. The Modified Demski Prior is Uniformly Coherent
  13. Self Reference
  14. Iterated Resource Bounded Solomonoff Induction

Disclaimer: This sequence is a paper that I am trying to write. I intend to make minor changes to the posts in this sequence over time. I will try not to make changes that change important content, and will provide a warning if I do. There may be large gaps in time between posts. For now, I am trying to get these posts out quickly, so there may be typos.




