Intelligent Agent Foundations Forum
Asymptotic Logical Uncertainty: Introduction
post by Scott Garrabrant

In this post, I will introduce a new way of thinking about logical uncertainty. The main goal of logical uncertainty is to learn how to assign probabilities to logical sentences which have not yet been proven true or false.

One common approach is to change the question: assume logical omniscience, and only try to assign probabilities to the sentences that are independent of your axioms (in the hope that this gives insight into the original problem). Another approach is to limit yourself to a finite set of sentences or deductive rules, and assume logical omniscience relative to that set. Yet another approach is to try to define and understand logical counterfactuals, so that you can assign probabilities to inconsistent counterfactual worlds.

One thing all three of these approaches have in common is that they retain (a limited form of) logical omniscience. This makes a lot of sense. We want a system that not only assigns decent probabilities, but whose decent behavior we can formally prove. Giving the system a type of logical omniscience makes it predictable, which allows us to prove things about it.

However, there is another way to make it possible to prove things about a logical uncertainty system. We can take a program which assigns probabilities to sentences and let it run forever. We can then ask whether the system eventually gives good probabilities.
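To make this setup concrete, here is a minimal sketch of the kind of object under evaluation: a machine we can query for a probability on any sentence after any number of timesteps. The interface and names (`LogicalUncertaintyMachine`, `probability`) are my own illustration, not anything from the original post.

```python
from typing import Protocol


class LogicalUncertaintyMachine(Protocol):
    """Any program that, run for t timesteps, assigns a probability to a sentence."""

    def probability(self, sentence: str, t: int) -> float:
        """The probability assigned to `sentence` after t timesteps of computation."""
        ...
```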

At first, it seems like this approach cannot work for logical uncertainty. Any machine which searches through all possible proofs will eventually give a good probability (1 or 0) to any provable or disprovable sentence. To counter this, as we give the machine more and more time to think, we have to ask it harder and harder questions.
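As an illustration of why the per-sentence criterion is trivially passable, here is a toy machine (hypothetical, a sketch rather than any real proof searcher) that outputs \(\frac{1}{2}\) until a simulated proof search settles a sentence, then outputs \(1\) or \(0\). On any single provable or disprovable sentence, it is eventually "right"; this is exactly why the evaluation below must use ever-harder sentences instead.

```python
class ExhaustiveProofSearcher:
    """Toy machine: 1/2 until its proof search settles a sentence, then 1 or 0."""

    def __init__(self, decide):
        # `decide(sentence)` simulates proof search, returning a pair
        # (timestep at which a proof or disproof is found, truth value).
        self.decide = decide

    def probability(self, sentence: str, t: int) -> float:
        found_at, truth = self.decide(sentence)
        if t >= found_at:
            return 1.0 if truth else 0.0
        return 0.5


# Example: a sentence whose proof is found at timestep 1000.
m = ExhaustiveProofSearcher(lambda s: (1000, True))
assert m.probability("some sentence", 10) == 0.5
assert m.probability("some sentence", 2000) == 1.0
```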

We therefore have to analyze the machine’s behavior not on individual sentences, but on infinite sequences of sentences. For example, instead of asking whether the machine quickly assigns probability \(\frac{1}{10}\) to the sentence "the \(3\uparrow\uparrow\uparrow\uparrow 3\)rd digit of \(\pi\) is a \(5\)", we look at the sequence:

\(a_n:=\) the probability the machine assigns at timestep \(2^n\) to the \(n\uparrow\uparrow\uparrow\uparrow n\)th digit of \(\pi\) being \(5\),

and ask whether or not this sequence converges to \(\frac{1}{10}\).
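Continuing the sketch, the test sequence can be written out as follows. Digits of \(\pi\) at positions like \(n\uparrow\uparrow\uparrow\uparrow n\) are of course infeasible to compute, so a fixed pseudorandom digit serves as a structural stand-in, and the sentence encoding is arbitrary; everything here is illustrative.

```python
import random


def test_digit(n: int) -> int:
    """Stand-in for the (n 'four-arrow' n)-th digit of pi, which is infeasible
    to compute. A deterministic pseudorandom digit plays the same structural role."""
    return random.Random(n).randrange(10)


def a(machine, n: int) -> float:
    """a_n: the probability the machine assigns, at timestep 2**n, to the
    sentence 'the (n four-arrow n)-th digit of pi is 5'."""
    sentence = f"test_digit({n}) == 5"
    return machine.probability(sentence, 2 ** n)


def tail_near(machine, target: float = 0.1, n_max: int = 40, tol: float = 0.01) -> bool:
    """Crude empirical probe of lim a_n = target: check that late terms stay close.
    Passing this is evidence of convergence, not a proof."""
    return all(abs(a(machine, n) - target) < tol for n in range(n_max // 2, n_max))
```

The formal requirement is the limit \(\lim_{n\to\infty} a_n = \frac{1}{10}\), which no finite check can verify; the posts that follow make this kind of criterion precise.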

Here is the list of posts in this sequence:

  1. Introduction
  2. The Benford Test
  3. Solomonoff Induction Inspired Approach
  4. Irreducible Patterns
  5. A Benford Learner
  6. Passing the Benford Test
  7. Connection to Random Logical Extensions
  8. Concrete Failure of the Solomonoff Approach
  9. Uniform Coherence
  10. Uniform Coherence 2
  11. A Modification to the Demski Prior
  12. The Modified Demski Prior is Uniformly Coherent
  13. Self Reference
  14. Iterated Resource Bounded Solomonoff Induction

Disclaimer: This sequence is a paper that I am trying to write. I intend to make minor changes to the posts in this sequence over time. I will try not to make changes that change important content, and will provide a warning if I do. There may be large gaps in time between posts. For now, I am trying to get these posts out quickly, so there may be typos.




