Intelligent Agent Foundations Forum
Asymptotic Logical Uncertainty: Uniform Coherence 2
post by Scott Garrabrant 1437 days ago | Jessica Taylor and Patrick LaVictoire like this | 1 comment

Part of the Asymptotic Logical Uncertainty series. A couple of weeks ago, I proposed a definition of uniform coherence. I have since changed the definition so that it applies to a more general context. This post is a self-contained definition of uniform coherence, designed so that people can start working on the problem with no background other than this post.

Every Turing machine discussed in this post will have two infinite blank tapes to work with, in addition to an infinite write-only output tape. A special character # on the output tape separates its contents into chunks.

A sentence enumerator is a machine which outputs an infinite number of # symbols, where between each pair of adjacent # symbols is a binary string which encodes a logical sentence \(\phi\). If \(M\) is a sentence enumerator, we let \(M(t)\) denote the sentence written between the two most recent # symbols at time \(t\). If \(M\) has not yet written two # symbols by time \(t\), we say that \(M(t)=\bot\).
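For concreteness, here is a minimal Python sketch of the enumerator interface. The representation (a generator yielding one sentence per #-delimited chunk, with running time simplified to the number of chunks written) is an illustrative assumption, not part of the definition.

```python
from itertools import islice

def constant_enumerator(sentence):
    """A sentence enumerator that writes the same sentence between
    every adjacent pair of # symbols (the #'s are implicit in the
    chunking)."""
    while True:
        yield sentence

def M(make_enumerator, t):
    """M(t): the most recently completed chunk after t chunks have
    been written. Returns None (standing in for bottom) if no chunk
    is complete yet. Note: true machine time is step count, not
    chunk count; counting chunks is a simplification."""
    chunks = list(islice(make_enumerator(), t))
    return chunks[-1] if chunks else None

# Example: the constant enumerator always answers "0=0".
assert M(lambda: constant_enumerator("0=0"), 5) == "0=0"
```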

Similarly, a logical predictor is a machine which outputs an infinite number of # symbols, where between each pair of adjacent # symbols is a binary string which encodes a finite sequence of ordered pairs, each pair consisting of a logical sentence \(\phi\) and a rational number \(0\leq p\leq 1\). If \(L\) is a logical predictor, we let \(L(\phi,t)\) denote the rational number \(p\) such that when \(L\) outputs its \(t^{th}\) # symbol, \((\phi,p)\) has been output more recently than any other pair of the form \((\phi,p^\prime)\). If \(L\) has not output any pair of the form \((\phi,p^\prime)\) by the time it outputs its \(t^{th}\) # symbol, we say that \(L(\phi,t)=\frac{1}{2}\).
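A matching sketch of the lookup convention for a logical predictor. Again, the encoding of the output tape as an iterable of chunks, each a list of (sentence, probability) pairs, is an assumption made for illustration.

```python
from fractions import Fraction
from itertools import islice

def L_lookup(prediction_chunks, phi, t):
    """L(phi, t): scan everything output up through the t-th # symbol
    and return the probability from the most recent pair (phi, p);
    default to 1/2 if no such pair has appeared yet."""
    p = Fraction(1, 2)  # the defined default before phi is mentioned
    for chunk in islice(prediction_chunks, t):
        for sentence, prob in chunk:
            if sentence == phi:
                p = prob  # later occurrences overwrite earlier ones
    return p

# Example: phi gets probability 1/4, then is revised to 3/4.
chunks = [[("phi", Fraction(1, 4))], [], [("phi", Fraction(3, 4))]]
assert L_lookup(chunks, "phi", 2) == Fraction(1, 4)
assert L_lookup(chunks, "phi", 3) == Fraction(3, 4)
assert L_lookup(chunks, "psi", 3) == Fraction(1, 2)
```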

We say that a logical predictor \(L\) is uniformly coherent if the following three axioms hold (a finite-horizon sanity check is sketched after the list):

  1. \[\lim_{t\rightarrow\infty}L(\bot,t)=0.\]

  2. If \(M\) is a sentence enumerator such that for all \(n\) the sentence \(M(n)\rightarrow M(n+1)\) is provable, then \[\lim_{t\rightarrow\infty}L(M(t),t)\] exists.

  3. If \(M_1\), \(M_2\), and \(M_3\) are three sentence enumerators such that for all \(n\) sufficiently large, the sentence “Exactly one of \(M_1(n)\), \(M_2(n)\), and \(M_3(n)\) is true” is provable, then \[\lim_{t\rightarrow\infty}L(M_1(t),t)+L(M_2(t),t)+L(M_3(t),t)=1.\]
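No finite computation can verify these limit statements, but one can probe a candidate predictor at a finite horizon. Below is a heuristic sketch reusing the hypothetical M and L_lookup helpers above; the horizon T, the tolerance, and the Cauchy-style test for axiom 2 are all illustrative choices, not part of the definition.

```python
def finite_horizon_probe(L_fn, bottom, chain, triple, T, tol=1e-2):
    """Flag visible violations of the three axioms at horizon T.
    L_fn(phi, t) plays the role of L; `chain` is a make-enumerator
    whose outputs satisfy the provable M(n) -> M(n+1) condition of
    axiom 2; `triple` is three make-enumerators that are eventually
    provably exhaustive and mutually exclusive, as in axiom 3."""
    # Axiom 1: the probability of bottom should be heading toward 0.
    ok1 = L_fn(bottom, T) < tol
    # Axiom 2: L(M(t), t) should look Cauchy near the horizon
    # (a necessary symptom of the limit existing, not a proof).
    ok2 = abs(L_fn(M(chain, T), T) - L_fn(M(chain, T // 2), T // 2)) < tol
    # Axiom 3: the three probabilities should sum to roughly 1.
    ok3 = abs(sum(L_fn(M(Mi, T), T) for Mi in triple) - 1) < tol
    return ok1, ok2, ok3
```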

Open Question: Do there exist any uniformly coherent logical predictors?

[EDIT: This problem is no longer open]

If there are uniformly coherent logical predictors, how frequently can we make a uniformly coherent logical predictor output a # symbol?

Given a uniformly coherent logical predictor \(L\), observe that for any sentence \(\phi\), axiom 2 implies that there must be some real number \(\mathbb{P}_L(\phi)\) such that \[\lim_{t\rightarrow\infty}L(\phi,t)=\mathbb{P}_L(\phi).\] (Apply axiom 2 to the sentence enumerator that outputs \(\phi\) in every chunk: the implication \(\phi\rightarrow\phi\) is provable, so the limit exists.)

Theorem: If \(L\) is a uniformly coherent logical predictor, then \(\mathbb{P}_L\) is a coherent computably approximable prior.

The proof is similar to the proof given for the old definition, but I encourage anyone who wants to think about this problem to verify this fact for themselves as a simple exercise. You can find a definition of coherence as definition 2 here. Computable approximability just means that the probability assigned to each sentence is the limit of numbers output by a computable process.
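To make the computable-approximability reading concrete: the sequence \(L(\phi,1), L(\phi,2),\ldots\) is itself the required computable process. A minimal sketch, again using the hypothetical L_fn(phi, t) convention from above:

```python
def prior_approximations(L_fn, phi):
    """A computable process whose outputs converge to P_L(phi):
    read off L(phi, t) for t = 1, 2, 3, ... Axiom 2 (applied to the
    constant enumerator for phi) guarantees this sequence converges,
    but provides no computable bound on the rate of convergence."""
    t = 1
    while True:
        yield L_fn(phi, t)
        t += 1
```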



by Scott Garrabrant 1437 days ago

The two main advantages this definition has over the old one are that it allows you to decouple the running time from the Gödel number of the sentence, and that it does not require anything to be “quickly computable.” It notices patterns of all forms, but patterns that take longer to express have more time to be recognized.
