by Sam Eisenstat 1316 days ago | link | parent

We can also construct an example where ${\rm PA} \vdash \phi \rightarrow \psi$ with a short proof, and $\rm PA$ also proves $\phi \rightarrow \neg\psi$, but any such proof is much longer. We only need to put a bound on the proof length in ${\tt A}$'s proof search. Then the argument that ${\tt A()} = 1 \wedge {\tt U()} \ne 0$ proves its own consistency still works, and the resulting proof is rather short: $O(\log n)$ as the proof-length bound $n$ increases. However, there cannot be a proof of ${\tt A()} = 1 \rightarrow {\tt U()} = 10$ within ${\tt A}$'s proof-length bound, since if ${\tt A}$ found one it would immediately take action 1. In this case $\rm PA$ can still prove that ${\tt A()} = 1 \rightarrow {\tt U()} = 10$ simply by running the agent, but the argument above shows that any such proof must be long.
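The agent schema under discussion can be sketched as follows. This is a toy model, not an implementation: `shortest_proof_length` stands in for an actual search over PA-proofs (which is what the real agent would do), and the particular proof length used below is an arbitrary assumption for illustration.

```python
# Toy sketch of a proof-length-bounded agent: A() takes action 1 iff it
# finds a PA-proof of "A() = 1 -> U() = 10" within proof-length bound n.
# `shortest_proof_length` is a hypothetical stand-in for real proof search.

def make_agent(n, shortest_proof_length):
    def A():
        if shortest_proof_length("A() = 1 -> U() = 10") <= n:
            return 1  # proof found within the bound: take action 1
        return 2      # no proof within the bound
    return A

# Assume (purely for illustration) the shortest such proof has length 1000.
toy_lengths = {"A() = 1 -> U() = 10": 1000}
oracle = lambda s: toy_lengths.get(s, float("inf"))

agent_small = make_agent(100, oracle)   # bound below the shortest proof
agent_big = make_agent(10**6, oracle)   # bound above it

assert agent_small() == 2
assert agent_big() == 1
```

The point of the argument is about the case the toy model hides: for the actual agent, no proof of ${\tt A()} = 1 \rightarrow {\tt U()} = 10$ can fit under the bound $n$, even though PA proves that sentence by a (necessarily longer) proof that simulates the agent.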
