A Loophole for Self-Applicative Soundness
discussion post by Alex Appel 411 days ago | Abram Demski likes this | 4 comments

This is a rediscovery of a result from the 1980s that allows concluding $$\phi$$ from $$\Box_{n}(\phi)$$ via an $$\mathcal{O}(n)$$-length proof; even the statement that the theory has no disproof of length $$n$$ or less has a single $$\mathcal{O}(n)$$-length proof. This idea was developed very recently, and I’d put ~2:1 odds on it having a fatal flaw, but it looks extremely promising if it works out.

EDIT: It works. None of the proof-theory checks have been done yet, but it does cause both Löb’s theorem and Critch’s Bounded Parametric Löb result to fail.

So, to begin with, if a theory thinks it is sound, then it is inconsistent, by Löb’s theorem:

$\vdash\Box\bot\to\bot\Rightarrow\vdash\bot$

Well, that didn’t work. What if we give the theory a soundness schema only over proofs of bounded length? Maybe the “there exists a proof” in the standard provability predicate is what causes the problems. Well, then Critch’s Bounded Parametric Löb comes in to ruin our day. The entire proof is reproduced below.

Let $$g(k)$$, $$f(k)$$, and $$h(k)$$ be such that $$f(k)\ge g(k)+h(k)+\log(k)$$, and let $$\psi(k)$$ be a parametric diagonal sentence, so that

$\vdash\forall k:\psi(k)\leftrightarrow(\Box_{g(k)}\psi(k)\to\Box_{f(k)}\bot)$

from which, for all sufficiently large $$k\ge k_{1}$$,

$\vdash\forall k\ge k_{1}:\Box_{g(k)}\psi(k)\to(\Box_{h(k)}\Box_{g(k)}\psi(k)\to\Box_{f(k)}\bot)$

$\vdash\forall k,a:\Box_{a}\psi(k)\to\Box_{\mathcal{E}a}\Box_{a}\psi(k)$ (Bounded Inner Necessitation)

Now, specializing to $$a=g(k)$$, we get

$\vdash\forall k:\Box_{g(k)}\psi(k)\to\Box_{\mathcal{E}g(k)}\Box_{g(k)}\psi(k)$

Now, since $$\mathcal{E}g(k)\le h(k)$$ for all sufficiently large $$k\ge k_{2}$$, the two implications chain together:

$\vdash\forall k\ge k_{2}:\Box_{g(k)}\psi(k)\to\Box_{f(k)}\bot$

Pick a specific value of $$k$$, $$k'$$, which is sufficiently large:

$\vdash\Box_{g(k')}\psi(k')\to\Box_{f(k')}\bot$

By the soundness schema, $$\Box_{f(k')}\bot\to\bot$$.
The length of this proof isn’t constant, because $$f(k')$$ might be really big, so it takes about $$\log(f(k'))$$ characters to write down the single application of the soundness schema.

$\vdash_{\log(f(k'))}\Box_{g(k')}\psi(k')\to\bot$

$\vdash_{\log(f(k'))}\psi(k')$ (by the definition of $$\psi(k')$$)

$\vdash_{\mathcal{E}\log(f(k'))}\Box_{\log(f(k'))}\psi(k')$

Since $$\log(f(k))$$ grows much more slowly than $$g(k)$$, for a sufficiently large $$k'$$ we have $$\log(f(k'))\le g(k')$$, so the line above yields $$\vdash\Box_{g(k')}\psi(k')$$. Combining this with $$\vdash\Box_{g(k')}\psi(k')\to\Box_{f(k')}\bot$$ gives $$\vdash\Box_{f(k')}\bot$$, and the soundness schema then gives $$\vdash\bot$$. So even a soundness schema restricted to bounded proofs produces inconsistency.

by Sam Eisenstat 410 days ago | Alex Appel likes this

As you say, this isn’t a proof, but it wouldn’t be too surprising if this were consistent. There is some $$k \in \mathbb{N}$$ such that $$\square_n \phi \to \phi$$ has a proof of length $$n^k$$, by a result of Pavel Pudlák (“On the length of proofs of finitistic consistency statements in first order theories”). Here I’m making the dependence on $$n$$ explicit, but not the dependence on $$\phi$$. I haven’t looked at it closely, but the proof strategy in Theorems 5.4 and 5.5 suggests that $$k$$ will not depend on $$\phi$$, as long as we only ask for the weaker property that $$\square_n \phi \to \phi$$ be provable in length $$n^k$$ for sentences $$\phi$$ of length at most $$k$$.

by Alex Appel 404 days ago

I found an improved version by Pavel that gives a way to construct a proof of $$\phi$$ from $$\Box_{n}\phi$$ with a length of $$\mathcal{O}(n)$$. The improved version is here.

There are restrictions on this result, though. One is that the C-rule must apply to the logic: this is just the ability to go from $$\exists x:\phi(x)$$ to instantiating a constant $$c$$ such that $$\phi(c)$$. Pretty much all reasonable theorem provers have this. The second restriction is that the theory must be finitely axiomatizable, with no axiom schemas allowed. Again, this isn’t much of a restriction in practice, because NBG set theory, which proves the consistency of ZFC, is finitely axiomatizable.

The proof strategy is basically as follows.
It’s shown that the shortest proof of a statement with quantifier depth $$n$$ must have a length of $$\Omega(n^2)$$ if the maximum quantifier depth in the proof is $$2n$$ or greater. This can be flipped around to conclude that if there’s a length-$$n$$ proof of $$\phi$$, the maximum quantifier depth in the proof can be at most $$\mathcal{O}(\sqrt{n})$$.

The second part of the proof involves constructing a bounded-quantifier-depth version of a truth predicate. By Tarski’s undefinability of truth, a full truth predicate cannot be constructed, but it’s possible to exhibit a formula $$Sat_{n}$$ for which it’s provable that

$qd(\overline{\psi})\le n\to(Sat_{n}(\overline{\psi},x)\leftrightarrow\Sigma(Sat_{n},\overline{\psi},x))$

($$\Sigma$$ is the formula laying out Tarski’s conditions for something to be a truth predicate). Also, if $$n\ge$$ the quantifier depth of $$\psi$$, there’s a proof of

$Sat_{n}(\overline{\psi},x)\leftrightarrow\psi[x]$

($$\psi[x]$$ is the sentence $$\psi$$ with its free variables substituted by the elements enumerated in the list $$x$$). Also, there’s a proof that $$Sat_{n}$$ is preserved under inference rules and logical axioms, as long as everything stays below a quantifier depth of $$n$$.

All these proofs can be done in $$\mathcal{O}(n^2)$$ lines. One factor of $$n$$ comes from the formula abbreviated as $$Sat_{n}(x,y)$$ getting longer at a linear rate, and the other factor comes from having to prove the relevant facts about $$Sat_{n}$$ for each $$n$$ separately, as an ingredient for the next proof.
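To make the quantifier-depth bookkeeping above concrete, here is a minimal sketch of computing quantifier depth over a toy formula representation. The tuple encoding and the function name `qd` are hypothetical illustrations, not taken from the post or from Pudlák’s paper:

```python
# Toy formulas as nested tuples: ("forall", body), ("exists", body),
# ("and", left, right), ("not", body), etc., with atoms as strings.
def qd(formula):
    """Quantifier depth: maximum nesting of quantifiers in a formula."""
    if isinstance(formula, str):          # an atom has depth 0
        return 0
    head, *args = formula
    inner = max(qd(a) for a in args)      # deepest subformula
    # Quantifiers add one level of nesting; connectives do not.
    return inner + 1 if head in ("forall", "exists") else inner

# forall x. ((exists y. P(x, y)) and Q(x))  ->  depth 2
phi = ("forall", ("and", ("exists", "P"), "Q"))
print(qd(phi))  # 2
```

The point of the $$\mathcal{O}(\sqrt{n})$$ bound is then that a single $$Sat_{m}$$ with $$m$$ on the order of $$\sqrt{n}$$ already covers every formula that can appear in a length-$$n$$ proof.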
Combining the two parts, the $$\mathcal{O}(\sqrt{n})$$ bound on the quantifier depth and the $$\mathcal{O}(n^2)$$ bound on how long it takes to prove things about the truth predicate together make it take $$\mathcal{O}((\sqrt{n})^2)=\mathcal{O}(n)$$ steps to prove all the relevant theorems about a truth predicate of sufficiently large bounded quantifier depth. Then you can just argue: “the statement that we are claiming to have been proved must have $$Sat_{n}$$ apply to it, and we’ve proved this is equivalent to the statement itself.”

As a further bonus, a single $$\mathcal{O}(n)$$-length proof can establish the consistency of the theory itself for all $$n$$-length proofs.

It seems like a useful project to develop a program that will automatically write a proof of this form, to assess whether abstract unpacking of bounded proofs is usable in practice, but it will require digging into a bunch of finicky details of exactly how to encode a math theory inside itself.

by Alex Appel 410 days ago

Caught a flaw with this proposal in the currently stated form, though it is probably patchable. When unpacking a proof, at some point the sentence $$\Box_{1}\phi$$ will be reached as a conclusion, which is a false statement.

by Sam Eisenstat 409 days ago

I misunderstood your proposal, but you don’t need to do this work to get what you want. You can just take each sentence $$\square_n \phi \to \phi$$ as an axiom, but declare that this axiom takes $$n$$ symbols to invoke. This could be done by changing the notion of length of a proof, or by taking axioms $$\psi_{\phi,n} \to (\square_n \phi \to \phi)$$ and $$\psi_{\phi,n}$$, with $$\psi_{\phi,n}$$ very long.
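Sam’s suggestion can be pictured as a weighted notion of proof length, where citing the bounded-soundness axiom $$\square_n \phi \to \phi$$ is charged $$n$$ symbols even though it only takes about $$\log(n)$$ characters to write down. A minimal sketch of that accounting, with hypothetical step names chosen for illustration:

```python
def step_cost(step):
    """Cost of one proof step under the modified length measure:
    ordinary steps cost their written symbol count, but invoking the
    bounded-soundness axiom  []_n phi -> phi  is charged n symbols."""
    kind, payload = step
    if kind == "soundness_axiom":
        n = payload      # the bound n in []_n phi -> phi
        return n         # charged n, not the ~log(n) symbols written
    return len(payload)  # ordinary step: its literal symbol count

def proof_length(steps):
    """Total weighted length of a proof (a list of steps)."""
    return sum(step_cost(s) for s in steps)

# A proof citing the axiom for n = 1000 pays 1000, however tersely
# the citation is written:
steps = [("line", "box_1000(phi)"), ("soundness_axiom", 1000), ("line", "phi")]
print(proof_length(steps))  # 13 + 1000 + 3 = 1016
```

Under this measure, the cheap step in the Bounded Parametric Löb argument above, the single soundness application costing only $$\log(f(k'))$$ characters, instead contributes $$f(k')$$ to the proof length, which is what blocks the inconsistency.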
