by Patrick LaVictoire 1210 days ago

Sam Eisenstat produced the following counterexample to a stronger version of the revived trolljecture (one using soundness rather than consistency in the escalated theories):

Take a statement $$X$$ in the language of PA such that neither $$X$$ nor $$\neg X$$ has a short proof (less than length $$L$$) in PA+Sound(PA); $$X$$ can be provably true, provably false, or undecidable in that system. Define $$U$$ to equal 10 if $$X\wedge A()=a$$, 0 if $$\neg X\wedge A()=a$$, and 5 if $$A()\neq a$$. Define $$A$$ to equal $$a$$ if there is a short proof in PA (less than length $$L$$) that $$A()=a\to U()=10$$, and $$b$$ otherwise. Clearly $$A()=b$$ and $$U()=5$$.

Now I claim that in the formal system PA+Sound(PA)+$$(A()=a)$$, there is a very short proof of $$U()=10$$. And this is the case whether $$X$$ is true, false, or undecidable!

In order to prove $$U()=10$$, take the axiom $$A()=a$$. By the definition of $$A$$, it follows that there is a short proof in PA (less than length $$L$$) that $$A()=a\to U()=10$$. By the soundness of PA, it follows that in fact $$A()=a\to U()=10$$, and thus $$U()=10$$.

On the other hand, we cannot quickly prove $$U()=0$$, for if we could, then we could quickly prove $$\neg X$$. Things get tricky because we're using the extra (inconsistent) axiom $$A()=a$$, but intuitively, if $$X$$ is some arithmetical assertion that has nothing to do with the decision problem, then there shouldn't be any short proof of $$\neg X$$ in this system either, and so the shortest proof of $$U()=0$$ should be at least comparable to length $$L$$.
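To make the setup concrete, here is a minimal runnable sketch of the universe and agent as Python pseudocode. The bounded proof-search oracle `provable_in_pa`, the length bound `L`, and the boolean stand-in for $$X$$ are hypothetical placeholders introduced for illustration (the real construction quantifies over PA-proofs of length less than $$L$$); the stub returns False because, in the actual world described above, no short proof of $$A()=a\to U()=10$$ exists.

```python
# Minimal sketch of the counterexample's agent/universe structure.
# The bounded proof search is stubbed out; in the real construction it
# would enumerate PA-proofs of length < L.

L = 10**6          # hypothetical proof-length bound
X = True           # stand-in for the arithmetical statement X (either truth value works)

def provable_in_pa(statement: str, max_length: int) -> bool:
    """Placeholder for 'PA proves `statement` with a proof shorter than max_length'.

    In the counterexample, neither X nor its negation has such a short proof,
    and neither does 'A()=a -> U()=10', so in the actual world this is False.
    """
    return False

def A() -> str:
    # Take action a iff there is a short PA-proof that A()=a implies U()=10.
    if provable_in_pa("A()=a -> U()=10", L):
        return "a"
    return "b"

def U() -> int:
    # Payoff: 5 if the agent does not take a; otherwise 10 if X holds, 0 if not.
    if A() != "a":
        return 5
    return 10 if X else 0

print(A(), U())    # prints: b 5
```

Under these stand-ins the program prints `b 5`, matching the observation that $$A()=b$$ and $$U()=5$$; the counterfactual question is then how hard it is to prove $$U()=10$$ versus $$U()=0$$ once the false axiom $$A()=a$$ is adjoined.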

by Alex Appel 1207 days ago

I don't know; $$U()=10$$ seems like a pretty clear consequence of PA+Sound(PA)+$$(A()=a)$$ by that line of reasoning, and the lack of a counterfactual for "X is false" doesn't violate any of my intuitions. It's just reasoning backwards from "the agent takes action $$a$$" to the mathematical state of affairs that must have produced it (there is a short proof that $$A()=a\to U()=10$$).

On second thought, the thing that broke the original trolljecture was exactly this: reasoning backwards from "I take action $$a$$" to the mathematical state of affairs that produced it. Making inferences about the mathematical state of affairs in your counterfactuals using knowledge of your own decision procedure does seem to be a failure mode at first glance.

Maybe use the counterfactual of "find-and-replace all instances of the agent's source code in the universe program $$U$$ with action $$a$$, and evaluate"? But that wouldn't work for different algorithms that depend on checking the same math facts. There needs to be some way to go from "algorithm X takes action A" to "closely related algorithm Y takes action B", and that is just inferring mathematical statements from the combination of actions and knowledge of X's decision rule.

I'll stick with the trolljecture as the current best candidate for "objective" counterfactuals, because reasoning backwards from actions and decision rules a short way into math facts seems necessary to handle "logically related" algorithms, and this counterexample looks intuitively correct.
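One way to read the "find-and-replace" proposal above is as a purely syntactic substitution: evaluate the universe program with every occurrence of the agent replaced by a constant action. The sketch below is an illustrative rendering of that idea, not anything from the comment itself; representing the universe as a function that takes the agent's policy as an argument is an assumption made to keep the example runnable.

```python
# Toy rendering of the "replace the agent with a constant action" counterfactual.
# The universe is written as a function that is handed the agent's policy, so
# "replacing the source code" reduces to passing in a constant policy.

def universe(policy, x_is_true: bool) -> int:
    """Payoff structure from the counterexample, with the agent abstracted out."""
    action = policy()
    if action != "a":
        return 5
    return 10 if x_is_true else 0

def substitution_counterfactual(universe_fn, action: str, x_is_true: bool) -> int:
    """Evaluate the universe with every call to the agent replaced by `action`."""
    return universe_fn(lambda: action, x_is_true)

# If X is false, substituting action a yields 0 (the answer the proof-based
# counterfactual fails to give); substituting b yields 5.
print(substitution_counterfactual(universe, "a", x_is_true=False))  # 0
print(substitution_counterfactual(universe, "b", x_is_true=False))  # 5
```

As the comment notes, this only works when the agent occurs syntactically inside $$U$$; it says nothing about other algorithms that consult the same mathematical facts, which is why some backwards inference into math facts still seems needed.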
