by Benja Fallenstein 1281 days ago | Patrick LaVictoire likes this

> Next, we consider the case that PA is consistent and work through the agent's decision. PA can't prove $$A()\neq1$$, since we used the chicken rule, so since the sentence $$A()=1\to U()=5$$ is easily provable, the sentence $$A()=1\to U()=10$$ (i.e., the first sentence that the agent checks for proofs of) must be unprovable.

It seems like this argument needs soundness of PA, not just consistency of PA. Do you see a way to prove in PA that if $$\mathrm{PA}\vdash A()\neq 1$$, then PA is inconsistent?

[edited to add:] However, your idea reminds me of my post on the odd counterfactuals of playing chicken, and I think the example I gave there makes your idea go through. The scenario is that you get 10 if you take action 1 and it's not provable that you don't take action 1; you get 5 if you take action 2; and you get 0 if you take action 1 and it is provable that you don't (rendered in symbols below). Clearly you should take action 1, but I prove that modal UDT actually takes action 2. To do so, I show that PA proves $$A() = 1 \to \neg\square\ulcorner A() = 1\urcorner$$. (From the outside, $$A() = 2$$ then follows by soundness of PA.)

This seems to make your argument go through if we can also show that PA doesn't prove $$A() \neq 1$$. But if it did, then modal UDT would take action 1 (this comes first in its proof search), a contradiction. Thus, PA proves $$A() = 1 \to U() = 0$$ (because this follows from $$A() = 1 \to \neg\square\ulcorner A() = 1\urcorner$$), while PA doesn't prove $$A() = 1 \to U() = 10$$. As in your argument, the trolljecture then implies that we should regard "if the agent takes action 1, it gets utility 0" as a good counterfactual, and we don't think that's true.

I'm still interested in whether you can make the argument go through in your original case as well, especially if you can use the chicken step in a way I'm not seeing yet. Like Patrick, I'd encourage you to develop this into a post.
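In symbols (my rendering of the prose above, with $$\square$$ abbreviating provability in PA), the scenario's payoffs read:

$$U() = \begin{cases} 10 & \text{if } A() = 1 \wedge \neg\square\ulcorner A() \neq 1 \urcorner, \\ 5 & \text{if } A() = 2, \\ 0 & \text{if } A() = 1 \wedge \square\ulcorner A() \neq 1 \urcorner. \end{cases}$$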

by Sam Eisenstat 1280 days ago | Benja Fallenstein and Patrick LaVictoire like this

The argument that I had in mind was that if $$\mathrm{PA} \vdash A() \neq 1$$, then $$\mathrm{PA} \vdash \square \ulcorner A() \neq 1 \urcorner$$ (provability is a $$\Sigma_1$$ property, and PA proves every true $$\Sigma_1$$ sentence), so $$\mathrm{PA} \vdash A() = 1$$, since PA knows how the chicken rule works. This gives us $$\mathrm{PA} \vdash \bot$$, so PA can prove that if $$\mathrm{PA} \vdash A() \neq 1$$, then PA is inconsistent. I'll include this argument in my post, since you're right that this was too big a jump.

Edit: We also need to use this argument to show that the modal UDT agent gets to the part where it iterates over utilities, rather than taking an action at the chicken rule step (see the sketch below). I didn't mention this explicitly, since I felt I had seen it often enough before, but I now realize it is nontrivial enough to be worth pointing out.
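For concreteness, here is a minimal Python sketch of the decision procedure under discussion, assuming a hypothetical provability oracle `provable` (the function names and the two-action setup are my illustrative assumptions, not code from either post). It shows why the chicken rule step matters: it runs before the utility-iteration loop, so one must separately rule out the agent acting there.

```python
def provable(sentence: str) -> bool:
    """Hypothetical oracle for PA-provability. Undecidable in
    general; a real agent would bound the proof search instead."""
    raise NotImplementedError

def modal_udt(actions=(1, 2), utilities=(10, 5, 0)):
    # Chicken rule: if PA proves the agent does NOT take some
    # action, take that action. This is what makes A() != a
    # unprovable as long as PA is consistent.
    for a in actions:
        if provable(f"A() != {a}"):
            return a
    # Only reached if the chicken step fires for no action:
    # try utilities from best to worst, taking the first action a
    # with a proof of "A() = a -> U() = u".
    for u in sorted(utilities, reverse=True):
        for a in actions:
            if provable(f"A() = {a} -> U() = {u}"):
                return a
    return actions[0]  # fallback default action
```

In this structure, the Edit's point is that the argument above (PA consistent, hence PA doesn't prove $$A() \neq a$$) is exactly what shows the first loop falls through, so control really does reach the utility-iteration loop.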
