by Jessica Taylor, 921 days ago: I'm not sure what the "arbitrarily bad decisions" example is meant to illustrate. If the two agents randomize uniformly between $$r$$ and $$l$$, they each get an expected utility of $$1/2$$, which is greater than $$-1$$.
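(Spelling out the arithmetic, with payoffs assumed for illustration rather than quoted from the post: say matching actions pay $$m$$ and mismatching actions pay $$w$$. Under independent uniform randomization the agents match with probability $$1/2$$, so the expected utility is $$\tfrac{1}{2}m + \tfrac{1}{2}w$$; with $$m = 1$$ and $$w = 0$$ this gives the $$1/2$$ above.)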

by Stuart Armstrong, 921 days ago: But there aren't two players; that's just the model. What I mean is that all these ways of factoring out $$B$$ can lead to arbitrarily bad real expected utility, compared with an agent that doesn't factor.
by Jessica Taylor, 920 days ago: I still don't understand why the expected utility is $$-W$$ rather than $$1/2$$.
by Stuart Armstrong, 920 days ago: In the real world, the utility is given by the diagonal (since $$a$$ and $$a'$$ being different in $$Q(a,a')$$ is the fiction that allows factoring out $$B$$). Therefore the genuine expected utilities lie only on the diagonal, and anything other than $$c$$ will give $$-W$$.
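(In symbols, sketching one reading of the claim rather than quoting the post: the agent's genuine action distribution is the diagonal, $$P(a) = Q(a,a)$$, so the real expected utility is $$\sum_a Q(a,a)\,U(a,a)$$; the off-diagonal entries $$Q(a,a')$$ with $$a \neq a'$$ are bookkeeping for the factoring and never actually occur. If every diagonal action other than $$c$$ has utility $$-W$$, any weight the diagonal puts away from $$c$$ drags this sum toward $$-W$$.)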
by Patrick LaVictoire, 908 days ago: There's nothing in the setup preventing the players from having access to independent random bits, though it's fair to say that these approaches assume such access even when it's absent. But then the fault lies with that assumption of access to randomness, not with any of the constraints on $$Q$$. So I don't think this is a strike against these methods.
by Stuart Armstrong, 907 days ago: I'm not following. This "game" isn't a real game; there are not multiple players. There is one agent, whose real, single-valued probability we have turned into the two-valued $$Q$$ for the purposes of factoring out the impact of the variable. The real probability is the original probability, which is the diagonal of $$Q$$.
