by Jessica Taylor 707 days ago
I'm not sure what the "arbitrarily bad decisions" example is meant to illustrate. If the two agents randomize uniformly between $$r$$ and $$l$$, they each get an expected utility of $$1/2$$, which is greater than $$-1$$.

by Stuart Armstrong 706 days ago
But there aren't two players; that's just the model. What I mean is that all of these ways of factoring out $$B$$ can lead to arbitrarily bad real expected utility, compared with the agent that doesn't factor.
by Jessica Taylor 706 days ago
I still don't understand why the expected utility is $$-W$$ rather than $$1/2$$.
by Stuart Armstrong 705 days ago
In the real world, the utility is given by the diagonal (since $$a$$ and $$a'$$ being different in $$Q(a,a')$$ is the fiction that allows $$B$$ to be factored out). The genuine expected utilities therefore lie only on the diagonal, and anything other than $$c$$ will give $$-W$$.
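In symbols (reconstructing from this thread rather than from the original post): if $$\pi$$ is the agent's policy, the factored objective appears to score it as $$\sum_{a,a'}\pi(a)\pi(a')Q(a,a')$$, as if the two arguments were chosen independently, while the realized expected utility is the diagonal sum $$\sum_a \pi(a)Q(a,a)$$. The first can be high while the second is $$-W$$.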
by Patrick LaVictoire 694 days ago
There's nothing in the setup preventing the players from having access to independent random bits, though it's fair to say that these approaches assume this to be the case even when it's not. But then the fault lies with that assumption of access to randomness, not with any of the constraints on $$Q$$. So I don't think this is a strike against these methods.
by Stuart Armstrong 692 days ago
I'm not following. This "game" isn't a real game; there are not multiple players. There is one agent: we have taken its real, one-argument probability and turned it into the two-argument $$Q$$ for the purpose of factoring out the variable's impact. The real probability is the original probability, which is the diagonal of $$Q$$.
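A minimal numerical sketch of the disagreement. The payoff matrix below is hypothetical, chosen only so that the two readings reproduce the $$1/2$$ and $$-W$$ figures from this thread; the original post's matrix may differ:

```python
# Hypothetical Q: Q[(a, b)] is the utility when the real action is a but the
# factored-out variable B is computed as if the action were b. Only the
# diagonal a == b can actually happen.
W = 100  # any large penalty; the gap grows without bound as W grows

actions = ["r", "l", "c"]
Q = {
    ("r", "r"): -W,    ("r", "l"): W + 1, ("r", "c"): 0,
    ("l", "r"): W + 1, ("l", "l"): -W,    ("l", "c"): 0,
    ("c", "r"): 0,     ("c", "l"): 0,     ("c", "c"): 0,
}

pi = {"r": 0.5, "l": 0.5, "c": 0.0}  # uniform randomization between r and l

# Two-"player" fiction: both arguments of Q drawn independently from pi.
fiction = sum(pi[a] * pi[b] * Q[(a, b)] for a in actions for b in actions)

# Real world: there is one agent, so a = b and only the diagonal is realizable.
real = sum(pi[a] * Q[(a, a)] for a in actions)

print(fiction)  # 0.5, for every W
print(real)     # -100.0, i.e. -W: arbitrarily bad as W grows
```

Under the fiction, the uniform policy looks fine ($$1/2$$), but its realized value is $$-W$$; in this hypothetical matrix, only the deterministic policy $$c$$ avoids the penalty on the diagonal.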
