by Alex Mennen 503 days ago

The behavior of players with finite strategies is not computable. To see this, let $$n$$-HaltingBot be the strategy that cooperates iff the $$n$$th Turing machine halts (on null input) in fewer steps than the length of the string of predictions. If the $$n$$th Turing machine halts, then the depth of this strategy is the number of steps it takes to halt; if it doesn't halt, then its depth is $$0$$. Either way, it's a finite strategy. But since you're paying attention to the limiting behavior as the length of the string of predictions approaches infinity, in that limit $$n$$-HaltingBot cooperates with the other player iff the $$n$$th program halts, so computing its limiting behavior would solve the halting problem. However, I'm pretty sure the behavior of players with provable finite bounds on the depths of their strategies is computable.

In terms of philosophical reasonableness, I'm kind of skeptical of this. In an actual implementation with real agents, presumably there isn't some prediction oracle assisting the agents, so the agents will have to generate these predictions themselves, and then actually compute the limits of their own and the other player's behavior in order to decide what to do. You also want to avoid being exploited by players who act like ProbabilisticFairBot when the string of predictions is finite but defect when the string is infinite. So I think agents will need to look for proofs that the other player is an agent that behaves according to this framework, with some bound on the depth of their strategy. Two players doing this then end up doing a Löbian handshake that they'll both act according to this framework; but if they need to do a Löbian handshake anyway, they may as well use it to cooperate directly, instead of using it to adopt a different framework that will then lead them to cooperate.
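A minimal sketch of the $$n$$-HaltingBot construction, under the simplifying assumption that the $$n$$th Turing machine is modeled as a Python generator that yields once per computation step and returns when it halts. All names here (`halts_within`, `halting_bot`, the toy machines) are illustrative, not part of the original framework:

```python
def halts_within(machine, step_budget):
    """Simulate `machine` for at most `step_budget` steps; report whether it halted."""
    gen = machine()
    for _ in range(step_budget):
        try:
            next(gen)  # advance the simulated machine by one step
        except StopIteration:
            return True   # the machine halted within the budget
    return False          # budget exhausted without halting

def halting_bot(machine, predictions):
    """n-HaltingBot: cooperate ('C') iff the machine halts in fewer steps
    than the length of the string of predictions; otherwise defect ('D')."""
    return 'C' if halts_within(machine, len(predictions)) else 'D'

def halts_quickly():
    yield; yield; yield  # a toy machine that halts after a few steps

def never_halts():
    while True:
        yield            # a toy machine that runs forever

# For any fixed prediction string, the strategy is finite-depth:
halting_bot(halts_quickly, predictions='DD')        # too short a budget: 'D'
halting_bot(halts_quickly, predictions='DDDDDDDD')  # long enough: 'C'
# But the limiting behavior as len(predictions) -> infinity encodes
# whether the machine halts, which is what makes it uncomputable:
halting_bot(never_halts, predictions='D' * 100)     # 'D' at every finite length
```

The point of the sketch is that each call is trivially computable, since the simulation is step-bounded by the prediction string; only the limit over all prediction lengths is not.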
