Intelligent Agent Foundations Forum
by Jessica Taylor, 827 days ago | Stuart Armstrong likes this

You might be interested in a way of ensuring that two players always play the same mixed strategy in all Nash equilibria of some game:

Assume we have a player \(A\) and a player \(B\). Player \(A\) has some already-specified utility function; we would like player \(B\) to play the same mixed strategy as \(A\). Introduce a new player \(C\) who observes a single action taken by either \(A\) or \(B\) (each with 50% probability, without being told which) and tries to determine whose action it was, getting a utility of 1 for guessing correctly and 0 otherwise. \(B\)'s utility is 1 if \(C\) guesses incorrectly and 0 if \(C\) guesses correctly. In every Nash equilibrium, \(B\) plays the same mixed strategy as \(A\): if \(B\)'s distribution differed from \(A\)'s, a best-responding \(C\) could exploit the difference and guess correctly more than half the time, whereas by copying \(A\)'s distribution exactly, \(B\) guarantees that \(C\) can do no better than chance.
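To make that last step concrete, here is a minimal numerical sketch (mine, not from the comment). For a fixed mixed strategy \(p\) for \(A\) and a candidate strategy \(q\) for \(B\), the accuracy of \(C\)'s best response is \(\frac{1}{2}\sum_a \max(p_a, q_a)\), which equals \(1/2\) exactly when \(q = p\) and exceeds \(1/2\) otherwise; the example strategies and the helper name below are illustrative assumptions.

```python
import numpy as np

def best_response_accuracy(p, q):
    """Accuracy of C's best response when A plays p and B plays q.

    C sees one action, drawn from A or B with probability 1/2 each, and
    guesses the source under which that action is more likely, so its best
    possible accuracy is 0.5 * sum_a max(p[a], q[a]).
    """
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * np.maximum(p, q).sum()

p = np.array([0.5, 0.3, 0.2])  # A's (fixed) mixed strategy, chosen for illustration

print(best_response_accuracy(p, p))                # 0.5  -> C is held to chance, B's payoff is 0.5
print(best_response_accuracy(p, [0.7, 0.2, 0.1]))  # 0.6  -> B's payoff drops to 0.4
print(best_response_accuracy(p, [1.0, 0.0, 0.0]))  # 0.75 -> an even worse deviation for B
```

Since B can always secure a payoff of exactly 0.5 by matching \(p\) (no matter what \(C\) does), and any other distribution gives B strictly less against a best-responding \(C\), matching \(A\) is B's only equilibrium behavior.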

A similar method is used in Appendix A of the reflective oracles paper.


