Intelligent Agent Foundations Forum
A correlated analogue of reflective oracles
post by Jessica Taylor

Summary: Reflective oracles correspond to Nash equilibria. A correlated version of reflective oracles exists and corresponds to correlated equilibria. The set of these objects is convex, which is useful.

(thanks to Sam for a lot of help with coming up with constructions)

Motivation

Reflective oracles assign a probability to each query. The set of reflective oracles is not convex. For example, consider a machine \(M^O() := O(M, 0.5)\) (i.e. a machine that asks whether the probability that it returns 1 is at least 0.5). There are reflective oracles that assign probabilities 0, 0.5, and 1 to the query \(O(M, 0.5)\), but none that assign any other probability, so the set of reflective oracles for this machine isn’t convex. This is for the same reason that the set of Nash equilibria of a game isn’t always convex.
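As a quick sanity check of this example, here is a small Python sketch (not part of the original argument; it assumes the standard reflective-oracle semantics, under which \(O(M, p)\) must answer 1 if \(\mathbb{P}(M() = 1) > p\), must answer 0 if \(\mathbb{P}(M() = 1) < p\), and may randomize on a tie). It scans candidate answer probabilities and confirms that only 0, 0.5, and 1 are consistent:

```python
# Sketch: which answer probabilities x for the query (M, 0.5) are consistent,
# when M^O() simply returns O(M, 0.5)?  Since M returns the oracle's answer,
# P(M() = 1) equals x itself.

def is_consistent(x, tol=1e-9):
    """Standard reflective-oracle condition for the query (M, 0.5)."""
    p_returns_one = x  # M's output distribution is the oracle's own answer
    if p_returns_one > 0.5 + tol:   # must answer 1 with certainty
        return abs(x - 1.0) < tol
    if p_returns_one < 0.5 - tol:   # must answer 0 with certainty
        return abs(x - 0.0) < tol
    return True                     # exactly 0.5: any mixture is allowed

print([x / 100 for x in range(101) if is_consistent(x / 100)])
# -> [0.0, 0.5, 1.0]; the set of consistent probabilities is not convex
```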

Related to the non-convexity, there is in general no continuous way of mapping the numerical parameters of some Turing machines to a reflective oracle for those machines (nor any way of doing so as a Kakutani map). (Likewise, there is no way of Kakutani-mapping the parameters of a game to an \(\epsilon\)-approximate Nash equilibrium of that game.)

This makes decision problems involving reflective oracles harder to analyze: the function mapping agents’ policies to the resulting reflective oracle will be discontinuous.

So to analyze these decision problems, it might be useful to construct a convex set analogous to the non-convex set of reflective oracles. Luckily, the set of correlated equilibria in a game is convex. This post presents an analogue of them in reflective oracle land.

Setup

An oracle (mapping from queries to answers) will be selected at random from some distribution. A query can ask about the distribution over oracles, conditional on the answer to that query.

Definitions: machines, oracles, queries

Let \(\mathcal{M}\) be some finite set of Turing machines that ZFC-provably halt on every input. (We could in principle deal with both infinite sets of Turing machines and possibly-non-halting Turing machines, as ordinary reflective oracles do, but this complicates things somewhat).

An oracle \(O\) maps each machine in \(\mathcal{M}\) to a natural number. The set of oracles is \(\mathcal{O} := \mathcal{M} \rightarrow \mathbb{N}\).

An oracle distribution \(D\) is a probability distribution over oracles. The set of oracle distributions is \(\mathcal{D} := \Delta \mathcal{O}\).

A query to an oracle is a way of asking about an oracle distribution in an “argmax” fashion. A query \(q\) is represented as a list of functions \(q_1, q_2, \ldots, q_{k_q} : \mathcal{O} \rightarrow \mathbb{R}\). Write \(q(D) := \arg\max_{i \in \{1, 2, \ldots, k_q\}} \mathbb{E}_D[q_i(O)]\); in words, the query asks for some \(i\) such that \(q_i\) has maximum expectation under the oracle distribution. Note that:

  1. \(q(D)\) is non-empty for each \(D\)
  2. for each \(i \in \{1, 2, ..., k_q\}\), \(\{D | i \in q(D)\}\) is convex

Let \(\mathcal{Q}\) be the set of queries. Roughly, the allowed queries are those that partition the set of oracle distributions into a bunch of convex polytopes; this allows for arbitrarily fine-precision queries about the distribution.
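To make this concrete, here is a minimal Python sketch of query evaluation (the encoding and the name `evaluate_query` are mine, not from the post: oracles are tuples of answers, one per machine, and answers are 0-based indices into the query’s list of functions):

```python
# An oracle distribution is a dict mapping oracles (tuples of answers, one per
# machine) to probabilities; a query is a list of functions from oracles to reals.

def evaluate_query(query, dist, tol=1e-12):
    """Return q(D): the set of indices i whose q_i has maximal expectation under D."""
    expectations = [sum(p * q_i(o) for o, p in dist.items()) for q_i in query]
    best = max(expectations)
    return {i for i, e in enumerate(expectations) if e >= best - tol}

# Example with two machines: the query asks whether O(M2) is more likely to be 0 or 1.
D = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
q = [lambda o: float(o[1] == 0), lambda o: float(o[1] == 1)]
print(evaluate_query(q, D))  # {0, 1}: a tie, so both answers are in q(D)
```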

Let us interpret the output of each Turing machine (on the empty input) as a query. Some encoding scheme is necessary; I don’t think it matters much. So assume we have a map \(\mathrm{Eval} : \mathcal{M} \rightarrow \mathcal{Q}\). (The main reason for representing queries using Turing machines is to allow quining).

We will impose one additional restriction: for any machine \(M\), the query \(q\) it outputs must not depend on the distribution over \(O(M)\) (i.e. \(q(D)\) is a function only of the joint distribution over the values \(O(M')\) for machines \(M'\) other than \(M\) itself). This is to avoid liar’s paradoxes.

Reflectivity

An oracle distribution \(D\) is reflective if, for each \(M \in \mathcal{M}\), and each \(a\) such that \(D(O(M) = a) > 0\):

\[a \in \mathrm{Eval}(M)(\mathrm{Condition}(D, M, a))\]

where \(\mathrm{Condition}(D, M, a)\) is an oracle distribution formed by conditioning the oracle distribution \(D\) on the event \(O(M) = a\).

In words, an oracle distribution is reflective if each possible answer to each query is a correct thing to say about the distribution over oracles conditional on that answer being given.
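Continuing the sketch above (same encoding; `condition` and `is_reflective` are my names), the reflectivity condition can be checked mechanically for any finitely-supported oracle distribution:

```python
def condition(dist, machine, answer):
    """Condition an oracle distribution on the event O(machine) = answer."""
    total = sum(p for o, p in dist.items() if o[machine] == answer)
    return {o: p / total for o, p in dist.items() if o[machine] == answer}

def is_reflective(dist, eval_map):
    """eval_map: machine index -> query (list of functions of oracles).
    Check that every positive-probability answer a to every machine M satisfies
    a in Eval(M)(Condition(D, M, a))."""
    machines = range(len(next(iter(dist))))
    for m in machines:
        for a in {o[m] for o, p in dist.items() if p > 0}:
            if a not in evaluate_query(eval_map[m], condition(dist, m, a)):
                return False
    return True
```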

Theorem 1: A reflective oracle distribution exists.

Proof:

The oracle distribution we construct will independently roll a die for each machine to produce \(O\). Specifically, it is determined by a mapping \(\tilde{O} : \mathcal{M} \rightarrow \Delta \mathbb{N}\); let the function \(d\) map \(\tilde{O}\) to the oracle distribution obtained by sampling each \(O(M)\) independently from \(\tilde{O}(M)\).

Define \(f(\tilde{O}) := \times_{M \in \mathcal{M}} \Delta(\mathrm{Eval}(M)(d(\tilde{O})))\), where \(\times\) indicates a Cartesian product, and \(\Delta S\) is the set of probability distributions over the set \(S\) (which is finite in this case). Note that \(f(\tilde{O})\) is always non-empty, closed, and convex, and that \(f\) is upper-hemicontinuous (by Berge’s Maximum Theorem). Restricting each \(\tilde{O}(M)\) to distributions over the finitely many possible answers to \(\mathrm{Eval}(M)\) makes the domain of \(f\) compact and convex, so by Kakutani’s Fixed Point Theorem there is some mapping \(\tilde{O}\) such that \(\tilde{O} \in f(\tilde{O})\).

Consider the oracle distribution \(d(\tilde{O})\). For any \(M \in \mathcal{M}\) and \(O \in \mathrm{Support}(d(\tilde{O}))\), the answer \(O(M)\) lies in the support of \(\tilde{O}(M)\), which is contained in \(\mathrm{Eval}(M)(d(\tilde{O}))\) since \(\tilde{O} \in f(\tilde{O})\). So we have

\[O(M) \in \mathrm{Eval}(M)(d(\tilde{O})) = \mathrm{Eval}(M)(\mathrm{Condition}(d(\tilde{O}), M, O(M)))\]

where the equality follows from (a) the fact that \(\mathrm{Eval}(M)\) does not depend on the distribution over \(O(M)\) and (b) the fact that all \(O(M)\) values are independent under \(d(\tilde{O})\), so conditioning on \(O(M)\) leaves the distribution of the other values unchanged. So \(d(\tilde{O})\) is reflective.

\(\square\)
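As a toy illustration of this construction (reusing the sketch encoding above; the “coordination” queries below and the helper names are mine, not from the post), an independent product of per-machine answer distributions that is a fixed point of \(f\) does pass the reflectivity check:

```python
import itertools

# Two machines whose queries ask which answer of the *other* machine is more
# likely (so neither query depends on that machine's own answer).
eval_map = {
    0: [lambda o: float(o[1] == 0), lambda o: float(o[1] == 1)],
    1: [lambda o: float(o[0] == 0), lambda o: float(o[0] == 1)],
}

def product_distribution(tilde_O):
    """d(tilde_O): combine per-machine answer distributions by independent sampling."""
    dist = {}
    for combo in itertools.product(*[d.items() for d in tilde_O]):
        answers = tuple(a for a, _ in combo)
        prob = 1.0
        for _, p in combo:
            prob *= p
        dist[answers] = prob
    return dist

# The uniform product is a fixed point here (every answer ties for the argmax),
# and indeed it is reflective.
tilde_O = [{0: 0.5, 1: 0.5}, {0: 0.5, 1: 0.5}]
print(is_reflective(product_distribution(tilde_O), eval_map))  # True
```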

Theorem 2: The set of reflective oracle distributions is convex.

Proof:

Let \(D_0, D_1\) be reflective oracle distributions. Let \(\theta \in (0, 1)\). Define \(D_\theta := \theta D_0 + (1 - \theta) D_1\). This proof will show that \(D_\theta\) is reflective.

Let \(M \in \mathcal{M}\). Let \(a\) be such that \(D_\theta(O(M) = a) > 0\). Consider 3 cases:

  1. \(D_0(O(M) = a) > 0, D_1(O(M) = a) = 0\). Then \(a \in \mathrm{Eval}(M)(\mathrm{Condition}(D_0, M, a))\). But since \(D_1(O(M) = a) = 0\), we have \(\mathrm{Condition}(D_\theta, M, a) = \mathrm{Condition}(D_0, M, a)\). So \(a \in \mathrm{Eval}(M)(\mathrm{Condition}(D_\theta, M, a))\) as desired.
  2. \(D_0(O(M) = a) = 0, D_1(O(M) = a) > 0\). This is analogous to case 1.
  3. \(D_0(O(M) = a) > 0, D_1(O(M) = a) > 0\). We have \(a \in \mathrm{Eval}(M)(\mathrm{Condition}(D_0, M, a))\) and \(a \in \mathrm{Eval}(M)(\mathrm{Condition}(D_1, M, a))\). Note that \(\mathrm{Condition}(D_\theta, M, a)\) is a convex combination of \(\mathrm{Condition}(D_0, M, a)\) and \(\mathrm{Condition}(D_1, M, a)\) (this is a basic property of mixture distributions: the conditioning of a mixture of components is a mixture of the conditionings of the components). The set \(\{D | a \in \mathrm{Eval}(M)(D)\}\) is convex (from \(\mathrm{Eval}(M)\) being a query); therefore \(a \in \mathrm{Eval}(M)(\mathrm{Condition}(D_\theta, M, a))\), as desired.

\(\square\)
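As a quick numerical instance of Theorem 2 (continuing the sketch; `mix` is my name): the two point-mass distributions for the coordination queries above are each reflective, and so is every mixture of them, even though the intermediate mixtures are correlated rather than independent products.

```python
def mix(d0, d1, theta):
    """theta * d0 + (1 - theta) * d1 as a joint distribution over oracles."""
    support = set(d0) | set(d1)
    return {o: theta * d0.get(o, 0.0) + (1 - theta) * d1.get(o, 0.0) for o in support}

D0 = {(0, 0): 1.0}   # both machines answer 0
D1 = {(1, 1): 1.0}   # both machines answer 1
for theta in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert is_reflective(mix(D0, D1, theta), eval_map)
print("every tested mixture is reflective")
```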

Correspondence with correlated equilibria

Reflective oracle distributions can be used to find correlated equilibria. Say we have a normal-form game with \(n\) players, where each player \(i\) selects an action \(A_i \in \mathcal{A}_i\), and receives utility \(U_i(A_1, ..., A_n)\). Define queries as follows:

\[q_i(D) := \arg\max_{a \in \mathcal{A}_i} \mathbb{E}_D[U_i(O(M_1), ..., O(M_{i-1}), a, O(M_{i+1}), ..., O(M_n))]\]

where the machine \(M_i\) computes a representation of the query \(q_i\) (the recursion works through mutual quining). It’s easy to show that reflective oracle distributions (for the set of machines \(\{M_1, ..., M_n\}\)) correspond exactly with correlated equilibria in the game.
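Here is a sketch of this correspondence on a concrete game (my own encoding, reusing the helpers above, with a standard Chicken payoff matrix): the classic correlated equilibrium of Chicken, viewed as an oracle distribution over the two machines’ answers, passes the reflectivity check for these best-response queries, while the non-equilibrium profile (Dare, Dare) does not.

```python
# Actions: 0 = Dare, 1 = Swerve.  U[i][a1][a2] is player i+1's payoff.
U = [
    [[0, 7], [2, 6]],  # player 1
    [[0, 2], [7, 6]],  # player 2
]

# Machine i's query: argmax over own actions a of E_D[U_i(..., a, ...)], with the
# other player's action read off the oracle, so the query does not depend on the
# machine's own answer.
eval_map_chicken = {
    0: [lambda o, a=a: U[0][a][o[1]] for a in (0, 1)],
    1: [lambda o, a=a: U[1][o[0]][a] for a in (0, 1)],
}

D_ce = {(0, 1): 1/3, (1, 0): 1/3, (1, 1): 1/3}   # never both Dare
print(is_reflective(D_ce, eval_map_chicken))          # True: a correlated equilibrium
print(is_reflective({(0, 0): 1.0}, eval_map_chicken)) # False: (Dare, Dare) is not one
```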

Thus, just as ordinary reflective oracles naturally yield Nash equilibria when causal decision theorists using them play games with each other, reflective oracle distributions naturally yield correlated equilibria in the same setting.

\(\epsilon\)-approximate correlated equilibria can be computed in polynomial time (see lectures 17+18 here). I conjecture that \(\epsilon\)-approximate reflective oracle distributions over a finite set of queries can also be found in polynomial time, perhaps by reducing the problem of finding \(\epsilon\)-approximate reflective oracle distributions to the problem of finding \(\epsilon\)-approximate correlated equilibria.
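For reference, here is a hedged sketch (not from the post, and only for an explicitly given two-player game) of why correlated equilibria are cheap to compute: the defining constraints are linear, so a feasible point of a small linear program suffices. It uses `scipy.optimize.linprog`; the helper name is mine.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def correlated_equilibrium(U1, U2):
    """U1, U2: numpy payoff matrices of shape (n1, n2).  Returns a joint
    distribution over action profiles satisfying the (linear)
    correlated-equilibrium constraints, found as a feasible point of an LP."""
    n1, n2 = U1.shape
    profiles = list(itertools.product(range(n1), range(n2)))
    idx = {prof: k for k, prof in enumerate(profiles)}
    A_ub, b_ub = [], []
    # Player 1: conditional on being recommended a, deviating to a_dev must not help.
    for a in range(n1):
        for a_dev in range(n1):
            row = np.zeros(len(profiles))
            for b in range(n2):
                row[idx[(a, b)]] = U1[a_dev, b] - U1[a, b]
            A_ub.append(row)
            b_ub.append(0.0)
    # Player 2: the symmetric constraints.
    for b in range(n2):
        for b_dev in range(n2):
            row = np.zeros(len(profiles))
            for a in range(n1):
                row[idx[(a, b)]] = U2[a, b_dev] - U2[a, b]
            A_ub.append(row)
            b_ub.append(0.0)
    res = linprog(c=np.zeros(len(profiles)),
                  A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.ones((1, len(profiles))), b_eq=[1.0],
                  bounds=(0, 1))
    assert res.success
    return {prof: res.x[k] for prof, k in idx.items()}

# Chicken again; which correlated equilibrium comes back depends on the solver.
print(correlated_equilibrium(np.array([[0, 7], [2, 6]]),
                             np.array([[0, 2], [7, 6]])))
```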


