Delegative Inverse Reinforcement Learning
post by Vadim Kosoy 168 days ago

We introduce a reinforcement-like learning setting we call Delegative Inverse Reinforcement Learning (DIRL). In DIRL, the agent can, at any point of time, delegate the choice of action to an “advisor”. The agent knows neither the environment nor the reward function, whereas the advisor knows both. Thus, DIRL can be regarded as a special case of CIRL. A similar setting was studied in Clouse 1997, but as far as we can tell, the relevant literature offers few theoretical results and focuses almost exclusively on the MDP case (please correct me if I’m wrong). On the other hand, we consider general environments (not necessarily MDP or even POMDP) and prove a natural performance guarantee.

The use of an advisor allows us to kill two birds with one stone: learning the reward function and safe exploration (i.e. avoiding both the Scylla of “Bayesian paranoia” and the Charybdis of falling into traps). We prove that, given certain assumptions about the advisor, a Bayesian DIRL agent (whose prior is supported on some countable set of hypotheses) is guaranteed to attain most of the value in the slowly falling time discount (long-term planning) limit (assuming one of the hypotheses in the prior is true). The assumption about the advisor is quite strong, but the advisor is not required to be fully optimal: a “soft maximizer” satisfies the conditions. Moreover, we allow for the existence of “corrupt states” in which the advisor stops being a relevant signal, thus demonstrating that this approach can deal with wireheading and avoid manipulating the advisor, at least in principle (the assumption about the advisor is still unrealistically strong). Finally, we consider advisors that don’t know the environment themselves but hold some beliefs about it, and show that in this case the agent converges to Bayes-optimality w.r.t. the advisor’s beliefs, which is arguably the best we can expect.

All the proofs are in the Appendix.

## Notation

The set of natural numbers $${\mathbb{N}}$$ is defined to begin from 0. Given $$n \in {\mathbb{N}}$$, $$[n]$$ denotes the set $$\{m \in {\mathbb{N}}\mid m < n\}$$. Given a logical formula $$\varphi$$, $$[[\varphi]] \in \{0,1\}$$ denotes its truth value.

Given a set $${\mathcal{X}}$$, we denote $${\mathcal{X}}^*:=\bigsqcup_{n \in {\mathbb{N}}} {\mathcal{X}}^n$$, the set of finite strings over the alphabet $${\mathcal{X}}$$. The unique string of length 0 is denoted $${\boldsymbol{\lambda}}$$. We also denote by $${\mathcal{X}}^\omega$$ the set of infinite strings over alphabet $${\mathcal{X}}$$. Given $$\alpha \in {\mathcal{X}}^* \sqcup {\mathcal{X}}^\omega$$ and $$n \in {\mathbb{N}}$$, $$\alpha_n \in {\mathcal{X}}$$ is the $$n$$-th symbol in $$\alpha$$ (i.e. $$\alpha = \alpha_0\alpha_1\alpha_2 \ldots$$) and $$\alpha_{:n} \in {\mathcal{X}}^*$$ is the prefix of length $$n$$ of $$\alpha$$ (ending in $$\alpha_{n-1}$$). Given $$\alpha,\beta \in {\mathcal{X}}^*$$, $${\lvert \alpha \rvert} \in {\mathbb{N}}$$ is the length of $$\alpha$$ and $$\alpha\beta \in {\mathcal{X}}^*$$ is the concatenation of $$\alpha$$ and $$\beta$$. The latter notation is also applicable when $$\beta \in {\mathcal{X}}$$. The notation $$\alpha \sqsubseteq \beta$$ means that $$\alpha$$ is a prefix of $$\beta$$. Given sets $${\mathcal{X}},{\mathcal{Y}}$$, $$x \in {\mathcal{X}}$$ and $$y \in {\mathcal{Y}}$$, we sometimes use the notation $$xy = (x,y) \in {\mathcal{X}}\times {\mathcal{Y}}$$. Given $$\alpha \in ({\mathcal{X}}\times {\mathcal{Y}})^*$$ and $$n \in {\mathbb{N}}$$, $$\alpha_{:n+1/2} \in ({\mathcal{X}}\times {\mathcal{Y}})^* \times {\mathcal{X}}$$ is defined by $$\alpha_{:n+1/2} := (\alpha_{:n}, x)$$ where $$\alpha_n=(x,y)$$.

Given sets $$A$$ and $$B$$, the notation $$f: A {\xrightarrow{\circ}}B$$ means that $$f$$ is a partial mapping from $$A$$ to $$B$$, i.e. a mapping from some set $$\operatorname{dom}{f} \subseteq A$$ to $$B$$.

Given a topological space $$A$$, $$\Delta A$$ is the space of Borel probability measures on $$A$$. When no topology is specified, $$A$$ is understood to be discrete: in this case, $$\mu \in \Delta A$$ can also be regarded as a function from $$A$$ to $$[0,1]$$. The space $${\mathcal{X}}^\omega$$ is understood to have the product topology. Given topological spaces $$A,B$$, $$\mu \in \Delta A$$ and $$\nu \in \Delta B$$, $$\operatorname{supp}{\mu} \subseteq A$$ is the support of $$\mu$$ and $$\mu \times \nu \in \Delta(A \times B)$$ is the product measure. Given $$K$$ a Markov kernel from $$A$$ to $$B$$, $$\mu \ltimes K \in \Delta(A \times B)$$ is the semidirect product of $$\mu$$ and $$K$$. When $$A$$ and $$B$$ are discrete, $${\operatorname{H}}(\mu)$$ is the Shannon entropy of $$\mu$$ (in natural base) and $${\operatorname{D}_{\mathrm{KL}}(\mu \| \nu)}$$ is the Kullback-Leibler divergence from $$\nu$$ to $$\mu$$.

Given $$d \in {\mathbb{N}}$$ and $$x \in {\mathbb{R}}^d$$, we use the notation

${\lVert x \rVert}_1 := \sum_{i < d} {\lvert x_i \rvert}$

${\lVert x \rVert}_\infty := \max_{i < d} {\lvert x_i \rvert}$

The symbols $$o,O,\omega,\Omega,\Theta$$ refer to the usual asymptotic ($$O$$-) notation.

## Results

An interface $${\mathcal{I}}= ({\mathcal{A}},{\mathcal{O}})$$ is a pair of finite sets (“actions” and “observations”). An $${\mathcal{I}}$$-policy is a function $$\pi: {({\mathcal{A}}\times {\mathcal{O}})^*}\rightarrow \Delta{\mathcal{A}}$$. An $${\mathcal{I}}$$-environment is a partial function $$\mu: {({\mathcal{A}}\times {\mathcal{O}})^*}\times {\mathcal{A}}{\xrightarrow{\circ}}\Delta{\mathcal{O}}$$ s.t.

1. $$\{{\boldsymbol{\lambda}}\}\times {\mathcal{A}}\subseteq \operatorname{dom}\mu$$

2. Given $$h \in {({\mathcal{A}}\times {\mathcal{O}})^*}$$ and $$aob \in {\mathcal{A}}\times {\mathcal{O}}\times {\mathcal{A}}$$, $$haob \in \operatorname{dom}\mu$$ iff $$h \in \operatorname{dom}\mu$$ and $$\mu(ha)(o) > 0$$.

It is easy to see that $$\operatorname{dom}\mu$$ is always of the form $$X \times {\mathcal{A}}$$ for some $$X \subseteq {({\mathcal{A}}\times {\mathcal{O}})^*}$$. We denote $$\operatorname{hdom}\mu := X$$.

Given an $${\mathcal{I}}$$-policy $$\pi$$ and an $${\mathcal{I}}$$-environment $$\mu$$, we get $$\mu\bowtie\pi \in \Delta{({\mathcal{A}}\times {\mathcal{O}})^\omega}$$ in the usual way.

An $${\mathcal{I}}$$-reward function is a partial function $$r: ({\mathcal{A}}\times {\mathcal{O}})^* {\xrightarrow{\circ}}[0,1]$$. An $${\mathcal{I}}$$-universe is a pair $$(\mu,r)$$ where $$\mu$$ is an $${\mathcal{I}}$$-environment and $$r$$ is an $${\mathcal{I}}$$-reward function s.t. $$\operatorname{dom}{r} \supseteq \operatorname{hdom}{\mu}$$. We denote the space of $${\mathcal{I}}$$-universes by $$\Upsilon_{\mathcal{I}}$$. Given an $${\mathcal{I}}$$-reward function $$r$$ and $$t \in (0,\infty)$$, we have the associated utility function $${\operatorname{U}}_t^r: {({\mathcal{A}}\times {\mathcal{O}})^\omega}{\xrightarrow{\circ}}[0,1]$$ defined by

${\operatorname{U}}_t^{r}(x):=\frac{\sum_{n=0}^\infty e^{-n/t} r(x_{:n})}{\sum_{n=0}^\infty e^{-n/t}}$

Here and throughout, we use geometric time discount, however this choice is mostly for notational simplicity. More or less all results carry over to other shapes of the time discount function.
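
As a concrete illustration, here is a minimal Python sketch of this utility function, truncating the infinite sums at a finite horizon (the function name and reward values are hypothetical):

```python
import math

def discounted_utility(rewards, t, horizon=100000):
    """Approximate U_t^r for a reward sequence r(x_{:0}), r(x_{:1}), ...
    by truncating both infinite sums at `horizon`. Rewards beyond the
    end of the list are taken equal to the last listed value."""
    num, den = 0.0, 0.0
    for n in range(horizon):
        r = rewards[n] if n < len(rewards) else rewards[-1]
        w = math.exp(-n / t)  # geometric discount weight e^{-n/t}
        num += w * r
        den += w
    return num / den

# A constant reward stream has utility equal to that constant, for any t.
print(discounted_utility([0.7], t=10.0))  # → 0.7 (up to truncation error)

# A reward concentrated at n = 0 matters less as the discount scale grows.
print(discounted_utility([1.0, 0.0], t=1.0) > discounted_utility([1.0, 0.0], t=10.0))  # → True
```

Increasing $$t$$ flattens the weights $$e^{-n/t}$$, which is exactly the long-term planning limit $$t \rightarrow \infty$$ considered throughout.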

Denote $$\Pi_{{\mathcal{I}}}$$ the space of $${\mathcal{I}}$$-policies. An $${\mathcal{I}}$$-metapolicy is a family $$\{\pi_t \in \Pi_{\mathcal{I}}\}_{t \in (0, \infty)}$$, where the parameter $$t$$ is thought of as setting the scale of the time discount. An $${\mathcal{I}}$$-meta-universe is a family $$\{\upsilon_t \in \Upsilon_{\mathcal{I}}\}_{t \in (0, \infty)}$$. This latter concept is useful for analyzing multi-agent systems, where the environment contains other agents and we study the asymptotics when all agents’ time discount scales go to infinity. We won’t focus on the multi-agent case in this essay, but for future reference, it seems useful to make sure the results hold in the greater generality of meta-universes.

Given an $${\mathcal{I}}$$-policy $$\pi$$, an $${\mathcal{I}}$$-universe $$\upsilon$$ and $$t > 0$$, we denote $${\operatorname{EU}}_\upsilon^\pi(t):={\underset{\mu\bowtie\pi}{\operatorname{E}}}[{\operatorname{U}}^r_t]$$ (this is well-defined since $${\operatorname{U}}^r_t$$ is defined on the support of $$\mu\bowtie\pi$$). We also denote $${\operatorname{EU}}_\upsilon^*(t):=\max_{\pi \in \Pi_{\mathcal{I}}} {\operatorname{EU}}_\upsilon^\pi(t)$$. We will omit $${\mathcal{I}}$$ when it is obvious from the context.

# Definition 1

Fix an interface $${\mathcal{I}}$$. Consider $$\pi^*$$ a metapolicy and $${\mathcal{H}}$$ a set of meta-universes. $$\pi^*$$ is said to learn $${\mathcal{H}}$$ when for any $$\upsilon \in {\mathcal{H}}$$

$\lim_{t \rightarrow \infty} ({\operatorname{EU}}_{\upsilon_t}^{*}(t) - {\operatorname{EU}}_{\upsilon_t}^{\pi^*_t}(t)) = 0$

$${\mathcal{H}}$$ is said to be learnable when there exists $$\pi^*$$ that learns $${\mathcal{H}}$$.

Our notion of learnability is closely related to the notion of sublinear regret, as defined in Leike 2016, except that we allow the policy to explicitly depend on the time discount scale. This difference is important: for example, given a single universe $$\upsilon$$, it might be impossible to achieve sublinear regret, but $$\{\upsilon\}$$ is always learnable.

# Proposition 1

Fix an interface $${\mathcal{I}}$$. Consider $${\mathcal{H}}$$ a countable learnable set of meta-universes. Consider any $$\zeta \in \Delta{\mathcal{H}}$$ s.t. $$\operatorname{supp}\zeta = {\mathcal{H}}$$. Consider $$\pi^\zeta$$ a $$\zeta$$-Bayes optimal metapolicy, i.e.

$\pi^\zeta_t \in {\underset{\pi \in \Pi}{\operatorname{arg\,max}}\,} {\underset{\upsilon \sim \zeta}{\operatorname{E}}}[{\operatorname{EU}}_{\upsilon_t}^{\pi}(t)]$

Then, $$\pi^\zeta$$ learns $${\mathcal{H}}$$.

Proposition 1 can be regarded as a “frequentist” justification for Bayesian agents: if some metapolicy is optimal in a “frequentist” sense for the class $${\mathcal{H}}$$ (i.e. learns it), then the Bayes optimal metapolicy is such.

Another handy property of learnability is the following.

# Proposition 2

Fix an interface $${\mathcal{I}}$$. Let $${\mathcal{H}}$$ be a countable set of meta-universes s.t. any finite $$\mathcal{G} \subseteq {\mathcal{H}}$$ is learnable. Then, $${\mathcal{H}}$$ is learnable.

We now introduce the formalism needed to discuss advisors. Define $${\bar{{\mathcal{A}}}}:={\mathcal{A}}\sqcup \{\bot\}$$, $${\bar{{\mathcal{O}}}}:={\bar{{\mathcal{A}}}}\times {\mathcal{O}}$$ and $${\bar{{\mathcal{I}}}}:=({\bar{{\mathcal{A}}}},{\bar{{\mathcal{O}}}})$$. Here, the $${\bar{{\mathcal{A}}}}$$ factor of $${\bar{{\mathcal{O}}}}$$ is the action taken by the advisor, assumed to be observable by the agent. The environments we will consider are s.t. this action is $$\bot$$ unless the agent delegated to the advisor at this point of time, which is specified by the agent taking action $$\bot$$. It will also be the case that whenever the agent takes action $$\bot$$, the advisor cannot take action $$\bot$$.

Denote $${\overline{{\mathcal{A}}\times {\mathcal{O}}}}:= {\bar{{\mathcal{A}}}}\times {\bar{{\mathcal{O}}}}\setminus \{\bot\bot o \mid o \in {\mathcal{O}}\}$$. Given $$abo \in {\overline{{\mathcal{A}}\times {\mathcal{O}}}}$$, we define $$\underline{abo} \in {\mathcal{A}}\times {\mathcal{O}}$$ by

$\underline{abo}:=\begin{cases} ao \text{ if } a\ne\bot \\ bo \text{ if } a=\bot \end{cases}$

Given $$h \in {{\overline{{\mathcal{A}}\times {\mathcal{O}}}}^*}$$, we define $$\underline{h} \in {({\mathcal{A}}\times {\mathcal{O}})^*}$$ by $$\underline{h}_n:=\underline{h_n}$$.
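
To make the bookkeeping concrete, here is a small Python sketch of the map $$h \mapsto \underline{h}$$, representing $$\bot$$ as `None` (the identifiers and example actions are hypothetical):

```python
# A minimal sketch of the bar-interface bookkeeping. The delegation
# symbol ⊥ is represented as None; a bar-transition is a triple
# (a, b, o) of agent action, advisor action and observation.

def effective(a, b, o):
    """Map a bar-transition abo to the underlying transition: the
    advisor's action b is substituted exactly when the agent
    delegated (a = ⊥)."""
    assert not (a is None and b is None), "agent and advisor cannot both play ⊥"
    return (b, o) if a is None else (a, o)

def underline(history):
    """Apply the map coordinatewise to a bar-history."""
    return [effective(a, b, o) for (a, b, o) in history]

print(underline([("left", None, "bump"), (None, "right", "clear")]))
# → [('left', 'bump'), ('right', 'clear')]
```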

# Definition 2

An $${\bar{{\mathcal{I}}}}$$-policy $$\alpha$$ is said to be autonomous when for any $$h \in {{\overline{{\mathcal{A}}\times {\mathcal{O}}}}^*}$$, $$\alpha(h)(\bot)=0$$.

Consider an $${\mathcal{I}}$$-environment $$\mu$$ and an autonomous $${\bar{{\mathcal{I}}}}$$-policy $$\alpha$$, which we think of as the advisor policy. We define the $${\bar{{\mathcal{I}}}}$$-environment $$\bar{\mu}[\alpha]$$ as follows. For any $$h \in {{\overline{{\mathcal{A}}\times {\mathcal{O}}}}^*}$$ s.t. $$\underline{h} \in \operatorname{hdom}\mu$$, $$a,b \in {\bar{{\mathcal{A}}}}$$ and $$o \in {\mathcal{O}}$$:

$\bar{\mu}[\alpha](ha)(bo):=\begin{cases} \mu(\underline{h}a)(o) \text{ if } a\ne\bot,\, b=\bot \\ \alpha(h)(b)\cdot\mu(\underline{h}b)(o) \text{ if } a=\bot,\,b\ne\bot \\ 0 \text{ if } a\ne\bot,\, b\ne\bot \text{ or } a=b=\bot \end{cases}$

It is easy to see that the above is a well-defined $${\bar{{\mathcal{I}}}}$$-environment with $$\operatorname{hdom}{\bar{\mu}[\alpha]} = \{h \in {{\overline{{\mathcal{A}}\times {\mathcal{O}}}}^*} \mid \underline{h} \in \operatorname{hdom}\mu\}$$.

Given an $${\mathcal{I}}$$-universe $$\upsilon=(\mu,r)$$, we define the $${\bar{{\mathcal{I}}}}$$-reward function $$\bar{r}: {{\overline{{\mathcal{A}}\times {\mathcal{O}}}}^*}{\xrightarrow{\circ}}[0,1]$$ by $$\bar{r}(h):=r(\underline{h})$$ and the $${\bar{{\mathcal{I}}}}$$-universe $$\bar{\upsilon}[\alpha]:=(\bar{\mu}[\alpha],\bar{r})$$.

We now introduce the conditions on the advisor policy which will allow us to prove a learnability theorem. First, we specify an advisor that always remains “approximately rational.”

The notation $${\underset{x \sim \rho}{\operatorname{E}}}[X \mid h]$$ will be used to mean $${\underset{x \sim \rho}{\operatorname{E}}}[X \mid h \sqsubseteq x]$$. Given a universe $$\upsilon=(\mu,r)$$ and $$t \in (0, \infty)$$ we define $${\operatorname{V}}_t^\upsilon: \operatorname{hdom}{\mu} \rightarrow [0,1]$$ and $${\operatorname{Q}}_t^\upsilon: \operatorname{hdom}{\mu} \times {\mathcal{A}}\rightarrow [0,1]$$ by

${\operatorname{V}}_t^\upsilon(h):=\max_{\pi \in \Pi} {{\underset{x \sim \mu\bowtie\pi}{\operatorname{E}}}[\frac{\sum_{n={\lvert h \rvert}}^\infty e^{-n/t} r(x_{:n})}{\sum_{n={\lvert h \rvert}}^\infty e^{-n/t}} \mid h]}$

${\operatorname{Q}}_t^\upsilon(ha):=(1-e^{-1/t})r(h)+e^{-1/t}{\underset{o \sim \mu(ha)}{\operatorname{E}}}[{\operatorname{V}}_t^\upsilon(hao)]$

# Definition 3

Fix an interface $${\mathcal{I}}$$. Consider a universe $$\upsilon=(\mu,r)$$. Let $$t,\beta \in (0,\infty)$$. A policy $$\alpha$$ is called strictly $$\beta$$-rational for $$(\upsilon,t)$$ when for any $$h \in \operatorname{hdom}{\mu}$$ and $$a \in {\mathcal{A}}$$

$\alpha(h)(a) \leq \exp{[\beta({\operatorname{Q}}^{\upsilon}_t(ha)-{\operatorname{V}}^\upsilon_t(h))]} \max_{a^* \in {\mathcal{A}}} \alpha(h)(a^*)$
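
To see why a “soft maximizer” satisfies this condition, note that for $$\alpha(h)(a) \propto \exp(\beta {\operatorname{Q}}^{\upsilon}_t(ha))$$ the defining inequality holds with equality, since $${\operatorname{V}}^\upsilon_t(h) = \max_a {\operatorname{Q}}^\upsilon_t(ha)$$. A small numeric sketch in Python (the Q-values below are hypothetical):

```python
import math

def softmax(qs, beta):
    ws = [math.exp(beta * q) for q in qs]
    z = sum(ws)
    return [w / z for w in ws]

def strictly_beta_rational(probs, qs, beta):
    """Check the Definition 3 inequality at a single history:
    alpha(a) <= exp(beta * (Q(a) - V)) * max_a* alpha(a*),
    where V = max_a Q(a)."""
    v = max(qs)
    m = max(probs)
    return all(p <= math.exp(beta * (q - v)) * m + 1e-12
               for p, q in zip(probs, qs))

# Hypothetical Q-values at some history; a softmax advisor with the
# same beta satisfies the condition (with equality).
qs = [0.3, 0.9, 0.55]
assert strictly_beta_rational(softmax(qs, beta=5.0), qs, beta=5.0)
# A hard argmax also satisfies it (being "more rational" than required)...
assert strictly_beta_rational([0.0, 1.0, 0.0], qs, beta=5.0)
# ...while a uniform advisor fails for large beta.
assert not strictly_beta_rational([1/3, 1/3, 1/3], qs, beta=50.0)
```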

Now we deal with the possibility of the advisor becoming “corrupt”. In practical implementations where the “advisor” is a human operator, this can correspond to several types of events: e.g. sabotage of the channel that transmits data from the operator to the AI (“wireheading”), manipulation of the operator, or replacement of the operator by a different entity.

# Definition 4

Fix an interface $${\mathcal{I}}$$. Consider a family $$\{{\mathcal{C}}_t \subseteq {({\mathcal{A}}\times {\mathcal{O}})^*}\}_{t \in (0,\infty)}$$ s.t. for any $$h,g \in {({\mathcal{A}}\times {\mathcal{O}})^*}$$, if $$h \in {\mathcal{C}}_t$$ then $$hg \in {\mathcal{C}}_t$$. We think of $${\mathcal{C}}_t$$ as the set of histories in which a certain event occurred. Consider a meta-universe $$\upsilon=(\mu,r)$$. $${\mathcal{C}}$$ is said to be a $$\upsilon$$-avoidable event when there is a metapolicy $$\pi^*$$ and $$D: (0,\infty) \rightarrow {\mathbb{N}}$$ s.t.

1. $$\lim_{t \rightarrow \infty} {({\operatorname{EU}}_{\upsilon_t}^{*}(t)-{\operatorname{EU}}_{\upsilon_t}^{\pi^*_t}(t))} = 0$$

2. $$D = \omega(t)$$

3. $$\lim_{t \rightarrow \infty} \Pr_{x \sim \mu_t\bowtie\pi^*_t}[\exists n \leq D(t): x_{:n} \in {\mathcal{C}}_t] = 0$$

That is, $${\mathcal{C}}$$ is $$\upsilon$$-avoidable when it is possible to avoid the event for a long time while retaining most of the value. Consider a meta-universe $$\upsilon=(\mu,r)$$ and $${\mathcal{C}}$$ a $$\upsilon$$-avoidable event. Denote $$-\upsilon:=(\mu,1-r)$$. We define the reward function $$r^{\mathcal{C}}_t$$ by

$r^{\mathcal{C}}_t(h):=\begin{cases} r_t(h) \text{ if } h \not\in {\mathcal{C}}_t \\ {\operatorname{V}}^{-\upsilon}_t(h_{:n}) \text{ if } h \in {\mathcal{C}}_t \text{ and } n = \min\{m \in {\mathbb{N}}\mid h_{:m} \in {\mathcal{C}}_t\}\end{cases}$

We think of $$r^{\mathcal{C}}$$ as representing a process wherein once the event represented by $${\mathcal{C}}$$ occurs, the agent starts minimizing the utility function. We also use the notation $$\upsilon^{\mathcal{C}}:=(\mu,r^{\mathcal{C}})$$.

# Definition 5

Consider a meta-universe $$\upsilon=(\mu,r)$$ and $$\beta: (0,\infty) \rightarrow (0,\infty)$$, where we think of the argument of the function $$\beta$$ as the time discount scale. An autonomous $${\bar{{\mathcal{I}}}}$$-metapolicy $$\alpha$$ is called $$\beta$$-rational for $$\upsilon$$ when there exists a $$\upsilon$$-avoidable event $${\mathcal{C}}$$ (that we think of as the advisor becoming “corrupt”) and an autonomous $${\bar{{\mathcal{I}}}}$$-metapolicy $$\alpha^*$$ (representing advisor policy conditional on non-corruption) s.t.

1. For any $$h \in {{\overline{{\mathcal{A}}\times {\mathcal{O}}}}^*}$$ s.t. $$\underline{h} \not\in {\mathcal{C}}_t$$, $$\alpha_t(h) = \alpha^*_t(h)$$.

2. $$\alpha^*_t$$ is strictly $$\beta(t)$$-rational for $$(\bar{\upsilon}^{\mathcal{C}}_t[\alpha^*], t)$$.

Our definition of $$\beta$$-rationality requires the advisor to be extremely averse to corruption: the advisor behaves as if, once corruption occurs, the agent’s policy becomes the worst possible. In general, this seems much too strong: by the time corruption occurs, the agent might have already converged to accurate beliefs about the universe, allowing it to detect the corruption and keep operating without the advisor. Even better, the agent can usually outperform the worst possible policy using the prior alone. Moreover, we could allow corruption to depend on unobservable random events, and differentiate between different degrees of corruption, treating each accordingly. We leave these further improvements for future work.

We are now ready to formulate the main result.

# Theorem

Consider $${\mathcal{H}}= \{\upsilon^k\}_{k \in {\mathbb{N}}}$$ a countable family of $${\mathcal{I}}$$-meta-universes and $$\beta: (0,\infty) \rightarrow (0,\infty)$$ s.t. $$\beta(t) = \omega(t^{2/3})$$. Let $$\{\alpha^k\}_{k \in {\mathbb{N}}}$$ be a family of autonomous $${\bar{{\mathcal{I}}}}$$-metapolicies s.t. for every $$k \in {\mathbb{N}}$$, $$\alpha^k$$ is $$\beta$$-rational for $$\upsilon^k$$. Define $$\bar{{\mathcal{H}}}:=\{\bar{\upsilon}^k[\alpha^k]\}_{k \in {\mathbb{N}}}$$. Then, $$\bar{{\mathcal{H}}}$$ is learnable.

Some remarks:

• By Proposition 1, $$\bar{{\mathcal{H}}}$$ is learned by any Bayes optimal metapolicy with prior supported on $$\bar{{\mathcal{H}}}$$.

• To get a feeling for the condition $$\beta(t)=\omega(t^{2/3})$$, consider an environment where the reward depends only on the last action and observation. In such an environment, an advisor that performs softmax (with constant parameter) on the next reward has $$\beta(t) = \Theta(t)$$. It is thus “more rational” than the required minimum.

• It is easy to see that the Theorem can be generalized by introducing an external penalty (negative reward) for each time the agent delegates to the advisor: as it is, using the advisor already carries a penalty due to its suboptimal choice.

The conditions of the Theorem imply that, in some sense, the advisor “knows” the true environment. This is unrealistic: obviously, we expect the human operator to have some (!) uncertainty about the world. However, we clearly cannot do away with this assumption: if the same action triggers a trap in some universes and is necessary for approaching maximal utility in other universes, and there is no observable difference between the universes before the action is taken, then there is no way to guarantee optimality. The prior knowledge you have about the universe caps the utility you can guarantee to obtain. On the other hand, as an AI designer, one can reasonably expect the AI to do at least as well as possible using the designer’s own knowledge. If running the AI is the designer’s best strategy, the AI should be close to Bayes optimal (in some sense that includes computational bounds et cetera: complications that we currently ignore) with respect to the designer’s posterior rather than with respect to some simple prior. In other words, we need a way to transmit the designer’s knowledge to the AI, without hand-crafting an elaborate prior.

The following shows that DIRL achieves this goal (theoretically, given the considerable simplifying assumptions).

Given an environment $$\mu$$, we define $$\hat{\mu}: {({\mathcal{A}}\times {\mathcal{O}})^*}\rightarrow [0,1]$$ as follows. For $$h = a_0 o_0 a_1 o_1 \ldots a_{n-1} o_{n-1} \in \operatorname{hdom}{\mu}$$

$\hat{\mu}(h):=\prod_{m < n} \mu(h_{:m}a_m)(o_m)$

For $$h \in {({\mathcal{A}}\times {\mathcal{O}})^*}\setminus \operatorname{hdom}{\mu}$$, $$\hat{\mu}(h):=0$$.
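
In code, $$\hat{\mu}$$ is just a product of one-step probabilities. A Python sketch, where an environment is any function from (history so far, action) to a distribution over observations (the coin-flip kernel below is a made-up example):

```python
# A sketch of \hat{mu}: the probability that environment mu produces
# the observations in a history h, given h's actions. Here mu is any
# function (history_so_far, action) -> dict of observation probabilities.

def mu_hat(mu, history):
    """history is a list of (action, observation) pairs."""
    p = 1.0
    for m, (a, o) in enumerate(history):
        p *= mu(history[:m], a).get(o, 0.0)  # one factor per step
    return p

# A memoryless coin-flip environment: observation "h" or "t" w.p. 1/2 each.
coin = lambda h, a: {"h": 0.5, "t": 0.5}
print(mu_hat(coin, [("go", "h"), ("go", "t"), ("go", "h")]))  # → 0.125
```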

Given a family of environments $$\{\mu^k\}_{k \in {\mathbb{N}}}$$ and $$\xi \in \Delta{\mathbb{N}}$$, we will use the notation $${\underset{k \sim \xi}{\boldsymbol{\operatorname{E}}}}[\mu^k]$$ to denote the environment given by

$\operatorname{hdom}{{\underset{k \sim \xi}{\boldsymbol{\operatorname{E}}}}[\mu^k]}:= \bigcup_{k \in \operatorname{supp}\xi} \operatorname{hdom}{\mu^k}$

${\underset{k \sim \xi}{\boldsymbol{\operatorname{E}}}}[\mu^k](ha):=\frac{{\underset{k \sim \xi}{\operatorname{E}}}[ \hat{\mu}^k(h)\mu^k(ha)]}{{\underset{k \sim \xi}{\operatorname{E}}}[ \hat{\mu}^k(h)]}$

# Corollary 1

Consider $$\{\mu^k\}_{k \in {\mathbb{N}}}$$ a countable family of $${\mathcal{I}}$$-meta-environments, $$\{r^K\}_{K \in {\mathbb{N}}}$$ a countable family of reward functions and $$\{\xi^K \in \Delta{\mathbb{N}}\}_{K \in {\mathbb{N}}}$$ s.t. given $$k \in \operatorname{supp}\xi^K$$, $$\operatorname{dom}{r^K} \supseteq \operatorname{hdom}{\mu^k}$$. We think of $$\xi^K$$ as the advisor’s belief about the environment in universe $$K$$. Let $$\beta: (0,\infty) \rightarrow (0,\infty)$$ be s.t. $$\beta(t) = \omega(t^{2/3})$$ and $$\{\alpha^K\}_{K \in {\mathbb{N}}}$$ be a family of autonomous $${\bar{{\mathcal{I}}}}$$-metapolicies s.t. for every $$K \in {\mathbb{N}}$$, $$\alpha^K$$ is $$\beta$$-rational for $$({\underset{k \sim \xi^K}{\boldsymbol{\operatorname{E}}}}[\mu^k], r^K)$$. Let $$\zeta \in \Delta{\mathbb{N}}^2$$ be s.t. for any $$J,j \in {\mathbb{N}}$$

$\Pr_{(K,k) \sim \zeta}[K = J] > 0$

$\Pr_{(K,k) \sim \zeta}[k = j \mid K = J] = \xi^J(j)$

We think of $$\zeta$$ as the agent’s prior and the equations above as stating the agent’s belief that the advisor’s beliefs are “calibrated”. Consider $$\pi^\zeta$$ a $$\zeta$$-Bayes optimal $${\bar{{\mathcal{I}}}}$$-metapolicy, i.e.

$\pi^\zeta_t \in {\underset{\pi \in \Pi_{{\bar{{\mathcal{I}}}}}}{\operatorname{arg\,max}}\,}{{\underset{(K,k) \sim \zeta}{\operatorname{E}}}[{\operatorname{EU}}_{\bar{\mu}^k[\alpha^K],\bar{r}^K}^\pi(t)]}$

Then, for every $$K \in {\mathbb{N}}$$

$\lim_{t \rightarrow \infty} (\max_{\pi \in \Pi_{\mathcal{I}}} {\underset{k \sim \xi^K}{\operatorname{E}}}[{\operatorname{EU}}_{\mu^k,r^K}^\pi(t)] - {\underset{k \sim \xi^K}{\operatorname{E}}}[{\operatorname{EU}}_{\bar{\mu}^k[\alpha^K],\bar{r}^K}^{\pi^\zeta}(t)]) = 0$

If we happen to be so lucky that the advisor’s (presumably justified) belief is supported on a learnable environment class, we get a stronger conclusion.

# Corollary 2

In the setting of Corollary 1, fix $$K \in {\mathbb{N}}$$. Define the set of meta-universes $${\mathcal{H}}^K$$ by

${\mathcal{H}}^K:= \{(\mu^k, r^K) \mid k \in \operatorname{supp}\xi^K\}$

Assume $${\mathcal{H}}^K$$ is learnable. Then, for every $$k \in \operatorname{supp}\xi^K$$

$\lim_{t \rightarrow \infty} ({\operatorname{EU}}_{\mu^k,r^K}^*(t) - {\operatorname{EU}}_{\bar{\mu}^k[\alpha^K],\bar{r}^K}^{\pi^\zeta}(t)) = 0$

We also believe that to some extent DIRL is effective against acausal attack. Indeed, the optimality we get from the Theorem + Proposition 1 holds for any prior. However, the speed of convergence to optimality certainly depends on the prior. It is therefore desirable to analyze this dependency and bound the damage an adversary can do by controlling a certain portion of the prior. We leave this for future work.

# Proposition A.0

Fix an interface $${\mathcal{I}}$$. Consider $${\mathcal{H}}$$ a countable learnable set of meta-universes. Consider any $$\zeta \in \Delta{\mathcal{H}}$$ s.t. $$\operatorname{supp}\zeta = {\mathcal{H}}$$. Consider $$\pi^\zeta$$ a metapolicy s.t.

$\lim_{t \rightarrow \infty} (\max_{\pi \in \Pi} {\underset{\upsilon \sim \zeta}{\operatorname{E}}}[{\operatorname{EU}}_{\upsilon_t}^{\pi}(t)]-\max_{\pi \in \Pi} {\underset{\upsilon \sim \zeta}{\operatorname{E}}}[{\operatorname{EU}}_{\upsilon_t}^{\pi^\zeta_t}(t)]) = 0$

Then, $$\pi^\zeta$$ learns $${\mathcal{H}}$$.

# Proof of Proposition A.0

Fix $$\pi^*$$ a metapolicy that learns $${\mathcal{H}}$$. Consider $$\epsilon > 0$$ and let $${\mathcal{H}}_\epsilon \subseteq {\mathcal{H}}$$ be finite s.t. $$\zeta({\mathcal{H}}\setminus {\mathcal{H}}_\epsilon) < \epsilon$$. For $$t \gg 0$$ and every $$\upsilon \in {\mathcal{H}}_\epsilon$$ we have

${\operatorname{EU}}_{\upsilon_t}^{\pi^*_t}(t) \geq {\operatorname{EU}}_{\upsilon_t}^{*}(t) - \epsilon$

Also

${\underset{\upsilon \sim \zeta}{\operatorname{E}}}[{\operatorname{EU}}_{\upsilon_t}^{\pi^\zeta_t}(t)] \geq {\underset{\upsilon \sim \zeta}{\operatorname{E}}}[{\operatorname{EU}}_{\upsilon_t}^{\pi^*_t}(t)] -\epsilon \geq {\underset{\upsilon \sim \zeta}{\operatorname{E}}}[{\operatorname{EU}}_{\upsilon_t}^{\pi^*_t}(t); \upsilon \in {\mathcal{H}}_\epsilon]-\epsilon$

Combining, we get

${\underset{\upsilon \sim \zeta}{\operatorname{E}}}[{\operatorname{EU}}_{\upsilon_t}^{\pi^\zeta_t}(t)] \geq {\underset{\upsilon \sim \zeta}{\operatorname{E}}}[{\operatorname{EU}}_{\upsilon_t}^{*}(t); \upsilon \in {\mathcal{H}}_\epsilon] - 2\epsilon$

${\underset{\upsilon \sim \zeta}{\operatorname{E}}}[{\operatorname{EU}}_{\upsilon_t}^{\pi^\zeta_t}(t)] \geq {\underset{\upsilon \sim \zeta}{\operatorname{E}}}[{\operatorname{EU}}_{\upsilon_t}^{*}(t)] - {\underset{\upsilon \sim \zeta}{\operatorname{E}}}[{\operatorname{EU}}_{\upsilon_t}^{*}(t); \upsilon \not\in {\mathcal{H}}_\epsilon] - 2\epsilon$

By definition of $${\mathcal{H}}_\epsilon$$, this implies

${\underset{\upsilon \sim \zeta}{\operatorname{E}}}[{\operatorname{EU}}_{\upsilon_t}^{\pi^\zeta_t}(t)] \geq {\underset{\upsilon \sim \zeta}{\operatorname{E}}}[ {\operatorname{EU}}_{\upsilon_t}^{*}(t)] - 3\epsilon$

For any $$\upsilon \in {\mathcal{H}}$$, we get

${\operatorname{EU}}_{\upsilon_t}^{\pi^\zeta_t}(t) \geq {\operatorname{EU}}_{\upsilon_t}^{*}(t) - \frac{3\epsilon}{\zeta(\upsilon)}$

Taking $$\epsilon$$ to 0, we get the desired result.

# Proof of Proposition 1

Immediate from Proposition A.0.

# Proof of Proposition 2

Let $${\mathcal{H}}= \{\upsilon^k\}_{k \in {\mathbb{N}}}$$. For each $$k \in {\mathbb{N}}$$, let $$\pi^k$$ learn $$\{\upsilon^l\}_{l < k}$$. Choose $$\{t^k \in [0,\infty)\}_{k \in {\mathbb{N}}}$$ s.t.

1. $$t^0 = 0$$

2. $$t^k < t^{k+1}$$

3. $$\lim_{k \rightarrow \infty} t^k = \infty$$

4. For any $$l < k$$ and $$t \geq t^k$$, $${\operatorname{EU}}_{\upsilon^l_t}^{\pi^k_t}(t) \geq {\operatorname{EU}}_{\upsilon^l_t}^*(t) - \frac{1}{k+1}$$.

Now define $$\pi^*_t:=\pi_t^{\max\{k \mid t \geq t^k\}}$$. Clearly, $$\pi^*$$ learns $${\mathcal{H}}$$.

# Proposition A.1

Consider $$d \in {\mathbb{N}}$$ and $$x,y \in {\mathbb{R}}^d \setminus \boldsymbol{0}$$. Then

${\lVert \frac{x}{{\lVert x \rVert}_\infty}-\frac{y}{{\lVert y \rVert}_\infty} \rVert}_\infty \leq2d {\lVert \frac{x}{{\lVert x \rVert}_1}-\frac{y}{{\lVert y \rVert}_1} \rVert}_1$

# Proof of Proposition A.1

Without loss of generality, assume $${\lVert x \rVert}_1 = {\lVert y \rVert}_1 = 1$$. For any $$i < d$$, we have

$\frac{x_i}{{\lVert x \rVert}_\infty} - \frac{y_i}{{\lVert y \rVert}_\infty} = \frac{x_i{\lVert y \rVert}_\infty-y_i{\lVert x \rVert}_\infty}{{\lVert x \rVert}_\infty{\lVert y \rVert}_\infty} = \frac{x_i{\lVert y \rVert}_\infty - x_i{\lVert x \rVert}_\infty + x_i{\lVert x \rVert}_\infty - y_i{\lVert x \rVert}_\infty}{{\lVert x \rVert}_\infty{\lVert y \rVert}_\infty}$

$\frac{x_i}{{\lVert x \rVert}_\infty} - \frac{y_i}{{\lVert y \rVert}_\infty} = \frac{x_i({\lVert y \rVert}_\infty - {\lVert x \rVert}_\infty) + (x_i - y_i){\lVert x \rVert}_\infty}{{\lVert x \rVert}_\infty{\lVert y \rVert}_\infty}$

Denote $$r:={\lVert x-y \rVert}_1$$. Obviously, $${\lVert x-y \rVert}_\infty \leq r$$ and therefore $${\lvert x_i-y_i \rvert} \leq r$$ and $${\lvert {\lVert x \rVert}_\infty - {\lVert y \rVert}_\infty \rvert} \leq r$$. We get

${\lvert \frac{x_i}{{\lVert x \rVert}_\infty} - \frac{y_i}{{\lVert y \rVert}_\infty} \rvert} \leq \frac{{\lvert x_i \rvert}r + r{\lVert x \rVert}_\infty}{{\lVert x \rVert}_\infty{\lVert y \rVert}_\infty} \leq \frac{{\lVert x \rVert}_\infty r + r{\lVert x \rVert}_\infty}{{\lVert x \rVert}_\infty{\lVert y \rVert}_\infty} \leq \frac{2r}{{\lVert y \rVert}_\infty}$

Since $${\lVert y \rVert}_1 = 1$$, $${\lVert y \rVert}_\infty \geq \frac{1}{d}$$ yielding the desired result.
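
The inequality is easy to sanity-check numerically on random vectors (this is only a sketch, not a substitute for the proof):

```python
import random

def norm1(v): return sum(abs(c) for c in v)
def norminf(v): return max(abs(c) for c in v)

def check_A1(x, y):
    """Check the Proposition A.1 inequality for nonzero x, y in R^d."""
    d = len(x)
    lhs = norminf([xi / norminf(x) - yi / norminf(y) for xi, yi in zip(x, y)])
    rhs = 2 * d * norm1([xi / norm1(x) - yi / norm1(y) for xi, yi in zip(x, y)])
    return lhs <= rhs + 1e-9

random.seed(0)
for _ in range(1000):
    d = random.randint(1, 6)
    x = [random.uniform(-1, 1) for _ in range(d)]
    y = [random.uniform(-1, 1) for _ in range(d)]
    if norminf(x) > 0 and norminf(y) > 0:
        assert check_A1(x, y)
```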

# Proposition A.2

Consider $${\mathcal{H}}$$ a finite set, $$L: {\mathcal{H}}\rightarrow [0,\infty)$$, $$\zeta \in \Delta{\mathcal{H}}$$ and $$\beta,\epsilon \in (0,\infty)$$. Assume that

1. For any $$k \in {\mathcal{H}}$$, $$\zeta(k) \geq \beta\epsilon$$.

2. $${\underset{\zeta}{\operatorname{E}}}[L] \geq \epsilon$$

Then

${\underset{\zeta}{\operatorname{E}}}[e^{-\beta L}] \leq 1 - (1 - e^{-1})\beta\epsilon$

# Proof of Proposition A.2

Without loss of generality, we can assume that $${\underset{\zeta}{\operatorname{E}}}[L] = \epsilon$$, because otherwise we can rescale $$L$$ by a constant in $$(0,1)$$, which only makes $${\underset{\zeta}{\operatorname{E}}}[e^{-\beta L}]$$ larger. It now follows from conditions 1 and 2 that for any $$k$$, $$L(k) \leq \beta^{-1}$$ (since $$\zeta(k) L(k) \leq {\underset{\zeta}{\operatorname{E}}}[L] = \epsilon$$ and $$\zeta(k) \geq \beta\epsilon$$) and therefore $$\beta L(k) \in [0,1]$$. We have

$L(k) = (1 - \beta L(k)) \cdot 0 + \beta L(k) \cdot \beta^{-1}$

Since $$e^{-\beta x}$$ is a convex function, we get

$e^{-\beta L(k)} \leq (1 - \beta L(k)) \cdot e^{-\beta \cdot 0} + \beta L(k) \cdot e^{-\beta \cdot \beta^{-1}} = 1 - \beta L(k) + e^{-1} \beta L(k) = 1-(1-e^{-1})\beta L(k)$

${\underset{\zeta}{\operatorname{E}}}[e^{-\beta L}] \leq 1-(1-e^{-1})\beta {\underset{\zeta}{\operatorname{E}}}[L] = 1-(1-e^{-1})\beta\epsilon$
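
A quick numeric sanity check of the bound, instantiating conditions 1 and 2 with equality on random instances (a sketch with made-up values):

```python
import math, random

def lhs_rhs(zeta, L, beta, eps):
    """Left and right sides of the Proposition A.2 bound."""
    lhs = sum(z * math.exp(-beta * l) for z, l in zip(zeta, L))
    rhs = 1 - (1 - math.exp(-1)) * beta * eps
    return lhs, rhs

random.seed(1)
for _ in range(1000):
    n = random.randint(1, 5)
    w = [random.uniform(0.2, 1.0) for _ in range(n)]
    zeta = [wi / sum(w) for wi in w]
    L = [random.uniform(0.0, 3.0) for _ in range(n)]
    EL = sum(z * l for z, l in zip(zeta, L))
    if EL == 0:
        continue
    eps = EL                 # condition 2 with equality
    beta = min(zeta) / eps   # condition 1 with equality
    lhs, rhs = lhs_rhs(zeta, L, beta, eps)
    assert lhs <= rhs + 1e-9
```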

# Proposition A.3

Consider $${\mathcal{H}}$$ and $${\mathcal{A}}$$ finite sets, $$L: {\mathcal{H}}\times {\mathcal{A}}\rightarrow [0,\infty)$$, $$\zeta \in \Delta{\mathcal{H}}$$, $$\alpha: {\mathcal{H}}\rightarrow \Delta{\mathcal{A}}$$ and $$\beta,\epsilon \in (0,\infty)$$. Assume that

1. For any $$k \in {\mathcal{H}}$$, $$\zeta(k) \geq \beta\epsilon$$.

2. For any $$a \in {\mathcal{A}}$$, $${\underset{k \sim \zeta}{\operatorname{E}}}[L(k,a)] \geq \epsilon$$.

3. For any $$k \in {\mathcal{H}}$$ and $$a \in {\mathcal{A}}$$, $$\alpha(k)(a) \leq \exp[-\beta L(k,a)] \max_{b \in {\mathcal{A}}} \alpha(k)(b)$$.

Define $$\zeta \ltimes \alpha \in \Delta({\mathcal{H}}\times {\mathcal{A}})$$ by $$(\zeta \ltimes \alpha)(k,a):=\zeta(k)\alpha(k)(a)$$. Then, the mutual information $$I$$ between $$k$$ and $$a$$ in the distribution $$\zeta \ltimes \alpha$$ satisfies

$I \geq \frac{(1-e^{-1})^2}{8{\lvert {\mathcal{A}}\rvert}^2} \beta^2 \epsilon^2$

# Proof of Proposition A.3

Define $$\bar{\alpha} \in \Delta{\mathcal{A}}$$ by $$\bar{\alpha}:={\underset{\zeta}{\operatorname{E}}}[\alpha]$$. We have

$I = {\underset{\zeta}{\operatorname{E}}}[{\operatorname{D}_{\mathrm{KL}}(\alpha \| \bar{\alpha})}]$

Applying Pinsker’s inequality

$I \geq 2{\underset{\zeta}{\operatorname{E}}}[{\operatorname{d}_{\text{tv}}}(\alpha,\bar{\alpha})^2]=\frac{1}{2}{\underset{\zeta}{\operatorname{E}}}[{\lVert \alpha-\bar{\alpha} \rVert}_1^2] \geq \frac{1}{2}{\underset{\zeta}{\operatorname{E}}}[{\lVert \alpha-\bar{\alpha} \rVert}_1]^2$

By Proposition A.1

$I \geq \frac{1}{8{\lvert {\mathcal{A}}\rvert}^2}{\underset{\zeta}{\operatorname{E}}}[{\lVert \frac{\alpha}{{\lVert \alpha \rVert}_\infty}-\frac{\bar{\alpha}}{{\lVert \bar{\alpha} \rVert}_\infty} \rVert}_\infty]^2 \geq \frac{1}{8{\lvert {\mathcal{A}}\rvert}^2}{\lVert {\underset{\zeta}{\operatorname{E}}}[\frac{\alpha}{{\lVert \alpha \rVert}_\infty}-\frac{\bar{\alpha}}{{\lVert \bar{\alpha} \rVert}_\infty}] \rVert}_\infty^2=\frac{1}{8{\lvert {\mathcal{A}}\rvert}^2}{\lVert {\underset{\zeta}{\operatorname{E}}}[\frac{\alpha}{{\lVert \alpha \rVert}_\infty}]-\frac{\bar{\alpha}}{{\lVert \bar{\alpha} \rVert}_\infty} \rVert}_\infty^2$

$I \geq \frac{1}{8{\lvert {\mathcal{A}}\rvert}^2}({\lVert \frac{\bar{\alpha}}{{\lVert \bar{\alpha} \rVert}_\infty} \rVert}_\infty-{\lVert {\underset{\zeta}{\operatorname{E}}}[\frac{\alpha}{{\lVert \alpha \rVert}_\infty}] \rVert}_\infty)^2=\frac{1}{8{\lvert {\mathcal{A}}\rvert}^2}(1-{\lVert {\underset{\zeta}{\operatorname{E}}}[\frac{\alpha}{{\lVert \alpha \rVert}_\infty}] \rVert}_\infty)^2$

By condition 3, $$\alpha(k)(a) / {\lVert \alpha(k) \rVert}_\infty \leq \exp[-\beta L(k,a)]$$ and therefore

$I \geq \frac{1}{8{\lvert {\mathcal{A}}\rvert}^2}(1-\max_{a \in {\mathcal{A}}} {\underset{k \sim \zeta}{\operatorname{E}}}[e^{-\beta L(k,a)}])^2$

Applying Proposition A.2, we get

$I \geq \frac{1}{8{\lvert {\mathcal{A}}\rvert}^2}(1-(1-(1-e^{-1})\beta\epsilon))^2=\frac{(1-e^{-1})^2}{8{\lvert {\mathcal{A}}\rvert}^2}\beta^2\epsilon^2$
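As a quick numeric sanity check of this bound, the sketch below builds a small instance satisfying conditions 1–3 and compares the mutual information against the bound. All concrete numbers are hypothetical choices; the soft-max advisor is one example of a policy satisfying condition 3 (each hypothesis has a zero-loss action).

```python
import math

# Hypothetical two-hypothesis, two-action instance of Proposition A.3:
# loss 1 iff the action "disagrees" with the hypothesis.
H = [0, 1]
A = [0, 1]
beta, eps = 1.0, 0.3
zeta = [0.5, 0.5]                      # uniform prior over hypotheses
L = lambda k, a: 0.0 if a == k else 1.0

def alpha(k):
    # Soft-max advisor: alpha(k)(a) proportional to exp(-beta * L(k, a)).
    w = [math.exp(-beta * L(k, a)) for a in A]
    return [x / sum(w) for x in w]

# Verify conditions 1-3.
assert all(zeta[k] >= beta * eps for k in H)                      # condition 1
assert all(sum(zeta[k] * L(k, a) for k in H) >= eps for a in A)   # condition 2
for k in H:
    m = max(alpha(k))
    assert all(alpha(k)[a] <= math.exp(-beta * L(k, a)) * m + 1e-12 for a in A)

# Mutual information between k and a under the joint distribution zeta ⋉ alpha.
abar = [sum(zeta[k] * alpha(k)[a] for k in H) for a in A]
I = sum(zeta[k] * alpha(k)[a] * math.log(alpha(k)[a] / abar[a])
        for k in H for a in A)
bound = (1 - math.e ** -1) ** 2 / (8 * len(A) ** 2) * beta ** 2 * eps ** 2
print(I, bound)   # I ≈ 0.111, bound ≈ 0.0011
assert I >= bound
```

The bound is far from tight on this instance, which is expected: it only needs to separate the mutual information from zero at the stated rate.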

# Proposition A.4

Consider $${\mathcal{A}}$$ a finite set, $$L: {\mathcal{A}}\rightarrow [0,\infty)$$ and $$\alpha \in \Delta{\mathcal{A}}$$ and $$\beta \in (0,\infty)$$ s.t. for any $$a \in {\mathcal{A}}$$

$\alpha(a) \leq e^{-\beta L(a)} \max_{b \in {\mathcal{A}}} \alpha(b)$

Then, $${\underset{\alpha}{\operatorname{E}}}[L] \leq {\lvert {\mathcal{A}}\rvert} e^{-1}\beta^{-1}$$.

# Proof of Proposition A.4

We have

${\underset{\alpha}{\operatorname{E}}}[L] = \sum_{a \in {\mathcal{A}}} \alpha(a) L(a) \leq \max_{b \in {\mathcal{A}}} \alpha(b) \sum_{a \in {\mathcal{A}}} e^{-\beta L(a)} L(a) \leq {\lvert {\mathcal{A}} \rvert} \max_{x \in [0,\infty)} e^{-\beta x} x$

In the last step we used $$\max_{b \in {\mathcal{A}}} \alpha(b) \leq 1$$. We compute $$\max_{x \in [0,\infty)} e^{-\beta x} x$$:

$0=\frac{d}{dx}(e^{-\beta x}x)|_{x=x^*} = -\beta e^{-\beta x^*} x^* + e^{-\beta x^*}$

$x^* = \frac{1}{\beta}$

$e^{-\beta x^*} x^* = \frac{1}{e\beta}$
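A quick numeric check of this bound on a hypothetical soft-max policy (the losses below are arbitrary illustrative values; the soft-max satisfies the premise because the minimal loss is 0):

```python
import math

beta = 5.0
L = [0.0, 0.2, 0.5, 1.3]               # arbitrary losses with min L = 0
w = [math.exp(-beta * l) for l in L]
alpha = [x / sum(w) for x in w]        # soft-max satisfies the premise

m = max(alpha)
assert all(alpha[i] <= math.exp(-beta * L[i]) * m + 1e-12 for i in range(len(L)))

exp_loss = sum(a * l for a, l in zip(alpha, L))
bound = len(L) / (math.e * beta)       # |A| e^(-1) beta^(-1)
print(exp_loss, bound)                 # ≈ 0.080 vs ≈ 0.294
assert exp_loss <= bound

# The interior maximum of x * exp(-beta * x) is attained at x* = 1/beta:
grid = [i / 1000 for i in range(3001)]
assert max(x * math.exp(-beta * x) for x in grid) <= 1 / (math.e * beta) + 1e-9
```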

# Proposition A.5

Consider a universe $$\upsilon=(\mu,r)$$, a policy $$\pi^0$$ and $$t \in (0,\infty)$$. Then,

${\operatorname{EU}}_{\upsilon}^{*}(t) - {\operatorname{EU}}_{\upsilon}^{\pi^0}(t)=\sum_{n=0}^\infty e^{-n/t} {\underset{x\sim\mu\bowtie\pi^0}{\operatorname{E}}}[{\operatorname{V}}^{\upsilon}_t(x_{:n})-{\operatorname{Q}}^{\upsilon}_t(x_{:n+1/2})]$

# Proof of Proposition A.5

For any $$x \in {({\mathcal{A}}\times {\mathcal{O}})^\omega}$$, it is easy to see that

${\operatorname{EU}}_{\upsilon}^{*}(t)={\operatorname{V}}^{\upsilon}_t({\boldsymbol{\lambda}})=\sum_{n=0}^\infty e^{-n/t} ({\operatorname{V}}^{\upsilon}_t(x_{:n})-e^{-1/t}{\operatorname{V}}^{\upsilon}_t(x_{:n+1}))$

${\operatorname{U}}^{r}_t(x)=(1-e^{-1/t})\sum_{n=0}^\infty e^{-n/t} r(x_{:n})$

${\operatorname{EU}}_{\upsilon}^{*}(t) - {\operatorname{U}}^{r}_t(x)=\sum_{n=0}^\infty e^{-n/t} ({\operatorname{V}}^{\upsilon}_t(x_{:n})-(1-e^{-1/t})r(x_{:n})-e^{-1/t}{\operatorname{V}}^{\upsilon}_t(x_{:n+1}))$

${\operatorname{EU}}_{\upsilon}^{*}(t) - {\operatorname{U}}^{r}_t(x)=\sum_{n=0}^\infty e^{-n/t} ({\operatorname{V}}^{\upsilon}_t(x_{:n})-{\operatorname{Q}}^{\upsilon}_t(x_{:n+1/2})+{\operatorname{Q}}^{\upsilon}_t(x_{:n+1/2})-(1-e^{-1/t})r(x_{:n})-e^{-1/t}{\operatorname{V}}^{\upsilon}_t(x_{:n+1}))$

Taking expected value over $$x$$, we get

${\operatorname{EU}}_{\upsilon}^{*}(t) - {\operatorname{EU}}_{\upsilon}^{\pi^0}(t)=\sum_{n=0}^\infty e^{-n/t} ({\underset{\mu\bowtie\pi^0}{\operatorname{E}}}[{\operatorname{V}}^{\upsilon}_t(x_{:n})-{\operatorname{Q}}^{\upsilon}_t(x_{:n+1/2})]+{\underset{\mu\bowtie\pi^0}{\operatorname{E}}}[{\operatorname{Q}}^{\upsilon}_t(x_{:n+1/2})-(1-e^{-1/t})r(x_{:n})-e^{-1/t}{\operatorname{V}}^{\upsilon}_t(x_{:n+1})])$

It is easy to see that the second term vanishes, yielding the desired result.
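Proposition A.5 can be sanity-checked numerically in a toy stateless universe (a hypothetical example, not part of the formal setting): each action yields a deterministic reward and the environment never changes, so $${\operatorname{V}}$$ and $${\operatorname{Q}}$$ have closed forms.

```python
import math

t = 50.0
g = math.exp(-1 / t)                  # one-round discount factor e^(-1/t)
r = {0: 0.2, 1: 0.9}                  # reward of each action, in [0, 1]

V = max(r.values())                   # optimal (normalized) value
Q = {a: (1 - g) * r[a] + g * V for a in r}

pi0 = 0                               # suboptimal fixed policy: always act 0
EU_star, EU_pi0 = V, r[pi0]

# Right hand side of Proposition A.5: sum_n e^(-n/t) * (V - Q(pi0 action)).
rhs = sum(math.exp(-n / t) * (V - Q[pi0]) for n in range(100000))
print(EU_star - EU_pi0, rhs)          # both ≈ 0.7
assert abs((EU_star - EU_pi0) - rhs) < 1e-9
```

In this degenerate universe the identity reduces to a geometric series, but it exercises the same bookkeeping between per-step value gaps and the total utility gap.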

# Lemma A

Consider the setting of the Theorem, but assume that $${\mathcal{H}}= \{\upsilon^k = (\mu^k,r^k)\}_{k < N}$$ for some $$N \in {\mathbb{N}}$$ (i.e. it is finite) and that $$\alpha^k(t)$$ is strictly $$\beta(t)$$-rational for $$\bar{\upsilon}^k[\alpha^k]$$. Denote $$\nu^k:=\bar{\mu}^k[\alpha^k]$$ and $$\operatorname{hdom}{\bar{{\mathcal{H}}}}:= \bigcup_{k < N} \operatorname{hdom}{\nu^k}$$. Denote by $$\zeta^0 \in \Delta[N]$$ the uniform probability distribution. For any $$t > N^3$$, define $$\zeta_t,\tilde{\zeta}_t: \operatorname{hdom}{\bar{{\mathcal{H}}}} \rightarrow \Delta[N]$$ recursively as follows

$\tilde{\zeta}_t({\boldsymbol{\lambda}}):=\zeta^0$

$\tilde{\zeta}_t(hao)(k):=\begin{cases} \tilde{Z}_t(h)^{-1} \cdot \zeta_t(h)(k) \cdot \nu^k(ha)(o) \text{ if } \exists j: \zeta_t(h)(j) \cdot \nu^j(ha)(o) > 0 \\ N^{-1} \text{ otherwise} \end{cases}$

$\zeta_t(h)(k):=Z_t(h)^{-1}\tilde{\zeta}_t(h)(k)[[\tilde{\zeta}_t(h)(k) > t^{-1/3}]]$

In the above, $$Z_t(h)$$ and $$\tilde{Z}_t(h)$$ are normalization factors, chosen to make the probabilities sum to 1. That is, $$\zeta_t(h)$$ is obtained by starting from the prior $$\zeta^0$$, updating on every observation, and setting to 0 the probability of any universe whose probability drops below $$t^{-1/3}$$. When encountering an “impossible” observation we reset to the uniform distribution, but this choice is arbitrary.

Define the “loss function” $$L_t: \operatorname{hdom}{\bar{{\mathcal{H}}}} \times {\mathcal{A}}\rightarrow [0,1]$$ by

$L_t(ha):={\underset{k \sim \zeta_t(h)}{\operatorname{E}}}[{\operatorname{V}}^{\upsilon^k_t}_t(\underline{h})-{\operatorname{Q}}^{\upsilon^k_t}_t(\underline{h}a)]$

Denote $$\epsilon_t:=\beta(t)^{-1}t^{-1/3}$$. Define the following $${\bar{{\mathcal{I}}}}$$-metapolicy $$\pi^*$$:

$\pi^*_t(h):=\begin{cases} {\underset{a \in {\mathcal{A}}}{\operatorname{arg\,min}}\,} L_t(ha) \text{ if } h \in \operatorname{hdom}{\bar{{\mathcal{H}}}},\, \min_{a \in {\mathcal{A}}}{L_t(ha)} < \epsilon_t \\ \bot \text{ if } h \in \operatorname{hdom}{\bar{{\mathcal{H}}}},\,\min_{a \in {\mathcal{A}}}{L_t(ha)} \geq \epsilon_t \\ \bot \text{ if } h \not\in \operatorname{hdom}{\bar{{\mathcal{H}}}}\end{cases}$

(Technically, we only defined $$\pi^*_t$$ for $$t > N^3$$, but it doesn’t matter.) Then, $$\pi^*$$ learns $$\bar{{\mathcal{H}}}$$.
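The belief dynamics and delegation rule of this lemma can be sketched in code. This is a minimal illustration: the likelihoods and loss table below are hypothetical stand-ins for $$\nu^k$$ and the value differences defining $$L_t$$, and $$\beta(t)=t^{0.7}$$ is one schedule satisfying $$\beta(t)=\omega(t^{2/3})$$.

```python
import math

N, t = 2, 1000.0
threshold = t ** (-1 / 3)           # probabilities below this are zeroed out
beta = t ** 0.7                     # hypothetical beta(t) = omega(t^(2/3))
eps_t = threshold / beta            # delegation threshold epsilon_t

def update(zeta, likelihoods):
    """One step of the zeta_t recursion: Bayes update, truncate hypotheses
    below t^(-1/3), renormalize; reset to uniform on an impossible observation."""
    post = [z * l for z, l in zip(zeta, likelihoods)]
    if sum(post) == 0:
        return [1.0 / N] * N        # arbitrary reset, as in the text
    post = [p / sum(post) for p in post]
    post = [p if p > threshold else 0.0 for p in post]
    return [p / sum(post) for p in post]

def act(zeta, loss):
    """pi*: play the arg-min action if its expected loss is below eps_t,
    otherwise delegate (None stands for the delegation symbol)."""
    exp_loss = [sum(z * loss[k][a] for k, z in enumerate(zeta))
                for a in range(len(loss[0]))]
    a = min(range(len(exp_loss)), key=exp_loss.__getitem__)
    return a if exp_loss[a] < eps_t else None

loss = [[0.0, 0.5], [0.5, 0.0]]          # hypothetical V - Q loss table
print(act([0.5, 0.5], loss))             # → None: too uncertain, delegate
zeta = update([0.5, 0.5], [0.99, 0.01])  # observation far likelier under k=0
print(zeta)                              # → [1.0, 0.0]: k=1 fell below t^(-1/3)
print(act(zeta, loss))                   # → 0: confident enough to act directly
```

The truncation step is what drives the proof: whenever the agent delegates, every surviving hypothesis has probability at least $$t^{-1/3}$$, which is exactly condition 1 of Proposition A.3.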

# Proof of Lemma A

For every $$k \in [N]$$, we define $$\zeta^{!k}_t, \tilde{\zeta}^{!k}_t: \operatorname{hdom}{\bar{{\mathcal{H}}}} \rightarrow \Delta[N]$$ and $$S_t^k: \operatorname{hdom}{\bar{{\mathcal{H}}}} \rightarrow 2^{[N]}$$ recursively as follows

$\tilde{\zeta}^{!k}_t({\boldsymbol{\lambda}}):=\zeta^0$

$\tilde{\zeta}^{!k}_t(hao)(i):=\frac{\zeta^{!k}_t(h)(i) \cdot \nu^i(ha)(o)}{\sum_{j < N} \zeta^{!k}_t(h)(j) \cdot \nu^j(ha)(o)}$

$S_t^k(h):=\{i \in [N] \mid \tilde{\zeta}^{!k}_t(h)(i) > t^{-1/3}\}$

$\zeta^{!k}_t(h)(i):= [[k,i \in S_t^k(h)]]\frac{\tilde{\zeta}^{!k}_t(h)(i)}{\sum_{j \in S_t^k(h)}\tilde{\zeta}^{!k}_t(h)(j)}+[[k \not\in S_t^k(h),\,i=k]]$

That is, $$\zeta^{!k}$$ is a belief state that, besides updating on observations, behaves as if, at each moment of time, if the true universe $$k$$ is low probability according to current belief state (i.e. $$k \not\in S^k_t(h)$$), then the true universe is “magically” revealed (i.e. $$\zeta^{!k}$$ becomes the Kronecker delta), and otherwise it updates on the true universe not being revealed. Denote

$\rho_t:= \zeta^0 \ltimes ({\nu_t}\bowtie{\pi^*_t}) \in \Delta([N] \times {{\overline{{\mathcal{A}}\times {\mathcal{O}}}}^\omega})$

Here, the dependence of $${\nu^k_t}\bowtie{\pi^*_t}$$ on $$k$$ is used to view it as a Markov kernel from $$[N]$$ to $${{\overline{{\mathcal{A}}\times {\mathcal{O}}}}^\omega}$$. It is easy to see that, since the probability of “magic revelation” ever happening is at most $$(N-1)t^{-1/3}$$, we have

$\Pr_{(k,x)\sim\rho_t}[\exists n \in {\mathbb{N}}: \zeta_t(x_{:n}) \ne \zeta^{!k}_t(x_{:n})] \leq (N-1)t^{-1/3}$

Let $$\pi^{!k}$$ be defined exactly as $$\pi^*$$ but with $$\zeta$$ replaced by $$\zeta^{!k}$$. Denote $$\rho^!_t:=\zeta^0\ltimes(\nu_t\bowtie\pi^!_t)$$. From the above, we get

${\operatorname{d}_{\text{tv}}}(\rho_t,\rho_t^!) \leq (N-1)t^{-1/3}$

Given $$k \in [N]$$ and $$h \in \operatorname{hdom}{\bar{{\mathcal{H}}}}$$, we define the set $$h^{!k}$$ by

$h^{!k} := \{(j,x) \in [N] \times {{\overline{{\mathcal{A}}\times {\mathcal{O}}}}^\omega}\mid h \sqsubseteq x,\, \forall m \leq {\lvert h \rvert}: (k \in S^k_t(h_{:m}) \land j \in S^j_t(h_{:m})) \lor (k \not\in S^k_t(h_{:m}) \land j=k)\}$

We have

$\Pr_{(j,x)\sim\rho^!_t}[j = i \mid h^{!k}] = \zeta^{!k}_t(h)(i)$

It follows that

${\underset{(j,x)\sim\rho^!_t}{\operatorname{E}}}[{\operatorname{H}}(\zeta^{!j}_t(x_{:{\lvert h \rvert}+1})) \mid h^{!k}] = {\operatorname{H}}(\zeta^{!k}_t(h)) -{\underset{(j,x)\sim\rho^!_t}{\operatorname{E}}}[{\operatorname{D}_{\mathrm{KL}}(\zeta^{!j}_t(x_{:{\lvert h \rvert}+1}) \| \tilde{\zeta}^{!j}_t(x_{:{\lvert h \rvert}+1}))}+{\operatorname{D}_{\mathrm{KL}}(\tilde{\zeta}^{!j}_t(x_{:{\lvert h \rvert}+1}) \| \zeta^{!j}_t(x_{:{\lvert h \rvert}}))}\mid h^{!k}]$

${\underset{(j,x)\sim\rho^!_t}{\operatorname{E}}}[{\operatorname{H}}(\zeta^{!j}_t(x_{:{\lvert h \rvert}+1})) \mid h^{!k}] \leq {\operatorname{H}}(\zeta^{!k}_t(h)) -{\underset{(j,x)\sim\rho^!_t}{\operatorname{E}}}[{\operatorname{D}_{\mathrm{KL}}(\tilde{\zeta}^{!j}_t(x_{:{\lvert h \rvert}+1}) \| \zeta^{!j}_t(x_{:{\lvert h \rvert}}))}\mid h^{!k}]$

If $$\pi^{!k}(h)=\bot$$, we can apply Proposition A.3: condition 1 follows from the definition of $$S^k_t$$ and the observation that $$\beta(t)\epsilon_t = t^{-1/3}$$, condition 2 follows from the fact that we are in the second case in the definition of $$\pi^{!k}$$ (see definition of $$\pi^*$$) and condition 3 follows from the strict $$\beta(t)$$-rationality of $$\alpha^k(t)$$. We conclude

${\underset{(j,x)\sim\rho^!_t}{\operatorname{E}}}[{\operatorname{H}}(\zeta^{!j}_t(x_{:{\lvert h \rvert}+1})) \mid h^{!k}] \leq {\operatorname{H}}(\zeta^{!k}_t(h)) - \frac{(1-e^{-1})^2}{8{\lvert {\mathcal{A}}\rvert}^2} \beta(t)^2 \epsilon_t^2[[\pi^{!k}(h)=\bot]]$

${\underset{(j,x)\sim\rho^!_t}{\operatorname{E}}}[{\operatorname{H}}(\zeta^{!j}_t(x_{:{\lvert h \rvert}+1})) \mid h^{!k}] \leq {\operatorname{H}}(\zeta^{!k}_t(h)) - \frac{(1-e^{-1})^2}{8{\lvert {\mathcal{A}}\rvert}^2} t^{-2/3}[[\pi^{!k}(h)=\bot]]$

Taking $$\rho^!_t$$-expected value over $$(k,h)$$, we conclude that for any $$n \in {\mathbb{N}}$$

${\underset{(k,x)\sim\rho^!_t}{\operatorname{E}}}[{\operatorname{H}}(\zeta^{!k}_t(x_{:n+1}))] \leq {\underset{(k,x)\sim\rho^!_t}{\operatorname{E}}}[{\operatorname{H}}(\zeta^{!k}_t(x_{:n}))] - \frac{(1-e^{-1})^2}{8{\lvert {\mathcal{A}}\rvert}^2} t^{-2/3} \Pr_{(k,x)\sim\rho^!_t}[\pi^{!k}(x_{:n})=\bot]$

By induction, it follows that

${\underset{(k,x)\sim\rho^!_t}{\operatorname{E}}}[{\operatorname{H}}(\zeta^{!k}_t(x_{:n}))] \leq {\operatorname{H}}(\zeta^0) - \frac{(1-e^{-1})^2}{8{\lvert {\mathcal{A}}\rvert}^2} t^{-2/3}{\underset{(k,x)\sim\rho^!_t}{\operatorname{E}}}[{\lvert \{m \in [n] \mid x_m \in \bot \times {\bar{{\mathcal{O}}}}\} \rvert}]$

Since entropy is non-negative and $${\operatorname{H}}(\zeta^0) = \ln N$$, this implies

${\underset{(k,x)\sim\rho^!_t}{\operatorname{E}}}[{\lvert \{m \in [n] \mid x_m \in \bot \times {\bar{{\mathcal{O}}}}\} \rvert}] \leq \frac{8 {\lvert {\mathcal{A}}\rvert}^2 \ln{N}}{(1-e^{-1})^2}t^{2/3}$

${\underset{(k,x)\sim\rho^!_t}{\operatorname{E}}}[{\lvert \{n \in {\mathbb{N}}\mid x_n \in \bot \times {\bar{{\mathcal{O}}}}\} \rvert}] \leq \frac{8 {\lvert {\mathcal{A}}\rvert}^2 \ln{N}}{(1-e^{-1})^2}t^{2/3}$

So, the expected number of delegations is $$O(t^{2/3})$$. For any $$k \in [N]$$, Proposition A.5 yields

${\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{*}(t) - {\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{\pi^{!k}}(t)=\sum_{n=0}^\infty e^{-n/t} {\underset{x \sim \nu^k\bowtie\pi^{!k}}{\operatorname{E}}}[{\operatorname{V}}^{\upsilon^k[\alpha^k]}_t(x_{:n})-{\operatorname{Q}}^{\upsilon^k[\alpha^k]}_t(x_{:n}\pi^{!k}(x_{:n}))]$

Averaging over $$k$$, we get

$\frac{1}{N}\sum_{k < N}({\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{*}(t) - {\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{\pi^{!k}}(t))=\sum_{n=0}^\infty e^{-n/t} {\underset{(k,x)\sim\rho^!_t}{\operatorname{E}}}[{\operatorname{V}}^{\upsilon^k[\alpha^k]}_t(x_{:n})-{\operatorname{Q}}^{\upsilon^k[\alpha^k]}_t(x_{:n}\pi^{!k}(x_{:n}))]$

$\frac{1}{N}\sum_{k < N}({\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{*}(t) - {\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{\pi^{!k}}(t))=\sum_{n=0}^\infty e^{-n/t} {\underset{(k,x)\sim\rho^!_t}{\operatorname{E}}}[{\underset{(j,y)\sim\rho^!_t}{\operatorname{E}}}[{\operatorname{V}}^{\upsilon^j[\alpha^j]}_t(y_{:n})-{\operatorname{Q}}^{\upsilon^j[\alpha^j]}_t(y_{:n}\pi^{!j}(y_{:n})) \mid x_{:n}^{!k}]]$

Using the definitions of $$\zeta^{!k}_t(x_{:n})$$ and $$x_{:n}^{!k}$$, and observing that $$\pi^{!j}(x_{:n})=\pi^{!k}(x_{:n})$$ for $$(j,y) \in x_{:n}^{!k}$$, we get

$\frac{1}{N}\sum_{k < N}({\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{*}(t) - {\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{\pi^{!k}}(t))=\sum_{n=0}^\infty e^{-n/t} {\underset{(k,x)\sim\rho^!_t}{\operatorname{E}}}[{\underset{j\sim\zeta^{!k}(x_{:n})}{\operatorname{E}}}[{\operatorname{V}}^{\upsilon^j[\alpha^j]}_t(x_{:n})-{\operatorname{Q}}^{\upsilon^j[\alpha^j]}_t(x_{:n}\pi^{!k}(x_{:n}))]]$

When $$\pi^{!k}(x_{:n}) \ne \bot$$, the expected loss is smaller than $$\epsilon_t$$ (by construction of $$\pi^{!k}$$). When $$\pi^{!k}(x_{:n}) = \bot$$, the expected loss is at most $${\lvert {\mathcal{A}}\rvert}e^{-1}\beta(t)^{-1}$$, by Proposition A.4. We get

$\frac{1}{N}\sum_{k < N}({\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{*}(t) - {\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{\pi^{!k}}(t)) \leq \sum_{n=0}^\infty e^{-n/t} {\underset{(k,x)\sim\rho^!_t}{\operatorname{E}}}\left[[[\pi^{!k}(x_{:n}) \ne \bot]]\epsilon_t+[[\pi^{!k}(x_{:n}) = \bot]]{\lvert {\mathcal{A}}\rvert}e^{-1}\beta(t)^{-1}\right]$

The contribution of the first term on the right hand side can be bounded using the observation

$\sum_{n=0}^\infty e^{-n/t} = 1 + \sum_{n=1}^\infty e^{-n/t} \leq 1 + \sum_{n=1}^\infty \int_{n-1}^n e^{-s/t} ds = 1 + \int_0^{\infty} e^{-s/t} ds = 1 + t$
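As a quick numeric check of this bound (the values of $$t$$ are arbitrary):

```python
import math

# sum_{n>=0} e^(-n/t) = 1/(1 - e^(-1/t)) <= 1 + t, checked for a few t.
for t in (1.0, 10.0, 1000.0):
    s = 1 / (1 - math.exp(-1 / t))  # closed form of the geometric series
    print(t, s)                     # e.g. t = 1000 gives s ≈ 1000.5
    assert s <= 1 + t
```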

The contribution of the second term on the right hand side can be bounded via the expected number of delegations. We get

$\frac{1}{N}\sum_{k < N}({\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{*}(t) - {\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{\pi^{!k}}(t)) \leq (1+t)\beta(t)^{-1}t^{-1/3}+\frac{8 {\lvert {\mathcal{A}}\rvert}^2 \ln{N}}{(1-e^{-1})^2}t^{2/3}{\lvert {\mathcal{A}}\rvert}e^{-1}\beta(t)^{-1}$

$\frac{1}{N}\sum_{k < N}({\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{*}(t) - {\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{\pi^{!k}}(t)) \leq \left(\frac{1}{t}+1+\frac{8 {\lvert {\mathcal{A}}\rvert}^3 \ln{N}}{e(1-e^{-1})^2}\right)\frac{t^{2/3}}{\beta(t)}$

We bound the second term on the left hand side using the relationship between $$\rho_t$$ and $$\rho^!_t$$ (its contribution is the $$\rho^!_t$$-expected value of the utility function, which takes values in $$[0,1]$$), and get

$\frac{1}{N}\sum_{k < N}({\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{*}(t) - {\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{\pi^{*}}(t)) \leq \left(\frac{1}{t}+1+\frac{8 {\lvert {\mathcal{A}}\rvert}^3 \ln{N}}{e(1-e^{-1})^2}\right)\frac{t^{2/3}}{\beta(t)}+\frac{N-1}{t^{1/3}}$

Using the assumption $$\beta(t)=\omega(t^{2/3})$$, the right hand side goes to $$0$$ as $$t \rightarrow \infty$$. Since every term in the average is non-negative, we conclude that for each $$k \in [N]$$

$\lim_{t \rightarrow \infty} ({\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{*}(t) - {\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{\pi^*}(t)) = 0$

# Proposition A.6

Let $$\upsilon = (\mu,r)$$ be a meta-universe, $$\{{\mathcal{C}}_t \subseteq {({\mathcal{A}}\times {\mathcal{O}})^*}\}_{t \in (0,\infty)}$$ a family of sets of histories and $$\pi^0$$ a metapolicy. Then, for all $$t \in (0,\infty)$$

${\operatorname{EU}}_{\upsilon^{\mathcal{C}}}^{\pi^0}(t) \leq {\operatorname{EU}}_{\upsilon}^{\pi^0}(t)$

# Proof of Proposition A.6

Denote $$\chi_t := \chi_{{\mathcal{C}}_t}$$ the characteristic function of $${\mathcal{C}}_t$$. For notational convenience, the expression $$\chi_t(x_{:-1})$$ will be understood to mean 0.

${\operatorname{EU}}_{\upsilon^{\mathcal{C}}}^{\pi^0}(t) = (1-e^{-\frac{1}{t}}){\underset{x\sim\mu\bowtie\pi^0}{\operatorname{E}}}[\sum_{n=0}^\infty e^{-\frac{n}{t}} (\sum_{m=0}^{n} (1-\chi_{t}(x_{:m-1}))\chi_t(x_{:m}) {\operatorname{V}}^{-\upsilon}_t(x_{:m})+{(1-\chi_t(x_{:n}))}r_t(x_{:n}))]$

${\operatorname{EU}}_{\upsilon^{\mathcal{C}}}^{\pi^0}(t) \leq (1-e^{-\frac{1}{t}}){\underset{x\sim\mu\bowtie\pi^0}{\operatorname{E}}}[\sum_{n=0}^\infty e^{-\frac{n}{t}} (\sum_{m=0}^{n} (1-\chi_{t}(x_{:m-1}))\chi_t(x_{:m}) {\underset{y\sim\mu\bowtie\pi^0}{\operatorname{E}}}[\frac{\sum_{l=m}^\infty e^{-\frac{l}{t}} r(y_{:l})}{\sum_{l=m}^\infty e^{-\frac{l}{t}}} \mid x_{:m}]+(1-\chi_t(x_{:n}))r_t(x_{:n}))]$

${\operatorname{EU}}_{\upsilon^{\mathcal{C}}}^{\pi^0}(t) \leq (1-e^{-\frac{1}{t}}){\underset{x\sim\mu\bowtie\pi^0}{\operatorname{E}}}[\sum_{n=0}^\infty e^{-\frac{n}{t}} (\sum_{m=0}^{n} (1-\chi_{t}(x_{:m-1}))\chi_t(x_{:m}) \frac{\sum_{l=m}^\infty e^{-\frac{l}{t}} r(x_{:l})}{\sum_{l=m}^\infty e^{-\frac{l}{t}}} +(1-\chi_t(x_{:n}))r_t(x_{:n}))]$

${\operatorname{EU}}_{\upsilon^{\mathcal{C}}}^{\pi^0}(t) \leq (1-e^{-\frac{1}{t}}){\underset{x\sim\mu\bowtie\pi^0}{\operatorname{E}}}[\sum_{n=0}^\infty e^{-\frac{n}{t}} ((1-e^{-\frac{1}{t}})\sum_{m=0}^{n} (1-\chi_{t}(x_{:m-1}))\chi_t(x_{:m}) \sum_{l=m}^\infty e^{-\frac{l-m}{t}} r(x_{:l}) +(1-\chi_t(x_{:n}))r_t(x_{:n}))]$

Regrouping the sum to collect the terms $$r(x_{:n})$$ for the same value of $$n$$, we get

${\operatorname{EU}}_{\upsilon^{\mathcal{C}}}^{\pi^0}(t) \leq (1-e^{-\frac{1}{t}}){\underset{x\sim\mu\bowtie\pi^0}{\operatorname{E}}}[\sum_{n=0}^\infty ((1-e^{-\frac{1}{t}})\sum_{m=0}^n \sum_{l=m}^\infty e^{-\frac{l}{t}}(1-\chi_{t}(x_{:m-1}))\chi_t(x_{:m})e^{-\frac{n-m}{t}} + e^{-\frac{n}{t}}(1-\chi_t(x_{:n})))r(x_{:n})]$

${\operatorname{EU}}_{\upsilon^{\mathcal{C}}}^{\pi^0}(t) \leq (1-e^{-\frac{1}{t}}){\underset{x\sim\mu\bowtie\pi^0}{\operatorname{E}}}[\sum_{n=0}^\infty e^{-\frac{n}{t}}((1-e^{-\frac{1}{t}})\sum_{m=0}^n \frac{e^{-\frac{m}{t}}}{1-e^{-\frac{1}{t}}} (1-\chi_{t}(x_{:m-1}))\chi_t(x_{:m})e^{\frac{m}{t}}+ 1-\chi_t(x_{:n}))r(x_{:n})]$

${\operatorname{EU}}_{\upsilon^{\mathcal{C}}}^{\pi^0}(t) \leq (1-e^{-\frac{1}{t}}){\underset{x\sim\mu\bowtie\pi^0}{\operatorname{E}}}[\sum_{n=0}^\infty e^{-\frac{n}{t}}(\sum_{m=0}^n (1-\chi_{t}(x_{:m-1}))\chi_t(x_{:m})+ 1-\chi_t(x_{:n}))r(x_{:n})]$

${\operatorname{EU}}_{\upsilon^{\mathcal{C}}}^{\pi^0}(t) \leq (1-e^{-\frac{1}{t}}){\underset{x\sim\mu\bowtie\pi^0}{\operatorname{E}}}[\sum_{n=0}^\infty e^{-\frac{n}{t}}(\chi_t(x_{:n}) + 1-\chi_t(x_{:n}))r(x_{:n})]$

${\operatorname{EU}}_{\upsilon^{\mathcal{C}}}^{\pi^0}(t) \leq (1-e^{-\frac{1}{t}}){\underset{x\sim\mu\bowtie\pi^0}{\operatorname{E}}}[\sum_{n=0}^\infty e^{-\frac{n}{t}}r(x_{:n})] = {\operatorname{EU}}_{\upsilon}^{\pi^0}(t)$

# Proposition A.7

Let $$\upsilon$$ be a meta-universe and $${\mathcal{C}}$$ a $$\upsilon$$-avoidable event. Suppose $$\pi^!$$ is a metapolicy that learns $$\upsilon^{\mathcal{C}}$$. Then, $$\pi^!$$ learns $$\upsilon$$ as well.

# Proof of Proposition A.7

It is easy to see that for any metapolicy $$\pi^*$$ and $$t \in (0,\infty)$$

${\operatorname{EU}}_{\upsilon^{\mathcal{C}}}^{\pi^*}(t) \geq (1-\Pr_{x \sim \mu_t\bowtie\pi^*_t}[\exists n \leq D(t): x_{:n} \in {\mathcal{C}}_t]) ({\operatorname{EU}}_{\upsilon}^{\pi^*}(t) - e^{-D(t)/t})$

Let $$\pi^*$$ be as in Definition 4. Proposition A.6 and the above imply that

$\lim_{t \rightarrow \infty} ({\operatorname{EU}}_{\upsilon}^{\pi^*}(t)-{\operatorname{EU}}_{\upsilon^{\mathcal{C}}}^{\pi^*}(t)) = 0$

Using condition i, we get

$\lim_{t \rightarrow \infty} ({\operatorname{EU}}_{\upsilon}^{*}(t)-{\operatorname{EU}}_{\upsilon^{\mathcal{C}}}^{\pi^*}(t)) = 0$

By Proposition A.6, $${\operatorname{EU}}_{\upsilon}^{*}(t) \geq {\operatorname{EU}}_{\upsilon^{\mathcal{C}}}^{*}(t)$$, therefore

$\lim_{t \rightarrow \infty} ({\operatorname{EU}}_{\upsilon^{\mathcal{C}}}^{*}(t)-{\operatorname{EU}}_{\upsilon^{\mathcal{C}}}^{\pi^*}(t)) = 0$

$\lim_{t \rightarrow \infty} ({\operatorname{EU}}_{\upsilon^{\mathcal{C}}}^{\pi^!}(t)-{\operatorname{EU}}_{\upsilon^{\mathcal{C}}}^{\pi^*}(t)) = 0$

$\lim_{t \rightarrow \infty} ({\operatorname{EU}}_{\upsilon^{\mathcal{C}}}^{\pi^!}(t)-{\operatorname{EU}}_{\upsilon}^{*}(t)) = 0$

By Proposition A.6, $${\operatorname{EU}}_{\upsilon}^{\pi^!}(t) \geq {\operatorname{EU}}_{\upsilon^{\mathcal{C}}}^{\pi^!}(t)$$, therefore

$\lim_{t \rightarrow \infty} ({\operatorname{EU}}_{\upsilon}^{\pi^!}(t)-{\operatorname{EU}}_{\upsilon}^{*}(t)) = 0$

# Proof of Theorem

For any $$k \in {\mathbb{N}}$$, let $${\mathcal{C}}^k$$ and $$\alpha^{k*}$$ be as in Definition 5 for the advisor policy $$\alpha^k$$. Denote $$\psi^k:=(\upsilon^k)^{{\mathcal{C}}^k}$$ and

$\bar{{\mathcal{H}}}^{\mathcal{C}}:=\{\bar{\psi}^k[\alpha^{k*}] \mid k \in {\mathbb{N}}\}$

By Lemma A and Proposition 2, $$\bar{{\mathcal{H}}}^{\mathcal{C}}$$ is learnable. Let $$\pi^*$$ be a metapolicy that learns it. For any $$k \in {\mathbb{N}}$$, $$\pi^*$$ learns $$\bar{\psi}^k[\alpha^{k*}]$$. It is easy to see that for any policy $$\pi$$, $${\operatorname{EU}}_{\bar{\psi}^k[\alpha^{k*}]}^\pi(t)={\operatorname{EU}}_{\bar{\psi}^k[\alpha^{k}]}^\pi(t)$$ ($$\psi^k$$ is defined s.t. once the corruption occurs, the reward is constant, so the advisor has no effect after this point), therefore $$\pi^*$$ learns $$\bar{\psi}^k[\alpha^{k}]$$. By Proposition A.7, we conclude that $$\pi^*$$ learns $$\bar{\upsilon}^k[\alpha^k]$$.

# Proof of Corollary 1

Denote $$\upsilon^K:=({\underset{k\in\xi^K}{\boldsymbol{\operatorname{E}}}}[\mu^k],r^K)$$. By the Theorem, $$\{\bar{\upsilon}^K[\alpha^K]\}_{K \in {\mathbb{N}}}$$ is learnable. Denote $$\psi^{kK}:=(\mu^k,r^K)$$. It is easy to see that for any $${\bar{{\mathcal{I}}}}$$-policy $$\pi$$ and $$t\in(0,\infty)$$,

${\underset{k \in \xi^K}{\operatorname{E}}}[{\operatorname{EU}}_{\psi^{kK}[\alpha^K]}^\pi(t)] = {\operatorname{EU}}_{\bar{\upsilon}^K[\alpha^K]}^\pi(t)$

By the calibration condition

${\underset{(K,k)\sim\zeta}{\operatorname{E}}}[{\operatorname{EU}}_{\psi^{kK}[\alpha^K]}^\pi(t)] = {\underset{(K,k) \in \zeta}{\operatorname{E}}}[{\underset{j \in \xi^K}{\operatorname{E}}}[{\operatorname{EU}}_{\psi^{jK}[\alpha^K]}^\pi(t)]]={\underset{(K,k) \in \zeta}{\operatorname{E}}}[{\operatorname{EU}}_{\bar{\upsilon}^K[\alpha^K]}^\pi(t)]$

Applying Proposition 1, we conclude that $$\pi^\zeta$$ learns $$\{\bar{\upsilon}^K[\alpha^K]\}_{K \in {\mathbb{N}}}$$. That is, for any $$K \in {\mathbb{N}}$$

$\lim_{t \rightarrow \infty} ({\operatorname{EU}}_{\bar{\upsilon}^K[\alpha^K]}^*(t)-{\operatorname{EU}}_{\bar{\upsilon}^K[\alpha^K]}^{\pi^\zeta}(t)) = 0$

$\lim_{t \rightarrow \infty} ({\operatorname{EU}}_{\upsilon^K}^*(t)-{\operatorname{EU}}_{\bar{\upsilon}^K[\alpha^K]}^{\pi^\zeta}(t)) = 0$

$\lim_{t \rightarrow \infty} (\max_{\pi\in\Pi_{{\mathcal{I}}}} {\underset{k\in\xi^K}{\operatorname{E}}}[{\operatorname{EU}}_{\psi^{kK}}^\pi(t)]-{\underset{k\in\xi^K}{\operatorname{E}}}[{\operatorname{EU}}_{\bar{\psi}^{kK}[\alpha^K]}^{\pi^\zeta}(t)]) = 0$

# Proof of Corollary 2

Immediate from Corollary 1 and Proposition A.0.

by Tom Everitt 130 days ago

> “The use of an advisor allows us to kill two birds with one stone: learning the reward function and safe exploration (i.e. avoiding both the Scylla of “Bayesian paranoia” and the Charybdis of falling into traps).”

This sounds quite nice. But how is it possible to achieve this if the advisor is a soft-maximiser? Doesn’t that mean that there is a positive probability that the advisor falls into the trap?
by Vadim Kosoy 129 days ago

Hi Tom! There is a positive probability that the advisor falls into the trap, but this probability goes to $$0$$ as the time discount parameter $$t$$ goes to $$\infty$$ (which is the limit I study here). This follows from the condition $$\beta(t)=\omega(t^{2/3})$$ in the Theorem.

To give a simple example, suppose that $$\mathcal{A}=\{0,1,2\}$$ and the environment is s.t.:

1. When you take action 0, you fall into a trap and get reward 0 forever.

2. When you take action 1, you get reward 0 for the current round and remain in the same state.

3. When you take action 2, you get reward 1 for the current round (unless you are in the trap) and remain in the same state.

In this case, our advisor would have to take action 0 with probability $$\exp\left(-\omega\left(t^{2/3}\right)\right)$$ and action 2 has to be more probable than action 1 by a factor of $$\exp\left(\omega\left(t^{-1/3}\right)\right) \approx 1 + \omega\left(t^{-1/3}\right)$$.
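[Editorially, the scaling in this example can be made concrete numerically. The schedule $$\beta(t)=t^{0.7}$$ below is a hypothetical choice satisfying $$\beta(t)=\omega(t^{2/3})$$; the per-round cost of action 1 is one round of discounted reward, $$1-e^{-1/t} \approx 1/t$$.]

```python
import math

for t in (1e3, 1e6, 1e9):
    beta = t ** 0.7                    # hypothetical beta(t) = omega(t^(2/3))
    loss_trap = 1.0                    # action 0 forfeits all future value
    loss_wait = 1 - math.exp(-1 / t)   # action 1 forfeits one round, ~1/t
    w_trap = math.exp(-beta * loss_trap)   # relative weight of the trap action
    w_wait = math.exp(-beta * loss_wait)   # weight of action 1 vs action 2
    # w_trap underflows to 0.0 for large t; w_wait tends to 1 from below.
    print(f"t={t:.0e}  trap weight {w_trap:.2e}  action-1/action-2 {w_wait:.4f}")
```

The trap weight vanishes super-polynomially in $$t$$ while the action-1/action-2 ratio tends to 1, matching the two statements in the comment above.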
by Tom Everitt 128 days ago

Hi Vadim! So basically the advisor will be increasingly careful as the cost of falling into the trap goes to infinity? Makes sense I guess.

What is the incentive for the agent not to always let the advisor choose? Is there always some probability that the advisor saves them from infinite loss, or only in certain situations that can be detected by the agent?
by Vadim Kosoy 128 days ago

If the agent always delegates to the advisor, it loses a large fraction of the value. Returning again to the simple example above, the advisor on its own is only guaranteed to get expected utility $$1/2 + \omega(t^{-1/3})$$ (because it often takes the suboptimal action 1). On the other hand, for any prior over a countable set of environments that includes this one, the corresponding DIRL agent gets expected utility $$1 - o(1)$$ on this environment (because it will learn to only take action 2). You can also add an external penalty for each delegation; adjusting the proof is straightforward.

So, the agent has to exercise judgement about whether to delegate, using its prior + past observations. For example, the policy I construct in Lemma A delegates iff there is no action whose expected loss (according to current beliefs) is less than $$\beta(t)^{-1}t^{-1/3}$$.
by Tom Everitt 127 days ago

So this requires the agent’s prior to incorporate information about which states are potentially risky? Because if there is always some probability of there being a risky action (with infinitely negative value), then regardless of how small the probability is and how large the penalty is for asking, the agent will always be better off asking.

(Did you see Owain Evans’ recent paper about trying to teach the agent to detect risky states?)
by Vadim Kosoy 127 days ago

The only assumptions about the prior are that it is supported on a countable set of hypotheses, and that in each hypothesis the advisor is $$\beta$$-rational (for some fixed $$\beta(t)=\omega(t^{2/3})$$).

There is no such thing as infinitely negative value in this framework. The utility function is bounded because of the geometric time discount (and because the momentary rewards are assumed to be bounded), and in fact I normalize it to lie in $$[0,1]$$ (see the equation defining $$\mathrm{U}$$ in the beginning of the Results section).

Falling into a trap is an event associated with $$\Omega(1)$$ loss (i.e. loss that remains constant as $$t$$ goes to $$\infty$$). Therefore, we can risk such an event, as long as the probability is $$o(1)$$ (i.e. goes to $$0$$ as $$t$$ goes to $$\infty$$). This means that as $$t$$ grows, the agent will spend more rounds delegating to the advisor, but for any given $$t$$, it will not delegate on most rounds (even on most of the important rounds, i.e. during the first $$O(t)$$-length “horizon”). In fact, you can see in the proof of Lemma A that the policy I construct delegates on $$O(t^{2/3})$$ rounds.

As a simple example, consider again the toy environment from before. Consider also the environments you get from it by applying a permutation to the set of actions $$\mathcal{A}$$. Thus, you get a hypothesis class of 6 environments. Then, the corresponding DIRL agent will spend $$O(t^{2/3})$$ rounds delegating, observe which action is chosen by the advisor most frequently, and perform this action forevermore. (The phenomenon that all delegations happen in the beginning is specific to this toy example, because it only has 1 non-trap state.)

If you mean this paper, I saw it.
by Tom Everitt 118 days ago

My confusion is the following. Premises (*) and inferences (=>):

(*) The primary way for the agent to avoid traps is to delegate to a soft-maximiser.

(*) Any action with boundedly negative utility, a soft-maximiser will take with positive probability.

(*) Actions leading to traps do not have infinitely negative utility.

=> The agent will fall into traps with positive probability.

(*) If the agent falls into a trap with positive probability, then it will have linear regret.

=> The agent will have linear regret.

So when you say in the beginning of the post “a Bayesian DIRL agent is guaranteed to attain most of the value”, you must mean that in a different sense than a regret sense?
by Vadim Kosoy 117 days ago

Your confusion is because you are thinking about regret in an anytime setting. In an anytime setting, there is a fixed policy $$\pi$$, we measure the expected reward of $$\pi$$ over a time interval $$t$$ and compare it to the optimal expected reward over the same time interval. If $$\pi$$ has probability $$p > 0$$ to walk into a trap, regret has the linear lower bound $$\Omega(pt)$$.

On the other hand, I am talking about policies $$\pi_t$$ that explicitly depend on the parameter $$t$$ (I call this a “metapolicy”). Both the advisor and the agent policies are like that. As $$t$$ goes to $$\infty$$, the probability $$p(t)$$ to walk into a trap goes to $$0$$, so $$p(t)t$$ is a sublinear function.

A second difference with the usual definition of regret is that I use an infinite sum of rewards with geometric time discount $$e^{-1/t}$$ instead of a step function time discount that cuts off at $$t$$. However, this second difference is entirely inessential, and all the theorems work about the same with step function time discount.
