Intelligent Agent Foundations Forum
Improved formalism for corruption in DIRL
discussion post by Vadim Kosoy 35 days ago

We give a treatment of advisor corruption in DIRL that is more elegant and general than our previous formalism.

The following definition replaces the original Definition 5.

Definition

Consider a meta-universe \(\upsilon=(\mu,r)\) and \(\beta:(0,\infty)\rightarrow(0,\infty)\). A metapolicy \(\alpha\) is called \(\beta\)-rational for \(\upsilon\) when there exists \(\{L^\alpha_t: \operatorname{hdom}{\mu} \times {\mathcal{A}}\rightarrow [0,\infty]\}_{t \in (0,\infty)}\) s.t. the following conditions hold. (As opposed to before, we assume \(\alpha\) is an \({\mathcal{I}}\)-metapolicy rather than an \({{\bar{{\mathcal{I}}}}}\)-metapolicy; this is purely for notational convenience, and it is straightforward to generalize the definition.)

  i. For any \(h \in \operatorname{hdom}{\mu}\) and \(t \in (0,\infty)\), there is \(a \in {\mathcal{A}}\) s.t. \(L^\alpha_t(ha)=0\).

  ii. \(\alpha_t(h)(a)=\exp(-\beta(t)L^\alpha_t(ha)) \max_{a^* \in {\mathcal{A}}} \alpha_t(h)(a^*)\)

  iii. For any \(\pi \in \Pi\)

\[\lim_{t \rightarrow \infty}\min({\underset{x\sim\mu_t\bowtie\pi}{\operatorname{E}}}[\sum_{n=0}^\infty e^{-n/t} L^\alpha_t(x_{:n+1/2})]-{\underset{x\sim\mu_t\bowtie\pi}{\operatorname{E}}}[\sum_{n=0}^\infty e^{-n/t}({\operatorname{V}}^\upsilon_t(x_{:n})-{\operatorname{Q}}^\upsilon_t(x_{:n+1/2}))],0)=0\]


In condition ii, \(\exp(-\infty)\) is understood to mean 0. Conditions i+ii can be seen as the definition of \(L^\alpha_t\) given \(\alpha_t\). A notable special case of condition iii is when for any \(x \in {({\mathcal{A}}\times {\mathcal{O}})^\omega}\)

\[\sum_{n=0}^\infty e^{-n/t} L^\alpha_t(x_{:n+1/2}) \geq \sum_{n=0}^\infty e^{-n/t}({\operatorname{V}}^\upsilon_t(x_{:n})-{\operatorname{Q}}^\upsilon_t(x_{:n+1/2}))\]
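Explicitly, solving condition ii for \(L^\alpha_t\) (assuming, as in the original setting, that \({\mathcal{A}}\) is finite so the maximum is attained) gives

\[L^\alpha_t(ha)=\frac{1}{\beta(t)}\ln\frac{\max_{a^* \in {\mathcal{A}}} \alpha_t(h)(a^*)}{\alpha_t(h)(a)}\]

with \(L^\alpha_t(ha)=\infty\) when \(\alpha_t(h)(a)=0\); condition i then holds automatically, since any \(a\) maximizing \(\alpha_t(h)(a)\) satisfies \(L^\alpha_t(ha)=0\). The special case above implies condition iii because the pointwise inequality makes the expression inside the \(\min\) nonnegative for every \(t\) and \(\pi\), so the \(\min\) vanishes identically.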

As a simple example, we can have a set of corrupt states \(\{{\mathcal{C}}_t \subseteq {({\mathcal{A}}\times {\mathcal{O}})^*}\}_{t\in(0,\infty)}\) in which the behavior of the advisor becomes arbitrary, but for each \(h \in {\mathcal{C}}_t\) there is \(g \in {({\mathcal{A}}\times {\mathcal{O}})^*}\times {\mathcal{A}}\) s.t. \(g \sqsubset h\) and \(L^\alpha_t(g)=\infty\) (i.e., to corrupt the advisor one has to take an action that the advisor would never take). As opposed to before, this formalism can also account for partial corruption. For example, suppose that for each \(h \not\in {\mathcal{C}}_t\) and \(a \in {\mathcal{A}}\) we have \(L^\alpha_t(ha) \geq {\operatorname{V}}^\upsilon_t(h) - {\operatorname{Q}}^\upsilon_t(ha)\) (as in strict \(\beta\)-rationality), whereas for \(h \in {\mathcal{C}}_t\) we only have \(L^\alpha_t(ha) \geq {\operatorname{V}}^\upsilon_t(h) - {\operatorname{Q}}^\upsilon_t(ha) - \delta\) for some constant \(\delta > 0\). Then, to ensure \(\beta\)-rationality, it is sufficient that for each \(h = a_0o_0a_1o_1 \ldots \in {\mathcal{C}}_t\):

\[\sum_{n=0}^{\max\{m \mid h_{:m} \not\in{\mathcal{C}}_t\}} e^{-n/t}(L^\alpha_t(h_{:n}a_n) - ({\operatorname{V}}^\upsilon_t(h_{:n}) - {\operatorname{Q}}^\upsilon_t(h_{:n}a_n))) \geq \frac{\delta e^{-(\max\{m \mid h_{:m} \not\in{\mathcal{C}}_t\}+1)/t}}{1-e^{-1/t}}\]
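To see where the right-hand side comes from, write \(M:=\max\{m \mid h_{:m} \not\in{\mathcal{C}}_t\}\). On the corrupted tail, each step can fall short of the strict requirement by at most \(\delta\), and summing this worst-case deficit as a geometric series gives

\[\sum_{n=M+1}^{\infty} e^{-n/t}\delta = \frac{\delta e^{-(M+1)/t}}{1-e^{-1/t}}\]

So the displayed condition says that the surplus loss \(L^\alpha_t(h_{:n}a_n)-({\operatorname{V}}^\upsilon_t(h_{:n})-{\operatorname{Q}}^\upsilon_t(h_{:n}a_n))\) accumulated up to step \(M\) covers the total discounted deficit the corrupted tail can incur, which is what the special case of condition iii requires.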

Theorem

Consider a countable family \({\mathcal{H}}= \{\upsilon^k\}_{k \in {\mathbb{N}}}\) of \({\mathcal{I}}\)-meta-universes and \(\beta: (0,\infty) \rightarrow (0,\infty)\) s.t. \(\beta(t) = \omega(t^{2/3})\). Let \(\{\alpha^k\}_{k \in {\mathbb{N}}}\) be a family of \({\mathcal{I}}\)-metapolicies s.t. for every \(k \in {\mathbb{N}}\), \(\alpha^k\) is \(\beta\)-rational for \(\upsilon^k\). Define \(\bar{{\mathcal{H}}}:=\{\bar{\upsilon}^k[\alpha^k]\}_{k \in {\mathbb{N}}}\). Then, \(\bar{{\mathcal{H}}}\) is learnable.

Proof of Theorem

We do not spell out the proof in detail; we only describe the modifications with respect to the original proof.

As in the proof of the original theorem, we can assume without loss of generality that \({\mathcal{H}}\) is finite. Define \(\pi^*\) the same way as in Lemma A, but with \(L_t\) redefined as

\[L_t(ha):={\underset{k\sim\zeta_t(h)}{\operatorname{E}}}[L^{\alpha^k}_t(ha)]\]

Similarly, define \(\pi^!\) the same way as in the proof of Lemma A, but with \(L_t\) redefined as

\[L_t(ha):={\underset{k\sim\zeta^{!k}_t(h)}{\operatorname{E}}}[L^{\alpha^k}_t(ha)]\]

As in the proof of Lemma A, we have

\[\frac{1}{N}\sum_{k < N}({\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{*}(t) - {\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{\pi^{!k}}(t))=\sum_{n=0}^\infty e^{-n/t} {\underset{(k,x)\sim\rho^!_t}{\operatorname{E}}}[{\operatorname{V}}^{\upsilon^k[\alpha^k]}_t(x_{:n})-{\operatorname{Q}}^{\upsilon^k[\alpha^k]}_t(x_{:n}\pi^{!k}(x_{:n}))]\]

Using condition iii in the Definition, we conclude that for some function \(\delta:(0,\infty)\rightarrow[0,\infty)\) with \(\lim_{t\rightarrow\infty}\delta(t)=0\)

\[\frac{1}{N}\sum_{k < N}({\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{*}(t) - {\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{\pi^{!k}}(t)) \leq \sum_{n=0}^\infty e^{-n/t} {\underset{(k,x)\sim\rho^!_t}{\operatorname{E}}}\left[[[\pi^{!k}(x_{:n})\ne\bot]]L^{\alpha^k}_t(\underline{x_{:n}}\pi^{!k}(x_{:n}))+[[\pi^{!k}(x_{:n})=\bot]]{\underset{a\sim\alpha^k(\underline{x_{:n}})}{\operatorname{E}}}[L^{\alpha^k}_t(\underline{x_{:n}}a)]\right]+\delta(t)\]

We can now repeat the same arguments as in the proof of Lemma A to get

\[\frac{1}{N}\sum_{k < N}({\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{*}(t) - {\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{\pi^{*}}(t)) \leq \left(\frac{1}{t}+1+\frac{8 {\lvert {\mathcal{A}}\rvert}^3 \ln{N}}{e(1-e^{-1})^2}\right)\frac{t^{2/3}}{\beta(t)}+\frac{N-1}{t^{1/3}}+\delta(t)\]
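To spell out the last step: for any fixed \(N\), the assumption \(\beta(t)=\omega(t^{2/3})\) makes the first term vanish as \(t \rightarrow \infty\), and the remaining terms vanish by inspection:

\[\lim_{t\rightarrow\infty}\left[\left(\frac{1}{t}+1+\frac{8 {\lvert {\mathcal{A}}\rvert}^3 \ln{N}}{e(1-e^{-1})^2}\right)\frac{t^{2/3}}{\beta(t)}+\frac{N-1}{t^{1/3}}+\delta(t)\right]=0\]

So the average regret over the (w.l.o.g. finite) family vanishes as \(t \rightarrow \infty\).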

The desired result follows.


