Intelligent Agent Foundations Forum
Improved formalism for corruption in DIRL
discussion post by Vadim Kosoy 157 days ago

We give a treatment of advisor corruption in DIRL that is more elegant and general than our previous formalism.

The following definition replaces the original Definition 5.

Definition

Consider a meta-universe \(\upsilon=(\mu,r)\) and \(\beta:(0,\infty)\rightarrow(0,\infty)\). A metapolicy \(\alpha\) is called \(\beta\)-rational for \(\upsilon\) (as opposed to before, we assume \(\alpha\) is an \({\mathcal{I}}\)-metapolicy rather than an \({{\bar{{\mathcal{I}}}}}\)-metapolicy; this is purely for notational convenience and it is straightforward to generalize the definition) when there exists \(\{L^\alpha_t: \operatorname{hdom}{\mu} \times {\mathcal{A}}\rightarrow [0,\infty]\}_{t \in (0,\infty)}\) s.t.

  1. For any \(t \in (0,\infty)\) and \(h \in \operatorname{hdom}{\mu_t}\), there is \(a \in {\mathcal{A}}\) s.t. \(L^\alpha_t(ha)=0\).

  2. For any \(t \in (0,\infty)\), \(h \in \operatorname{hdom}{\mu_t}\) and \(a \in {\mathcal{A}}\): \(\alpha_t(h)(a)=\exp(-\beta(t)L^\alpha_t(ha)) \max_{a^* \in {\mathcal{A}}} \alpha_t(h)(a^*)\)

  3. For any \(\pi \in \Pi\)

\[\lim_{t \rightarrow \infty}\min({\underset{x\sim\mu_t\bowtie\pi}{\operatorname{E}}}[\sum_{n=0}^\infty e^{-n/t} L^\alpha_t(x_{:n+1/2})]-{\underset{x\sim\mu_t\bowtie\pi}{\operatorname{E}}}[\sum_{n=0}^\infty e^{-n/t}({\operatorname{V}}^\upsilon_t(x_{:n})-{\operatorname{Q}}^\upsilon_t(x_{:n+1/2}))],0)=0\]


In condition ii, \(\exp(-\infty)\) is understood to mean \(0\). Conditions i+ii can be seen as the definition of \(L^\alpha_t\) given \(\alpha_t\). A notable special case of condition iii is when for any \(x \in \operatorname{hdom}{\mu_t}\)

\[\sum_{n=0}^\infty e^{-n/t} L^\alpha_t(x_{:n+1/2}) \geq \sum_{n=0}^\infty e^{-n/t}({\operatorname{V}}^\upsilon_t(x_{:n})-{\operatorname{Q}}^\upsilon_t(x_{:n+1/2}))\]
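For concreteness, conditions i+ii determine \(L^\alpha_t\) explicitly in terms of \(\alpha_t\): rearranging condition ii (with \(\exp(-\infty)=0\) covering actions outside the support of \(\alpha_t(h)\)) gives

\[L^\alpha_t(ha)=\frac{1}{\beta(t)}\ln{\frac{\max_{a^* \in {\mathcal{A}}} \alpha_t(h)(a^*)}{\alpha_t(h)(a)}}\]

and condition i then says that, at every history, some action attains the maximum and hence has zero loss.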

As a simple example, we can have a set of corrupt states \(\{{\mathcal{C}}_t \subseteq {({\mathcal{A}}\times {\mathcal{O}})^*}\}_{t\in(0,\infty)}\) in which the behavior of the advisor becomes arbitrary, but for each \(h \in {\mathcal{C}}_t\) there is \(g \in {({\mathcal{A}}\times {\mathcal{O}})^*}\times {\mathcal{A}}\) s.t. \(g \sqsubset h\) and \(L^\alpha_t(g)=\infty\) (i.e., to corrupt the advisor, one has to take an action that the advisor would never take). As opposed to before, this formalism can also account for partial corruption. For example, suppose that for each \(h \not\in {\mathcal{C}}_t\) and \(a \in {\mathcal{A}}\) we have \(L^\alpha_t(ha) \geq {\operatorname{V}}^\upsilon_t(h) - {\operatorname{Q}}^\upsilon_t(ha)\) (as in strict \(\beta\)-rationality), whereas for \(h \in {\mathcal{C}}_t\) we only have \(L^\alpha_t(ha) \geq {\operatorname{V}}^\upsilon_t(h) - {\operatorname{Q}}^\upsilon_t(ha) - \delta\) for some constant \(\delta > 0\). Then, to ensure \(\beta\)-rationality, it is sufficient that for each \(h = a_0o_0a_1o_1 \ldots \in {\mathcal{C}}_t\):

\[\sum_{n=0}^{\max\{m \mid h_{:m} \not\in{\mathcal{C}}_t\}} e^{-n/t}\left(L^\alpha_t(h_{:n}a_n) - {\operatorname{V}}^\upsilon_t(h_{:n}) + {\operatorname{Q}}^\upsilon_t(h_{:n}a_n)\right) \geq \frac{\delta e^{-(\max\{m \mid h_{:m} \not\in{\mathcal{C}}_t\}+1)/t}}{1-e^{-1/t}}\]
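To make this sufficient condition concrete, here is a minimal numeric sketch in Python; the function name, the per-step excesses and all numbers are hypothetical illustrations, not part of the formalism.

```python
import math

# Minimal numeric sketch of the sufficient condition above (all values are
# made up for illustration). excess[n] stands for
# L^alpha_t(h_{:n} a_n) - (V_t(h_{:n}) - Q_t(h_{:n} a_n)) at the uncorrupted
# steps n = 0, ..., M, where M = max{m | h_{:m} not in C_t}.

def corruption_margin_ok(excess, t, delta):
    """Check that the discounted pre-corruption excess loss covers the
    worst-case post-corruption deficit delta * e^{-(M+1)/t} / (1 - e^{-1/t})."""
    M = len(excess) - 1  # last step before the history becomes corrupt
    lhs = sum(math.exp(-n / t) * e for n, e in enumerate(excess))
    rhs = delta * math.exp(-(M + 1) / t) / (1 - math.exp(-1 / t))
    return lhs >= rhs

# Hypothetical example: 5 uncorrupted steps with excess 0.2 each,
# time-discount parameter t = 10 and corruption slack delta = 0.05.
print(corruption_margin_ok([0.2] * 5, t=10.0, delta=0.05))  # True
```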

Theorem

Consider \({\mathcal{H}}= \{\upsilon^k\}_{k \in {\mathbb{N}}}\) a countable family of \({\mathcal{I}}\)-meta-universes and \(\beta: (0,\infty) \rightarrow (0,\infty)\) s.t. \(\beta(t) = \omega(t^{2/3})\). Let \(\{\alpha^k\}_{k \in {\mathbb{N}}}\) be a family of \({\mathcal{I}}\)-metapolicies s.t. for every \(k \in {\mathbb{N}}\), \(\alpha^k\) is \(\beta\)-rational for \(\upsilon^k\). Define \(\bar{{\mathcal{H}}}:=\{\bar{\upsilon}^k[\alpha^k]\}_{k \in {\mathbb{N}}}\). Then, \(\bar{{\mathcal{H}}}\) is learnable.

Proof of Theorem

We do not spell out the proof in detail; we only describe the modifications with respect to the original proof.

As in the proof of the original theorem, we can assume without loss of generality that \({\mathcal{H}}\) is finite. Define \(\pi^*\) the same way as in Lemma A, but with \(L_t\) redefined as

\[L_t(ha):={\underset{k\sim\zeta_t(h)}{\operatorname{E}}}[L^{\alpha^k}_t(ha)]\]

Similarly, define \(\pi^!\) the same way as in the proof of Lemma A, but with \(L_t\) redefined as

\[L_t(ha):={\underset{k\sim\zeta^!_t(h)}{\operatorname{E}}}[L^{\alpha^k}_t(ha)]\]
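In code, both redefinitions amount to averaging the per-hypothesis losses under the current belief over hypotheses; a minimal sketch (all names are hypothetical stand-ins for the objects in Lemma A):

```python
# Minimal sketch of the redefined loss: L_t(ha) = E_{k ~ zeta}[L^{alpha^k}_t(ha)],
# i.e. the per-hypothesis advisor losses mixed under the belief zeta over
# hypotheses k. All names are hypothetical stand-ins.
def mixed_loss(zeta_h, per_hypothesis_loss, h, a):
    """zeta_h: dict k -> belief probability of hypothesis k at history h.
    per_hypothesis_loss: dict k -> function (h, a) -> L^{alpha^k}_t(ha)."""
    return sum(p * per_hypothesis_loss[k](h, a) for k, p in zeta_h.items())
```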

As in the proof of Lemma A, we have

\[\frac{1}{N}\sum_{k < N}({\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{*}(t) - {\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{\pi^{!k}}(t))=\sum_{n=0}^\infty e^{-n/t} {\underset{(k,x)\sim\rho^!_t}{\operatorname{E}}}[{\operatorname{V}}^{\upsilon^k[\alpha^k]}_t(x_{:n})-{\operatorname{Q}}^{\upsilon^k[\alpha^k]}_t(x_{:n}\pi^{!k}(x_{:n}))]\]

Using condition iii in the Definition, we conclude that for some function \(\delta:(0,\infty)\rightarrow[0,\infty)\) with \(\lim_{t\rightarrow\infty}\delta(t)=0\)

\[\frac{1}{N}\sum_{k < N}({\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{*}(t) - {\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{\pi^{!k}}(t)) \leq \sum_{n=0}^\infty e^{-n/t} {\underset{(k,x)\sim\rho^!_t}{\operatorname{E}}}\left[[[\pi^{!k}(x_{:n})\ne\bot]]L^{\alpha^k}_t(\underline{x_{:n}}\pi^{!k}(x_{:n}))+[[\pi^{!k}(x_{:n})=\bot]]{\underset{a\sim\alpha^k_t(\underline{x_{:n}})}{\operatorname{E}}}[L^{\alpha^k}_t(\underline{x_{:n}}a)]\right]+\delta(t)\]

We can now repeat the same arguments as in the proof of Lemma A to get

\[\frac{1}{N}\sum_{k < N}({\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{*}(t) - {\operatorname{EU}}_{\bar{\upsilon}^k[\alpha^k]}^{\pi^{*}}(t)) \leq \left(\frac{1}{t}+1+\frac{8 {\lvert {\mathcal{A}}\rvert}^3 \ln{N}}{e(1-e^{-1})^2}\right)\frac{t^{2/3}}{\beta(t)}+\frac{N-1}{t^{1/3}}+\delta(t)\]

The desired result follows.
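As a sanity check on the asymptotics, here is a small numeric sketch of the right-hand side; \(\beta(t)=t^{3/4}\) is just one illustrative choice satisfying \(\beta(t)=\omega(t^{2/3})\), and \(N\), \(\lvert{\mathcal{A}}\rvert\) and the outputs are made up rather than taken from the proof.

```python
import math

# Hedged numeric sketch of the final bound (illustrative values only): for
# fixed N, every term vanishes as t -> infinity provided beta(t) = omega(t^{2/3});
# beta(t) = t**0.75 below is an assumed example, not mandated by the theorem.

def regret_bound(t, N, num_actions, beta, delta=lambda t: 0.0):
    c = 8 * num_actions ** 3 * math.log(N) / (math.e * (1 - math.exp(-1)) ** 2)
    return (1 / t + 1 + c) * t ** (2 / 3) / beta(t) + (N - 1) / t ** (1 / 3) + delta(t)

for t in (1e2, 1e4, 1e6):
    print(f"t={t:.0e}  bound={regret_bound(t, N=10, num_actions=3, beta=lambda s: s ** 0.75):.2f}")
```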


