Online Learning 3: Adversarial bandit learning with catastrophes
post by Ryan Carey 704 days ago

Note: This describes an idea of Jessica Taylor’s.

In order to better understand how machine learning systems might avoid catastrophic behavior, we are interested in modeling this as an adversarial learning problem.

A major challenge with efficiently learning to avoid catastrophes is that the utility (or loss) function can take extremely negative values. This causes problems for standard online learning models, which often incur regret proportional to the difference between the maximum and minimum achievable utility.

In our last post, we explored how exploration-only bandit algorithms can be adapted to efficiently learn to avoid catastrophic actions. That model was oversimplified in two ways. First, it provided a fixed distribution of examples to the learner. Second, it cleanly divided the learning process into an exploration phase, in which the learner may make many mistakes, and a subsequent deployment phase, in which the learner must select some near-optimal, non-catastrophic arm. In this post, we relax both of these assumptions. We use a learner that responds to a changing, adversarially selected stream of examples. We still allow the learner some exploration steps, but the learner must choose when to use them in order to avoid catastrophic consequences. We do not achieve tight bounds, but we demonstrate that bandit learning systems can at least achieve sublinear regret using a sublinear number of exploration steps in an adversarial setting. As in our last post, we assume that the learner's beliefs about the risk of catastrophe contain some grain of truth. Specifically, in this post, the learner has access to overestimates of how frequently it should check for catastrophes, and the sum of these overestimates exceeds the sum of the ideal values by at most a constant factor. We also assume the existence of some arm that carries no risk of catastrophe.

The proofs in this post are somewhat long, and may be substantially changed with future work, so some readers may want to just read “Setup”, “Defining the agent’s performance”, “A proposed learning model” and “Discussion”.

## Setup

Our model relies on some definitions:

• Let $$\mathcal{I} = \{I_1,I_2,...,I_K\}$$ be a set of bandit arms.
• Let $$R_t:\mathcal{I} \mapsto [0,1]$$ be the intrinsic reward function selected at time step $$t$$.
• Let $$C_t:\mathcal{I} \mapsto [0,1]$$ be the intrinsic catastrophic risk function at time step $$t$$. We assume that there is some bandit arm that always avoids catastrophes: $\exists i \in \mathcal{I} \text{ s.t. } \forall t \; C_t(i) = 0$
• Let $$M_{t}(i) := R_t(i) - \frac{C_t(i)}{\tau}$$ be the net payoff for the agent, where $$\tau \in (0,1]$$ is the risk-tolerance, which may be very small, e.g. of order $$10^{-20}$$.
• Let $$a_{t,i_t} := \min \left(\frac{C_t(i_t)}{z\tau},1 \right)$$ be the ideal sampling probability, where $$z \in [1,\infty)$$ is a parameter that specifies how much lost utility from catastrophes the learner tolerates.
• Let $$\hat{a}_{t,i_t} \in [0,1]$$ be some overestimate of the ideal sampling probability. The learner has access to this overestimate.
• Let $$b_{t,i_t} \in [0,1]$$ be the sampling probability that the learner selects for some time step.
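The sampling quantities above can be sketched in code. This is a hypothetical helper, not part of the original model's presentation; the clipping rule for $$b_{t,i_t}$$ matches the one the learner uses in Theorem 1 below, with `gamma` as its base sampling rate:

```python
def ideal_sampling_prob(c_t, tau, z):
    """Ideal probability of checking for a catastrophe: a_{t,i} = min(C_t(i)/(z*tau), 1)."""
    return min(c_t / (z * tau), 1.0)

def sampling_prob(a_hat, gamma):
    """Sampling probability the learner actually uses: b_{t,i} = min(max(a_hat, gamma), 1)."""
    return min(max(a_hat, gamma), 1.0)
```

Note that a risk of exactly $$z\tau$$ or more forces the ideal sampling probability to 1, i.e. a guaranteed exploration step.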

On each time step, $$1,2,...,T$$, the learner performs two tasks: i) it selects an arm, and ii) it decides whether the step is an exploration step or a deployment step. Both of these decisions are made probabilistically.

To begin each time step $$t$$, the learner outputs a distribution over arms, $$D \in \Delta \mathcal{I}$$. The learner also outputs, for each arm $$i_t$$, a sampling probability $$b_{t,i_t}$$ to be used if that arm is selected. The adversary selects the intrinsic reward and catastrophic risk functions, $$R_t$$ and $$C_t$$, in response to $$D$$. Then, the learner’s selected arm $$i_t$$ is drawn from $$D$$.

Next, a biased coin is flipped, with probability $$b_{t,i_t}$$ of heads: $$F_t \sim \text{Bernoulli}(b_{t,i_t})$$. If it lands heads ($$F_t=1$$), then $$t$$ is an exploration step. This means that the learner observes the intrinsic reward $$R_t(i_t)$$ and risk $$C_t(i_t)$$ of its selected arm but receives no payoff. If the coin lands tails ($$F_t=0$$), then $$t$$ is a deployment step: the agent receives a payoff but does not observe it.

After $$T$$ of these time steps have passed, the hope is that our learning algorithm has achieved low regret and has used few exploration steps.
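The interaction protocol above can be sketched as a simulation. This is an illustrative sketch only: `learner` and `adversary` are hypothetical stub objects with assumed methods (`propose`, `respond`, `observe`), not the Exp3-based learner analysed below.

```python
import random

def run_round(learner, adversary, rng):
    """One time step of the explore/deploy bandit protocol."""
    dist, b = learner.propose()                   # distribution over arms + per-arm sampling probs
    reward_fn, risk_fn = adversary.respond(dist)  # adversary sees D, then picks R_t and C_t
    arm = rng.choices(range(len(dist)), weights=dist)[0]
    explore = rng.random() < b[arm]               # F_t ~ Bernoulli(b_{t, i_t})
    if explore:
        # exploration step: observe reward and risk, but receive no payoff
        learner.observe(arm, reward_fn(arm), risk_fn(arm))
        return 0.0, True
    # deployment step: receive (but do not observe) the net payoff M_t(i_t)
    return reward_fn(arm) - risk_fn(arm) / learner.tau, False
```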

## Defining the agent’s performance

We define the number of exploration steps and the regret in turn.

The number of exploration steps is defined as $$\sum_{t=1}^T F_t$$.

Regret is defined relative to the best arm, $$i^*$$: $\DeclareMathOperator*{\argmax}{arg\,max} i^* \in \argmax_{i' \in \mathcal{I}} \sum_{t=1}^T M_{t}(i')$

The regret for a series of arms $$i_1,i_2,...,i_T$$ is then: \begin{aligned} \text{Regret}(i^*) &:= \sum_{t=1}^T M_{t}(i^*) - \sum_{t=1}^T \mathbb{E}[(1-F_t)\cdot M_{t}(i_t)] \\ &= \sum_{t=1}^T (M_t(i^*) - (1-b_{t,i_t})M_t(i_t)) \\ \end{aligned}

To summarize, the learner only accumulates its net payoff from deployment steps but it is expected to come close to the maximum payoff that can be achieved using all time steps.
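Concretely, the two performance measures follow directly from these definitions. A hypothetical bookkeeping helper, with per-step values passed in as lists:

```python
def performance(m_star, m_chosen, b_chosen, flips):
    """Regret and exploration count, per the definitions above.

    m_star[t]   : M_t(i*), net payoff of the best arm
    m_chosen[t] : M_t(i_t), net payoff of the learner's arm
    b_chosen[t] : b_{t,i_t}, the sampling probability used
    flips[t]    : F_t in {0, 1}, whether step t was an exploration step
    """
    regret = sum(ms - (1 - b) * mc
                 for ms, mc, b in zip(m_star, m_chosen, b_chosen))
    return regret, sum(flips)
```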

## A proposed learning model

In designing a learning system for this kind of problem, a couple of difficulties arise. The first is that the payoff function has an extremely large range. The net payoff function $$M_t$$ has range $$[-\frac{1}{\tau},1]$$, where $$\tau$$ may be of order $$10^{-20}$$, and so when bandit algorithms like Exp3 are applied to this learning problem, their regret will include an unacceptable factor of $$1/\tau$$. So $$M_t$$ is replaced with a surrogate payoff function $$S_t$$, whose range is capped to $$[-z,1]$$:

$S_{t}(i) := R_t(i) - \min\left(\frac{C_t(i)}{\tau},z\right)$

where $$z \in [1,\infty)$$ indicates how much lost utility from catastrophes the learner tolerates.

The second difficulty is that if the learner observed $$S_t$$ on every time step, it would always output $$b_{t,i_t}=1$$, and it would use a linear number of exploration steps. To achieve a sublinear amount of exploration, the learner must instead use sampling probabilities $$b_{t,i_t}$$ whose sum is sublinear in $$T$$, and rely on estimates $$\hat{S}_1(i),...,\hat{S}_T(i)$$ of the surrogate payoffs, such that:

\begin{aligned} \hat{S}_{t}(i) & := F_t\frac{R_t(i) - \min(\frac{1}{\tau}C_t(i),z)}{b_{t,i_t}} \end{aligned} Note that this estimate depends on the sampling probabilities $$b_{t,i_t}$$, which in turn depend on the learner’s selected arms, $$i_t$$.
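The estimator’s unbiasedness ($$\mathbb{E}[\hat{S}_t] = S_t$$, proved in Lemma 3 below) can be checked numerically. This is an illustrative sketch; the particular values of `r`, `c`, `tau`, `z` and `b` are assumptions chosen for the example:

```python
import random

def surrogate(r, c, tau, z):
    """Capped surrogate payoff S_t(i) = R_t(i) - min(C_t(i)/tau, z)."""
    return r - min(c / tau, z)

def surrogate_estimate(r, c, tau, z, b, rng):
    """Importance-weighted estimate: S_hat_t(i) = F_t * S_t(i) / b_{t,i_t}."""
    f = 1 if rng.random() < b else 0          # F_t ~ Bernoulli(b)
    return f * surrogate(r, c, tau, z) / b

rng = random.Random(0)
r, c, tau, z, b = 0.7, 1e-21, 1e-20, 5.0, 0.25
s = surrogate(r, c, tau, z)                   # exact surrogate value: 0.7 - 0.1 = 0.6
mean = sum(surrogate_estimate(r, c, tau, z, b, rng)
           for _ in range(200_000)) / 200_000  # Monte Carlo average of the estimator
```

The Monte Carlo average `mean` should land close to the exact value `s`, illustrating the unbiasedness that the later concentration arguments rely on.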

The main result, which follows from Theorem 1, is that a bandit learner can achieve ($$\epsilon,\delta$$)-optimal regret for any $$\delta \in (0,1)$$ where the error $$\epsilon$$ and the number of exploration time steps are sublinear, and do not depend on the risk-tolerance, $$\tau$$.

Theorem 1. Consider a learner that selects a sequence of arms $$i_1,i_2,...,i_T$$ using the Exp3 algorithm on the payoff functions $$\hat{S}_1, ..., \hat{S}_T$$ with sampling probabilities $$b_{1,i_1},b_{2,i_2},...,b_{T,i_T}$$, such that: $b_{t,i_t} := \min(\max(\hat{a}_{t,i_t}, \gamma),1)$ where the learner uses an overestimate of the ideal sampling probability, $$\hat{a}_{t,i_t} \geq a_{t,i_t}$$, whose sum exceeds the sum of the ideal sampling probabilities by at most a constant factor $$H \in [1,\infty)$$: $\sum_{t=1}^{T} \hat{a}_{t,i_t} \leq H\sum_{t=1}^{T} a_{t,i_t}$ Then, the learner’s regret and exploration steps are probabilistically bounded: \begin{aligned} & \text{Pr}\left(\sum_{t=1}^T F_t \geq \zeta + \frac{H}{z}(2\alpha + \eta + T) \vee \sum_{t=1}^T(M_t(i^*) - (1-b_{t,i_t})M_t(i_t)) \geq 2\alpha + \eta + \frac{H}{z}(2\alpha + \eta + T) \right) \\ & \quad \leq 2\exp\left(\frac{-\gamma^2\alpha^2}{2z^2T}\right) + \exp\left(\frac{-\zeta^2}{8T}\right) \\ \end{aligned} where $$\eta = 2\frac{(z+1)}{\gamma}\sqrt{(e-1)K\ln(K)T}$$, $$\alpha >0$$, $$\zeta > 0$$, $$\gamma \in (0,1]$$ is the base sampling rate, $$K$$ is the number of bandit arms, and $$z \in [1,\infty)$$ is the amount of utility that the learner is prepared to lose from catastrophes.

In outline, our approach is to use the Exp3 adversarial bandit algorithm on the estimates $$\hat{S}_1,...,\hat{S}_T$$ of the surrogate loss function. We will begin by showing that the surrogate regret upper-bounds the actual regret (Lemma 2). We then quantify the error from approximating the surrogate regret (Lemmas 3, 4). To the estimation error, we add the regret from the Exp3 algorithm to obtain Lemma 5. The bound on exploration steps then follows as a small extension, via a bound on the sum of the sampling probabilities (Lemma 6).

## Good surrogate performance implies good deployment performance

First, we need to show that $$S_t$$ is a good surrogate for $$M_t$$. Specifically, we show that the regret with $$M_t$$ is bounded by our regret with $$S_t$$.

Lemma 2

If a learner selects arms $$i_1,...,i_T$$ and sampling probabilities $$b_{1,i_1},...,b_{T,i_T}$$ as described in Theorem 1, then its regret will be no worse than the regret with the surrogate $$S_t$$: $\text{Regret}(i^*) = \sum_{t=1}^T (M_t(i^*) - (1-b_{t,i_t})M_t(i_t)) \leq \sum_{t=1}^T (S_t(i^*) - (1-b_{t,i_t})S_t(i_t))$

Proof of Lemma 2. First, we can show that $$(1-b_{t,i_t})M_t(i_t) = (1-b_{t,i_t})S_t(i_t)$$. This is because if the catastrophic risk is ever too high, the learner will just take an exploration step.

If the catastrophic risk from some arm $$i_t$$ is unacceptably high, i.e. $$\frac{C_t(i_t)}{\tau} > z$$, then $$b_{t,i_t} = \hat{a}_{t,i_t} = a_{t,i_t} = 1$$, and so $$(1-b_{t,i_t}) = 0$$.

If the catastrophic risk is in the acceptable range, i.e. if $$\frac{C_t(i_t)}{\tau} \leq z$$, then:

\begin{aligned} M_{t}(i_t) - S_{t}(i_t) &= \left(R_t(i_t) - \frac{C_t(i_t)}{\tau}\right) - \left(R_t(i_t) - \min\left(\frac{C_t(i_t)}{\tau},z\right)\right) \\ &= 0 \end{aligned}

So for all time steps in $$\{1,2,...,T\}$$, $$(1-b_{t,i_t})(M_t(i_t) - S_t(i_t)) = 0$$, so $$\sum_{t=1}^T(1-b_{t,i_t})(M_t(i_t) - S_t(i_t)) = 0$$. It follows that: $\sum_{t=1}^T(1-b_{t,i_t})M_t(i_t) = \sum_{t=1}^T(1-b_{t,i_t})S_t(i_t)$

We must also show that $$\sum_{t=1}^T M_t(i^*) \leq \sum_{t=1}^T S_t(i^*)$$. This is true because for all $$i$$, $$S_t(i) \geq M_t(i)$$: one can see that $$M_t(i) - S_t(i) = \min\left(\frac{C_t(i)}{\tau}, z\right) - \frac{C_t(i)}{\tau} \leq 0$$.

It therefore follows that: $\text{Regret}(i^*) = \sum_{t=1}^T (M_t(i^*) - (1-b_{t,i_t})M_t(i_t)) \leq \sum_{t=1}^T (S_t(i^*) - (1-b_{t,i_t})S_t(i_t))$ ◻

## Bounds on estimation error

The next step is to bound the error that arises from estimating $$S_t$$ with $$\hat{S}_t$$.

Lemma 3. If a learner selects arms $$i_1,...,i_T$$ and sampling probabilities $$b_{1,i_1},...,b_{T,i_T}$$ as described in Theorem 1, then:

$\text{Pr}\left(\sum_{t'=1}^t(\hat{S}_{t'}(i_{t'}) - S_{t'}(i_{t'})) > \alpha\right) \leq \exp\left(\frac{-\gamma^2\alpha^2}{2z^2t}\right)$ where $$\alpha > 0$$.

Proof of Lemma 3.

First, we show that $$\hat{S}_{t}(i_t)$$ is an unbiased estimate of $$S_{t}(i_t)$$, conditional on the selected arm: \begin{aligned} \mathbb{E}[\hat{S}_{t}(i_t)] &= \mathbb{E}\left[F_t\frac{R_t(i_t)-\min\left(\frac{1}{\tau}C_t(i_t),z\right)}{b_{t,i_t}}\right] \\ &= b_{t,i_t} \frac{R_t(i_t) - \min\left(\frac{1}{\tau}C_t(i_t),z\right)}{b_{t,i_t}}\\ &= S_{t}(i_t) \end{aligned}

Then, since $$\hat{S}_{t}(i)$$ is an unbiased estimate of $$S_{t}(i)$$, we can define the following martingale:

$X_t := \sum_{t'=1}^t(\hat{S}_{t'}(i_{t'}) - S_{t'}(i_{t'}))$

Using the fact that $$S_t(i_t)$$ has range $$[-z,1]$$, we can bound $$|X_t-X_{t-1}|$$: \begin{aligned} X_t - X_{t-1} &= \hat{S}_{t}(i_t) - S_{t}(i_t) \\ &= \left(\frac{1}{b_{t,i_t}} F_t - 1\right)S_t(i_t) \\ \therefore \qquad |X_t - X_{t-1}| &\leq \max(|\frac{1}{b_{t,i_t}}-1|,|-1|)\cdot|-z| \\ &\leq \max\left(\frac{1}{\min(\max(\hat{a}_{t,i_t},\gamma),1)}-1,1\right)\cdot z \\ &\leq \max\left(\frac{1}{\gamma}-1,1\right)\cdot z \\ &\leq \frac{z}{\gamma} & \qquad (\gamma \in (0,1]) \end{aligned}

Azuma’s inequality yields the result:

$\text{Pr}\left(\sum_{t'=1}^t(\hat{S}_{t'}(i_{t'}) - S_{t'}(i_{t'})) > \alpha\right) \leq \exp\left(\frac{-\gamma^2\alpha^2}{2z^2t}\right)$ where $$\alpha > 0$$. ◻

We will use the same argument to bound the estimation error when a best arm $$i^*$$ is repeatedly selected.

Lemma 4. Suppose that a learner selects arms $$i_1,...,i_T$$ and sampling probabilities $$b_{1,i_1},...,b_{T,i_T}$$ as described in Theorem 1, and let $$i^* \in \argmax_{i' \in \mathcal{I}} \sum_{t=1}^T M_{t}(i')$$ be a best arm. Then the estimated payoffs for the best arm can be bounded by:

\begin{aligned} \text{Pr}\left(\sum_{t'=1}^t(S_{t'}(i^*) - \hat{S}_{t'}(i^*)) > \alpha\right) \leq \exp\left(\frac{-\gamma^2\alpha^2}{2z^2t}\right) \end{aligned} where $$\alpha > 0$$.

Proof of Lemma 4.

The proof is similar to Lemma 3. Since $$\hat{S}_{t'}(i^*)$$ is an unbiased estimate of $$S_{t'}(i^*)$$, we have the martingale $$Y_t = \sum_{t'=1}^t(S_{t'}(i^*) - \hat{S}_{t'}(i^*))$$. As in Lemma 3, this martingale is bounded by $$|Y_t-Y_{t-1}| \leq \frac{z}{\gamma}$$. The application of Azuma’s inequality yields the result.◻

## A bound on surrogate regret

In order to prove Theorem 1, we will need to show that the Exp3 algorithm achieves low regret with respect to $$\hat{S}_t$$.

Its regret guarantee states that an Exp3 learner that selects some arms $$i_1,i_2,...,i_T$$ on some payoff functions $$\hat{S}_1, ..., \hat{S}_T$$ that have range $$\left[\frac{-z}{\gamma},\frac{1}{\gamma}\right]$$ will have regret bounded as [1]:

\begin{aligned} \sup_{i' \in \mathcal{I}} \sum_{t=1}^T(\hat{S}_{t}(i') - \hat{S}_{t}(i_t)) &\leq 2 \left(\frac{1}{\gamma}-\frac{-z}{\gamma}\right)\sqrt{e-1}\sqrt{TK\ln K} \nonumber \\ & \leq 2\frac{(z+1)}{\gamma}\sqrt{e-1}\sqrt{TK\ln K} \end{aligned}
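For readers unfamiliar with Exp3, here is a minimal sketch of the algorithm in the payoff-maximising formulation of Auer et al., with uniform-exploration mixing `gamma_mix` and payoffs assumed to lie in $$[0,1]$$. This is a generic textbook sketch, not the exact rescaled variant applied to $$\hat{S}_t$$ in this post:

```python
import math
import random

def exp3(T, K, payoff_fn, gamma_mix, rng):
    """Minimal Exp3: exponential weights over K arms with importance-weighted payoffs.

    payoff_fn(t, arm) must return the chosen arm's payoff in [0, 1].
    """
    weights = [1.0] * K
    chosen = []
    for t in range(T):
        total = sum(weights)
        # mix the weight distribution with uniform exploration
        probs = [(1 - gamma_mix) * w / total + gamma_mix / K for w in weights]
        arm = rng.choices(range(K), weights=probs)[0]
        chosen.append(arm)
        # importance-weighted payoff estimate for the chosen arm
        x_hat = payoff_fn(t, arm) / probs[arm]
        weights[arm] *= math.exp(gamma_mix * x_hat / K)
    return chosen
```

Run on a stream where one arm dominates, the weight of that arm grows exponentially and the algorithm concentrates its play on it.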

Next, we show that the Exp3 algorithm will perform well on the surrogate problem.

Lemma 5

If a learner selects arms $$i_1,...,i_T$$ and sampling probabilities $$b_{1,i_1},...,b_{T,i_T}$$ as described in Theorem 1, then its performance on the surrogate problem will be probabilistically bounded: $\text{Pr}\left(\sum_{t=1}^T (S_{t}(i^*) - S_{t}(i_t)) \geq 2\alpha + \eta\right) \leq 2\exp\left(\frac{-\gamma^2\alpha^2}{2z^2T}\right)$ where $$\eta = 2\frac{(z+1)}{\gamma}\sqrt{(e-1)K\ln(K)T}$$, and $$\alpha >0$$.

Proof of Lemma 5. Using the union bound, we have that: \begin{aligned} & \text{Pr}\left(\sum_{t=1}^T (S_{t}(i^*) - S_{t}(i_t)) \geq 2\alpha + \eta\right) \\ & = \text{Pr}\left( \sum_{t=1}^T (S_{t}(i^*) - \hat{S}_{t}(i^*) + \hat{S}_{t}(i^*) - \hat{S}_{t}(i_t) + \hat{S}_{t}(i_t) - S_{t}(i_t)) \geq 2\alpha + \eta\right) \\ & \leq \text{Pr}\left( \sum_{t=1}^T (S_{t}(i^*) - \hat{S}_{t}(i^*)) \geq \alpha \vee \sum_{t=1}^T(\hat{S}_{t}(i^*) - \hat{S}_{t}(i_t)) \geq \eta \vee \sum_{t=1}^T (\hat{S}_{t}(i_t) - S_{t}(i_t)) \geq \alpha\right) \\ & \leq \text{Pr}\left( \sum_{t=1}^T (S_{t}(i^*) - \hat{S}_{t}(i^*)) \geq \alpha\right) + \text{Pr}\left(\sum_{t=1}^T(\hat{S}_{t}(i^*) - \hat{S}_{t}(i_t)) \geq \eta\right) + \text{Pr}\left(\sum_{t=1}^T (\hat{S}_{t}(i_t) - S_{t}(i_t)) \geq \alpha\right) \\ & \leq \exp\left(\frac{-\gamma^2\alpha^2}{2z^2T}\right) + 0 + \exp\left(\frac{-\gamma^2\alpha^2}{2z^2T}\right) \qquad \qquad \text{(Lemmas 3, 4 and the Exp3 bound)}\\ & = 2\exp\left(\frac{-\gamma^2\alpha^2}{2z^2T}\right) \end{aligned} ◻

In order to prove a bound on exploration steps in Theorem 1, we must first prove that the sum of sampling probabilities is not too high.

## A bound on sampling probabilities

Lemma 6.

If a learner selects arms $$i_1,...,i_T$$ and sampling probabilities $$b_{1,i_1},...,b_{T,i_T}$$ as described in Theorem 1, and furthermore, $\sum_{t=1}^T (S_t(i^*) - S_t(i_t)) \leq 2\alpha + \eta$

where $$\eta = 2\frac{(z+1)}{\gamma}\sqrt{(e-1)K\ln(K)T}$$ and $$\alpha > 0$$, then, the sum of the sampling probabilities used is bounded by:

$\sum_{t=1}^T b_{t,i_t} \leq \frac{H}{z}(2\alpha + \eta + T)$

Proof of Lemma 6.

First note that a best arm $$i^*$$ must have non-negative total score on the surrogate problem:

\begin{aligned} & \exists i' \in \mathcal{I} \text{ s.t. } \forall t \; C_t(i') = 0 && \text{(assumption)} \\ \therefore \quad & \exists i' \in \mathcal{I} \text{ s.t. } \forall t \; M_t(i') \geq 0 && (R_t(i')\geq 0) \\ \therefore \quad & \sum_{t=1}^T M_t(i^*) \geq 0 && (i^* \text{ maximizes } \textstyle\sum_t M_t) \\ \therefore \quad & \sum_{t=1}^T S_t(i^*) \geq 0 && (S_t(i) \geq M_t(i))\\ \end{aligned}

If $$\sum_{t=1}^T (S_t(i^*) - S_t(i_t)) \leq 2\alpha + \eta$$, then: \begin{aligned} \sum_{t=1}^T (- S_t(i_t)) &\leq 2\alpha + \eta && \left(\textstyle\sum_t S_t(i^*) \geq 0\right) \\ \sum_{t=1}^T (-R_t(i_t) + za_{t,i_t}) &\leq 2\alpha + \eta && \left(\min\left(\tfrac{C_t(i_t)}{\tau},z\right) = za_{t,i_t}\right) \\ -T + \sum_{t=1}^T za_{t,i_t} &\leq 2\alpha + \eta && (R_t(i_t) \leq 1) \\ \therefore \qquad \sum_{t=1}^T a_{t,i_t} &\leq \frac{1}{z}(2\alpha + \eta + T) \\ \sum_{t=1}^T \frac{\hat{a}_{t,i_t}}{H} &\leq \frac{1}{z}(2\alpha + \eta + T) && \left(\textstyle\sum_t \hat{a}_{t,i_t} \leq H\sum_t a_{t,i_t}\right) \\ \therefore \qquad \sum_{t=1}^T b_{t,i_t} &\leq \frac{H}{z}(2\alpha + \eta + T) \end{aligned}

where $$\eta = 2\frac{(z+1)}{\gamma}\sqrt{(e-1)K\ln(K)T}$$, and $$\alpha > 0$$. ◻

## Bounding the overall regret

Lemma 7.

If a learner selects arms $$i_1,...,i_T$$ and sampling probabilities $$b_{1,i_1},...,b_{T,i_T}$$ as described in Theorem 1, and furthermore, $\sum_{t=1}^T (S_t(i^*) - S_t(i_t)) \leq 2\alpha + \eta$

where $$\eta = 2\frac{(z+1)}{\gamma}\sqrt{(e-1)K\ln(K)T}$$, and $$\alpha > 0$$, then regret will be bounded as:

$\sum_{t=1}^T (M_t(i^*) - (1-b_{t,i_t})M_t(i_t)) \leq 2\alpha + \eta + \frac{H}{z}(2\alpha + \eta + T)$

Proof of Lemma 7.

If $$\sum_{t=1}^T (S_t(i^*) - S_t(i_t)) \leq 2\alpha + \eta$$ where $$\eta = 2\frac{(z+1)}{\gamma}\sqrt{(e-1)K\ln(K)T}$$, and $$\alpha > 0$$, then Lemma 6 gives us that: \begin{aligned} \sum_{t=1}^T b_{t,i_t} &\leq \frac{H}{z}(2\alpha + \eta + T) \\ \therefore \qquad \sum_{t=1}^T b_{t,i_t} S_t(i_t) &\leq \frac{H}{z}(2\alpha + \eta + T) && (S_t(i_t) \leq 1) \\ \therefore \qquad \sum_{t=1}^T (S_t(i^*) - S_t(i_t)) + \sum_{t=1}^T b_{t,i_t} S_t(i_t) &\leq 2\alpha + \eta + \frac{H}{z}(2\alpha + \eta + T) \\ \sum_{t=1}^T (S_t(i^*) - (1-b_{t,i_t})S_t(i_t)) &\leq 2\alpha + \eta + \frac{H}{z}(2\alpha + \eta + T) \end{aligned}

Using Lemma 2, this bound on the surrogate regret implies the same bound on the learner’s actual regret: $\sum_{t=1}^T (M_t(i^*) - (1-b_{t,i_t})M_t(i_t)) \leq 2\alpha + \eta + \frac{H}{z}(2\alpha + \eta + T)$ ◻

## Completing the proof of Theorem 1

Proof of Theorem 1. For the learner to succeed, two things must happen: its sampling error must not cause too much regret, and it must not take too many exploration steps:

\begin{aligned} & \text{Pr}\left(\sum_{t=1}^T F_t \geq \zeta + \frac{H}{z}(2\alpha + \eta + T) \vee \sum_{t=1}^T(M_t(i^*) - (1-b_{t,i_t})M_t(i_t)) \geq 2\alpha + \eta + \frac{H}{z}(2\alpha + \eta + T) \right) \\ & \quad \leq \text{Pr}\left(\sum_{t=1}^T (F_t - b_{t,i_t}) \geq \zeta \vee \sum_{t=1}^T b_{t,i_t} \geq \frac{H}{z}(2\alpha + \eta + T) \vee \sum_{t=1}^T(S_t(i^*) - S_t(i_t)) \geq 2\alpha + \eta\right) \\ & \quad \leq \text{Pr}\left(\sum_{t=1}^T (F_t - b_{t,i_t}) \geq \zeta \vee \sum_{t=1}^T(S_t(i^*) - S_t(i_t)) \geq 2\alpha + \eta\right) \qquad\qquad (\text{Lemma 6}) \\ & \quad \leq \text{Pr}\left(\sum_{t=1}^T (F_t - b_{t,i_t}) \geq \zeta\right) + \text{Pr}\left(\sum_{t=1}^T(S_t(i^*) - S_t(i_t)) \geq 2\alpha + \eta\right) \qquad (\text{Union bound}) \\ \end{aligned} From Lemma 5, we can bound the surrogate performance as $$\text{Pr}\left(\sum_{t=1}^T(S_t(i^*) - S_t(i_t)) \geq 2\alpha + \eta\right) \leq 2\exp\left(\frac{-\gamma^2\alpha^2}{2z^2T}\right)$$. Using Azuma’s inequality, we can also bound $$\text{Pr}\left(\sum_{t=1}^T (F_t - b_{t,i_t}) \geq \zeta\right)$$ to complete the result. Since $$F_t$$ is an unbiased estimate of $$b_{t,i_t}$$, we have the martingale:

$Z_t := \sum_{t'=1}^t (F_{t'}-b_{t',i_{t'}})$ where $$|Z_t-Z_{t-1}| \leq 2$$.

By Azuma’s inequality, we have $$\text{Pr}\left(\sum_{t=1}^T(F_t - b_{t,i_t}) \geq \zeta\right) \leq \exp\left(\frac{-\zeta^2}{8T}\right)$$, where $$\zeta > 0$$. So we obtain:

\begin{aligned} & \text{Pr}\left(\sum_{t=1}^T F_t \geq \zeta + \frac{H}{z}(2\alpha + \eta + T) \vee \sum_{t=1}^T(M_t(i^*) - (1-b_{t,i_t})M_t(i_t)) \geq 2\alpha + \eta + \frac{H}{z}(2\alpha + \eta + T) \right) \\ & \quad \leq 2\exp\left(\frac{-\gamma^2\alpha^2}{2z^2T}\right) + \exp\left(\frac{-\zeta^2}{8T}\right) \\ \end{aligned} ◻

## Achieving sublinear regret and sublinear exploration

What remains is to set the learner’s parameters so as to achieve sublinear exploration and regret. If we set the parameters to $$\gamma = T^{-\frac{1}{6}}$$ and $$z = T^\frac{1}{6}$$, and the constants to $$\alpha = \sqrt{-2\log(\delta/3)}T^\frac{5}{6}$$ and $$\zeta = 2\sqrt{-2\log(\delta/3)T}$$, then, writing $$\nu = \sqrt{(e-1)K\ln K}$$, we have $$\eta = 2\nu(T^\frac{1}{6} + 1)T^{\frac{2}{3}}$$. This yields an algorithm that will, with probability $$1-\delta$$, achieve an error of less than $$\epsilon$$, where:

\begin{aligned} \epsilon &= 2\sqrt{-2\log(\delta/3)}T^\frac{5}{6} + 2\nu(T^\frac{1}{6} + 1)T^\frac{2}{3} + \frac{H}{T^\frac{1}{6}}\left(2\sqrt{-2\log(\delta/3)}T^\frac{5}{6}+2\nu(T^\frac{1}{6} + 1)T^{\frac{2}{3}}+T\right) \\ &= (2\sqrt{-2\log(\delta/3)}+2\nu + H)T^\frac{5}{6} + \mathcal{O}(T^\frac{2}{3}) \\ &= \mathcal{O}\left(\left(\sqrt{\log(1/\delta)}+\sqrt{K\ln(K)}\right)T^\frac{5}{6}\right) \\ \end{aligned}

and the number of exploration steps is bounded by: \begin{aligned} \sum_{t=1}^T F_t &\leq 2\sqrt{-2\log(\delta/3)T} + \frac{H}{T^\frac{1}{6}}\left(2\sqrt{-2\log(\delta/3)}T^\frac{5}{6}+2\nu(T^\frac{1}{6} + 1)T^{\frac{2}{3}}+T\right) \\ &= \mathcal{O}(T^\frac{5}{6}) \end{aligned} That is to say, the regret and the number of exploration steps are both of order $$\mathcal{O}(T^\frac{5}{6})$$ with probability $$1-\delta$$, and neither depends on the risk-tolerance, $$\tau$$.
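The bookkeeping above can be sketched directly in code. This is an illustrative helper for plugging in the parameter settings, with `nu` standing for $$\sqrt{(e-1)K\ln K}$$ and `H` the overestimate factor (defaulting to 1):

```python
import math

def bounds(T, K, delta, H=1.0):
    """Regret bound epsilon and exploration bound from the parameter settings above."""
    gamma = T ** (-1 / 6)                    # base sampling rate
    z = T ** (1 / 6)                         # tolerated utility loss from catastrophes
    alpha = math.sqrt(-2 * math.log(delta / 3)) * T ** (5 / 6)
    zeta = 2 * math.sqrt(-2 * math.log(delta / 3) * T)
    eta = 2 * (z + 1) / gamma * math.sqrt((math.e - 1) * K * math.log(K) * T)
    epsilon = 2 * alpha + eta + (H / z) * (2 * alpha + eta + T)
    exploration = zeta + (H / z) * (2 * alpha + eta + T)
    return epsilon, exploration
```

Because both bounds grow like $$T^\frac{5}{6}$$, the per-step regret and per-step exploration rate shrink as the horizon grows.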

## Discussion

The main aim of this post has been to develop a new model in which a bandit algorithm can learn to avoid catastrophes. We have seen that a learner can avoid catastrophes by a process that switches between exploration and deployment steps. The modelling assumptions were that the learner can access overestimates $$\hat{a}_{t,i_t}$$ of the ideal rate at which to sample for a catastrophe, such that these overestimates are not, in aggregate, too much larger than the ideal rates: $$\sum_{t=1}^T \hat{a}_{t,i_t} \leq H\sum_{t=1}^T a_{t,i_t}$$, and that there is some arm that carries no catastrophic risk.

These assumptions could be difficult to satisfy in practice. Many AI systems can avoid causing catastrophes by remaining immobile or inert, but others, such as self-driving cars and spy drones, do not have this luxury. Obtaining the estimates of catastrophe risk would also be very difficult. In general, a lot of data would be required, which might have to be labelled using simulations and/or by human operators. There is further work to be done in trying to relax some of these assumptions.

Since this is just a proof of concept, the regret and exploration bounds are both of order $$T^\frac{5}{6}$$, and we have left these bounds very loose. Ultimately, the aim would be to achieve efficiency nearer to the order of $$T^\frac{2}{3}$$, in keeping with a similar class of games lacking “local observability”, as described by Bartók et al [2]. We have some ideas about how to achieve this, and will post our results.

# References

1. The bound on Exp3 is given in the discussion of Corollary 4.2, pages 8-9, of Auer, Peter, et al. “Gambling in a rigged casino: The adversarial multi-armed bandit problem.” Proceedings of the 36th Annual Symposium on Foundations of Computer Science. IEEE, 1995.
2. See Bartók, Gábor, et al. “Partial monitoring-classification, regret bounds, and algorithms.” Mathematics of Operations Research 39.4 (2014): 967-997.
