by Vadim Kosoy 344 days ago

The only assumptions about the prior are that it is supported on a countable set of hypotheses, and that in each hypothesis the advisor is $$\beta$$-rational (for some fixed $$\beta(t)=\omega(t^{2/3})$$).

There is no such thing as infinitely negative value in this framework. The utility function is bounded because of the geometric time discount (and because the momentary rewards are assumed to be bounded); in fact, I normalize it to lie in $$[0,1]$$ (see the equation defining $$\mathrm{U}$$ at the beginning of the Results section).

Falling into a trap is an event associated with $$\Omega(1)$$ loss (i.e. loss that remains constant as $$t$$ goes to $$\infty$$). Therefore, we can risk such an event as long as its probability is $$o(1)$$ (i.e. goes to $$0$$ as $$t$$ goes to $$\infty$$). This means that as $$t$$ grows, the agent will spend more rounds delegating to the advisor, but for any given $$t$$ it will not delegate on most rounds (not even on most of the important rounds, i.e. during the first $$O(t)$$-length "horizon"). In fact, you can see in the proof of Lemma A that the policy I construct delegates on $$O(t^{2/3})$$ rounds.

As a simple example, consider again the toy environment from before, together with the environments you get from it by applying a permutation to the set of actions $$\mathcal{A}$$. This gives a hypothesis class of 6 environments. The corresponding DIRL agent will spend $$O(t^{2/3})$$ rounds delegating, observe which action the advisor chooses most frequently, and perform that action forevermore. (The phenomenon that all delegations happen in the beginning is specific to this toy example, because it only has 1 non-trap state.)

If you mean this paper, I saw it.
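For concreteness, here is a minimal sketch of the behaviour described in that toy example; it is not the construction from the proof of Lemma A, and names like `ACTIONS`, `make_advisor`, and `toy_dirl_policy` are illustrative assumptions rather than anything from the paper. It only shows the "delegate on ~$$t^{2/3}$$ rounds, then commit to the advisor's modal action" pattern.

```python
import random
from collections import Counter

# Illustrative sketch only: a "toy DIRL" agent for the 6-environment
# hypothesis class obtained by permuting a 3-element action set.

ACTIONS = ["a", "b", "c"]  # the action set A; 3! = 6 permuted environments

def make_advisor(good_action, eps=0.1):
    """A noisy advisor: recommends the safe/optimal action most of the time."""
    def advise():
        if random.random() < eps:
            return random.choice(ACTIONS)
        return good_action
    return advise

def toy_dirl_policy(t, advise):
    """Delegate on ~t^(2/3) rounds, then commit to the advisor's modal action."""
    n_delegate = int(t ** (2 / 3))
    counts = Counter(advise() for _ in range(n_delegate))
    committed = counts.most_common(1)[0][0]
    # After the delegation phase the agent never delegates again:
    # it plays `committed` on every remaining round.
    return committed, n_delegate

if __name__ == "__main__":
    advise = make_advisor(good_action="b")
    for t in (10**3, 10**4, 10**5):
        action, n = toy_dirl_policy(t, advise)
        print(f"t={t}: delegated on {n} rounds, committed to action {action!r}")
```

Note that the delegation budget $$t^{2/3}$$ grows with $$t$$, but the fraction of rounds spent delegating, $$t^{2/3}/t = t^{-1/3}$$, goes to $$0$$.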

by Tom Everitt 335 days ago

My confusion is the following. Premises (*) and inferences (=>):

(*) The primary way for the agent to avoid traps is to delegate to a soft-maximiser.
(*) Any action with boundedly negative utility, a soft-maximiser will take with positive probability.
(*) Actions leading to traps do not have infinitely negative utility.
=> The agent will fall into traps with positive probability.
(*) If the agent falls into a trap with positive probability, then it will have linear regret.
=> The agent will have linear regret.

So when you say in the beginning of the post that "a Bayesian DIRL agent is guaranteed to attain most of the value", you must mean that in a different sense than a regret sense?
by Vadim Kosoy 334 days ago

Your confusion arises because you are thinking about regret in an anytime setting. In an anytime setting, there is a fixed policy $$\pi$$; we measure the expected reward of $$\pi$$ over a time interval $$t$$ and compare it to the optimal expected reward over the same time interval. If $$\pi$$ has probability $$p > 0$$ of walking into a trap, regret has the linear lower bound $$\Omega(pt)$$.

On the other hand, I am talking about policies $$\pi_t$$ that explicitly depend on the parameter $$t$$ (I call this a "metapolicy"). Both the advisor and the agent policies are like that. As $$t$$ goes to $$\infty$$, the probability $$p(t)$$ of walking into a trap goes to $$0$$, so $$p(t)\,t$$ is a sublinear function of $$t$$.

A second difference from the usual definition of regret is that I use an infinite sum of rewards with geometric time discount $$e^{-1/t}$$ instead of a step-function time discount that cuts off at $$t$$. However, this second difference is entirely inessential, and all the theorems work about the same with a step-function time discount.
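To make the arithmetic concrete (the decay rate below is an illustrative choice for this example, not one taken from the paper): for a fixed policy $$\pi$$ with trap probability $$p > 0$$, the regret over a horizon of length $$t$$ is at least $$\Omega(p t) = \Omega(t)$$, i.e. linear. For a metapolicy with, say, $$p(t) = O(t^{-1/3})$$, the trap contribution to regret is $$p(t) \cdot t = O(t^{-1/3}) \cdot t = O(t^{2/3}) = o(t)$$, which is sublinear even though each individual $$\pi_t$$ still has a positive probability of falling into a trap.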
