Intelligent Agent Foundations Forum
by Tom Everitt 92 days ago

My confusion is the following:

Premises (•) and inferences (=>):

  • The primary way for the agent to avoid traps is to delegate to a soft-maximiser.

  • A soft-maximiser takes any action with boundedly negative utility with positive probability (illustrated by the sketch after this list).

  • Actions leading to traps do not have infinitely negative utility.

=> The agent will fall into traps with positive probability.

  • If the agent falls into a trap with positive probability, then it will have linear regret.

=> The agent will have linear regret.
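
To make the second premise concrete, here is a minimal sketch. It assumes “soft-maximiser” means something like a Boltzmann/softmax policy, which is my reading for illustration and not necessarily the exact construction in the post; the utilities and temperature are made up.

```python
import numpy as np

def softmax_policy(utilities, beta=1.0):
    """Boltzmann ("soft-max") distribution over actions at inverse temperature beta."""
    z = beta * np.asarray(utilities, dtype=float)
    z -= z.max()                      # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Two actions: a safe one (utility 1) and a trap with large but *bounded* negative utility.
utilities = [1.0, -100.0]
print(softmax_policy(utilities))
# The trap action gets probability exp(-101) / (1 + exp(-101)): tiny, but strictly positive.
```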

So when you say at the beginning of the post that “a Bayesian DIRL agent is guaranteed to attain most of the value”, you must mean this in some sense other than regret?



by Vadim Kosoy 92 days ago

Your confusion is because you are thinking about regret in an anytime setting. In an anytime setting, there is a fixed policy \(\pi\); we measure the expected reward of \(\pi\) over a time interval \(t\) and compare it to the optimal expected reward over the same interval. If \(\pi\) has probability \(p > 0\) of walking into a trap, the regret has the linear lower bound \(\Omega(pt)\).
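
A toy version of that lower bound (my own illustration, with a made-up environment: the safe action pays 1 per step, which is also the optimal per-step reward, and the trap is absorbing with reward 0 forever after):

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_fixed_policy(p_trap, t):
    """One rollout of a fixed policy that takes the trap action with probability
    p_trap at each step; once trapped, all subsequent rewards are 0, while the
    safe action (and the optimal policy) pays 1 per step."""
    reward, trapped = 0.0, False
    for _ in range(t):
        trapped = trapped or (rng.random() < p_trap)
        reward += 0.0 if trapped else 1.0
    return reward

p = 0.01
for t in (100, 1000, 10000):
    mean_reward = np.mean([rollout_fixed_policy(p, t) for _ in range(100)])
    print(t, (t - mean_reward) / t)   # regret / t stays bounded away from 0
```

Because \(p\) does not shrink as \(t\) grows, the ratio regret\(/t\) never vanishes, which is exactly the \(\Omega(pt)\) behaviour.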

On the other hand, I am talking about policies \(\pi_t\) that explicitly depend on the parameter \(t\) (I call this a “metapolicy”). Both the advisor and the agent policies are like that. As \(t\) goes to \(\infty\), the probability \(p(t)\) of walking into a trap goes to \(0\), so \(p(t)\,t\) is a sublinear function of \(t\).
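
For instance, with a hypothetical schedule \(p(t) = t^{-1/2}\) (purely illustrative; the actual rate comes from the theorem), the trap contribution to regret is at most \(p(t)\,t = \sqrt{t}\), which is sublinear:

```python
def trap_regret_bound(t, exponent=0.5):
    """Illustrative bound: if the metapolicy pi_t walks into a trap with
    probability p(t) = t**(-exponent), and being trapped forfeits at most the
    whole horizon, the trap contribution to regret is <= p(t) * t."""
    p_t = t ** (-exponent)
    return p_t * t                      # = t**(1 - exponent), sublinear for exponent > 0

for t in (10**2, 10**4, 10**6):
    print(t, trap_regret_bound(t) / t)  # the ratio p(t) tends to 0
```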

A second difference from the usual definition of regret is that I use an infinite sum of rewards with geometric time discount \(e^{-1/t}\) instead of a step-function time discount that cuts off at \(t\). However, this second difference is entirely inessential, and all the theorems work about the same with a step-function time discount.
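
Concretely, the geometric discount assigns total weight \(\sum_{k \ge 0} e^{-k/t} = 1/(1 - e^{-1/t}) \approx t\), so it behaves like a soft horizon of length \(t\); a quick check:

```python
import numpy as np

for t in (10, 100, 1000):
    gamma = np.exp(-1.0 / t)
    geometric_total = 1.0 / (1.0 - gamma)   # total weight under discount gamma = e^{-1/t}
    step_total = float(t)                   # total weight of a step-function cutoff at t
    print(t, geometric_total, geometric_total / step_total)  # ratio tends to 1
```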



