Intelligent Agent Foundations Forum
Indifference and compensatory rewards
discussion post by Stuart Armstrong

It occurred to me that there is a framework in which all “indifference” results can be seen as corrective rewards, covering both indifference to utility function changes and indifference to policy changes.

Imagine that the agent has reward \(R_0\) and follows policy \(\pi_0\), and we want to change it so that it has reward \(R_1\) and follows policy \(\pi_1\).

Then the corrective reward we need to pay it, so that it doesn’t attempt to resist or cause that change, is simply the difference between the two expected values:

  • \(V(R_0|\pi_0)-V(R_1|\pi_1)\),

where \(V\) is the agent’s own valuation of the expected reward, conditional on the policy.
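As a concrete illustration, here is a minimal sketch (a toy example, not code from the post; all function names and parameters are illustrative assumptions) that estimates \(V(R|\pi)\) by Monte Carlo rollouts and then computes the compensatory reward as the difference of the two valuations:

```python
def estimate_value(reward_fn, policy, transition, start_state,
                   gamma=0.9, horizon=50, n_rollouts=200):
    """Monte Carlo estimate of V(R | pi): the agent's expected discounted
    reward when it follows `policy` and is scored by `reward_fn`."""
    total = 0.0
    for _ in range(n_rollouts):
        s, discount, ret = start_state, 1.0, 0.0
        for _ in range(horizon):
            a = policy(s)
            s_next = transition(s, a)
            ret += discount * reward_fn(s, a, s_next)
            discount *= gamma
            s = s_next
        total += ret
    return total / n_rollouts


def compensatory_reward(R0, pi0, R1, pi1, transition, start_state):
    # V(R0 | pi0) - V(R1 | pi1): the one-off payment that leaves the agent's
    # valuation unchanged by the switch from (R0, pi0) to (R1, pi1).
    return (estimate_value(R0, pi0, transition, start_state)
            - estimate_value(R1, pi1, transition, start_state))
```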

This explains why off-policy reward-based agents are already safely interruptible: since we change the policy, not the reward, \(R_0=R_1\). And since off-policy agents have value estimates that are indifferent to the policy followed, \(V(R_0|\pi_0)=V(R_1|\pi_1)\), so the compensatory reward is zero.
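A small sketch of that last point, under the assumption of a tabular off-policy agent (the Q-table and state names below are purely illustrative): such an agent values a state as \(\max_a Q(s,a)\), which does not depend on the policy it is actually made to follow, so interrupting the policy while keeping the reward changes nothing in its valuation.

```python
def off_policy_value(Q, state):
    # V(s) = max_a Q(s, a), regardless of whether the agent acts greedily
    # (pi_0) or is interrupted and steered onto some other policy (pi_1).
    return max(Q[state].values())


Q = {"s": {"continue": 1.0, "shutdown": 0.0}}   # toy learned Q-values
v_under_pi0 = off_policy_value(Q, "s")           # agent left alone
v_under_pi1 = off_policy_value(Q, "s")           # agent interrupted
print(v_under_pi0 - v_under_pi1)                 # 0.0: no compensation needed
```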


