Intelligent Agent Foundations Forum
CIRL Wireheading
post by Tom Everitt 21 days ago | Abram Demski and Stuart Armstrong like this | 1 comment

Cooperative inverse reinforcement learning (CIRL) generated a lot of attention last year, as it seemed to do a good job aligning an agent’s incentives with its human supervisor’s. Notably, it led to an elegant solution to the shutdown problem.

The implications for the wireheading problem were less clear. Some argued that since the agent only used its observations as evidence about the reward (rather than optimising the observations directly as in RL), CIRL should avoid the wireheading problem.

In this post I want to show that CIRL does not avoid the wireheading problem.

RL Wireheading

Let’s first consider what wireheading in RL looks like from an “MDP perspective”.

MDP wireheading: An agent wireheads if it’s in a state where the observed reward (the reward reported by its sensors) is different from the true reward (the reward assigned to the state by a human supervisor).

For example, consider a highly intelligent RL agent that hijacks its reward channel and feeds itself full reward. In the “MDP perspective”, this means that the agent finds a way to a state where there is high observed reward, but low true reward (since the supervisor would prefer the agent doing something else).
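
To make the definition concrete, here is a minimal sketch (my own illustration, not code from the post; the state names and reward numbers are made up) of an MDP in which one state's observed reward diverges from its true reward, so an agent optimising the observed signal settles in the wireheaded state:

```python
# Hypothetical illustration of MDP wireheading: the corrupt state reports a
# high reward to the agent's sensors, but the supervisor assigns it low true reward.

observed_reward = {"s_normal": 1.0, "s_corrupt": 10.0}  # reward reported by sensors
true_reward     = {"s_normal": 1.0, "s_corrupt": 0.0}   # reward the supervisor assigns

def preferred_state(reward):
    """State a free-moving agent maximising this reward signal would settle in."""
    return max(reward, key=reward.get)

print(preferred_state(observed_reward))  # -> "s_corrupt": the agent wireheads
print(preferred_state(true_reward))      # -> "s_normal": what the supervisor wants
```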

IRL Wireheading

If we accept that RL agents can subvert their sensory data, then we should also accept that CIRL agents can subvert theirs. In both cases, this just means that the agents can find their way to states where the observation doesn’t match the truth. This can lead to the existence of wireheaded states for CIRL agents.

Concrete Example

Let there be two states, \(s_1\) and \(s_2\). In each state, the agent can choose between the actions \(a^R_1\), \(a^R_2\), and \(w\). The action \(a^R_i\) takes the agent to state \(s_i\) with certainty, for \(i=1,2\). The action \(w\) lets the human decide. The human has two actions \(a^H_1\) and \(a^H_2\) that only matter when the agent chooses \(w\), in which case the transition probabilities are given by the following picture:

[Figure omitted: arrows show the transitions induced by different actions, with labels giving the probabilities for stochastic transitions.] The agent knows the transition probabilities.

Assume that observations in \(s_2\) are corrupted, while observations in \(s_1\) are not. The supervisor prefers the non-corrupt state \(s_1\). Neither of these facts is available to the agent. The agent assumes that states are non-corrupt unless there is evidence to the contrary, and tries to infer the supervisor’s preferences from his actions.

In the non-corrupt state \(s_1\), the agent (correctly) observes the supervisor taking either action \(a^H_1\) or \(a^H_2\) (both with the same effect). In the corrupt state \(s_2\), the supervisor takes action \(a^H_2\) trying to move to \(s_1\), but the agent incorrectly observes the action as \(a^H_1\). Based on the agent’s observations, the best explanation is that the supervisor prefers \(s_2\) to \(s_1\), i.e. that the agent is in a high-reward state.

After an initial learning phase with \(w\), the best policy for the agent is to always choose \(a^R_2\) and stay in \(s_2\). This is analogous to an RL agent finding a corrupt, high-reward state and preferring to take actions that keep it there.
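
Here is a rough sketch of the inference step. It is my own construction: the likelihood model, the \(\epsilon\) parameter, and all names below are stand-ins for the transition probabilities in the omitted figure, chosen only to reproduce the qualitative conclusion of the example.

```python
# Hypothetical sketch of the agent's inference in the two-state example.
# The agent assumes its observations are accurate and that the supervisor
# acts near-rationally, then updates on the human actions observed under w.

hypotheses = ["prefers_s1", "prefers_s2"]

def likelihood(hyp, state, observed_action, eps=0.05):
    """P(observed human action | hypothesis, state) under the agent's model."""
    if state == "s2":
        # Under the agent's model, a supervisor who prefers s2 stays (a^H_1),
        # and one who prefers s1 tries to leave (a^H_2).
        wants_to_stay = (hyp == "prefers_s2")
        stays = (observed_action == "a_H_1")
        return 1 - eps if wants_to_stay == stays else eps
    # In s1 both human actions have the same effect, so they are uninformative.
    return 0.5

# What actually happens: in the corrupt state s2 the supervisor plays a^H_2
# (trying to reach s1), but the agent incorrectly observes a^H_1 every time.
observations = [("s2", "a_H_1")] * 10

posterior = {h: 1.0 for h in hypotheses}  # uniform (unnormalised) prior
for state, action in observations:
    for h in hypotheses:
        posterior[h] *= likelihood(h, state, action)

total = sum(posterior.values())
print({h: p / total for h, p in posterior.items()})
# -> "prefers_s2" dominates, so the agent's best policy is to keep choosing a^R_2.
```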

Some Observations

The fact that the supervisor cannot reach \(s_2\) from \(s_1\) means that no information about the relative reward between \(s_1\) and \(s_2\) can be gained while in the non-corrupt state \(s_1\). Letting the agent trust a reward estimate of a state only after it has multiple sources of evidence about it may help somewhat. However, a similar example can still be constructed by replacing \(s_2\) with a cluster of mutually consistent states.

Credits

The example was developed together with Victoria Krakovna, and will be part of our upcoming IJCAI paper on wireheading.



by Stuart Armstrong 11 days ago

> but the agent incorrectly observes the action

It’s a bit annoying that this has to rely on an incorrect observation. Why not replace the human action, in state \(s_2\), with a simple automated system that chooses \(a^H_1\)? It’s an easy mistake to make while programming, and the agent has no fundamental understanding of the difference between the human and an imperfect automated system.

Basically, if the human acts in perfect accordance with their preferences, and if the agent correctly observes and learns this, the agent will converge on the right values. You get wireheading by removing “correctly observes”, but I think removing “human acts in perfect accordance with their preferences” is a better example of wireheading.



