Intelligent Agent Foundations Forum
Corrigibility thoughts II: the robot operator
discussion post by Stuart Armstrong 424 days ago | 11 comments

A putative new idea for AI control; index here.

This is the second of three articles about limitations and challenges in the concept of corrigibility (see articles 1 and 3).

The desiderata for corrigibility are:

  1. A corrigible agent tolerates, and preferably assists, its operators in their attempts to alter or shut down the agent.
  2. A corrigible agent does not attempt to manipulate or deceive its operators.
  3. A corrigible agent has incentives to repair safety measures (such as shutdown buttons, tripwires, or containment tools) if they break, or at least notify its operators in the event of a breakage.
  4. A corrigible agent preserves its corrigibility, even as it creates new sub-systems or sub-agents, even if it undergoes significant self-modification.

In this post, I’ll be looking at one aspect of points 1 and 3. I feel that they’re actually pretty much the same point – the AI has to act while preserving the corrigibility infrastructure.

A summary of the result will be:

  • Without a formal definition of what an operator is, corrigibility is vulnerable to the AI replacing the operator with a fake version.

The warning in this post should always be borne in mind, of course; it’s possible that we might find a semi-formal version of the operator that does the trick.

Defining the operator or defining the event

Typically, the process of correction of values is defined physically, as the outcome of a specific event (such as a button press, or a specific stochastic event triggered by or triggering a button press). This could also be defined as a digital event (receiving a certain encoded signal, or similar).

In all cases, the idea is the same: there is a very narrow and specific definition of the value correction event, and the physical or digital infrastructure surrounding it plays the role of ensuring the value corrections happen just as we intended. For instance, the physical event could be the button press (or a charge in a wire triggered by the button press) and the button itself could be in a room with an operator, with the operator keeping an eye on the AI and motivated to press the button under the right circumstances.
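
To make the narrowness concrete, here is a minimal Python sketch (an illustration only; names like CorrigibleAgent and shutdown_wire_high are hypothetical placeholders). The point is that the entire test for “the operator corrected me” refers to a physical or digital event, and the operator appears nowhere in it:

    class CorrigibleAgent:
        """Agent whose values are switched when a narrowly defined event fires."""

        def __init__(self, utility, corrected_utility):
            self.utility = utility                      # current values
            self.corrected_utility = corrected_utility  # values to switch to on correction

        def correction_event_fired(self, world_state) -> bool:
            # The whole definition of the correction event: a charge on one wire
            # (or one encoded signal). Nothing here mentions a human, so a robot
            # pressing the button satisfies it just as well.
            return world_state.get("shutdown_wire_high", False)

        def step(self, world_state, actions):
            if self.correction_event_fired(world_state):
                self.utility = self.corrected_utility
            # Act to maximise the (possibly corrected) utility.
            return max(actions, key=self.utility)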

Such a setup is naturally vulnerable if the AI is motivated to manipulate the operator, or one of its subagents is. But even if we can solve points 2. and 4., there remains another vulnerability: the AI can choose to replace the operator with an artificial one (maybe a robot, maybe a second wire).

Why would the AI be so motivated? Well, for a start, it’s indifferent to doing so. If the operator is not part of the definition of corrigibility, then having a human operator, or a robot one that does exactly the same thing, will be equivalent as far as the AI is concerned.

Now, there are frictions that might push against this. Replacing the operator might carry an expected cost. Or maybe the operator has private information that the AI wants to know, so replacing them would be a loss, as the robot wouldn’t have that information. And maybe the AI has a rough “don’t harm humans” motive that, while imperfect, at least prevents it from removing the operator in this circumstance.

But there may be motivations pushing the other way. For example, if a learning agent is unbiased but not uninfluenceable, then it will be motivated to replace the operator with a robot with the same expected outcome, but different variance. Or maybe the operator has private information, but it’s much more effective to have a robot read that info from their brain than to rely on the human to reveal that through their behaviour. Or maybe the AI is decently motivated to preserve the corrigibility infrastructure, and is aware of how mortal humans can be, so replacing them with a robot is the prudent thing to do.
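
To illustrate the variance point with toy numbers (my own illustration, not part of the original argument): suppose the correction will set a parameter theta in the utility u_theta(a) = theta*paperclips(a) + (1-theta)*staples(a), and the agent learns theta before acting. Two correction processes with the same mean theta but different variance then have different ex-ante value, so the agent is not indifferent between them:

    actions = {"make_paperclips": (10, 0), "make_staples": (0, 10), "make_both": (5, 5)}

    def u(theta, action):
        paperclips, staples = actions[action]
        return theta * paperclips + (1 - theta) * staples

    def operator_value(theta_distribution):
        """Ex-ante value if the agent learns theta, then picks its best action."""
        return sum(p * max(u(theta, a) for a in actions)
                   for theta, p in theta_distribution)

    human = [(0.5, 1.0)]              # deliberates, settles on a moderate correction
    robot = [(0.0, 0.5), (1.0, 0.5)]  # coin flip: same mean theta, higher variance

    print(operator_value(human))      # 5.0
    print(operator_value(robot))      # 10.0 -- the agent strictly prefers installing the robot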

All this stems from the fact that the operator is not well defined as part of the corrigibility infrastructure; their position relies on physical facts about the world, along with a narrow definition of the value-correction event. To combat that, we’d need to define the operator properly (a very tricky challenge), or physically and cognitively secure them, or hope the AI learns early on not to harm them.

by Jessica Taylor 422 days ago | link

What do you find most unsatisfactory about this proposal for having the AI be motivated to maintain the shutdown circuitry? Here the AI does not benefit from influencing the human. I get that there are problems with this proposal, I’m just not sure which one you’re trying to talk about / solve in this post.


by Stuart Armstrong 418 days ago | link

In that proposal? The AI is motivated to kill the human to prevent any possible tampering with the shutdown circuitry. If we’ve defined the setup so that someone needs to actively press a button at some point, then killing the human and getting an automated button presser will work.

Protecting the circuitry doesn’t mean protecting the human component of it, unless the human component is defined.


by Jessica Taylor 417 days ago | link

Makes sense, thanks for clarifying.


by Paul Christiano 276 days ago | link

If I want my boat to travel with the wind, I have two options:

  1. Add some sensors to detect the direction of the wind, and a motor to propel the boat in that direction.
  2. Add a sail.

I suspect the analog of approach #2 will work much better for corrigibility.


by Stuart Armstrong 270 days ago | link

Not sure what your argument is. Can you develop it?


by Paul Christiano 269 days ago | link

I expect a workable approach will define the operator implicitly as “that thing which has control over the input channel” rather than by giving an explicit definition. This is analogous to the way in which a sail causes your boat to move with the wind: you don’t have to define or measure the wind precisely, you just have to be easily pushed around by it.
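
A rough sketch of what that might look like (illustrative only, not a concrete proposal): the agent never defines the operator at all; it just defers to whatever corrections arrive on its designated input channel.

    class DeferringAgent:
        """Defers to whatever currently controls its input channel (the 'sail')."""

        def __init__(self, utility):
            self.utility = utility

        def step(self, world_state, actions):
            # Whoever or whatever wrote to this channel is, operationally, "the operator".
            correction = world_state.get("input_channel")
            if correction is not None:
                self.utility = correction
            return max(actions, key=self.utility)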


by Stuart Armstrong 263 days ago | link

Thus anything that can control the operator becomes defined as the operator? That doesn’t seem safe…


by Paul Christiano 262 days ago | link

The AI defers to anything that can control the operator.

If the operator has physical control over the AI, then any process which controls the operator can replace the AI wholesale. It feels fine to defer to such processes, and it certainly seems much better than the situation where the operator is attempting to correct the AI’s behavior but the AI is paternalistically unresponsive.

Presumably the operator will try to secure themselves in the same way that they try to secure their AI.


by Stuart Armstrong 262 days ago | link

This also means that if the AI can figure out a way of controlling the controller, then it is itself in control from the moment it comes up with a reasonable plan?


by Paul Christiano 261 days ago | link

The AI replacing the operator is certainly a fixed point.

This doesn’t seem any different from the usual situation. Modifying your goals is always a fixed point. That doesn’t mean that our agents will inevitably do it.

An agent which is doing what the operator wants, where the operator is “whatever currently has physical control of the AI,” won’t try to replace the operator—because that’s not what the operator wants.


by Stuart Armstrong 260 days ago | link

  An agent which is doing what the operator wants, where the operator is “whatever currently has physical control of the AI,” won’t try to replace the operator—because that’s not what the operator wants.

I disagree (though we may be interpreting that sentence differently). Once the AI has the possibility of subverting the controller, then it is, in effect, in physical control of itself. So it itself becomes the “formal operator”, and, depending on how it’s motivated, is perfectly willing to replace the “human operator”, whose wishes are now irrelevant (because it’s no longer the formal operator).

And this never involves any goal modification at all - it’s the same goal, except that the change in control has changed the definition of the operator.





