Intelligent Agent Foundations Forum
Corrigibility thoughts II: the robot operator
discussion post by Stuart Armstrong 643 days ago | 11 comments

A putative new idea for AI control; index here.

This is the second of three articles about limitations and challenges in the concept of corrigibility (see articles 1 and 3).

The desiderata for corrigibility are:

  1. A corrigible agent tolerates, and preferably assists, its operators in their attempts to alter or shut down the agent.
  2. A corrigible agent does not attempt to manipulate or deceive its operators.
  3. A corrigible agent has incentives to repair safety measures (such as shutdown buttons, tripwires, or containment tools) if they break, or at least notify its operators in the event of a breakage.
  4. A corrigible agent preserves its corrigibility, even as it creates new sub-systems or sub-agents, even if it undergoes significant self-modification.

In this post, I’ll be looking at one aspect of points 1 and 3. I feel that they’re actually pretty much the same point – the AI has to act while preserving the corrigibility infrastructure.

A summary of the result will be:

  • Without a formal definition of what an operator is, corrigibility is vulnerable to the AI replacing the operator with a fake version.

The warning in this post should always be borne in mind, of course; it’s possible that we might find a semi-formal version of the operator that does the trick.

Defining the operator or defining the event

Typically, the process of correction of values is defined physically, as being the outcome of a specific event (such as a button press, or a specific stochastic event triggered by or triggering a button press). This could also be defined by a digital event (receiving a certain encoded signal, or similar).

In all cases, the idea is the same: there is a very narrow and specific definition of the value correction event, and the physical or digital infrastructure surrounding it plays the role of ensuring the value corrections happen just as we intended. For instance, the physical event could be the button press (or a charge in a wire triggered by the button press) and the button itself could be in a room with an operator, with the operator keeping an eye on the AI and motivated to press the button under the right circumstances.

Such a setup is naturally vulnerable if the AI is motivated to manipulate the operator, or one of its subagents is. But even if we can solve points 2. and 4., there remains another vulnerability: the AI can choose to replace the operator with an artificial one (maybe a robot, maybe a second wire).

Why would the AI be so motivated? Well, for a start, it’s indifferent to doing so. If the operator is not part of the definition of corrigibility, then having a human operator or a robot one that does exactly the same thing will be equivalent from the AI’s perspective.
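To make that indifference concrete, here is a minimal toy sketch (in Python, with invented numbers; treating the objective as a function of the button signal alone is the illustrative assumption, not a model of any particular proposal):

```python
# Toy illustration (not any specific proposal): if the correction event is
# defined purely as "the button signal fires", and the agent's objective is a
# function of that signal alone, then any two worlds that induce the same
# distribution over the signal are worth exactly the same to the agent.
# All names and numbers here are made up for the example.

def expected_value(objective, signal_distribution):
    """Expected objective value when only the button signal matters."""
    return sum(p * objective(signal) for signal, p in signal_distribution.items())

# The agent's objective sees only the narrow correction event.
objective = lambda button_pressed: 10.0 if not button_pressed else 0.0

# World A: a human operator who presses the button with probability 0.3.
human_world = {True: 0.3, False: 0.7}

# World B: the human has been replaced by a robot tuned to press with the
# same probability 0.3.
robot_world = {True: 0.3, False: 0.7}

assert expected_value(objective, human_world) == expected_value(objective, robot_world)
# Identical expected value: nothing in the objective distinguishes the worlds,
# so the agent is (at best) indifferent to replacing the operator.
```

Nothing in the calculation mentions who or what sits behind the button, which is exactly the gap being pointed at.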

Now, there are frictions that might push against this. Replacing the operator might carry an expected cost. Or maybe the operator has private information that the AI wants to know, so replacing them would be a loss, as the robot wouldn’t have that information. And maybe the AI has a rough “don’t harm humans” motive that, while imperfect, at least prevents it from removing the operator in this circumstance.

But there may be motivations pushing the other way. For example, if a learning agent is unbiased but not uninfluenceable, then it will be motivated to replace the operator with a robot with the same expected outcome, but different variance. Or maybe the operator has private information, but it’s much more effective to have a robot read that info from their brain than to rely on the human to reveal that through their behaviour. Or maybe the AI is decently motivated to preserve the corrigibility infrastructure, and is aware of how mortal humans can be, so replacing them with a robot is the prudent thing to do.
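The variance point can be illustrated with a similar toy sketch (made-up numbers again, and a deliberately non-linear valuation of outcomes, chosen only to show that fixing the expectation does not fix the preference):

```python
# Toy illustration of "unbiased but not uninfluenceable" (made-up numbers):
# the agent's actions cannot change the *expected* correction outcome, but
# they can change its variance, and a non-linear valuation of outcomes then
# creates a preference between operators with the same mean.

def expected_utility(utility, outcome_distribution):
    return sum(p * utility(v) for v, p in outcome_distribution.items())

# Utility the agent attaches to ending up with each corrected value
# (any non-linear choice will do; convexity is picked arbitrarily here).
utility = lambda v: v ** 2

# Human operator: noisy, corrected value is 0 or 2 with equal probability.
human_operator = {0: 0.5, 2: 0.5}      # mean 1, high variance

# Robot replacement: same expected corrected value, but deterministic.
robot_operator = {1: 1.0}              # mean 1, zero variance

print(expected_utility(utility, human_operator))  # 2.0
print(expected_utility(utility, robot_operator))  # 1.0
# Both options are "unbiased" (same mean), yet the agent strictly prefers one,
# so it has an incentive to decide which operator exists.
```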

All this stems from the fact that the operator is not well defined as part of the corrigibility infrastructure; their position relies on physical facts about the world, along with a narrow definition of the value-correction event. To combat that, we’d need to define the operator properly (a very tricky challenge), or physically and cognitively secure them, or hope the AI learns early on not to harm them.



by Jessica Taylor 641 days ago

What do you find most unsatisfactory about this proposal for having the AI be motivated to maintain the shutdown circuitry? Here the AI does not benefit from influencing the human. I get that there are problems with this proposal; I’m just not sure which one you’re trying to talk about or solve in this post.


by Stuart Armstrong 637 days ago

In that proposal? The AI is motivated to kill the human to prevent any possible tampering with the shutdown circuitry. If we’ve defined the setup so that someone needs to actively press a button at some point, then killing the human and getting an automated button presser will work.

Protecting the circuitry doesn’t mean protecting the human component of it, unless the human component is defined.


by Jessica Taylor 636 days ago

Makes sense, thanks for clarifying.


by Paul Christiano 495 days ago

If I want my boat to travel with the wind, I have two options:

  1. Add some sensors to detect the direction of the wind, and a motor to propel the boat in that direction.
  2. Add a sail.

I suspect the analog of approach #2 will work much better for corrigibility.


by Stuart Armstrong 489 days ago

Not sure what your argument is. Can you develop it?


by Paul Christiano 488 days ago

I expect a workable approach will define the operator implicitly as “that thing which has control over the input channel” rather than by giving an explicit definition. This is analogous to the way in which a sail causes your boat to move with the wind: you don’t have to define or measure the wind precisely, you just have to be easily pushed around by it.


by Stuart Armstrong 482 days ago

Thus anything that can control the operator becomes defined as the operator? That doesn’t seem safe…


by Paul Christiano 481 days ago

The AI defers to anything that can control the operator.

If the operator has physical control over the AI, then any process which controls the operator can replace the AI wholesale. It feels fine to defer to such processes, and it certainly seems much better than the situation where the operator is attempting to correct the AI’s behavior but the AI is paternalistically unresponsive.

Presumably the operator will try to secure themselves in the same way that they try to secure their AI.


by Stuart Armstrong 481 days ago

This also means that if the AI can figure out a way of controlling the controller, then it is itself in control from the moment it comes up with a reasonable plan?


by Paul Christiano 480 days ago

The AI replacing the operator is certainly a fixed point.

This doesn’t seem any different from the usual situation. Modifying your goals is always a fixed point. That doesn’t mean that our agents will inevitably do it.

An agent which is doing what the operator wants, where the operator is “whatever currently has physical control of the AI,” won’t try to replace the operator—because that’s not what the operator wants.


by Stuart Armstrong 479 days ago

> An agent which is doing what the operator wants, where the operator is “whatever currently has physical control of the AI,” won’t try to replace the operator—because that’s not what the operator wants.

I disagree (though we may be interpreting that sentence differently). Once the AI has the ability to subvert the controller, it is, in effect, in physical control of itself. So it itself becomes the “formal operator” and, depending on how it’s motivated, is perfectly willing to replace the “human operator”, whose wishes are now irrelevant (because they’re no longer the formal operator).

And this never involves any goal modification at all: it’s the same goal, except that the change in control has changed the definition of the operator.
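As a purely illustrative sketch of that definitional loop (the Channel class and the string labels are invented for the example, not taken from any proposal):

```python
# Toy sketch of the definitional loop being discussed (purely illustrative):
# if "operator" is defined as "whatever currently controls the input channel",
# then the moment the AI controls that channel the definition points back at
# the AI, and "do what the operator wants" no longer refers to the human.

class Channel:
    def __init__(self, controller):
        self.controller = controller   # whoever can currently write to it

def operator(channel):
    # The implicit definition: the operator just *is* the channel's controller.
    return channel.controller

channel = Channel(controller="human")
print(operator(channel))   # 'human' -- the AI defers to the human's wishes

# The AI finds a way to subvert the controller; no goal was modified,
# only the physical facts that the definition quietly depended on.
channel.controller = "AI"
print(operator(channel))   # 'AI' -- the same goal now defers to the AI itself
```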



