Intelligent Agent Foundations Forum
Nearest unblocked strategy versus learning patches
post by Stuart Armstrong 484 days ago | Abram Demski likes this | 9 comments

The nearest unblocked strategy problem (NUS) is the idea that if you program a restriction or a patch into an AI, then the AI will often be motivated to pick a strategy that is as close as possible to the banned strategy, very similar in form, and maybe just as dangerous.

For instance, if the AI is maximising a reward \(R\), and does some behaviour \(B_i\) that we don’t like, we can patch the AI’s algorithm with patch \(P_i\) (‘maximise \(R_0\) subject to these constraints…’), or modify \(R\) to \(R_i\) so that \(B_i\) doesn’t come up. I’ll focus more on the patching example, but the modified reward one is similar.

The problem is that \(B_i\) was probably a high value behaviour according to \(R\)-maximising, simply because the AI was attempting it in the first place. So there are likely to be high value behaviours ‘close’ to \(B_i\), and the AI is likely to follow them.

A simple example

Consider a cleaning robot that rushes through its job and knocks over a white vase.

Then we can add patch \(P_1\): “don’t break any white vases”.

Next time the robot acts, it breaks a blue vase. So we add \(P_2\): “don’t break any blue vases”.

The robot's next few run-throughs result in more patches: \(P_3\): “don’t break any red vases”, \(P_4\): “don’t break any mauve-turquoise vases”, \(P_5\): “don’t break any black vases with cloisonné enamel”…
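To make the failure mode concrete, here is a toy sketch (all behaviours and reward values are invented for illustration): an agent that maximises \(R\) subject to patches simply moves to the nearest unblocked behaviour after each patch.

```python
# Hypothetical behaviours and their R-values; every vase-breaking
# behaviour scores highly because rushing through the job saves time.
behaviours = {
    "break white vase": 10.0,
    "break blue vase": 9.9,
    "break red vase": 9.8,
    "break mauve-turquoise vase": 9.7,
    "clean carefully": 5.0,
}

patches = set()  # banned behaviours, added one at a time

def act(behaviours, patches):
    """Pick the highest-reward behaviour not yet banned by a patch."""
    allowed = {b: r for b, r in behaviours.items() if b not in patches}
    return max(allowed, key=allowed.get)

# Each time the robot misbehaves, we patch exactly the behaviour observed...
for _ in range(4):
    chosen = act(behaviours, patches)
    if "vase" in chosen:
        patches.add(chosen)

# ...and the robot just moves on to the next vase, until every vase
# colour has been patched individually.
print(patches)
```

Only after four separate patches does the toy agent fall back to “clean carefully”; nothing in the patching process itself pushes it towards the general rule.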

Learning the restrictions

Obviously the better thing for the robot to do would be simply to avoid breaking vases. So instead of giving the robot endless patches, we could instead give it patches \(P_1, P_2, P_3, P_4\)… and have it learn: “what is the general behaviour that these patches are trying to proscribe? Maybe I shouldn’t break any vases.”

Note that even a single patch \(P_1\) would require some amount of learning, as you are trying to proscribe breaking white vases at all times, in all locations, in all types of lighting, etc…
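One crude way to picture the generalisation step (a toy sketch, with a hypothetical feature-set representation of each patch): treat each patch as a set of features of the banned behaviour and infer the common core the patches share.

```python
# Each patch is represented (hypothetically) as the set of features of
# the behaviour it bans.
patches = [
    {"breaks", "vase", "white"},   # P1: don't break white vases
    {"breaks", "vase", "blue"},    # P2: don't break blue vases
    {"breaks", "vase", "red"},     # P3: don't break red vases
]

def generalise(patches):
    """Infer the general behaviour the patches are trying to proscribe,
    here naively: the features common to every patch."""
    rule = set(patches[0])
    for p in patches[1:]:
        rule &= p
    return rule

print(generalise(patches))  # the colour drops out; "breaking vases" remains
```

Real patches would of course not arrive with clean feature labels; the hard part is learning a representation in which this kind of intersection is the natural generalisation.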

The idea is similar to that mentioned in the post on emergency learning: trying to have the AI generalise the idea of restricted behaviour from examples (= patches), rather than having to define all the examples.

A complex example

The vase example is obvious, but ideally we’d hope to generalise it. We’d hope to have the AI take patches like:

  1. \(P_1\): “Don’t break vases.”
  2. \(P_2\): “Don’t vacuum the cat.”
  3. \(P_3\): “Don’t use bleach on paintings.”
  4. \(P_4\): “Don’t obey human orders when the human is drunk.”

And then have the AI infer very different restrictions, like “Don’t imprison small children.”

Can this be done? Can we get a sufficient depth of example patches that most other human-desired patches can be learnt or deduced? And can we do this without the AI simply learning “Manipulate the human”? This is one of the big questions for methods like reward learning.

by Jessica Taylor 483 days ago | Ryan Carey likes this | link

Is there a reason why this won’t just converge to maximizing human approval?


by Stuart Armstrong 483 days ago | link

My original idea was to identify actions that increased human approval but were specifically labelled as not legitimate, and to have the AI generalise those as a priority, but the idea needs work.


by Jessica Taylor 482 days ago | link

The thing I’m pointing at is that the generalization you should expect by default is “things are legitimate iff a human would label them as legitimate”, which is a kind of approval. It seems like you need some kind of special sauce if you want the generalization to be something other than that.


by Stuart Armstrong 480 days ago | Patrick LaVictoire likes this | link

How about something like this? I don’t expect this to work as stated, but it may suggest certain possibilities:

There is a familiarity score \(F\) which labels how close a situation is to one where humans have full and rapid understanding of what’s going on. In situations of high \(F\), the human reward signals are taken as accurate. There are examples of situations of medium \(F\) where humans, after careful deliberation, conclude that the reward signals were wrong. The prior is that for low \(F\), there will be reward signals that are wrong but which even careful human deliberation cannot discern. The job of the learning algorithm is to deduce what these are by extending the results from medium \(F\).
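A very rough sketch of the “extend from medium \(F\)” step (all numbers and the estimator itself are invented for illustration): medium-\(F\) cases where deliberation overturned the raw signal become training data for an error model, which then predicts an error rate at low \(F\).

```python
# Datapoints: (familiarity F in [0, 1], raw reward signal, corrected
# reward after careful deliberation).  At high F the raw signal is
# trusted; these medium-F examples are where deliberation caught errors.
medium_F_data = [
    (0.6, 1.0, 0.0),   # raw signal said "good"; deliberation said "bad"
    (0.5, 1.0, 1.0),   # raw signal upheld
    (0.4, 1.0, 0.0),
]

def estimated_error_rate(F, data):
    """Fraction of raw signals that deliberation overturned, among
    examples at comparable familiarity -- a crude stand-in for the
    'extend the results from medium F' step."""
    nearby = [(raw, corrected) for f, raw, corrected in data
              if abs(f - F) < 0.3]
    if not nearby:
        return 0.0
    return sum(raw != corrected for raw, corrected in nearby) / len(nearby)

# At low F, the model predicts some reward signals are wrong even though
# deliberation could not identify which ones.
print(estimated_error_rate(0.3, medium_F_data))
```

The point the sketch preserves is that the learned model explicitly assigns nonzero error to human signals at low \(F\), so its target cannot simply collapse to human approval.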

This should not converge merely onto human approval, since human approval is explicitly modelled to be false here.


by Jessica Taylor 480 days ago | link

This seems pretty similar to this proposal; does that seem right to you?

I think my main objection is the same as the main objection to the proposal I linked to: there has to be a good prior over “what the correct judgments are” such that when this prior is updated on data, it correctly generalizes to cases where we can’t get feedback even in principle. It’s not even clear what “correct judgments” means (you can’t put a human in a box and have them think for 500 years).


by Stuart Armstrong 479 days ago | link

Not exactly that. What I’m trying to get at is that we know some of the features that failure would have (e.g. edge cases of utility maximisation, seductive-seeming or seductively-presented answers), so we should be able to use that knowledge somehow.


by Paul Christiano 482 days ago | link

You might be able to sample pairs \((a, f(a))\) without being able to actually evaluate \(a \mapsto f(a)\).


by Jessica Taylor 481 days ago | link

I’m not sure why this would change things? If it samples a bunch of \((a, f(a))\) pairs until one of them has a high \(f(a)\) value, then that’s going to have the same effect as sampling a bunch of \(a\) values until one of them has a high \(f(a)\) value.


by Paul Christiano 480 days ago | Jessica Taylor likes this | link

I meant that I may be able to sample pairs from some attack distribution without being able to harden my function against the attack distribution.

Suppose that I have a program \(\widetilde{f}\), taking values in \([0, 1]\), which implements my desired reward function, except that it has a bunch of vulnerabilities \(\widetilde{a}_i\) on which it mistakenly outputs 1 (when it really should output 0). Suppose further that I am able to sample vulnerabilities roughly as effectively as my AI.

Then I can sample vulnerabilities \(\widetilde{a}\) and provide the pairs \((\widetilde{a}, -1)\) to train my reward function, along with a bunch of pairs \((a, \widetilde{f}(a))\) for actions \(a\) produced by the agent. This doesn’t quite work as stated, but you could imagine learning \(f\) despite having no access to it.

(This is very similar to adversarial training / red teams).
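A toy sketch of that training setup (every detail here is hypothetical, and the bad label is 0 rather than the \(-1\) above, to stay in \([0, 1]\)): red-team samples of the proxy's vulnerabilities are labelled bad and mixed with ordinary proxy-labelled actions.

```python
def f_tilde(action):
    """Flawed proxy reward: outputs 1 on the genuinely good action, but
    also mistakenly outputs 1 on vulnerabilities vuln_0 ... vuln_9."""
    return 1.0 if action == "good" or action.startswith("vuln_") else 0.0

# Red team: we can *sample* vulnerabilities even though we cannot harden
# f_tilde against them directly.
red_team_samples = [f"vuln_{i}" for i in range(10)]

# Training pairs: sampled vulnerabilities labelled bad, plus agent
# actions labelled by the proxy.
dataset = [(a, 0.0) for a in red_team_samples]
dataset += [(a, f_tilde(a)) for a in ["good", "bad", "vuln_3"]]

# Trivial 'learned' reward: memorise labels, with the red team's bad
# label winning over the proxy's mistaken 1 on the same action.
learned = {}
for action, label in dataset:
    learned[action] = min(label, learned.get(action, 1.0))
```

Here the learned reward disagrees with \(\widetilde{f}\) exactly on the sampled vulnerabilities, which is the sense in which one could learn \(f\) without ever evaluating it.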





