Intelligent Agent Foundations Forum
by Jessica Taylor 387 days ago | Ryan Carey likes this | link | parent

Is there a reason why this won’t just converge to maximizing human approval?

by Stuart Armstrong 386 days ago | link

My original idea was to identify actions that increased human approval but were specifically labelled as not legitimate, and to have the AI generalise from those as a priority, but the idea needs work.


by Jessica Taylor 385 days ago | link

The thing I’m pointing at is that the generalization you should expect by default is “things are legitimate iff a human would label them as legitimate”, which is a kind of approval. It seems like you need some kind of special sauce if you want the generalization to be something other than that.


by Stuart Armstrong 383 days ago | Patrick LaVictoire likes this | link

How about something like this? I don’t expect this to work as stated, but it may suggest certain possibilities:

There is a familiarity score \(F\) which labels how close a situation is to one where humans have full and rapid understanding of what’s going on. In situations of high \(F\), the human reward signals are taken as accurate. There are examples of situations of medium \(F\) where humans, after careful deliberation, conclude that the reward signals were wrong. The prior is that for low \(F\), there will be reward signals that are wrong but which even careful human deliberation cannot discern. The job of the learning algorithm is to deduce what these are by extending the results from medium \(F\).

This should not converge merely onto human approval, since human approval is explicitly modelled to be false here.
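A minimal sketch of how this might look; the feature representation of situations, the thresholds, and the regression model are placeholder assumptions, not part of the proposal:

```python
# Purely illustrative sketch of the familiarity-score idea above.
# Assumptions (not in the original comment): situations come as feature
# vectors, familiarity F is a number in [0, 1], and a simple regression
# model stands in for "the learning algorithm".
import numpy as np
from sklearn.linear_model import LinearRegression

HIGH_F, LOW_F = 0.8, 0.3  # assumed thresholds for high / low familiarity

def fit_corrector(features, familiarity, deliberated_reward):
    """Fit a model of the corrected reward using only medium-F examples,
    i.e. the cases where careful human deliberation reached a verdict."""
    medium = (familiarity > LOW_F) & (familiarity < HIGH_F)
    return LinearRegression().fit(features[medium], deliberated_reward[medium])

def reward_estimate(corrector, features, familiarity, raw_reward):
    """Take the raw human signal as accurate at high F, and extrapolate the
    corrector's judgement to medium and low F."""
    extrapolated = corrector.predict(features)
    return np.where(familiarity >= HIGH_F, raw_reward, extrapolated)
```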


by Jessica Taylor 383 days ago | link

This seems pretty similar to this proposal; does that seem right to you?

I think my main objection is the same as the main objection to the proposal I linked to: there has to be a good prior over “what the correct judgments are” such that when this prior is updated on data, it correctly generalizes to cases where we can’t get feedback even in principle. It’s not even clear what “correct judgments” means (you can’t put a human in a box and have them think for 500 years).


by Stuart Armstrong 382 days ago | link

Not exactly that. What I’m trying to get at is that we know some of the features that failure would have (e.g. edge cases of utility maximisation, or seductive-seeming or seductively-presented answers), so we should be able to use that knowledge somehow.
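One illustrative way such knowledge might be used is as a crude penalty on actions that match known failure features; the `failure_score` function, the threshold, and the penalty size below are placeholder assumptions, not a worked-out proposal:

```python
# Purely illustrative: down-weight reward on actions whose features resemble
# known failure modes (extreme optimisation, seductive presentation, etc.).
def penalised_reward(raw_reward, action, failure_score, threshold=0.5, penalty=10.0):
    """failure_score maps an action to [0, 1], measuring how strongly it
    matches known failure features; above the threshold, apply a penalty."""
    score = failure_score(action)
    return raw_reward - penalty * score if score > threshold else raw_reward
```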


by Paul Christiano 385 days ago | link

You might be able to sample pairs \((a, f(a))\) without being able to actually evaluate \(a \mapsto f(a)\).


by Jessica Taylor 384 days ago | link

I’m not sure why this would change things? If it samples a bunch of \((a, f(a))\) pairs until one of them has a high \(f(a)\) value, then that’s going to have the same effect as sampling a bunch of \(a\) values until one of them has a high \(f(a)\) value.


by Paul Christiano 383 days ago | Jessica Taylor likes this | link

I meant that I may be able to sample pairs from some attack distribution without being able to harden my function against the attack distribution.

Suppose that I have a program \(\widetilde{f}\), taking values in \([0, 1]\), which implements my desired reward function, except that it has a bunch of vulnerabilities \(\widetilde{a}_i\) on which it mistakenly outputs 1 (when it really should output 0). Suppose further that I am able to sample vulnerabilities roughly as effectively as my AI.

Then I can sample vulnerabilities \(\widetilde{a}\) and provide the pairs \((\widetilde{a}, -1)\) to train my reward function, along with a bunch of pairs \((a, \widetilde{f}(a))\) for actions \(a\) produced by the agent. This doesn’t quite work as stated but you could imagine learning \(f\) despite having no access to it.

(This is very similar to adversarial training / red teams).
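A rough sketch of this scheme, where `sample_vulnerability` stands in for the attack distribution and `f_tilde` for the flawed proxy; those names, the feature encoding of actions, and the choice of regressor are illustrative assumptions rather than part of the comment above:

```python
# Illustrative sketch: learn a reward model from ordinary pairs (a, f_tilde(a))
# plus sampled vulnerabilities labelled -1, without ever fixing f_tilde itself.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def build_training_set(agent_actions, f_tilde, sample_vulnerability, n_attacks=100):
    """agent_actions: iterable of action feature vectors produced by the agent.
    f_tilde: the flawed proxy reward, callable on an action.
    sample_vulnerability: draws an action from the attack distribution."""
    xs = [np.asarray(a) for a in agent_actions]
    ys = [f_tilde(a) for a in agent_actions]
    for _ in range(n_attacks):
        a_tilde = sample_vulnerability()   # red-team sample
        xs.append(np.asarray(a_tilde))
        ys.append(-1.0)                    # overrides the proxy's mistaken 1
    return np.stack(xs), np.array(ys)

def learn_reward(agent_actions, f_tilde, sample_vulnerability):
    X, y = build_training_set(agent_actions, f_tilde, sample_vulnerability)
    return GradientBoostingRegressor().fit(X, y).predict  # learned stand-in for f
```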





