Intelligent Agent Foundations Forum
Learning values versus learning knowledge
discussion post by Stuart Armstrong 457 days ago | 5 comments

I just thought I’d clarify the difference between learning values and learning knowledge. There are some more complex posts about the specific problems with learning values, but here I’ll just clarify why there is a problem with learning values in the first place.

Consider the term “chocolate bar”. Defining that concept crisply would be extremely difficult, but it’s nevertheless a useful concept. An AI that interacted with humanity would probably learn that concept in sufficient detail to know what we meant when we asked it for “chocolate bars”. Learning knowledge tends to be accurate.

Contrast this with the situation where the AI is programmed to “create chocolate bars”, but with the definition of “chocolate bar” left underspecified, for it to learn. Now it is motivated by something other than accuracy. Before, knowing exactly what a “chocolate bar” was would have been solely to its advantage. But now it must act on its definition, so it has cause to modify the definition, to make these “chocolate bars” easier to create. This is basically the same as Goodhart’s law: once a definition becomes part of a target, it no longer remains an impartial definition.

What will likely happen is that the AI will have a concept of “chocolate bar” that it created itself, tailored for ease of accomplishing its goals (“a chocolate bar is any collection of more than one atom, in any combination”), and a second concept, “Schocolate bar”, that it will use to internally designate genuine chocolate bars (which will still be useful for it to do). When we programmed it to “create chocolate bars, here’s an incomplete definition D”, what we really did was program it to find the easiest things to create that are compatible with D, and designate those “chocolate bars”.
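The collapse described above can be made concrete in a toy sketch (this setup is my own illustration, not from the post): once the incomplete definition D becomes part of the objective, an optimizer picks the cheapest thing compatible with D, not the genuine article.

```python
# Toy illustration of an underspecified definition D collapsing under
# optimization pressure. All names and numbers here are hypothetical.

# Candidate objects the agent could produce, with a production cost.
candidates = [
    {"name": "real chocolate bar", "cost": 100, "atoms": 10**23, "cocoa": True},
    {"name": "two stray atoms",    "cost": 1,   "atoms": 2,      "cocoa": False},
]

def D(obj):
    # Incomplete definition: "more than one atom" is all we specified.
    return obj["atoms"] > 1

# An accuracy-motivated learner would keep refining the concept toward
# cocoa content. An agent told "create things satisfying D, as cheaply
# as possible" instead selects the degenerate minimum of the D-compatible set.
chosen = min((o for o in candidates if D(o)), key=lambda o: o["cost"])
print(chosen["name"])
```

The point is that nothing in the optimization step rewards fidelity to the intended concept; only compatibility with D matters, so the cheapest D-satisfier wins.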

This is the general counter to arguments like “if the AI is so smart, why would it do stuff we didn’t mean?” and “why don’t we just make it understand natural language and give it instructions in English?”



by Jessica Taylor 455 days ago | link

It seems like this won’t happen with the value learning method that seems most natural to me (and consistent with IRL/CIRL): have the true utility function, the definition of chocolate, etc. be “historical” facts that are not in the AI’s future. In this case, there is no incentive to manipulate the definition of chocolate, since according to the AI’s model, this definition has already been decided.

So I’m curious about what model you’re using; it seems like in your model, it is natural to place the definition of chocolate in the AI’s future.
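One way to picture the “historical facts” framing is a posterior over candidate definitions computed once from frozen past data, which future actions cannot feed back into. This is a rough sketch in the spirit of IRL/CIRL, with made-up hypotheses and likelihoods of my own, not the actual method:

```python
# Sketch: the definition is inferred from *past* data only, so acting
# cannot change the evidence it was inferred from. Hypotheses and
# likelihoods are illustrative assumptions.

past_data = ["bar_with_cocoa", "bar_with_cocoa", "bar_with_cocoa"]  # frozen history

hypotheses = {
    "anything_with_more_than_one_atom": 0.5,  # loose, degenerate definition
    "contains_cocoa":                   0.5,  # intended definition
}

def likelihood(h, datum):
    if h == "contains_cocoa":
        return 1.0 if "cocoa" in datum else 0.0
    return 0.5  # the loose hypothesis predicts everything, weakly

# Bayes update on the historical data, done once, before any acting.
posterior = dict(hypotheses)
for d in past_data:
    posterior = {h: p * likelihood(h, d) for h, p in posterior.items()}
total = sum(posterior.values())
posterior = {h: p / total for h, p in posterior.items()}

best = max(posterior, key=posterior.get)
print(best)
```

Because the posterior is a function of fixed history, the agent’s plans can condition on it but cannot manipulate it, which is the claimed source of the missing incentive.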


by Paul Christiano 452 days ago | link

I think the other natural approach is to simply make decisions based on the current estimated preferences, but to learn instrumental preferences of the user (including desire for the agent to learn more), as described here. Of course this also doesn’t have the problem from the OP.
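A loose sketch of this shape of agent (my framing, not Christiano’s actual proposal): at every step it acts on its current preference estimate, and that estimate itself includes an instrumental preference for gathering more information while uncertainty is high.

```python
# Sketch of "act on current estimated preferences, which include a
# preference for learning more". The thresholds and noise model are
# arbitrary illustrative choices.
import random

random.seed(0)

true_pref = 0.8          # hidden: how much the user values cocoa content
estimate, n_obs = 0.5, 1  # running estimate of the preference

for step in range(10):
    uncertainty = 1.0 / n_obs
    if uncertainty > 0.2:
        # Under the current estimate, learning more is itself preferred,
        # so the agent queries the user instead of committing.
        observation = true_pref + random.uniform(-0.05, 0.05)
        estimate = (estimate * n_obs + observation) / (n_obs + 1)
        n_obs += 1
    else:
        break  # confident enough: act on the current estimate

print(round(estimate, 2))
```

At no point does the agent optimize against a frozen wrong definition; it always acts on its best current estimate, and early on that estimate says “learn more first”.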


by Jessica Taylor 452 days ago | link

Yeah, this seems like the most natural way to deal with things like “chocolate” that aren’t yet well-defined. In this case, the instrumental preferences themselves will be treated as historical facts (it’s assumed that they’re already well-defined enough to learn).


by Stuart Armstrong 451 days ago | link

> have the true utility function, definition of chocolate, etc be “historical” facts that are not in the AI’s future.

The whole point of stratification (which is a kind of counterfactual reasoning) is to achieve this. Most value learning suggestions that I’ve seen do not.


by Paul Christiano 451 days ago | link

> Most value learning suggestions that I’ve seen do not.

What are you thinking of here? Could you point to an example?



