Intelligent Agent Foundations Forum
by Jessica Taylor 639 days ago

This significantly clarifies things, thanks for writing this up!

I still don’t think this is “manipulating human preferences to make them easier to satisfy”, though (and not just in a semantic sense; I think we disagree about what behavior results from this model).

In this model, you consider “compound” utility functions of the form “if AI administers heroin, then \(U_1\), else \(U_2\)”. Since the human doesn’t make decisions about whether the AI administers heroin to them, the AI is unable to distinguish the compound utility function “if AI administers heroin, then \(U_1\), else \(U_2\)” from the compound utility function “if AI administers heroin, then \(U_1 - 1000\), else \(U_2\)”; both compound utility functions make identical predictions about human behavior. If usually \(U_1 > U_2\) then the AI will administer heroin in the first case and not the second. But the AI could easily learn either compound utility function, depending on its prior. So we get undefined behavior here.
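
As a toy illustration of this (my own sketch, assuming a Boltzmann-rational human model and made-up branch utilities, neither of which is specified above):

```python
# Toy model: under a Boltzmann-rational human, the two compound hypotheses
# predict identical behaviour, because the human never chooses whether the
# AI administers heroin. All numbers and action names are made up.
import math

def boltzmann_policy(utilities):
    """P(action) proportional to exp(utility), computed stably."""
    m = max(utilities.values())
    weights = {a: math.exp(u - m) for a, u in utilities.items()}
    z = sum(weights.values())
    return {a: w / z for a, w in weights.items()}

U1 = {"seek_heroin": 5.0, "make_art": 1.0}
U2 = {"seek_heroin": 0.0, "make_art": 4.0}
U1_minus_1000 = {a: u - 1000.0 for a, u in U1.items()}

for ai_administers_heroin in (True, False):
    # Hypothesis A: "if AI administers heroin, then U1, else U2".
    hyp_a = U1 if ai_administers_heroin else U2
    # Hypothesis B: "if AI administers heroin, then U1 - 1000, else U2".
    hyp_b = U1_minus_1000 if ai_administers_heroin else U2
    # On every branch the two hypotheses differ at most by a constant,
    # so they induce exactly the same predicted human policy.
    assert boltzmann_policy(hyp_a) == boltzmann_policy(hyp_b)
```

On every branch the two hypotheses differ only by a constant, so they predict exactly the same human behaviour; they come apart only in which branch the AI prefers, which is why what the AI ends up doing depends on its prior.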

We could consider a different case where there isn’t undefined behavior. Say the AI incidentally causes some event X whenever administering heroin. Then perhaps the compound utility functions are of the form “if X then \(U_1\) else \(U_2\)”. How do we distinguish “if X then \(U_1\) else \(U_2\)” from “if X then \(U_1 - 1000\) else \(U_2\)”? If the human is rational, then in the first case they will try to make X true, while in the second case they will try to make X false.

If the human doesn’t seek to manipulate X either way, then perhaps the conclusion is that both parts of the compound utility function are approximately equally easy to satisfy (e.g. it’s “if X then \(U_1 - 10\) else \(U_2\)”, and \(U_1\) is generally 10 higher than \(U_2\)). In this case there is no incentive to affect X, since the compound utility function values both sides equally.
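
Continuing the same kind of toy model (again with made-up numbers; \(U_1\) is set 10 higher than \(U_2\), as in the example above), the human’s behaviour towards X is what separates the hypotheses:

```python
# Toy model: now the human can influence X, so different constant offsets
# on the X-branch predict different behaviour and become distinguishable.
import math

def boltzmann_policy(expected_utilities):
    """P(action) proportional to exp(expected utility), computed stably."""
    m = max(expected_utilities.values())
    weights = {a: math.exp(u - m) for a, u in expected_utilities.items()}
    z = sum(weights.values())
    return {a: w / z for a, w in weights.items()}

# How each (made-up) human action changes the probability that X occurs.
p_x = {"promote_X": 0.9, "ignore_X": 0.5, "prevent_X": 0.1}

u1_value, u2_value = 14.0, 4.0  # U1 is generally 10 higher than U2

def hypothesis(offset):
    """Expected utility of each action under 'if X then U1 + offset else U2'."""
    return {a: p * (u1_value + offset) + (1.0 - p) * u2_value
            for a, p in p_x.items()}

for offset in (0.0, -1000.0, -10.0):
    policy = boltzmann_policy(hypothesis(offset))
    print(offset, {a: round(prob, 3) for a, prob in policy.items()})

# offset = 0:     the human is predicted to strongly promote X
# offset = -1000: the human is predicted to strongly prevent X
# offset = -10:   the two branches are equally good, so the human ignores X
# A human who never tries to affect X is therefore evidence for the last
# hypothesis, and under it the AI has no incentive to affect X either.
```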

So I don’t see a way of setting this up such that the AI’s behavior looks anything like “actively manipulating human preferences to make them easier to satisfy”.



by Stuart Armstrong 639 days ago

This was initially set up in the formalism of reward signals, with the idea that the AI could estimate the magnitude of the reward from subsequent human behaviour. The human’s strong behaviour of seeking out heroin after \(F\) is therefore taken as evidence that the utility along that branch is higher than along the other one.


by Jessica Taylor 639 days ago

  1. Can we agree that this is “manipulating the human to cause them to have reward-seeking behavior” and not “manipulating the human so their preferences are easy to satisfy”? The second brings to mind things like making the human want the speed of light to be above 100 m/s, and we don’t have an argument for why this setup would do that.

  2. Why is reward-seeking behavior evidence for getting high rewards when getting heroin, instead of evidence for getting negative rewards when not getting heroin?


by Stuart Armstrong 638 days ago

  1. I don’t really see the relevant difference here. If the human has their hard-to-satisfy preferences about, e.g., art and meaning, replaced by a single desire for heroin, this seems like it’s making them easier to satisfy.

  2. That’s a good point.


by Jessica Taylor 638 days ago

Re 1: There are cases where it makes the human’s preferences harder to satisfy. For example, perhaps heroin addicts demand twice as much heroin as the AI can provide. Yet they will still seek reward strongly and often achieve it, so you might still predict that the AI gives them heroin.

I think my real beef with saying this “manipulates the human’s preferences to make them easier to satisfy” is that, when most people hear this phrase, they think of a specific technical problem that is quite different from this (in terms of what we would predict the AI to do, not necessarily the desirability of the end result). Specifically, the most obvious interpretation is naive wireheading (under which the AI wants the human to want the speed of light to be above 100 m/s), and this is quite a different problem at a technical level.


by Stuart Armstrong 638 days ago

Wireheading the human is the ultimate goal of the AI. I chose heroin as the first step along those lines, but that’s where the human would ultimately end up.

For instance, once the human’s on heroin, the AI could ask them “is your true reward function \(r\)? If you answer yes, you’ll get heroin.” Under the assumption that the human is rational and the heroin is offered in the short term, this allows the AI to conclude that the human’s reward function is any given \(r\).


by Jessica Taylor 638 days ago

I strongly predict that if you make your argument really precise (as you did in the main post), it will have a visible flaw. In particular, I expect the fact that \(r\) and \(r - 1000\) are indistinguishable to prevent the argument from going through (though it’s hard to say exactly how this applies without having access to a sufficiently mathematical argument).


by Stuart Armstrong 635 days ago

Ok, I think we need to distinguish several things:

  1. In general, \(U\) vs \(V\) or \(U - 1000\) vs \(V\) is a problem when comparing utility functions; there should be some sort of normalisation process before any utility functions are compared.

  2. Within a compound utility function, the AI is exactly choosing the branch where the utility is easiest to satisfy.

  3. Is there some normalisation procedure that would also normalise between branches of compound utility functions? If we pick a normalisation for comparing distinct utilities, it might also allow normalisation between branches of compound utilities.


by Jessica Taylor 633 days ago

  1. Note that IRL is invariant to translating a possible utility function by a constant (a short derivation is spelled out below this list). So this kind of normalization doesn’t have to be baked into the algorithm.
  2. This is true.
  3. The most natural normalization procedure is to look at how the human is trying or not trying to affect the event X (as I said in the second part of my comment). If the human never tries to affect X either way, then the AI will normalize the utility functions so that the AI has no incentive to affect X either.
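
To spell out the invariance in point 1, assuming for concreteness a Boltzmann-rational human model (the discussion above doesn’t fix one): for any action \(a\) and constant \(c\),

\[ P(a \mid U + c) \;=\; \frac{e^{U(a)+c}}{\sum_{a'} e^{U(a')+c}} \;=\; \frac{e^{c}\, e^{U(a)}}{e^{c} \sum_{a'} e^{U(a')}} \;=\; P(a \mid U), \]

so the likelihood of any observed behaviour, and hence the IRL posterior over utility functions, is unchanged by translating a candidate utility function by a constant.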



