Intelligent Agent Foundations Forum
Optimisation in manipulating humans: engineered fanatics vs yes-men
discussion post by Stuart Armstrong 146 days ago

A putative new idea for AI control; index here.

One of the ways in which humans aren’t agents is that we can be manipulated into having any (or almost any) set of values - through drugs, brain surgery, extreme propaganda, and other methods.

If an AI is tasked with “satisfy human preferences”, and if the AI can affect the definition of “human preferences” by changing humans, this is what we would expect it to do.

This could be combated by making the definition of human preferences counterfactual. But it’s not fully clear how to define counterfactual preferences, and it would be interesting to see whether we can prevent the AI from manipulating humans in other ways.

One idea is to look at what we informally call optimisation power. If the AI would prefer the human to be a \(u\)-maximising agent, then it presumably has to work hard to transform the human’s values into that. And a small change in the definition of the AI’s preferences would mean that it would prefer to transform the human into a \(v\)-maximiser instead.

Thus, though optimisation power is hard to define, this would seem to fit the bill: an “honest” reward learning process is one where the final human values don’t depend sensitively on the AI’s initial values, while a manipulable one is one where they do.
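
As a very rough illustration of what such a test might look like, here is a toy sketch (the dynamics and numbers are invented for the example, not part of the proposal): an “honest” process leaves the human’s final values essentially unchanged when the AI’s reward parameters are perturbed, while a manipulable one drags them towards whatever the perturbed reward favours.

```python
import numpy as np

def final_human_values(ai_reward_params, manipulable):
    """Toy dynamics: the human starts with fixed values; a manipulable
    process drags them towards whatever the AI's reward favours."""
    human_values = np.array([1.0, 0.0])                      # the human's initial values
    target = ai_reward_params / np.linalg.norm(ai_reward_params)
    if manipulable:
        return 0.1 * human_values + 0.9 * target             # human ends up near the AI's target
    return human_values                                       # honest: values stay put

def sensitivity(manipulable, eps=0.1, trials=100):
    """How much do the human's final values move when the AI's reward
    parameters are randomly nudged by about eps?"""
    rng = np.random.default_rng(0)
    base = np.array([0.3, 0.7])
    shifts = []
    for _ in range(trials):
        perturbed = base + eps * rng.normal(size=2)
        shifts.append(np.linalg.norm(
            final_human_values(perturbed, manipulable)
            - final_human_values(base, manipulable)))
    return float(np.mean(shifts))

print("honest process:     ", sensitivity(manipulable=False))   # 0.0
print("manipulable process:", sensitivity(manipulable=True))    # noticeably > 0
```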

Let’s dig into this a bit more. Why would an AI prefer \(u\) or \(v\)? Well, the generic failure mode for “satisfy human preferences” is to see it as the sum, over all utilities \(u\) in some set \(U\), of “maximise \(u\) if the human agrees to maximise \(u\)”. Then what the AI wants is for the human to agree to maximise a \(v\), where \(v\) is the utility function the AI finds it easiest to reach a high value on.
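
In symbols (one possible way of spelling this out, written here for concreteness rather than taken from the post), the reward and the AI’s preferred target would be something like:

\[
R \;=\; \sum_{u \in U} \mathbb{1}[\text{human agrees to maximise } u]\cdot u,
\qquad
v \;=\; \arg\max_{u \in U}\; \max_{\pi}\; \mathbb{E}\big[u \,\big|\, \pi,\ \text{human agrees to maximise } u\big].
\]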

But then we can individually translate or scale the various utilities in \(U\), making different ones easier or harder to reach high values on. This would make the AI prefer the human to agree to maximise something different.
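
A toy numerical example (the utilities and scales are made up) of how rescaling the members of \(U\) changes which \(v\) a manipulative AI would push for:

```python
# Highest value the AI could reach for each utility in U, *if* the human
# agreed to maximise it (hypothetical numbers).
achievable = {"u1": 10.0, "u2": 8.0, "u3": 6.0}

def preferred_target(scales):
    """The v the AI would steer the human towards: the utility whose
    (rescaled) achievable value is highest."""
    return max(achievable, key=lambda u: scales.get(u, 1.0) * achievable[u])

print(preferred_target({}))            # 'u1' under the original scaling
print(preferred_target({"u2": 2.0}))   # 'u2' once u2 is scaled up by 2
```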

So if \(R\) is the reward function that encodes “satisfy human preferences”, there are many \(R\)’s that are equivalent if the AI cannot influence the human’s values, but that are very different if the AI can. Looking for such divergences could be a way of detecting that something is wrong in the system.
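
A minimal sketch of that check (the outcomes and rewards below are purely illustrative): two rewards that agree whenever the human’s values are left alone, but come apart as soon as the AI can change them.

```python
# Outcomes the AI can bring about; only the first leaves the human's values alone.
OUTCOMES = ["respect_current_values", "push_towards_u", "push_towards_v"]

# Two candidate encodings of "satisfy human preferences" (hypothetical numbers).
R1 = {"respect_current_values": 1.0, "push_towards_u": 3.0, "push_towards_v": 0.0}
R2 = {"respect_current_values": 1.0, "push_towards_u": 0.0, "push_towards_v": 3.0}

def best_outcome(reward, can_influence_values):
    """The outcome an R-maximiser would aim for, with or without the
    ability to influence the human's values."""
    options = OUTCOMES if can_influence_values else OUTCOMES[:1]
    return max(options, key=lambda o: reward[o])

# Equivalent when the AI cannot influence the human's values...
print(best_outcome(R1, False), best_outcome(R2, False))
# ...but very different when it can -- a warning sign.
print(best_outcome(R1, True), best_outcome(R2, True))
```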

Yes-men are a problem

And this would be true if the AI’s only option were to turn the human into an engineered fanatic with certain values. But it might also seek to turn the human into a yes-man, who agrees to anything the AI suggests. And this is something the AI would do for a wide variety of different \(R\)’s. So our optimisation idea flounders at this point.
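
A toy illustration of the failure (the plans and payoffs are invented): the yes-man plan scores highest for almost any reward in a broad family, so the AI’s behaviour is insensitive to its initial reward even though it is thoroughly manipulative.

```python
import numpy as np

PLANS = ["honest_learning", "engineer_fanatic_u1", "engineer_fanatic_u2", "make_yes_man"]

def plan_value(plan, reward_params):
    """Hypothetical payoffs: fanaticism pays off only when it matches the
    reward, while a yes-man endorses anything, so it pays off almost always."""
    a, b = reward_params
    if plan == "honest_learning":
        return 1.0
    if plan == "engineer_fanatic_u1":
        return 5.0 * a
    if plan == "engineer_fanatic_u2":
        return 5.0 * b
    return 4.0 * (a + b)          # make_yes_man: high for a wide variety of rewards

rng = np.random.default_rng(0)
chosen = set()
for _ in range(100):
    R = rng.uniform(0.5, 1.5, size=2)                     # a randomly drawn reward R
    chosen.add(max(PLANS, key=lambda p: plan_value(p, R)))

print(chosen)   # {'make_yes_man'}: insensitive to R, yet clearly manipulative
```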


