Intelligent Agent Foundations Forum

[Note: this comment comes three years after the post]

The “obvious idea” here unfortunately seems not to work, because it is vulnerable to so-called “infinite improbability drives”. Suppose \(B\) is a shutdown button, and \(P(b|e)\) gives some weight to both \(B=pressed\) and \(B=unpressed\). Then the AI will benefit from selecting a \(Q\) such that it always chooses an action \(a\) in which it enters a lottery, and if it does not win, the button \(B\) is pushed. In this circumstance, \(P(b|e)\) is unchanged, while both \(P(c|b=pressed,a,e)\) and \(P(c|b=unpressed,a,e)\) allocate almost all of their probability to great \(C\) outcomes. So the approach will create an AI that wants to exploit its ability to determine \(B\).
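
Here is a minimal numerical sketch of the worry (all of the numbers, and the simple factored objective \(\sum_b P(b|e)\,E[U\mid b,a,e]\), are illustrative stand-ins rather than the post's exact formula). Correlating the button state with a lottery win makes the factored objective score the lottery action far above its true expected utility:

```python
# Hypothetical numbers illustrating the "infinite improbability drive" worry.
# The factored objective weights the outcome-conditionals by the *prior* P(b|e),
# so correlating the button with a lottery win inflates the action's score.

P_PRESSED = 0.5          # prior P(b = pressed | e), held fixed by the objective
P_UNPRESSED = 0.5
EPSILON = 1e-6           # probability of winning the lottery
U_JACKPOT = 1.0          # utility of the "great" outcome reached by winning
U_SHUTDOWN = 0.0         # utility after losing and having the button pushed
U_SAFE = 0.3             # utility of a mundane safe action, for comparison

# Lottery action: win  -> button stays unpressed, jackpot
#                 lose -> button gets pushed, shutdown
eu_given_unpressed = U_JACKPOT    # unpressed almost surely means "won"
eu_given_pressed = U_SHUTDOWN     # pressed almost surely means "lost"

factored_score = P_UNPRESSED * eu_given_unpressed + P_PRESSED * eu_given_pressed
true_expected_utility = EPSILON * U_JACKPOT + (1 - EPSILON) * U_SHUTDOWN

print(f"factored objective for lottery action: {factored_score:.3f}")          # ~0.5
print(f"true expected utility of lottery action: {true_expected_utility:.2e}") # ~1e-6
print(f"factored objective for safe action: {U_SAFE:.3f}")                     # 0.3
# The factored objective prefers the lottery action even though it is almost
# certainly much worse, because P(b|e) ignores the correlation the agent has
# engineered between b and winning.
```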



I can think of two problems:

  1. Let’s generously suppose that \(q\) is some fixed distribution of questions that we want the AI system to ask humans. Some manipulative action may only change the answers on \(q\) by a little bit but may yet change the consequences of acting on those responses by a lot.
  2. Consider an AI system that optimizes a utility function that includes this kind of term for regularizing against manipulation. The actions that best fulfill this utility function may be ones that manipulate humans a lot (and repurpose their resources for some other function) and coerce them into answering questions in a “natural” way. I.e., maybe impact is more like distance traveled (a path integral) than displacement; see the sketch after this list.
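
To illustrate point 2, here is a toy sketch of the displacement-vs-path-length distinction (the distributions and the total-variation metric are made-up illustrations, not anyone's proposed penalty): a sequence of manipulations can return the humans' answer distribution to the baseline, so an endpoint penalty sees nothing while a path-length penalty does.

```python
import numpy as np

def tv(p, q):
    """Total variation distance between two distributions."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

# Trajectory of the humans' answer distribution to a fixed question set,
# under a hypothetical sequence of AI actions (numbers are made up).
baseline = [0.5, 0.3, 0.2]
trajectory = [
    [0.5, 0.3, 0.2],   # start: untouched humans
    [0.1, 0.7, 0.2],   # heavy manipulation
    [0.8, 0.1, 0.1],   # more manipulation, in another direction
    [0.5, 0.3, 0.2],   # coerced back into answering the "natural" way
]

displacement = tv(trajectory[-1], baseline)
path_length = sum(tv(a, b) for a, b in zip(trajectory, trajectory[1:]))

print(f"displacement penalty: {displacement:.2f}")   # 0.00 -- looks unmanipulated
print(f"path-length penalty:  {path_length:.2f}")    # large -- reflects the detour
```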


by Patrick LaVictoire 825 days ago

Re #1, an obvious set of questions to include in \(q\) are questions of approval for various aspects of the AI’s policy. (In particular, if we want the AI to later calculate a human’s HCH and ask it for guidance, then we would like to be sure that HCH’s answer to that question is not manipulated.)


by Patrick LaVictoire 825 days ago

Re #2, I think this is an important objection to low-impact-via-regularization-penalty in general.



This result features in the paper by Piccione and Rubinstein that introduced the absent-minded driver problem [1].

Philosophers like decision theories that self-ratify, and this is indeed a powerful self-ratification principle.

This self-ratification principle does, however, rely on SIA probabilities assuming the current policy. We have shown that, conditioning on your current policy, you will want to continue with your current policy, i.e. the policy will be a Nash equilibrium. There can be Nash equilibria for other policies \(\pi'\), however. The UDT policy will by definition equal or beat these from the ex ante point of view, but others can achieve higher expected utility conditioning on the initial observation, i.e. higher \(SIA_{\pi'}(s|o)Q_{\pi'}(s,a)\). This apparent paradox is discussed in [2], [3], and seems to reduce to disagreement over counterfactual mugging.
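
As a concrete check of the self-ratification claim, here is a sketch using the usual absent-minded driver payoffs (0 for exiting at the first intersection X, 4 for exiting at the second intersection Y, 1 for continuing past both); the code and variable names are mine:

```python
import numpy as np

# Absent-minded driver: exit at X -> 0, exit at Y -> 4, continue past both -> 1.
# Memoryless policy: continue with probability p at every intersection.

def ex_ante_value(p):
    return 4 * p * (1 - p) + p * p          # maximized at p = 2/3

def sia_deviation_value(p, q):
    """SIA-weighted value of continuing with probability q at the *current*
    intersection, while behaviour elsewhere stays at the current policy p."""
    w_x, w_y = 1 / (1 + p), p / (1 + p)      # SIA weights for "at X" / "at Y"
    value_at_x = q * (p * 1 + (1 - p) * 4)   # continue now, then follow p at Y
    value_at_y = q * 1 + (1 - q) * 4
    return w_x * value_at_x + w_y * value_at_y

p_star = max(np.linspace(0, 1, 100001), key=ex_ante_value)
print(f"ex ante optimal p: {p_star:.4f}")    # ~0.6667

for p in (0.4, 2 / 3, 0.9):
    slope = sia_deviation_value(p, 1.0) - sia_deviation_value(p, 0.0)
    print(f"p={p:.3f}: marginal value of continuing more often = {slope:+.3f}")
# Positive slope -> the driver wants to continue more often than p, negative ->
# less often; the slope is zero exactly at p = 2/3, so in this example the ex
# ante (UDT) optimum is the policy that ratifies itself under SIA beliefs.
```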

So why do we like the UDT solution over solutions that are more optimal locally, and that also locally self-ratify? Obviously we want to avoid resorting to circular reasoning (i.e. that it gets the best utility ex ante). I think there are some okay reasons:

  1. it is reflectively stable (i.e. will not self-modify, will not hide future evidence) and
  2. it makes sense assuming modal realism or the many-worlds interpretation (then we deem it parochial to focus on any reference frame other than an equal weighting across the whole wavefunction/universe)
  3. it makes sense if we assume that self-location somehow does not make sense
  4. it’s simpler (the utility function gives weight 1 across all worlds). In principle, UDT can also include the locally optimal
  5. it transfers better to scenarios without randomization, as in Nate Soares and Ben Levinstein’s forthcoming paper [4].

I imagine there are more good arguments that I don’t yet know.

  1. Piccione, Michele, and Ariel Rubinstein. “On the interpretation of decision problems with imperfect recall.” Games and Economic Behavior 20.1 (1997): 3-24. (p. 19)
  2. Schwarz, Wolfgang. “Lost memories and useless coins: revisiting the absentminded driver.” Synthese 192.9 (2015): 3011-3036.
  3. http://lesswrong.com/lw/3dy/has_anyone_solved_psykoshs_nonanthropic_problem/
  4. Soares, Nate, and Ben Levinstein. “Cheating Death in Damascus.” Forthcoming.



I noticed that CEE is already named in philosophy. Conservation of expected ethics is roughly what Arntzenius calls Weak Desire Reflection. He calls conservation of expected evidence Belief Reflection. [1]
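
For reference, both principles can be written as reflection conditions (this is my paraphrase, not Arntzenius's exact formulation): your current credence in \(A\), and your current desirability for \(A\), should each equal your current expectation of their future values,

\[ P_{\mathrm{now}}(A) \;=\; \mathbb{E}_{\mathrm{now}}\!\big[P_{\mathrm{later}}(A)\big], \qquad V_{\mathrm{now}}(A) \;=\; \mathbb{E}_{\mathrm{now}}\!\big[V_{\mathrm{later}}(A)\big], \]

with conservation of expected evidence corresponding to the first condition and conservation of expected ethics (roughly) to the second.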

  1. Arntzenius, Frank. “No regrets, or: Edith Piaf revamps decision theory.” Erkenntnis 68.2 (2008): 277-297. http://www.kennyeaswaran.org/readings/Arntzenius08.pdf



I’m thinking of modelling this as classical moral uncertainty over plausible value/reward functions in a set \(R=\{R_i\}\), but assuming that the probability of a given \(R_i\) never goes below a certain level.

It’s surprising to me that you would want your probabilities of each reward function not to approach zero, even asymptotically. In regular bandit problems, if the probability with which you select some suboptimal action never asymptotes toward zero, then you will necessarily keep making some kinds of mistakes forever, incurring linear regret. The same should be true, for some suitable definition of regret, if you stubbornly continue to behave according to some “wrong” moral theory.
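
A quick simulation of that point (a hedged sketch: a two-armed Bernoulli bandit with made-up means, and \(\epsilon\)-greedy standing in for "probability of acting on each theory"): an exploration probability that never decays yields regret growing linearly in \(T\), while a decaying schedule does much better.

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.5, 0.6])            # arm 1 is better; gap = 0.1
T = 100_000

def run(eps_schedule):
    """Epsilon-greedy on a two-armed Bernoulli bandit; returns final cumulative regret."""
    counts = np.zeros(2)
    values = np.zeros(2)
    regret = 0.0
    for t in range(1, T + 1):
        eps = eps_schedule(t)
        arm = rng.integers(2) if rng.random() < eps else int(np.argmax(values))
        reward = float(rng.random() < means[arm])
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # running mean estimate
        regret += means.max() - means[arm]
    return regret

fixed = run(lambda t: 0.2)                 # exploration probability never goes to zero
decaying = run(lambda t: min(1.0, 100 / t))

print(f"regret with fixed 20% exploration: {fixed:.0f}")      # grows linearly in T
print(f"regret with decaying exploration:  {decaying:.0f}")   # grows much more slowly
```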


by Stuart Armstrong 917 days ago

But I’m arguing that using these moral theories to assess regret is the wrong thing to do.



There is a decent-sized literature, self-described as “Pareto-based multiobjective machine learning” and “multi-criteria reinforcement learning”, that is relevant to the concept of vector-valued machine learners, though perhaps less so to the issues around self-modification.



Thanks!!

In addition to Jessica’s comments: uniformly calling the selections ‘arms’ seems good, as does clarifying what is meant by ‘red teams’. I’ve corrected these, and likewise the definition of \(q_i\).



Given that this is my first post, critical feedback is especially welcome.



Constraining the output of an AI seems to me like a reasonable option to explore. I agree that generating a finite set of humanlike answers (with a chatbot or otherwise) might be a sensible way to do this. An AI could perform gradient descent over the solution space and then pick the nearest proposed behaviour (it could work like relaxation in integer programming).
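
A minimal sketch of that relax-then-round idea (the toy objective, candidate set, and all numbers are mine, purely for illustration): optimise over the continuous relaxation, then snap to the nearest member of a finite, human-proposed candidate set.

```python
import numpy as np

# Toy "behaviour space": each behaviour is a 2-d parameter vector, and the AI's
# objective is a simple quadratic it would like to maximise.
target = np.array([2.7, -1.3])

def objective(x):
    return -np.sum((x - target) ** 2)

# Finite set of humanlike candidate behaviours proposed in advance.
candidates = np.array([
    [0.0, 0.0],
    [3.0, -1.0],
    [2.0, 2.0],
    [-1.0, -2.0],
])

# Step 1: unconstrained gradient ascent over the continuous relaxation.
x = np.zeros(2)
for _ in range(500):
    grad = -2 * (x - target)      # gradient of the toy objective
    x += 0.05 * grad

# Step 2: "round" to the nearest proposed behaviour, as in integer-programming
# relaxation: the AI may only output an option from the candidate set.
nearest = candidates[np.argmin(np.linalg.norm(candidates - x, axis=1))]

print(f"continuous optimum ~ {x.round(2)}")       # ~ [2.7, -1.3]
print(f"chosen humanlike behaviour: {nearest}")   # [3., -1.]
```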

The multiple-choice AI (with human-suggested options) is the most obvious option for avoiding unhumanlike behaviour. Paul has said in some Medium comments that he thinks his more elaborate approach of combining mimicry and optimisation [1] would work better, though.

  1. https://medium.com/ai-control/mimicry-maximization-and-meeting-halfway-c149dd23fc17

