by Stuart Armstrong 667 days ago

> For example, if you assume “Anything humans say about their preferences is true,” that’s basically giving up on the Bayesian approach as usually imagined

More formally, what I mean by that is “assume humans are perfectly rational, and fit a reward/utility function given that assumption”. This is a perfectly Bayesian approach, and it will always produce an (over-complicated) utility function that fits the observed behaviour.
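
To make that concrete, here is a minimal sketch (the toy states, actions, and policy are my own illustrative assumptions, not from the discussion). Under the perfect-rationality likelihood, the observed behaviour itself pins down a reward function that fits it exactly, whatever the behaviour is:

```python
from itertools import product

# Hypothetical toy setting: states, actions, and an observed human policy.
states = ["s0", "s1"]
actions = ["a0", "a1"]
observed_policy = {"s0": "a1", "s1": "a0"}  # whatever the human actually did

def rationality_likelihood(policy, reward):
    """P(policy | reward) under perfect rationality: 1 if the policy is
    greedy for the reward in every state, else 0."""
    return all(
        policy[s] == max(actions, key=lambda a: reward[(s, a)])
        for s in states
    )

# The degenerate "over-complicated" reward that rationalises any behaviour:
# reward 1 for exactly the observed action in each state, 0 otherwise.
fitted_reward = {
    (s, a): 1.0 if observed_policy[s] == a else 0.0
    for s, a in product(states, actions)
}

# This succeeds no matter what observed_policy is: the perfect-rationality
# model can always "explain" the data with some utility function.
assert rationality_likelihood(observed_policy, fitted_reward)
```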

> In the usual Bayesian setting, “humans are perfectly reliable” corresponds to believing that human utterances correctly track (fixed) human preferences, i.e. believing that it is impossible to influence those utterances.

Yes and no. Under the assumption that humans are perfectly reliable, influencing human preferences and utterances is impossible by definition. But acting on that assumption produces behaviour that, under more sensible assumptions, looks exactly like influencing human utterances.

E.g. if you threaten a human with a gun and make them report that they are maximally happy, a sensible model of human preferences will say they are lying. But the “humans are rational” model will simply conclude that humans really like being threatened in this way.
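
A hedged sketch of that gun example (the priors, likelihoods, and model names below are illustrative assumptions). It compares the posterior over “the human likes being threatened” under the two observation models, given the coerced report “I am maximally happy”:

```python
# Hypothesis space: does the human genuinely enjoy being threatened?
prior = {"likes_threats": 0.01, "dislikes_threats": 0.99}

# P(report "maximally happy" | hypothesis, at gunpoint):
rational_model = {
    # "Humans are perfectly rational/reliable": reports track preferences,
    # so a happy report is only possible if they really are happy.
    "likes_threats": 1.0,
    "dislikes_threats": 0.0,
}
coercion_aware_model = {
    # A sensible model: people comply at gunpoint regardless of preference.
    "likes_threats": 0.99,
    "dislikes_threats": 0.95,
}

def posterior(likelihood):
    """Bayes' rule over the two hypotheses."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

print(posterior(rational_model))
# {'likes_threats': 1.0, 'dislikes_threats': 0.0}
# -> certain the human loves being threatened.

print(posterior(coercion_aware_model))
# ~{'likes_threats': 0.0104, 'dislikes_threats': 0.9896}
# -> essentially no update: the report is uninformative under coercion.
```

The point of the contrast: the coerced utterance carries almost no information under the coercion-aware model, while the perfect-rationality model treats it as decisive evidence, so an agent using the latter is rewarded for producing exactly such utterances.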


