Intelligent Agent Foundations Forum
by Jessica Taylor 671 days ago | Patrick LaVictoire and Stuart Armstrong like this | link | parent

Of course, given a diverse enough prior, a correct model of human irrationality will be included, but the human remains underspecified.

More specifically, it seems like the biggest problem with having a diverse prior is that the correct (utility function, irrationality model) pair might not be learnable from any amount of data. For example, perhaps humans like apples, or perhaps they don't like apples but act as if they do, due to irrationality; either way they behave the same. See also Paul's post on this.
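The unidentifiability point can be made concrete with a toy sketch (hypothetical, not from the post): two (utility function, irrationality model) pairs that induce identical behavior, so no amount of observed data can distinguish them.

```python
# Two candidate explanations of the same observed behavior.

def rational(utilities):
    """Irrationality model A: pick the action with highest utility."""
    return max(utilities, key=utilities.get)

def inverted(utilities):
    """Irrationality model B: a bias that systematically inverts preferences."""
    return min(utilities, key=utilities.get)

# Hypothesis A: the human likes apples and is rational.
u_a = {"eat_apple": 1.0, "skip_apple": 0.0}

# Hypothesis B: the human dislikes apples, but the bias inverts their choice.
u_b = {"eat_apple": 0.0, "skip_apple": 1.0}

# Both hypotheses predict the same action, so they assign identical
# likelihood to every possible observation of behavior.
print(rational(u_a))  # eat_apple
print(inverted(u_b))  # eat_apple
```

Since both pairs give the same likelihood on all data, a learner with a diverse prior over such pairs never concentrates on the true one; the posterior stays split according to the prior.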



by Stuart Armstrong 670 days ago | Patrick LaVictoire likes this | link

Thanks - Paul's post is useful, and I'm annoyed I didn't know about it; it would have saved me from rediscovering the same ideas. That's a failure of communication. What should I do to avoid these failures in the future? (Simply reading all of Paul's and MIRI's output seems infeasible.) Maybe talk with people from MIRI more often?


by Paul Christiano 668 days ago | link

If you don’t read everything I write, then you certainly can’t know everything I’ve written :)

The normal approach is to talk with people about a particular question before spending time on it. Someone can hopefully point you to relevant things that have been written.

That said, I think it takes less than 10 minutes a day to read basically everything that gets written about AI control, so it seems like we should all probably just do that. Does it seem infeasible because of the time requirement, or for some other reason? Am I missing some giant body of sensible writing on this topic?


by Patrick LaVictoire 659 days ago | link

Stuart did make it easier for many of us to read his recent ideas by crossposting them here. I’d like there to be some central repository for the current set of AI control work, and I’m hoping that the forum could serve as that.

Is there functionality that, if added here, would make it trivial to crosspost when you write something of note?



