Intelligent Agent Foundations Forum
by Wei Dai 812 days ago

217/PDV (I assume you’re the same person?), I agree with much of what you wrote, but do you have your own ideas for how to achieve Friendly AI? It seems like most of the objections against Paul’s ideas also apply to other people’s (such as MIRI’s). The facts that humans aren’t benign (or can’t be determined to be benign) under a sufficiently large set of environments/inputs, that they suffer from value drift, and that they have unknown/unpatchable security holes all pose similar problems for CEV, for instance, and nobody has proposed a plausible way to solve them, AFAIK.

In a way, I guess Paul has actually done more to explicitly acknowledge these problems than just about anyone else, even if I think (as you do) that he is too optimistic about the prospect of solving them using the ideas he has sketched out.



by Jacob Kopczynski 811 days ago

(Yes, same person.)

I agree that no one else has solved the problem or made much progress. I object to Paul’s approach here because it couples the value problem more closely to other problems in architecture and value stability. I would much prefer holding off on attacking it for the moment, rather than this approach, which, to my reading, takes for granted that the problem is not hard and rests further work on top of it. Holding off at least leaves room for nearby pieces to be carved out, giving a better idea of what properties a solution would have; this approach seems to be based on the solution looking vastly simpler than I think it is.

I also have a general intuitive prior that reinforcement learning approaches are untrustworthy and are “building on sand”, but that intuition is neither precise nor persuasive, so I’m not writing it up except on questions like this one, where it’s more solid. I’ve put much less work into this field than Paul or others have, so I don’t want to challenge things except where I’m confident.
