Intelligent Agent Foundations Forum
by Paul Christiano 815 days ago

(I replied last weekend, but the comment is awaiting moderation.)



by Jacob Kopczynski 814 days ago

Apologies, I stopped getting moderation emails at some point and haven’t fixed it properly.


by Daniel Dewey 814 days ago

I also commented there last week and am awaiting moderation. Maybe we should post our replies here soon?


by Wei Dai 813 days ago

(I’m replying to your comment here since I don’t trust personal blogs to stay alive and I don’t want my comments to disappear with them.)

Your point about not giving up too easily seems a good one. There could well be ideas that are counterintuitive (to most people) but ultimately workable after a lot of effort, like public-key cryptography in another area I’m familiar with. I also think you’re overly optimistic, but that’s not necessarily a bad thing if it helps you explore areas that others wouldn’t.

But I’m worried that, unlike typical CS fields, where it’s relatively easy to define technical concepts (and then prove theorems about them) and to run algorithms to test and debug them, the analogous work in AI alignment will be many times harder, so we can’t achieve high confidence that something works even if it actually does, or narrow down to precisely the right idea within the neighborhood it sits in. Even in crypto, it took decades to refine the idea of “security” into notions like “indistinguishability under adaptive chosen-ciphertext attack” and then to find actually secure algorithms; all of the earliest deployed public-key algorithms were in fact broken, even though they formed the basis for later ones. If ideas about AI alignment evolve in a similar way (but on an even longer timescale, because the concepts are even harder to define and the experiments even harder to run), it’s hard to see how things will turn out well. And if the best we can achieve in the relevant time frame are plausible alignment ideas or algorithms that are merely “in the right neighborhood”, that could even make things worse than not having them at all, by making people feel safer pursuing or deploying AI capabilities, or by reducing investment in other ways of preventing AI risk.



