Intelligent Agent Foundations Forum
Meta: IAFF vs LessWrong
discussion post by Vadim Kosoy 79 days ago | Jessica Taylor likes this | 5 comments

It seems that many or even most researchers have now started posting on LessWrong.com instead of IAFF. This troubles me for two reasons.

  1. I hoped that eventually IAFF would draw in the entire AI alignment research community rather than only the MIRI-sphere. Moving to LessWrong seems like a step in the opposite direction. However, if everyone else thinks differently, I will go along with it.

  2. More critically, LessWrong.com is currently unusable for this purpose from my point of view, because there is no way to see only the posts relevant to AI alignment. It is not practical for me to follow all new posts and read the AI-alignment-relevant ones soon after they appear. Instead, I occasionally take a break from my own work and catch up with what everyone else is doing. This has now become impossible without combing through a huge list of irrelevant things. I would be surprised if I am the only one with this problem.

Let’s discuss the situation, please. I wish that either we get some way to view only the AI alignment posts on LessWrong really soon, or that people at least continue posting links on IAFF.



by Alex Mennen 67 days ago | Vadim Kosoy likes this | link

There is a replacement for IAFF now: https://www.alignmentforum.org/

reply

by Jessica Taylor 67 days ago | Vadim Kosoy likes this | link

Apparently “You must be approved by an admin to comment on Alignment Forum”, how do I do this?

Also, is this officially the successor to IAFF? If so, it would be good to make that clearer on this website.

reply

by Alex Mennen 66 days ago | link

There should be a chat icon at the bottom-right of the screen on Alignment Forum that you can use to talk to the admins (unless only people who have already been approved can see it?). You can also comment on LW (Alignment Forum posts are automatically crossposted to LW) and ask the admins to make your comment show up on Alignment Forum afterwards.

reply

by Jessica Taylor 79 days ago | link

Strongly agree that all AI alignment research should at least be linked from here.

reply

by Vladimir Slepnev 77 days ago | link

I’ve kind of switched to the view that we should have the whole party happening on LW, so people from different backgrounds can mix and get exposed to each other’s interests. But yeah, it would be nice if new LW had tags, like old LW did.

reply
