Intelligent Agent Foundations Forum
by Vladimir Slepnev 399 days ago

“figure out what my values actually are / should be”

I think many human ideas are like low resolution pictures. Sometimes they show simple things, like a circle, so we can make a higher resolution picture of the same circle. That’s known as formalizing an idea. But if the thing in the picture looks complicated, figuring out a higher resolution picture of it is an underspecified problem. I fear that figuring out my values over all possible futures might be that kind of problem.

So apart from hoping to define a “full resolution picture” of human values, either by ourselves or with the help of some AI or AI-human hybrid, it might be useful to come up with approaches that avoid defining it. That was my motivation for this post, which directly uses our “low resolution” ideas to describe some particular nice future without considering all possible ones. It’s certainly flawed, but there might be other similar ideas.

Does that make sense?



by Wei Dai 397 days ago

I think I understand what you’re saying, but my state of uncertainty is such that I put a lot of probability mass on possibilities that wouldn’t be well served by what you’re suggesting. For example, the possibility that we can achieve most value not through the consequences of our actions in this universe, but through their consequences in much larger (computationally richer) universes simulating this one. Or that spreading hedonium is actually the right thing to do and produces orders of magnitude more value than spreading anything that resembles human civilization. Or that value scales non-linearly with brain size, so we should go for either very large or very small brains.
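
To sketch the last point under a toy assumption (a fixed resource budget $R$ and a power-law value function $V(n) = n^\alpha$ for a brain of size $n$): splitting the budget among $k$ brains of size $R/k$ gives total value $k \cdot V(R/k) = R^\alpha k^{1-\alpha}$. If $\alpha > 1$ this is maximized by a single giant brain ($k = 1$); if $\alpha < 1$ it is maximized by as many small brains as the budget allows; only the linear case $\alpha = 1$ is indifferent to how the budget is divided.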

While discussing the VR utopia post, you wrote “I know you want to use philosophy to extend the domain, but I don’t trust our philosophical abilities to do that, because whatever mechanism created them could only test them on normal situations.” I have some hope that there is a minimal set of philosophical abilities that would allow us to eventually solve arbitrary philosophical problems, and that we already have it. Otherwise it seems hard to explain the kinds of philosophical progress we’ve made, like realizing that other universes probably exist, and figuring out some ideas about how to make decisions when there are multiple copies of us in this universe and others.

Of course it’s also possible that that isn’t the case, and that we can’t do better than to optimize the future using our current “low resolution” values; but until we’re a lot more certain of that, any attempt to do so seems to constitute a serious existential risk.
