Intelligent Agent Foundations Forum
Ontology, lost purposes, and instrumental goals
discussion post by Stuart Armstrong 635 days ago

A putative new idea for AI control; index here.

An underdefined idea connected with the challenge of getting an AI to safely move a strawberry onto a plate.

Now, specifying something like that in the physical world is a great challenge; you have to define ontologies and the like. But imagine that the AI had a goal – any goal – and that it had to program a subagent to protect itself while it was accomplishing that goal.

Then the subagent will certainly be programmed with a firm grasp of the physical world, and with some decent bridging laws should it undergo an ontology change (if, for instance, quantum mechanics turns out to be incomplete).
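To make the bridging-law idea concrete, here is a minimal Python sketch (all names and the toy physics are mine, purely illustrative): the subagent's utility is defined over an old ontology, and a bridging law reinterprets states of a new ontology so that the same values carry over.

```python
# Toy sketch of a bridging law (hypothetical names; illustrative only).
# Utility is defined over an old ontology of classical objects; when the
# ontology changes, a bridging law maps new-ontology states back to
# old-ontology states so the utility function keeps making sense.

def utility_old(state_old: dict) -> float:
    """Utility in the original ontology: is the strawberry on the plate?"""
    return 1.0 if state_old.get("strawberry_on_plate") else 0.0

def bridge(state_new: dict) -> dict:
    """Bridging law: interpret a new-ontology state (here, a crude
    'mass in the plate region' description) as an old-ontology state.
    Writing this function well is where the hard work actually lives."""
    return {"strawberry_on_plate": state_new["mass_on_plate_kg"] > 0.01}

def utility_new(state_new: dict) -> float:
    """The same values, carried across the ontology change."""
    return utility_old(bridge(state_new))

print(utility_new({"mass_on_plate_kg": 0.02}))  # 1.0: value survives the change
print(utility_new({"mass_on_plate_kg": 0.00}))  # 0.0
```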

This is just an illustration of a general fact: even if an agent's terminal goal is not properly grounded, its instrumental goals will include strongly grounded goals, resilient to ontology change.
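A deliberately simplistic toy version of that fact (hypothetical names, nothing more than a cartoon): whatever terminal goal we plug in, the plan bottoms out in the same grounded instrumental subgoals, so those subgoals are where grounding gets forced on the agent.

```python
# Toy sketch (hypothetical names): very different terminal goals, including
# badly grounded ones, share the same well-grounded instrumental subgoals.

GROUNDED_INSTRUMENTALS = [
    "maintain_accurate_world_model",  # needed to predict anything at all
    "preserve_own_hardware",          # needed to keep acting
    "secure_energy_supply",           # needed to keep running
]

def plan(terminal_goal: str) -> list:
    """Any plan prepends the same grounded instrumental subgoals."""
    return GROUNDED_INSTRUMENTALS + [terminal_goal]

print(plan("move_strawberry_onto_plate"))
print(plan("make_humans*_happy*"))  # badly grounded goal, same grounded prefix
```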

This feels related to the fact that even AIs given goals in badly programmed natural language concepts ("Make humans* happy*", with the asterisks denoting the poor grounding) will still need a well-grounded concept of "human", just to function.

So, is there a way to exploit this instrumental tendency? To somehow set human* equal to human in the motivation? I'm not sure, but it seems there might be something possible there… Will think more.
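One way to picture the hoped-for move (speculative; every name below is mine, and this is a cartoon of the open problem, not a solution): the reward is written over a badly grounded symbol human*, while the agent maintains a well-grounded concept of "human" for instrumental reasons; the proposal would be to redirect the motivational symbol to point at the instrumental concept.

```python
# Speculative cartoon of "set human* equal to human in the motivation"
# (hypothetical names throughout; illustrates the question, not an answer).

class Agent:
    def __init__(self):
        # Well-grounded concept, maintained for instrumental reasons
        # (the agent needs it just to predict and navigate the world).
        self.instrumental_human = lambda e: e.get("is_person", False)
        # Motivational symbol human*: initially a poor proxy.
        self.human_star = lambda e: e.get("smiling_face_detected", False)

    def tie_concepts(self):
        """The hoped-for move: human* := human in the motivation."""
        self.human_star = self.instrumental_human

    def reward(self, entity: dict) -> float:
        return 1.0 if self.human_star(entity) else 0.0

agent = Agent()
smiley_robot = {"smiling_face_detected": True, "is_person": False}
print(agent.reward(smiley_robot))  # 1.0: the proxy is gamed
agent.tie_concepts()
print(agent.reward(smiley_robot))  # 0.0: the grounded concept drives reward
```

Whether anything like tie_concepts can be specified before the agent has formed its instrumental concepts is, of course, exactly the open question.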




