by Vadim Kosoy

Note that the problem with exploration already arises in ordinary reinforcement learning, without going into “exotic” decision theories (a toy illustration follows the list below). Regarding the question of why humans don’t seem to have this problem, I think it is a combination of the following:

  • The universe is regular (which is related to what you said about “we can’t see any plausible causal way it could happen”), so a Bayes-optimal policy with a simplicity prior has something going for it. On the other hand, sometimes you do need to experiment, so this can’t be the only explanation.

  • Any individual human has parents that teach them things, including things like “touching a hot stove is dangerous.” Later in life, they can draw on much of the knowledge accumulated by human civilization. This tunnels exploration into safe channels, analogously to the role of the advisor in my recent posts (see the second sketch below).

  • One may say that the previous point only passes the recursive buck, since we can consider all of humanity to be the “agent”. From this perspective, it seems that the universe just happens to be relatively safe, in the sense that it’s pretty hard for an individual human to do something that will irreparably damage all of humanity… or at least that was the case during most of human history.

  • In addition, we have some useful instincts baked in by evolution (e.g. probably some notion of existing in a three-dimensional space with objects that interact mechanically). Again, you could zoom further out and say evolution works because it’s hard to create a species that will wipe out all life.
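To make the first point concrete, here is a toy sketch (my own illustration, using a hypothetical one-step environment with made-up action names, not any particular formalism): any exploration rule that assigns nonzero probability to every action, such as epsilon-greedy, will eventually sample a catastrophic action no matter how bad it is.

```python
import random

# Toy one-step environment (hypothetical names, for illustration only):
# two harmless actions and one catastrophic "trap" action.
ACTIONS = ["safe_a", "safe_b", "trap"]
REWARD = {"safe_a": 1.0, "safe_b": 0.5, "trap": -1e6}

def epsilon_greedy(values, epsilon=0.1):
    """Pick the greedy action, but explore uniformly at random with probability epsilon."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: values[a])

values = {a: 0.0 for a in ACTIONS}
for step in range(10_000):
    action = epsilon_greedy(values)
    # Running-average value update.
    values[action] += 0.1 * (REWARD[action] - values[action])
    if action == "trap":
        print(f"explored the trap at step {step}")
        break
```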
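And here is a matching sketch of the second point: an advisor that vetoes dangerous actions tunnels exploration into the remaining safe channels. This is only an informal illustration with a hypothetical advisor_is_ok predicate, not the actual formalism from the Delegative RL posts.

```python
import random

ACTIONS = ["safe_a", "safe_b", "trap"]
REWARD = {"safe_a": 1.0, "safe_b": 0.5, "trap": -1e6}

def advisor_is_ok(action):
    """Hypothetical advisor: vetoes actions it knows to be catastrophic."""
    return action != "trap"

def guided_epsilon_greedy(values, epsilon=0.1):
    """Epsilon-greedy, but both the greedy choice and the exploration step
    are restricted to advisor-approved actions, so the agent never samples the trap."""
    allowed = [a for a in ACTIONS if advisor_is_ok(a)]
    if random.random() < epsilon:
        return random.choice(allowed)
    return max(allowed, key=lambda a: values[a])

values = {a: 0.0 for a in ACTIONS}
for step in range(10_000):
    action = guided_epsilon_greedy(values)
    values[action] += 0.1 * (REWARD[action] - values[action])

print({a: round(v, 2) for a, v in values.items()})
```

Of course, this just pushes the question back to where the advisor’s knowledge comes from, which is the “recursive buck” of the third bullet.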


