Intelligent Agent Foundations Forum
Three Oracle designs
post by Stuart Armstrong

A putative new idea for AI control; index here.

An initial draft looking at three ways of getting information out of Oracles that is both useful and safe - in theory.

One thing I may need to do is find slightly better names for them ^_^

Good and safe uses of AI Oracles

Abstract:


An Oracle is a design for potentially high-powered artificial intelligences (AIs), in which the AI is made safe by restricting it to only answering questions. Unfortunately, most designs leave the Oracle motivated to manipulate humans with the contents of its answers. A second challenge is getting the AI to provide accurate and useful answers. This paper presents three Oracle designs that get around the manipulation and accuracy problems in different ways: the Counterfactually Unread Agent, the Verified Selective Agent, and the Virtual-world Time-bounded Agent. It demonstrates how each design is safe (given that humans stick to the protocols) and allows different types of questions and answers. Finally, it investigates what happens when the implementation is slightly imperfect, concluding that the first two agent designs are robust to this, but the third is not.
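The abstract only names the designs, so here is a minimal sketch of how the first one plausibly works, assuming the Counterfactually Unread Agent follows the usual counterfactual-oracle recipe: the Oracle is rewarded only on episodes where its answer is randomly erased before any human reads it, and is scored there against the real outcome. The function and argument names below are purely illustrative, not taken from the paper.

```python
import random

def counterfactually_unread_episode(oracle_answer_fn, question, true_outcome_fn,
                                     erasure_prob=0.01):
    """One episode of a toy counterfactually-unread Oracle protocol.

    The Oracle commits to an answer, and only afterwards is it decided
    (randomly) whether the answer is erased before any human reads it.
    Reward is given solely on erased episodes, scored against the outcome
    that actually occurs, so the Oracle's expected reward cannot depend on
    how humans react to answers they read.
    """
    answer = oracle_answer_fn(question)        # Oracle commits to an answer first
    erased = random.random() < erasure_prob    # erasure decided after commitment

    if erased:
        # Nobody ever reads this answer; score it against the real outcome,
        # which the unread answer could not have influenced.
        outcome = true_outcome_fn(question)
        reward = -abs(answer - outcome)        # e.g. negative prediction error
        return None, reward                    # humans receive nothing this episode

    # Humans read and use the answer, but the Oracle earns no reward here,
    # so it gains nothing by crafting a manipulative answer.
    return answer, 0.0
```

The point of the split is that on the rare erased episodes the answer is graded against a world it could not have influenced, while on all other episodes humans get to use the answer but the Oracle has no stake in how they do so.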

Images of the three designs (figures not reproduced here): the Counterfactually Unread Agent, the Verified Selective Agent, and the Virtual-world Time-bounded Agent.


