Intelligent Agent Foundations Forum
Three Oracle designs
post by Stuart Armstrong 246 days ago

A putative new idea for AI control; index here.

An initial draft looking at three ways of getting useful and safe information out of Oracles - in theory.

One thing I may need to do is find slightly better names for them ^_^

Good and safe uses of AI Oracles

Abstract:


An Oracle is a design for potentially high-powered artificial intelligences (AIs), in which the AI is made safe by restricting it to only answering questions. Unfortunately, most designs leave the Oracle motivated to manipulate humans with the contents of its answers. A second challenge is getting the AI to provide accurate and useful answers. This paper presents three Oracle designs that get around the manipulation and accuracy problems in different ways: the Counterfactually Unread Agent, the Verified Selective Agent, and the Virtual-world Time-bounded Agent. It demonstrates how each design is safe (given that humans stick to the protocols) and what types of questions and answers each allows. Finally, it investigates what happens when the implementation is slightly imperfect, concluding that the first two designs are robust to such imperfections, but the third is not.
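As a rough illustration of the first design only, here is a minimal toy sketch of the counterfactual-unread reward structure: with some small probability an "erasure" event occurs, the answer is never shown to humans, and the Oracle is scored only on those erased episodes. All the names and parameters below (`run_episode`, `erasure_prob`, the prediction-loss score) are illustrative assumptions, not the paper's actual formalism.

```python
import random

def run_episode(oracle_answer, true_outcome, erasure_prob=0.05):
    """Return the Oracle's reward for one toy episode.

    Illustrative sketch: the Oracle is rewarded only when its answer
    is erased (unread), so the reward never depends on how humans
    react to reading the answer.
    """
    erased = random.random() < erasure_prob
    if erased:
        # Answer was never read: score it against the world in which
        # nobody saw it (here, a simple prediction loss).
        return -abs(oracle_answer - true_outcome)
    # Humans read the answer, but the Oracle receives a fixed reward,
    # independent of the answer's content.
    return 0.0
```

Because reward in the "read" branch is constant, the answer's effect on its readers contributes nothing to the Oracle's expected reward; only its accuracy in the unread counterfactual matters.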

Images of the three designs:

Counterfactually Unread Agent:

Verified Selective Agent:

Virtual-world Time-bounded Agent:


