Intelligent Agent Foundations Forum
by Paul Christiano 72 days ago

Suppose that I, Paul, use a toaster or SAT solver or math textbook.

I’m happy to drop the normatively correct reasoning assumption if the counterfactual begs the question. The important points are:

  • I’m happy trusting future Paul’s reasoning (in particular I do not consider it a top altruistic priority to find a way to avoid trusting future Paul’s reasoning)
  • That remains true even though future Paul would happily use an opaque toaster or textbook (under the conditions described).

I’m not convinced that any of your arguments would be sufficient to trust a toaster / textbook / SAT solver:

> and it should be easy to show that there is no influence

Having new memories will by default change the output of deliberation, won’t it?

> For the SAT solver, the AI should be able to argue that it’s safe to use it for certain purposes, because it can verify the answer that the solver gives

Satisfying instances produced by an arbitrarily powerful adversary don’t seem safe for anything.
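The asymmetry being debated here is that checking a SAT solver's "satisfiable" answer is cheap even when the solver itself is untrusted. A minimal sketch of such a check (the helper and encoding are illustrative, not from the discussion; clauses use the DIMACS convention where a positive integer i means variable i is true and a negative one means it is false):

```python
def verify_assignment(cnf, assignment):
    """cnf: list of clauses, each a list of nonzero ints (DIMACS-style
    literals). assignment: dict mapping variable index -> bool.
    Returns True iff every clause has at least one satisfied literal."""
    for clause in cnf:
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False  # this clause is unsatisfied
    return True

# (x1 OR NOT x2) AND (x2 OR x3)
cnf = [[1, -2], [2, 3]]
print(verify_assignment(cnf, {1: True, 2: True, 3: False}))   # True
print(verify_assignment(cnf, {1: False, 2: True, 3: False}))  # False
```

The check runs in time linear in the formula size, which is why the quoted argument only covers the "satisfiable" direction: an adversarial solver's claim of *unsatisfiability* has no comparably cheap certificate.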

> and for the relativity textbook, it may be able to directly verify that the textbook doesn’t contain anything that can manipulate or bias its outputs

I don’t see how this would fit into your framework, without expanding it far enough that it could contain the kind of argument I’m gesturing at (by taking bad = “manipulating or biasing its outputs”).



by Wei Dai 72 days ago

If we’re talking about you, Paul, then what’s different is that since you don’t have a good understanding of what normatively correct reasoning is, you can only use black-box-type reasoning to conclude that certain things are safe to do. We’d happily use the opaque toaster or textbook because we have fairly strong empirical evidence that doing so doesn’t change the distribution of outcomes much. Using a toaster might change a particular outcome vs not using it, but there seems to be enough stochasticity in a human deliberation process that it wouldn’t make a significant difference to the overall distribution of outcomes. With a textbook, you reason that with enough time you’d reproduce its contents yourself, and whatever actual differences there are between reading the textbook and figuring out relativity by yourself are again lost in the overall noise of the deliberative process. (We have fairly strong empirical evidence that reading such a textbook written by another human is unlikely to derail our deliberative process in a way that’s not eventually recoverable.)

One reply to this might be that we can hope to gather an amount of empirical evidence about meta-execution that would be comparable to the evidence we have about toasters and textbooks. I guess my concern there is that we’ll need much stronger assurances if we’re going to face other superintelligent AIs in our environment. For example, that textbook might contain subtle mistakes that cause you to reason incorrectly about certain questions (analogous to edge-case questions where your meta-execution would give significantly different answers than your reflective equilibrium), but there is no one in your current environment who can exploit such errors.

ETA: Another reason to be worried is that, compared to humans using things produced by other humans, it seems reasonable to suspect (have a high prior) that meta-execution’s long run safety can’t be extrapolated well from what it does in the short term, since meta-execution is explicitly built out of a component that emphasizes imitation of short-term human behavior while throwing away internal changes that might be very relevant to long-run outcomes. (Again this may be missing your point about not needing to reproduce values-upon-reflection but I just don’t understand how your alternative approach to understanding deliberation would work if you tried to formalize it.)

> Satisfying instances produced by an arbitrarily powerful adversary don’t seem safe for anything.

Not sure if this is still relevant to the current interpretation of your question, but couldn’t you use it to safely break encryption schemes, at least?
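The encryption-breaking case has the same verify-before-trusting structure: a claimed secret (say, the factors of a modulus) either checks out with a trivial computation or is rejected, so even an adversarial solver's answer can be used safely. An illustrative toy check (the helper and numbers are made up for this sketch):

```python
def check_factors(n, p, q):
    """Accept a claimed nontrivial factorization of n only if it verifies.
    A single multiplication suffices, so a malicious claim is either
    genuinely useful or caught immediately."""
    return 1 < p < n and 1 < q < n and p * q == n

n = 3233  # toy modulus, 61 * 53
print(check_factors(n, 61, 53))  # True: the claimed factors verify
print(check_factors(n, 7, 462))  # False: 7 * 462 = 3234 != 3233
```

This is why a satisfying instance can be "safe for something" here: the danger from an adversarial solver is limited to answers that pass an independent check the user controls.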
