Intelligent Agent Foundations Forum
by Paul Christiano 223 days ago | link | parent

Suppose that I, Paul, use a toaster or SAT solver or math textbook.

I’m happy to drop the normatively correct reasoning assumption if the counterfactual begs the question. The important points are:

  • I’m happy trusting future Paul’s reasoning (in particular I do not consider it a top altruistic priority to find a way to avoid trusting future Paul’s reasoning)
  • That remains true even though future Paul would happily use an opaque toaster or textbook (under the conditions described).

I’m not convinced that any of your arguments would be sufficient to trust a toaster / textbook / SAT solver:

> and it should be easy to show that there is no influence

Having new memories will by default change the output of deliberation, won’t it?

> For the SAT solver, the AI should be able to argue that it’s safe to use it for certain purposes, because it can verify the answer that the solver gives

Satisfying instances produced by an arbitrarily powerful adversary don’t seem safe for anything.
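For concreteness, the verification step the quoted argument relies on is cheap: checking that a proposed assignment satisfies a CNF formula takes time linear in the formula's size, regardless of how powerful (or adversarial) the solver that produced it was. A minimal sketch, using the DIMACS-style convention of signed integers for literals (the encoding and function name here are illustrative, not from the discussion):

```python
def verify_assignment(clauses, assignment):
    """Check that `assignment` (a dict mapping variable -> bool) satisfies
    a CNF formula given as a list of clauses, each a list of nonzero ints
    (DIMACS-style: literal v means variable v, -v means its negation).
    Linear in the formula size, so the check stays cheap even when the
    solver that produced the assignment is untrusted."""
    for clause in clauses:
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False  # this clause has no satisfied literal
    return True

# (x1 or not x2) and (x2 or x3)
cnf = [[1, -2], [2, 3]]
print(verify_assignment(cnf, {1: True, 2: True, 3: False}))   # True
print(verify_assignment(cnf, {1: False, 2: True, 3: False}))  # False
```

Note that this only certifies "this assignment satisfies this formula"; Paul's objection is about everything downstream of that certificate, e.g. which satisfying instance an adversarial solver chooses to hand you.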

> and for the relativity textbook, it may be able to directly verify that the textbook doesn’t contain anything that can manipulate or bias its outputs

I don’t see how this would fit into your framework without expanding it far enough that it could contain the kind of argument I’m gesturing at (by taking “bad” = “manipulating or biasing its outputs”).

by Wei Dai 223 days ago | link

If we’re talking about you, Paul, then what’s different is that, since you don’t have a good understanding of what normatively correct reasoning is, you can only use black-box-type reasoning to conclude that certain things are safe to do. We’d happily use the opaque toaster or textbook because we have fairly strong empirical evidence that doing so doesn’t change the distribution of outcomes much. Using a toaster might change a particular outcome versus not using it, but there seems to be enough stochasticity in a human deliberation process that it wouldn’t make a significant difference to the overall distribution of outcomes. With a textbook, you reason that with enough time you’d reproduce its contents yourself, and whatever actual differences exist between reading the textbook and figuring out relativity on your own are again lost in the overall noise of the deliberative process. (We have fairly strong empirical evidence that reading such a textbook, written by another human, is unlikely to derail our deliberative process in a way that’s not eventually recoverable.)

One reply to this might be that we can hope to gather an amount of empirical evidence about meta-execution comparable to the evidence we have about toasters and textbooks. My concern there is that we’ll need much stronger assurances if we’re going to face other superintelligent AIs in our environment. For example, that textbook might contain subtle mistakes that cause you to reason incorrectly about certain questions (analogous to edge-case questions where your meta-execution would give significantly different answers from your reflective equilibrium), but there is no one in your current environment who can exploit such errors.

ETA: Another reason to be worried is that, compared to humans using things produced by other humans, it seems reasonable to suspect (i.e., to have a high prior) that meta-execution’s long-run safety can’t be extrapolated well from what it does in the short term, since meta-execution is explicitly built out of a component that emphasizes imitation of short-term human behavior while throwing away internal changes that might be very relevant to long-run outcomes. (Again, this may be missing your point about not needing to reproduce values-upon-reflection, but I just don’t understand how your alternative approach to understanding deliberation would work if you tried to formalize it.)

> Satisfying instances produced by an arbitrarily powerful adversary don’t seem safe for anything.

Not sure if this is still relevant to the current interpretation of your question, but couldn’t you use it to safely break encryption schemes, at least?
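The encryption case fits the same verify-the-answer pattern: the untrusted solver’s output is a candidate key, and you can check it locally by re-running the public encryption function on a known plaintext, so a malicious answer can at worst fail the check. A toy sketch, with a trivial XOR “cipher” standing in for a real scheme (the cipher, key, and function names are purely illustrative):

```python
def xor_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Stand-in for a real, publicly known encryption function."""
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

def verified_key(candidate: bytes, known_plaintext: bytes,
                 ciphertext: bytes) -> bool:
    """Accept a key candidate from an untrusted solver only if it
    reproduces the observed ciphertext from the known plaintext."""
    return xor_encrypt(candidate, known_plaintext) == ciphertext

known = b"attack at dawn"
ct = xor_encrypt(b"k3y", known)          # ciphertext under the true key
print(verified_key(b"k3y", known, ct))   # True: candidate checks out
print(verified_key(b"bad", known, ct))   # False: rejected, no harm done
```

The check certifies only that the candidate maps the known plaintext to the observed ciphertext; it says nothing about other channels by which an adversarially chosen answer could influence you, which is the worry in the surrounding discussion.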
