Intelligent Agent Foundations Forum

I think the most plausible view is: what we call intelligence is a collection of a large number of algorithms and innovations, each of which slightly increases effectiveness in a reasonably broad range of tasks.

To see why both views A and B seem strange to me, consider the analog for physical tasks. You could say that there is a simple core to human physical manipulation which allows us to solve any problem in some very broad natural domain. Or you could think that we just have a ton of tricks for particular manipulation tasks. But neither of those seems right: there is no simple core to the human body plan, but at the same time it contains many features which are helpful across a broad range of tasks.

reply

by Vanessa Kosoy 348 days ago | link

I think that your view is plausible enough; however, if we focus only on qualitative performance metrics (e.g. time complexity up to a polynomial, regret bound up to logarithmic factors), then this collection probably includes only a small number of innovations that are important.
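To make the coarse-graining concrete (the particular bounds here are illustrative, not taken from the comment): under this kind of metric, an online learner with regret \(O(\sqrt{T \log T})\) counts as qualitatively equivalent to one with regret \(O(\sqrt{T})\), while one with regret \(O(T^{2/3})\) counts as qualitatively worse:

\[
O\!\left(\sqrt{T \log T}\right) \sim O\!\left(\sqrt{T}\right) \quad \text{(same up to log factors)}, \qquad O\!\left(T^{2/3}\right) \not\sim O\!\left(\sqrt{T}\right).
\]

On this view, only innovations that move an algorithm between such equivalence classes count as important.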

reply

by Vanessa Kosoy 342 days ago | link

Regarding the physical manipulation analogy: I think that there actually is a simple core to the human body plan. This core is, more or less: a spine, two arms with joints in the middle, two legs with joints in the middle, feet, and hands with fingers. This is probably already enough to qualitatively solve more or less all physical manipulation problems humans can solve. All the nuances are needed to make it quantitatively more efficient and deal with the detailed properties of biological tissues, biological muscles et cetera (the latter might be considered analogous to the detailed properties of computational hardware and input/output channels for brains/AGIs).

reply


It seems relatively plausible that it’s “daemons all the way down,” and that a sophisticated agent from the daemon-distribution accepts this as the price of doing business (it loses value from being overtaken by its daemons, but gains the same amount of value on average from overtaking others). The main concern of such an agent would be defecting daemons that build anti-daemon immune systems, so that they can increase their influence by taking over parents while avoiding being taken over themselves. However, if we have a sufficiently competitive internal environment, then those defectors will be outcompeted anyway.

In this case, if we also have fractal immune systems causing log(complexity) overhead, then the orthogonality thesis is probably not true. The result would be that agents end up pursuing a “grand bargain” of whatever distribution of values efficient daemons have, rather than including a large component in the bargain for values like ours, and there would be no way for humans to subvert this directly. (We may be able to subvert it indirectly by coordinating and then trading, i.e. only building an efficient but daemon-prone agent after confirming that daemon-values pay us enough to make it worth our while. But this kind of thing seems radically confusing and is unlikely to be sorted out by humans.) The process of internal value shifting amongst daemons would continue in some abstract sense, though they would eventually end up pursuing the convergent bargain of their values (in the same way that a hyperbolic discounter ends up behaving consistently after reflection).

I think this is the most likely way the orthogonality thesis could fail. When there was an Arbital poll on this question a few years ago, I had by far the lowest probability on the orthogonality thesis and was quite surprised by other commenters’ confidence.

Fortunately, even if there is logarithmic overhead, it currently looks quite unlikely to me that the constants are bad enough for this to be an unrecoverable problem for us today. But as you say, it would be a dealbreaker for any attempt to prove asymptotic efficiency.

reply


Without reading closely, this seems very close to UDT2. Is there a problem that this gets right which UDT2 gets wrong (or for which there is ambiguity about the specification of UDT2)?

Without thinking too carefully, I don’t believe the troll bridge argument. We have to be super careful about “sufficiently large,” and about Löb’s theorem. To see whether the proof goes through, it seems instructive to consider the case where a trader with 90% of the initial mass really wants to cross the bridge. What happens when they try?

reply

by Abram Demski 563 days ago | Paul Christiano likes this | link

The differences between this and UDT2:

  1. This is something we can define precisely, whereas UDT2 isn’t.
  2. Rather than being totally updateless, this is just mostly updateless, with the parameter \(f\) determining how updateless it is.

I don’t think there’s a problem this gets right which we’d expect UDT2 to get wrong.

If we’re using the version of logical induction where the belief jumps to 100% as soon as something gets proved, then a weighty trader who believes crossing the bridge is good will just get knocked out immediately if the theorem prover starts proving that crossing is bad (which helps that step inside the Löbian proof go through). (I’d be surprised if the analysis turns out much different for the kind of LI which merely rapidly comes to believe things which get proved, but I can see how that distinction might block the proof.) But certainly it would be good to check this more thoroughly.
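A toy sketch of the dynamic being described (this is a drastically simplified one-proposition prediction market, not the actual logical induction algorithm; the trader names, weights, and settlement rule are illustrative assumptions): once the theorem prover refutes “crossing is good,” the market settles at 0 and a trader who staked most of the initial mass on crossing loses that stake in a single step.

```python
# Toy sketch (not the real logical-induction machinery): one proposition,
# "crossing is good", traded in a market that settles as soon as the
# theorem prover resolves the question one way or the other.

def settle_on_proof(traders, proof_says_good):
    """Settle the market once the prover resolves "crossing is good".

    traders: dict name -> (cash, stake_on_good); the stake is already escrowed.
    proof_says_good: True if the proposition was proved, False if refuted.
    """
    payoff = 1.0 if proof_says_good else 0.0
    # A long position of size `stake` pays out stake * payoff at settlement.
    return {name: cash + stake * payoff for name, (cash, stake) in traders.items()}

# A trader holding 90% of the initial mass bets it all on "crossing is good".
traders = {
    "bullish_on_crossing": (0.0, 0.9),  # 90% of the mass, all staked on "good"
    "everyone_else":       (0.1, 0.0),  # the remaining 10%, held as cash
}

# The prover proves the negation, so the market settles at 0 and the
# heavily-weighted trader is wiped out in one step ("knocked out immediately").
print(settle_on_proof(traders, proof_says_good=False))
# {'bullish_on_crossing': 0.0, 'everyone_else': 0.1}
```

Whether the analogous wipe-out happens for the variant of LI that merely rapidly comes to believe proved statements is exactly the distinction flagged above.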

reply


In the first round I’m planning to pay:

  • $10k to Ryan Carey
  • $10k to Chris Pasek
  • $20k to Peter Scheyer

I’m excited to see what comes of this! Within a few months I’ll do another round of advertising + making decisions.

I want to emphasize that given the evaluation process, this definitely shouldn’t be read as a strong negative judgment (or endorsement) of anyone’s application.

reply


Fine with it being shared broadly.

reply


defending against this type of technology does seem to require solving hard philosophical problems

Why is this?

The case you describe seems clearly contrary to my preferences about how I should reflect. So a system which helped me implement my preferences would help me avoid this situation (in the same way that it would help me avoid being shot, or giving malware access to valuable computing resources).

It seems quite plausible that we’ll live to see a world where it’s considered dicey for your browser to uncritically display sentences written by an untrusted party.

reply

by Wei Dai 667 days ago | link

It seems quite plausible that we’ll live to see a world where it’s considered dicey for your browser to uncritically display sentences written by an untrusted party.

How would your browser know who can be trusted, if any of your friends and advisers could be corrupted at any given moment (or just their accounts taken over by malware and used to spread optimized disinformation)?

The case you describe seems clearly contrary to my preferences about how I should reflect.

How would an automated system help you avoid it, aside from blocking off all outside contact? (I doubt I’d be able to ever figure out what my values actually are / should be, if I had to do it without talking to other humans.) If you’re thinking of some sort of meta-execution-style system to help you analyze arguments and distinguish between correct arguments and merely convincing ones, I think that involves solving hard philosophical problems. My understanding is that Jessica agrees with me on that, so I was asking why she doesn’t think the same problem applies in the non-autopoietic automation scenario.

reply

by Vladimir Slepnev 667 days ago | link

figure out what my values actually are / should be

I think many human ideas are like low resolution pictures. Sometimes they show simple things, like a circle, so we can make a higher resolution picture of the same circle. That’s known as formalizing an idea. But if the thing in the picture looks complicated, figuring out a higher resolution picture of it is an underspecified problem. I fear that figuring out my values over all possible futures might be that kind of problem.

So apart from hoping to define a “full resolution picture” of human values, either by ourselves or with the help of some AI or AI-human hybrid, it might be useful to come up with approaches that avoid defining it. That was my motivation for this post, which directly uses our “low resolution” ideas to describe some particular nice future without considering all possible ones. It’s certainly flawed, but there might be other similar ideas.

Does that make sense?

reply

by Wei Dai 664 days ago | link

I think I understand what you’re saying, but my state of uncertainty is such that I put a lot of probability mass on possibilities that wouldn’t be well served by what you’re suggesting. For example, the possibility that we can achieve most value not through the consequences of our actions in this universe, but through their consequences in much larger (computationally richer) universes simulating this one. Or that spreading hedonium is actually the right thing to do and produces orders of magnitude more value than spreading anything that resembles human civilization. Or that value scales non-linearly with brain size so we should go for either very large or very small brains.

While discussing the VR utopia post, you wrote “I know you want to use philosophy to extend the domain, but I don’t trust our philosophical abilities to do that, because whatever mechanism created them could only test them on normal situations.” I have some hope that there is a minimal set of philosophical abilities that would allow us to eventually solve arbitrary philosophical problems, and that we already have it. Otherwise it seems hard to explain the kinds of philosophical progress we’ve made, like realizing that other universes probably exist, and figuring out some ideas about how to make decisions when there are multiple copies of us in this universe and others.

Of course it’s also possible that’s not the case, and we can’t do better than to optimize the future using our current “low resolution” values, but until we’re a lot more certain of that, any attempt to do so seems to constitute a strong existential risk.

reply


  1. A competitive system can use a very large number of human hours in the future, as long as it uses relatively few human hours today.

  2. By “lack of philosophical understanding isn’t a big risk” I meant: “getting object-level philosophy questions wrong in the immediate future, like how to trade off speed vs. safety or how to compromise amongst different values, doesn’t seem to destroy too much value in expectation.” We may or may not need to solve philosophical problems to build aligned AGI. (I think Wei Dai believes that object-level philosophical errors destroy a lot of value in expectation.)

  3. I think autopoietic is a useful category and captures half of what is interesting about “recursively self-improving AGI.” There is a slightly different economic concept, of automation that can be scaled up using fixed human inputs, without strongly diminishing returns. This would be relevant because it changes the character and pace of economic growth. It’s not clear whether this is equivalent to autopoiesis. For example, Elon Musk seems to hope for technology which is non-autopoietic but has nearly the same transformative economic impact. (Your view in this post is similar to my best guess at Elon Musk’s view, though more clearly articulated / philosophically crisp.)

reply

by Jessica Taylor 668 days ago | link

  1. That makes sense.

  2. OK, it seems like I misinterpreted your comment on philosophy. But in this post you seem to be saying that we might not need to solve philosophical problems related to epistemology and agency?

  3. That concept also seems useful and different from autopoiesis as I understand it (since it requires continual human cognitive work to run, though not very much).

reply

by Paul Christiano 667 days ago | link

  1. I think that we can avoid coming up with a good decision theory or priors or so on—there are particular reasons that we might have had to solve philosophical problems, which I think we can dodge. But I agree that we need or want to solve some philosophical problems to align AGI (e.g. defining corrigibility precisely is a philosophical problem).

reply


Why does Paul think that learning needs to be “aligned” as opposed to just well-understood and well-behaved, so that it can be safely used as part of a larger aligned AI design that includes search, logic, etc.?

I mostly think it should be benign / corrigible / something like that. I think you’d need something like that whether you want to apply learning directly or to apply it as part of a larger system.

If Paul does not think ALBA is a realistic design of an entire aligned AI (since it doesn’t include search/logic/etc.) what might a realistic design look like, roughly?

You can definitely make an entire AI out of learning alone (evolution / model-free RL), and I think that’s currently the single most likely possibility even though it’s not particularly likely.

The alternative design would integrate whatever other useful techniques are turned up by the community, which will depend on what those techniques are. One possibility is search/planning. This can be integrated into ALBA in a straightforward way; I think the main obstacle is security amplification, which needs to work for ALBA anyway and is closely related to empirical work on capability amplification. On the logic side it’s harder to say what a useful technique would look like other than “run your agent for a while,” which you can also do with ALBA (though it requires something like these ideas).

which makes it seem like his approach is an alternative to MIRI’s

My hope is to have safe and safely composable versions of each important AI ingredient. I would caricature the implicit MIRI view as “learning will lead to doom, so we need to develop an alternative approach that isn’t doomed,” which is a substitute in the sense that it’s also trying to route around the apparent doomedness of learning but in a quite different way.

reply

by Wei Dai 698 days ago | link

Thanks, so to paraphrase your current position, you think once we have aligned learning it doesn’t seem as hard to integrate other AI components into the design, so aligning learning seems to be the hardest part. MIRI’s work might help with aligning other AI components and integrating them into something like ALBA, but you don’t see that as very hard anyway, so it perhaps has more value as a substitute than a complement. Is that about right?

One possibility is search/planning. This can be integrated in a straightforward way into ALBA

I don’t understand ALBA well enough to easily see extensions to the idea that are obvious to you, and I’m guessing others may be in a similar situation. (I’m guessing Jessica didn’t see it for example, or she wouldn’t have said “ALBA competes with adversaries who use only learning” without noting that there’s a straightforward extension that does more.) Can you write a post about this? (Or someone else please jump in if you do see what the “straightforward way” is.)

reply


I mostly agree with this post’s characterization of my position.

Places where I disagree with your characterization of my view:

  • I don’t assume that powerful actors can’t coordinate, and I don’t think that assumption is necessary. I would describe the situation as: over the course of time influence will necessarily shift, sometimes due to forces we endorse—like deliberation or reconciliation—and sometimes due to forces we don’t endorse—like compatibility with an uncontrolled expansion strategy. Even if powerful actors can form perfectly-coordinated coalitions, a “weak” actor positioned to benefit from competitive expansion would simply decline to participate in that coalition unless offered extremely generous terms. I don’t see how the situation changes unless the strong actors use force. I do think that’s reasonably likely; I would more describe alignment as a first line of defense or a complement to approaches like regulation. I generally agree that good coordination can substitute for technical weakness.
  • I don’t think I rely on or even implicitly use a unidimensional model of power. I do use a concept like “total wealth” or “total influence,” which seems almost but not quite tautologically well-defined (as the output of a competitive/bargaining dynamic) and in particular is compatible with knowledge vs. resources vs. whatever. Being “competitive” seems to make sense in very complex worlds, when I say something like “win in a fistfight” I mean to quantify over all possible fistfights (including science, economic competition, persuasion, war, etc. etc.).
  • I have strong intuitions about my approach being workable, and either the approach will succeed or I at least will feel that I have learned something substantial. I expect many more pivots and adjustments to be necessary, but don’t expect to get stuck with plausibility arguments that are nearly as weak as the current arguments.

Places where I disagree with your view:

  • I agree that there are many drivers of AI other than learning. However, I think that learning (a) is currently the dominant component of powerful AI, and so is both more urgent and easier to study, (b) poses a much harder safety problem than other AI techniques under discussion, and (c) appears to be the “hard part” of analyzing procedures like evolution, fine-tuning brain-inspired architectures, or analyzing reasoning (it’s where I run into a wall when trying to analyze these other alternatives).
  • I think that all of capability amplification, informed oversight, and red teams / adversarial training are amenable to theoretical analysis with realistic amounts of philosophical progress. For example, I think that it will be possible to analyze these schemes using only abstractions like optimization power, without digging into models of bounded rationality at all. I may have understated my optimism on this (for capability amplification in particular) in our last discussion—I do believe that we won’t have a formal argument, but I think we should aim for an argument that is based on plausible empirical assumptions plus very good evidence for those assumptions.
  • Altruism as in “concern for future generations” does not fall out of coordination strategies, and seems more like a spandrel to me. But I do agree that many parts of altruism are more like coordination and this gives some prima facie reason for optimism about getting to some Pareto frontier.

Take-aways that I agree with:

  • We will need to have a better understanding of deliberation in order to be confident in any alignment scheme. (I prefer a more surgical approach than most MIRI folk, trying to figure out exactly what we need to know rather than trying to have an expansive understanding of what good reasoning looks like.)
  • It is valuable for people to step back from particular approaches to alignment and to try to form a clearer understanding of the problem, explore completely new approaches, etc.

reply

by Wei Dai 699 days ago | link

Given that ALBA was not meant to be a realistic aligned AI design in and of itself, but just a way to get insights into how to build a realistic aligned AI (which I hadn’t entirely understood until now), I wonder if it makes sense to try to nail down all the details and arguments for it before checking to see if you generated any such insights. If we assume that aligned learning roughly looks like ALBA, what does that tell you about what a more realistic aligned AI looks like? It seems worth asking this, in case you, for example, spend a lot of time figuring out exactly how capability amplification could work, and then it ends up that capability amplification isn’t even used in the final aligned AI design, or in case designing aligned AI out of individual AI components doesn’t actually give you much insight into how to design more realistic aligned AI.

reply

by Wei Dai 686 days ago | link

We will need to have a better understanding of deliberation in order to be confident in any alignment scheme. (I prefer a more surgical approach than most MIRI folk, trying to figure out exactly what we need to know rather than trying to have an expansive understanding of what good reasoning looks like.)

I can see two kinds of understanding of deliberation that could help us achieve confidence in alignment schemes:

  1. white-box understanding, where we understand what good reasoning is at each of the philosophical/mathematical/algorithmic levels, i.e., we can formally define what ideal reasoning is, and have a clear understanding of why the formal definition is correct/normative and why some specific algorithm is a good approximation of the mathematical ideal.
  2. black-box understanding, where we define “ideal reasoning” as the distribution of outcomes induced by placing a group of humans in an ideally safe and supportive environment and giving them a question to deliberate over, and show that a specific practical implementation of deliberation (e.g., meta-execution) induces a distribution that is acceptably similar to the ideal (see the sketch below).

(Note that I’m using the same white-box/black-box terminology as here, but the meaning is a bit different since I’m now applying the terms to understandings of deliberation as opposed to implementations of deliberation.)
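A minimal formalization of what “acceptably similar” in item 2 could mean (the notation here is my own, not from the comment): write \(P_{\mathrm{ideal}}\) for the distribution over deliberation outcomes induced by the idealized group of humans, and \(P_{\mathrm{impl}}\) for the distribution induced by the practical implementation (e.g., meta-execution). A black-box argument would then need to establish something like

\[
d\!\left(P_{\mathrm{ideal}},\, P_{\mathrm{impl}}\right) \le \epsilon
\]

for some distance \(d\) (e.g., total variation distance) and some tolerance \(\epsilon\) we are willing to accept.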

The problem with 2, which I see you as implicitly advocating (since I don’t see how else you hope to eventually be confident in your alignment scheme), is that I have a low prior for any specific implementation of deliberation (such as meta-execution) producing distributions of outcomes acceptably close to the ideal (unless it’s just a very close direct approximation of the ideal process like using a group of highly accurate uploads), and I don’t currently see what kind of arguments or evidence we can hope to produce in a relevant timeframe that would make me update enough to become confident. (Aside from something like achieving a white-box understanding of deliberation and then concluding that both the black-box definition of “ideal reasoning” and the actual implementation would be able to approximate the white-box definition of “ideal reasoning”, but presumably that’s not what you have in mind.)

Perhaps you think empirical work would help, but even if you’re able to gather a lot of data on what black-box ideal reasoning eventually produces (which you can’t for certain types of reasoning, e.g., philosophical reasoning) and are able to compare that with the AI alignment scheme, how would you rule out the possibility of edge cases where the two don’t match?

Another possibility is that you think black-box ideal reasoning would first decide to follow a set of rules and procedures before doing any further deliberation, and that would make it easier for an AI alignment scheme to approximate the ideal. But A) the group of humans would likely spend a lot of time exploring different alternatives for how to do further deliberation, and B) whatever rules/procedures they end up adopting would likely include ways to back out of those rules and/or adopt new rules. For 2, you would need to predict with justifiably high confidence what rules/procedures they eventually converge upon (if they in fact converge instead of, e.g., diverging depending on what question they face or which humans we start with), and again I don’t see how you hope to do that within the time we likely have available.

reply

by Wei Dai 685 days ago | link

I want to add that I think meta-execution, in particular, will have problems with deliberation for the same reason that it will have problems with learning: when you hear an argument or explanation (like when you learn), your mind is changed in ways that are hard or impossible to articulate. If every 10 minutes (or 1 day, or what have you) you throw away the part of what your brain does that it can’t write down, it seems highly plausible that in many cases you won’t be able to reproduce what the brain does over a longer period of time, especially if you’re trying to match its natural trajectory, as opposed to trying to hit some objectively measurable benchmark.

reply

by Paul Christiano 684 days ago | link

An aligned AI doesn’t need to share human preferences-on-reflection. It just needs to (a) be competent, and (b) help humans remain in control of the AI while carrying out whatever reflective process they prefer (including exploration of different approaches, reconciliation of different perspectives, etc.).

So all I’m hoping is to show something about how deliberation can (a) be smarter, while (b) avoiding introducing “bad” (incorrigible?) optimization. I don’t think this requires either your #1 or #2.

reply

by Wei Dai 683 days ago | link

So all I’m hoping is to show something about how deliberation can (a) be smarter, while (b) avoiding introducing “bad” (incorrigible?) optimization. I don’t think this requires either your #1 or #2.

If you try to formalize (a) and (b), what does that look like, and how would you reach the conclusion that an AI actually has (a) and (b)? My #1 and #2 are things I can come up with when I try to think how we might come to have justified confidence that an AI has good reasoning, and I’m not seeing what other solutions pop up if we say an aligned AI doesn’t need to share human preferences-on-reflection but only needs to be competent and help humans remain in control.

reply

by Paul Christiano 682 days ago | link

Competence can be an empirical claim, so (a) seems much more straightforward.

Is there some sense in which “argue that a process is normatively correct” is more of a solution than “argue that a process doesn’t optimize for something ‘bad’”? I agree that both of the properties are hard to formalize or achieve; the second one currently looks easier to me (and may even be a subproblem of the first one—e.g. my current best guess is that good reasoners need a cognitive immune system).

reply

by Wei Dai 682 days ago | link

Competence can be an empirical claim, so (a) seems much more straightforward.

Once again I’m having trouble seeing something that you think is straightforward. If an AI can’t determine my values-upon-reflection, that seems like a kind of incompetence. If it can’t do that, it seems likely there are other things it can’t do. Perhaps you can define “competence” in a way that excludes that class of things and argue that’s good enough, but I’m not sure how you’d do that.

Is there some sense in which “argue that a process is normatively correct” is more of a solution than “argue that a process doesn’t optimize for something ‘bad’”?

I think we might eventually be able to argue that a process is normatively correct by understanding it at each of philosophical/mathematical/algorithmic levels, but that kind of white-box understanding does not seem possible if your process incorporates an opaque object (e.g., a machine-learned imitation of human behavior), so I think the best you can hope to achieve in that case is a black-box understanding where you show that your process induces the same (or close enough) distribution of outcomes as a group of humans in an ideal environment.

If your “argue that a process doesn’t optimize for something ‘bad’” is meant to be analogous to my white-box understanding, it seems similarly inapplicable due to the presence of the opaque object in your process. If it’s meant to be analogous to my black-box understanding, I don’t see what the analogy is. In other words, what are you hoping to show instead of “induces the same (or close enough) distribution of outcomes as a group of humans in an ideal environment”?

reply

by Paul Christiano 682 days ago | link

but that kind of white-box understanding does not seem possible if your process incorporates an opaque object (e.g., a machine-learned imitation of human behavior), so I think the best you can hope to achieve in that case is a black-box understanding where you show that your process induces the same (or close enough) distribution of outcomes as a group of humans in an ideal environment

Suppose I use normatively correct reasoning, but I also use a toaster designed by a normal engineer. The engineer is less capable than I am in every respect, and I watched them design and build the toaster to verify that they didn’t do anything tricky. Then I verified that the toaster does seem to toast toast. But I have no philosophical or mathematical understanding of the toaster-design-process. Your claim seems to be that there are no rational grounds for accepting use of the toaster, other than to argue that accepting it doesn’t change the distribution of outcomes (which probably isn’t true, since it e.g. slightly changes the relative influence of different internal drives by changing what food I eat). Is that right?

What if they designed a SAT solver for me? Or wrote a relativity textbook? Do I need to be sure that nothing like that happens in my deliberative process, in order to have confidence in it?

If those cases don’t seem analogous, can you be more clear about what you mean by “opaque,” or what quantitative factors make an opaque object problematic? So far your argument doesn’t seem to rely on any properties of the opaque object at all.

(This case isn’t especially analogous to the deliberative process I’m interested in. I’m bringing it up because I don’t think I yet understand your intuitive dichotomy.)

reply

by Wei Dai 680 days ago | link

When you wrote “suppose I use normatively correct reasoning” did you mean suppose you, Paul, use normatively correct reasoning, or suppose you are an AI who uses normatively correct reasoning? I’ll assume the latter for now.

Generally, the AI would use its current reasoning process to decide whether or not to incorporate new objects into itself. I’m not sure what that reasoning process will do exactly, but presumably it would involve something like looking for and considering proofs/arguments/evidence to the effect that incorporating the new object in some specified way will allow it to retain its normatively correct status, or the distribution of outcomes will be sufficiently unchanged.

If an AI uses normatively correct reasoning, the toaster shouldn’t change the distribution of outcomes, since letting food influence its reasoning process is obviously not a normative thing to do, and it should be easy to show that there is no influence. For the SAT solver, the AI should be able to argue that it’s safe to use it for certain purposes, because it can verify the answer that the solver gives, and for the relativity textbook, it may be able to directly verify that the textbook doesn’t contain anything that can manipulate or bias its outputs.
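To illustrate why the SAT-solver case is the easy one (a minimal sketch; the DIMACS-style encoding and function name are my own choices, not anything from the discussion): checking a claimed satisfying assignment only requires confirming that every clause contains at least one literal the assignment makes true, which takes time linear in the formula no matter how the untrusted solver found the answer.

```python
# Minimal sketch: verifying an untrusted SAT solver's answer.
# A formula is a list of clauses; each clause is a list of nonzero ints
# (DIMACS-style: 3 means variable 3 is true, -3 means variable 3 is false).

def check_assignment(clauses, assignment):
    """Return True iff `assignment` (dict var -> bool) satisfies every clause."""
    for clause in clauses:
        if not any(assignment.get(abs(lit), False) == (lit > 0) for lit in clause):
            return False  # this clause has no satisfied literal
    return True

# (x1 or not x2) and (x2 or x3) and (not x1 or x3)
clauses = [[1, -2], [2, 3], [-1, 3]]

# Suppose the untrusted solver claims this assignment works:
claimed = {1: True, 2: True, 3: True}
print(check_assignment(clauses, claimed))  # True: the answer checks out

# A bad answer is rejected just as cheaply:
print(check_assignment(clauses, {1: False, 2: True, 3: False}))  # False
```

(This only shows that the answer can be checked cheaply; whether satisfying instances supplied by a powerful adversary are safe to act on is a separate question, taken up later in the thread.)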

I guess by “opaque” I meant a complex object that wasn’t designed to be easily reasoned about, so it’s very hard to determine whether it has a given property that would be relevant to showing that it can be used safely as part of an AI’s reasoning process. For example, a typical microprocessor is an opaque object because it may come with hard-to-detect flaws and backdoors, whereas a microprocessor designed to be provably correct would be a transparent object.

(Does that help?)

reply

by Paul Christiano 680 days ago | link

Suppose that I, Paul, use a toaster or SAT solver or math textbook.

I’m happy to drop the normatively correct reasoning assumption if the counterfactual begs the question. The important points are:

  • I’m happy trusting future Paul’s reasoning (in particular I do not consider it a top altruistic priority to find a way to avoid trusting future Paul’s reasoning)
  • That remains true even though future Paul would happily use an opaque toaster or textbook (under the conditions described).

I’m not convinced that any of your arguments would be sufficient to trust a toaster / textbook / SAT solver:

and it should be easy to show that there is no influence

Having new memories will by default change the output of deliberation, won’t it?

For the SAT solver, the AI should be able to argue that it’s safe to use it for certain purposes, because it can verify the answer that the solver gives

Satisfying instances produced by an arbitrarily powerful adversary don’t seem safe for anything.

and for the relativity textbook, it may be able to directly verify that the textbook doesn’t contain anything that can manipulate or bias its outputs

I don’t see how this would fit into your framework, without expanding it far enough that it could contain the kind of argument I’m gesturing at (by taking bad = “manipulating or biasing its outputs”).

reply

by Wei Dai 679 days ago | link

If we’re talking about you, Paul, then what’s different is that since you don’t have a good understanding of what normatively correct reasoning is, you can only use black-box-type reasoning to conclude that certain things are safe to do. We’d happily use the opaque toaster or textbook because we have fairly strong empirical evidence that doing so doesn’t change the distribution of outcomes much. Using a toaster might change a particular outcome vs. not using it, but there seems to be enough stochasticity in a human deliberation process that it wouldn’t make a significant difference to the overall distribution of outcomes. With a textbook, you reason that with enough time you’d reproduce its contents yourself, and whatever actual difference there is between reading the textbook and figuring out relativity by yourself is again lost in the overall noise of the deliberative process. (We have fairly strong empirical evidence that reading such a textbook written by another human is unlikely to derail our deliberative process in a way that’s not eventually recoverable.)

One reply to this might be that we can hope to gather an amount of empirical evidence about meta-execution that would be comparable to the evidence we have about toasters and textbooks. I guess my concern there is that we’ll need much stronger assurances if we’re going to face other superintelligent AIs in our environment. For example, that textbook might contain subtle mistakes that cause you to reason incorrectly about certain questions (analogous to edge-case questions where your meta-execution would give significantly different answers than your reflective equilibrium), but there is no one in your current environment who can exploit such errors.

ETA: Another reason to be worried is that, compared to humans using things produced by other humans, it seems reasonable to suspect (have a high prior) that meta-execution’s long run safety can’t be extrapolated well from what it does in the short term, since meta-execution is explicitly built out of a component that emphasizes imitation of short-term human behavior while throwing away internal changes that might be very relevant to long-run outcomes. (Again this may be missing your point about not needing to reproduce values-upon-reflection but I just don’t understand how your alternative approach to understanding deliberation would work if you tried to formalize it.)

Satisfying instances produced by an arbitrarily powerful adversary don’t seem safe for anything.

Not sure if this is still relevant to the current interpretation of your question, but couldn’t you use it to safely break encryption schemes, at least?

reply

by Paul Christiano 712 days ago | Scott Garrabrant likes this | link | parent | on: Smoking Lesion Steelman

I like this line of inquiry; it seems like being very careful about the justification for CDT will probably give a much clearer sense of what we actually want out of “causal” structure for logical facts.

reply
