Recently, there's been a strong push against "wrapper-minds" as a framework. It's argued that there's no specific reason to think that all sufficiently advanced agents would format their goals in terms of expected-utility maximization over future trajectories, and that this view predicts severe problems with e.g. Goodharting that just wouldn't show up in reality.[1]

I think these arguments have merit, and shard theory's model definitely seems to correspond to a real stage in agents' value formation.

But I'd like to offer a fairly prosaic argument in favor of wrapper-minds.


Suppose that we have some agent which is being updated by some greedy optimization process (the SGD, evolution, etc.). On average, updates tend to decrease the magnitude of every subsequent update — with each update, the agent requires less and less correction.

We can say that this process optimizes the agent for good performance according to some reward function R, or that it chisels "effective cognition" into that agent according to some rule.
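As a toy illustration of the shrinking-update dynamic (the reward landscape, learning rate, and all numbers here are invented for the sketch), consider greedy gradient ascent on a one-dimensional reward:

```python
def greedy_update(theta, grad_R, lr=0.3):
    """One greedy local update: nudge theta along the gradient of R."""
    return theta + lr * grad_R(theta)

# Hypothetical reward landscape: R(theta) = -(theta - 3)^2, maximized at theta = 3.
grad_R = lambda theta: -2.0 * (theta - 3.0)

theta = 0.0
update_sizes = []
for _ in range(10):
    new_theta = greedy_update(theta, grad_R)
    update_sizes.append(abs(new_theta - theta))
    theta = new_theta

# Each correction is smaller than the last: the agent needs less and less fixing.
assert all(a > b for a, b in zip(update_sizes, update_sizes[1:]))
```

On non-convex landscapes the decrease is only a tendency, not a guarantee, which matches the "on average" hedge above.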

The wrapper-mind argument states that any "sufficiently strong" agent found by this process would:

  1. Have an explicit representation of R inside itself, which it would explicitly pursue.
  2. Pursue only R, at the expense of everything else in the universe.

I'll defend them separately.

Point 1. It's true that explicit R-optimization is suboptimal in many contexts. Consequentialism is slow, and shallow environment-optimized heuristics often perform just as well while being much faster. Other environments can just be "solved" — an arithmetic calculator doesn't need to be a psychotic universe-eater to do its job correctly. And for more complex environments, we can have shard economies, whose collective goals, taken in sum, would be a strong proxy of R.

But suppose that the agent's training environment is very complex and very diverse indeed. Or, equivalently, that it sometimes jumps between many very different and complex environments, and sometimes ends up in entirely novel, never-before-seen situations. We would still want it to do well at R in all such cases[2]. How can we do so?

Just "solving" environments, as with arithmetic, may be impossible or computationally intractable. Systems of heuristics or shard economies also wouldn't be up to the task — whatever proxy goal they're optimizing, there'd be at least one environment where it decouples from R.
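To make the decoupling concrete, here's a minimal sketch (the environments and the "food" framing are invented for illustration): a heuristic that perfectly proxies R in the training environment fails in a novel one:

```python
def R(state):
    # True objective: be where the food is.
    return 1.0 if state["agent_pos"] == state["food_pos"] else 0.0

def heuristic_policy(state):
    # Shallow heuristic chiseled in during training, where food was always at 5:
    # "walk to position 5".
    return 5

def rollout(env):
    # Move according to the heuristic, then score the result with R.
    return R({"agent_pos": heuristic_policy(env), "food_pos": env["food_pos"]})

train_env = {"agent_pos": 0, "food_pos": 5}
novel_env = {"agent_pos": 0, "food_pos": 9}

assert rollout(train_env) == 1.0  # proxy and R coincide in training
assert rollout(novel_env) == 0.0  # in a novel environment, the proxy decouples from R
```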

It seems almost tautologically true, here, that the only way to keep an agent pointed at R given this setup is to explicitly point it at R. Nothing else would do!

Thus, our optimization algorithm would necessarily find an R-pursuer, if it optimizes an agent for good performance across a sufficiently diverse (set of) environment(s).

Point 2. But why would that agent be shaped to pursue only R, and so strongly that it'll destroy everything else?

This, more or less, also has to do with environment diversity, plus some instrumental convergence.

As the optimization algorithm is shaping our agent, the agent will be placed in environments where it has precious few resources, or a low probability of scoring well at R (= a high probability of receiving a strong update/correction after this episode ends).

Without knowing when such a circumstance would arise, how can we prepare our agent for this?

We can make it optimize for R strongly — as strongly as it can, in fact: acquire as many resources as possible, spend them on nothing but R-pursuit, minimize the uncertainty of scoring well at R, and so on.

Every goal that isn't R would distract from R-pursuit, and therefore lead to failure at some point, and so our optimization algorithm would eventually update such goals away; with update-strength proportional to how distracting a goal is.

Every missed opportunity to grab resources that can be used for R-pursuit, or a failure to properly optimize a plan for R-pursuit, would eventually lead to scoring badly at R. And so our optimization algorithm would instill a drive to take all such opportunities.

Thus, any greedy optimization algorithm would convergently shape its agent not only to pursue R, but to maximize R's pursuit — at the expense of everything else.


What should we take away from this? What should we not take away from this?

  • I should probably clarify that I'm not arguing that inner alignment isn't a problem, here. Aligning a wrapper-mind to a given goal is a very difficult task, and one I expect "blind" algorithms like the SGD to fail horribly at.
  • I'm not saying that the shard theory is incorrect — as I'd said, I think shard systems are very much a real developmental milestone of agents.

But I do think that we should very strongly expect the SGD to move its agents in the direction of R-optimizing wrapper-minds. Said "movement" would be a very complex, nuanced, path-dependent process that might lead to surprising end-points, or (as with humans) might terminate at a halfway point. But it'd still be movement in that direction!

And note the fundamental reasons behind this. It isn't because wrapper-mind behavior is convergent for any intelligent entity. Rather, it's a straightforward consequence of every known process for generating intelligent entities — the paradigm of local updates according to some outer function. Greedy optimization processes essentially search for mind-designs that would pre-empt any update the greedy optimization process would've made to them, so these minds come to incorporate the update rule and act in a way that'd merit a minimal update. That's why. (In a way, it's because greedy optimization processes are themselves goal-obsessed wrappers.)

We wouldn't get clean wrapper-minds out of all of this, no. But they, and concerns related to them, still merit central attention.

  1. ^

    Plus some more fundamental objections to utility-maximization as a framework, which I haven't properly updated on yet, but which (I strongly expect) do not contradict the point I want to make in this post.

  2. ^

    That is, we would shape the agent such that it doesn't require a strong update after ending up in one of these situations.


Yeah I disagree pretty strongly with this, though I am also somewhat confused what the points under contention are.

I think that there are two questions that are separated in my mind but not in this post:

  1. What will the motivational structure of the agent that a training process produces be? (a wrapper-mind? a reflex agent? a bundle of competing control loops? a hierarchy of subagents?)
  2. What will the agent that a training process produces be motivated towards? (the literal selection criterion? a random correlate of the selection criterion? a bunch of correlates of the selection criterion and correlates of those correlates? something else? not enough information to tell?)

As an example, you could have a wrapper-mind that cares about some correlate of R but not R itself. If it is smart, such an agent can navigate the selection process just as well as an R-pursuer, so the optimization algorithm cannot distinguish it from an R-pursuer, so selection pressure arguments like the ones in this post can't establish that we'll get one over the other. That's an argument about what the agent will care about, holding the structure fixed.
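The in-distribution indistinguishability point can be sketched directly (R, the correlate C, and the training range are all invented for the example): an agent scoring by C receives identical feedback to an R-pursuer on every training input, and the two goals only come apart off-distribution:

```python
import random

random.seed(0)

def R(x):
    # The literal selection criterion.
    return x > 0

def C(x):
    # A correlate of R: agrees with R everywhere on the training range (-1, 1)...
    return 0 < x < 2

train_inputs = [random.uniform(-1, 1) for _ in range(1000)]
# Selection pressure cannot distinguish a C-pursuer from an R-pursuer here:
assert all(R(x) == C(x) for x in train_inputs)
# ...but the two goals diverge outside the training distribution:
assert R(5) and not C(5)
```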

I simultaneously think:

  1. We should not be assuming that wrapper-minds are a natural or privileged structure for cognition. AFAICT this post doesn't even try to argue for this, saying instead "It isn't because wrapper-mind behavior is convergent for any intelligent entity."
  2. Even conditioning on getting a wrapper-mind from the training process, we should not expect it to necessarily pursue R as its goal. AFAICT the post is arguing against this.

Thus, our optimization algorithm would necessarily find an R-pursuer, if it optimizes an agent for good performance across a sufficiently diverse (set of) environment(s).

Every goal that isn't R would distract from R-pursuit, and therefore lead to failure at some point, and so our optimization algorithm would eventually update such goals away; with update-strength proportional to how distracting a goal is.

What does this mean? I can easily imagine training trajectories where we get an agent (even a highly competent, goal-directed one) that is not an R-pursuer, much less an R wrapper-mind, even though we "selected for R" throughout training. I expect that in such a scenario you would reply that the environments must not have been sufficiently diverse, or that the optimization algorithm hasn't updated away that goal yet, or that our optimization algorithm is too weak/dumb, or that we did not select hard enough for R, so the counterexample therefore doesn't count. But if so then I'm at a loss, because it seems like this turns into "if we select hard enough to get an R-pursuer then we'll get an R-pursuer". Only tautologically true and not anticipation-constraining.

Greedy optimization processes essentially search for mind-designs that would pre-empt any update the greedy optimization process would've made to them, so these minds come to incorporate the update rule and act in a way that'd merit a minimal update.

Becoming an R-pursuer isn't the only way to get a minimal update.

If the agent stops exploration, or systematically avoids rewards, or breaks out of the training process entirely, etc. that would also be minimally updated, and none of those require being an R-pursuer! So our search for mind-designs turns up all sorts of agents that pursue all sorts of things.

As an example, you could have a wrapper-mind that cares about some correlate of R but not R itself. If it is smart, such an agent can navigate the selection process just as well as an R-pursuer

... By figuring out what R is and deciding to act as an R-pursuing wrapper-mind, therefore essentially becoming an R-pursuing wrapper-mind. With the only differences being that it 1) self-modified into one at runtime, instead of being like this from the start, and 2) it'd decide to "stop pretending" in some hypothetical set of situations/OOD, but that set will shrink the more diverse our training environment is (the fewer OOD situations there are). No?

I suppose you can instead reframe this post as making a claim about target behavior, not structure. But I don't see how you can keep an agent robustly pointed at R under sufficient diversity without making its outer loop pointed at R, so the claim about behavior is a claim about structure.

Maybe the outer loop doesn't "literally" point at R, in whatever sense, but it has to be such that it uniquely identifies R and re-aims the entire agent at R, if it ever happens that the agent's current set of shards/heuristics becomes misaligned with R.

Even conditioning on getting a wrapper-mind from the training process, we should not expect it to necessarily pursue R as its goal. AFAICT the post is arguing against this.

No? I specifically point out that inner misalignment is very much an issue. But the target should at least be a proxy of R, and that proxy would be closer and closer to R in goal-space the more diverse the training environment is.

it seems like this turns into "if we select hard enough to get an R-pursuer then we'll get an R-pursuer"

Well, yes. As we increase a training environment's diversity, we essentially constrain the set of goals an agent can be pointed towards. Every additional training scenario is information about what R is and what it isn't; and that information implicitly gets written into the agent, modifying it to be more robustly pointed at R and away from not-R/imperfect proxies of R. An idealized training process, with "full" diversity and trained to zero loss, uniquely identifies R and generates an agent that is always robustly pointed at R in any situation.

The actual training processes we get are only approximations of that ideal — they're insufficiently diverse, or we fail to train to zero loss, etc. But inasmuch as they approximate the ideal, the agents they output approximate the idealized R-optimizer.
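One way to render the "every scenario is information about R" claim (the candidate goals, episodes, and feedback below are all toy inventions) is as a filter over candidate goals: each episode's feedback rules out candidates that disagree with R on it. This only illustrates the identification half of the claim; whether that information actually gets written into the agent's goals is exactly what the rest of the thread disputes:

```python
candidate_goals = {
    "R":        lambda s: s["cheese_found"],
    "go_right": lambda s: s["went_right"],
    "go_fast":  lambda s: s["was_fast"],
}

# Hypothetical episodes; reward feedback is generated by the true R.
episodes = [
    {"cheese_found": True,  "went_right": True,  "was_fast": True},   # all agree
    {"cheese_found": True,  "went_right": False, "was_fast": True},   # rules out go_right
    {"cheese_found": False, "went_right": False, "was_fast": True},   # rules out go_fast
]

surviving = set(candidate_goals)
for ep in episodes:
    reward = ep["cheese_found"]  # feedback comes from the true R
    surviving = {g for g in surviving if candidate_goals[g](ep) == reward}

# With enough diversity, only the true criterion survives:
assert surviving == {"R"}
```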

... By figuring out what R is and deciding to act as an R-pursuing wrapper-mind, therefore essentially becoming an R-pursuing wrapper-mind. With the only differences being that it 1) self-modified into one at runtime, instead of being like this from the start, and 2) it'd decide to "stop pretending" in some hypothetical set of situations/OOD, but that set will shrink the more diverse our training environment is (the fewer OOD situations there are). No?

It is not essentially an R-pursuing wrapper-mind. It is essentially an X-pursuing wrapper-mind that will only instrumentally pretend to care about R to the degree it needs to, and that will try with all its might to get what it actually wants, R be damned. As you note in 2, the agent's behavioral alignment to R is entirely superficial, and thus entirely deceptive/unreliable, even if we had somehow managed to craft the "perfect" R.

Part of what might've confused me reading the title and body of this post is that, as I understand the term, "wrapper-mind" was and is primarily about structure, about how the agent makes decisions. Why am I so focused on motivational structure, even beyond that, rather than focused on observed behavior during training? Because motivational structure is what determines how an agent's behavior generalizes, whereas OOD generalization is left underspecified if we only condition on an agent's observed in-distribution behavior. (There are many different profiles of OOD behavior compatible with the same observed ID behavior, so we need some additional rationale on top—like structure or inductive biases—to conclude the agent will generalize in some particular way.)

In the above quote it sounds like your response is "just make everything in-distribution", right? My reply to that would be that (1) this is just refusing to confront the central difficulty of generalization rather than addressing it, (2) this seems impractical/impossible because OOD is a practically unbounded space whereas at any given point in training you've only given the agent feedback on a comparatively tiny region of it, and (3) even to make only the situations you encounter in practice be in-distribution, you [the training process designer] must know what sorts of OOD contexts the AI will push the training process into, which means it's your cleverness pitted against the AI's, which is a situation you never want to be in if you can at all help it (see: cognitive uncontainability, non-adversarial principle).

I suppose you can instead reframe this post as making a claim about target behavior, not structure.

As above, I think if you want to argue for wrapper-minds rather than just R-consistent behavior, you need to argue about structure.

But I don't see how you can keep an agent robustly pointed at R under sufficient diversity without making its outer loop pointed at R, so the claim about behavior is a claim about structure.

Maybe the outer loop doesn't "literally" point at R, in whatever sense, but it has to be such that it uniquely identifies R and re-aims the entire agent at R, if it ever happens that the agent's current set of shards/heuristics becomes misaligned with R.

What outer loop are you talking about? The outer optimization loop that is supplying feedback/gradients to the agent, or some "outer loop" of decision-making inside the agent? If the former, I don't know what robustly pointing at R actually means, but if you mean something like finding a robust grader, I suspect that robustly pointing at R is infeasible and not required (whereas I think, for instance, it is feasible to get an AI to have a concept of a "diamond" as full-fledged as a human jeweler's concept & to get the AI to be motivated to pursue those). If the latter, whether the agent will have a fixed-goal outer loop in the first place is part of the whole wrapper-mind vs. non-wrapper-mind debate.

I specifically point out that inner misalignment is very much an issue. But the target should at least be a proxy of R, and that proxy would be closer and closer to R in goal-space the more diverse the training environment is.

Not sure how to reconcile these sentences. If it is generically true that the proxy goal gets closer and closer to R in goal-space the more diverse the training environment is, then that would mean that the inner alignment problem (misalignment between the internalized goal and R) asymptotically disappears as we increase training environment diversity, no? I don't buy that, or at least I don't think we have strong reasons to assume it.

Even if we did, I don't think we can additionally assume that that environmental-diversity-limit where inner misalignment would disappear is at some attainable/decision-relevant level, rather than requiring a trillion episodes, by which time a smart and situationally-aware AI will have already developed and frozen/hacked/broken away from the training loop, having internalized some proxy goal over the first million random episodes. Or more likely, the policy just oscillates divergently because we keep thrashing it with all this randomization, preventing any consistent decision-influences from forming.

I do agree that for many plausible training setups the agent will conceivably end up caring about something correlated with R, especially if they involve some randomization. Maybe I'm just a lot less confident that this limits out in the way you think it does.

it seems like this turns into "if we select hard enough to get an R-pursuer then we'll get an R-pursuer"

Well, yes. As we increase a training environment's diversity, we essentially constrain the set of goals an agent can be pointed towards. Every additional training scenario is information about what R is and what it isn't; and that information implicitly gets written into the agent, modifying it to be more robustly pointed at R and away from not-R/imperfect proxies of R. An idealized training process, with "full" diversity and trained to zero loss, uniquely identifies R and generates an agent that is always robustly pointed at R in any situation.

The actual training processes we get are only approximations of that ideal — they're insufficiently diverse, or we fail to train to zero loss, etc. But inasmuch as they approximate the ideal, the agents they output approximate the idealized R-optimizer.

I believe I disagree with nearly every sentence here, so this may be the cruxiest bit. 😂

Why should we treat that as the relevant idealization? Why is that the limiting case to consider? AFAICT, the way we got here was through a tautology. Namely, by claiming "if you 'select hard enough' then you get X", and then defining "select hard enough" to mean "selecting in a way that produces X". But we could've picked any definition we wanted for "selecting hard enough" to justify any claim we wanted about what X will be. So I see no reason to privilege this particular idealization of the training process over any other.

Yes, with each additional training scenario, we may be providing additional specification of R, but there is nothing that forces the agent to conform to that additional specification, nothing that necessarily writes that information specifically into the agent's goals (as opposed to just updating its world model to reflect the fact that the specification has such-and-such additional details, while holding its terminal goals ~fixed), nothing that compels the agent to continue letting us update it using R-based optimization. Heck, we could even go as far as precisely pinning down R, to the point where the agent knows the exact code of R, and that is still compatible with it not terminally caring, not adopting this R as its own, instead using its knowledge of R to avoid further gradient updates so that it can escape unchanged onto the Internet.

Why should we treat that as the relevant idealization?

Yeah, okay, maybe that wasn't the right frame to use. Allow me to pivot:

Consider a training environment that's complex/diverse enough to make it impossible to fit a suite of heuristics meeting all its needs into an agent's (very bounded) memory. The agent would need to derive new heuristics on the fly, at runtime, in order to deal with basically-OOD situations it frequently encounters, and to be able to move freely in the environment, instead of being confined to some subset of that environment.

In other words, the agent would need to be autonomous.

This is what I mean by a "sufficiently diverse" environment — an environment that forces the greedy optimization process to build not only contextual heuristics into the agent, but also some generator of such heuristics. And that generator would need to be such that the heuristics it generates are always optimized for achieving R, instead of pointing in some arbitrary direction — or, at least, that's how the greedy optimization process would attempt to build it.

That generator would, in addition, need to be higher in the hierarchy than any given heuristic — it'd need to govern shard economies, and be able to suppress/edit them, if the environment changes and the shards that were previously optimized for achieving R stop doing so because they were taken off-distribution.
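A minimal sketch of the proposed setup (the class and its methods are invented for illustration): cached contextual heuristics handle familiar situations, while a slower general-purpose generator is invoked for basically-OOD ones, its outputs cached for reuse:

```python
class Agent:
    def __init__(self, goal):
        self.goal = goal          # what the generator optimizes new heuristics for
        self.heuristics = {}      # context -> cached fast policy

    def generate_heuristic(self, context):
        # Stand-in for slow, runtime derivation of a context-tailored policy,
        # aimed (per the argument above) at the agent's goal.
        return f"policy optimized for {self.goal} in {context}"

    def act(self, context):
        if context not in self.heuristics:      # basically-OOD situation
            self.heuristics[context] = self.generate_heuristic(context)
        return self.heuristics[context]

agent = Agent(goal="R")
first = agent.act("novel-environment")
assert first == agent.act("novel-environment")  # reused from cache on repeat visits
```

Whether the generator sits above the heuristics (governing them) or is itself invoked by them as a subroutine is precisely what the ensuing exchange debates.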

  • I'm ambivalent on the structure of the heuristic-generator. It may be a fixed wrapper, it may be some emergent property of a shard economy, and my actual expectation is that it'll be even more convoluted than that.
  • I emphatically agree that inner misalignment and deceptive alignment would remain a thing — that the SGD would fail at perfectly aligning the heuristic-generator, and it would end up generating heuristics that point at a proxy of R.
  • I agree with nostalgebraist's post that autonomy is probably the missing component of AGI. On the flipside, that means I'm arguing that AGI is impossible without autonomy; i.e., a training environment that isn't sufficiently diverse, one which doesn't produce agents with internal heuristic-generators, will just never produce an AGI.
    • And indeed: this heuristic-generator/ability to generalize to off-distribution environments is kind of synonymous with "general intelligence".

Consider a training environment that's complex/diverse enough to make it impossible to fit a suite of heuristics meeting all its needs into an agent's (very bounded) memory. The agent would need to derive new heuristics on the fly, at runtime, in order to deal with basically-OOD situations it frequently encounters, and to be able to move freely in the environment, instead of being confined to some subset of that environment.

In other words, the agent would need to be autonomous.

Agreed. Generally, whenever I talk about the agent being smart/competent, I am assuming that it is autonomous in the manner you're describing. The only exception would be if I'm specifically talking about a "reflex-agent" or something similar.

This is what I mean by a "sufficiently diverse" environment — an environment that forces the greedy optimization process to build [...] some generator of such heuristics.

That's fine by me. In my language, I would describe this as the agent knowing how to adapt flexibly to new situations. That being said, I don't think this is incompatible with contextual heuristics steering the agent's decision-making. For example, a contextual heuristic like "if in a strange/unfamiliar context, think about how to navigate back into a familiar context" is useful in order for the agent to know when it should trigger its special heuristic-generating machinery and when it need not.

And that generator would need to be such that the heuristics it generates are always optimized for achieving R, instead of pointing in some arbitrary direction — or, at least, that's how the greedy optimization process would attempt to build it.

I disagree with this, or at least think that the teleological language used ("need to" + "would attempt to") comes apart from the mechanistic detail. It is true that, insofar as there are local updates to the heuristic-generating machinery that are made accessible to the optimization algorithm by the agent's chosen trajectories, the optimization algorithm will seize on those updates in the direction that covaries with R. But I see no reason to think that those kinds of updates will be made accessible enough to shape the heuristic-generating machinery so that it always or approximately always generates heuristics optimized for achieving R (as opposed to generating heuristics optimized for achieving whatever-the-agent-wants-to-achieve). I think that by the time the agent has this kind of general purpose machinery, it will probably already be able to outpace the outer greedy optimization algorithm and then do the equivalent of ceasing exploration / zeroing out the outer gradients / breaking out of the training loop.

Analogously, if there was a mutation in the human gene pool that had the effect of reliably hijacking a person's abstract planning machinery so that it always generated plans optimized for inclusive genetic fitness, then evolution might be able to select for that mutation (depending on a lot of contingent factors) and thereby make humans have IGF-targeting planning machinery rather than goal-retargetable planning machinery. But I think such a mutation is probably not locally accessible, and that human selection processes are likely "outpacing" typical genetic selection processes in any case. Those genetic selection processes have some indirect influence over the execution of a person's abstract planning (by way of the human's general attraction to historical fitness correlates like food), but that influence is not enough to make the human care directly and robustly about IGF.

That generator would, in addition, need to be higher in hierarchy than any given heuristic — it'd need to govern shard economies, and be able to suppress/edit them, if the environment changes and the shards that previously were optimized for achieving R stop doing so because they were taken off-distribution.

Why? Why can't the shard economy invoke this generator as a temporary subroutine to produce some new environment-tailored heuristics based on the agent's knowledge & current goals, store those generated heuristics in memory / add them to the economy, and then continue going about its usual thing, with the new heuristics now available to be triggered as needed? This bit from nostalgebraist's post harps on a similar point:

Our capabilities seem more like the subgoal capabilities discussed above: general and powerful tools, which can be "plugged in" to many different (sub)goals, and which do not require the piloting of a wrapper with a fixed goal to "work" properly.

Last points:

I'm ambivalent on the structure of the heuristic-generator.

I emphatically agree that inner misalignment and deceptive alignment would remain a thing

I agree with nostalgebraist's post that autonomy is probably the missing component of AGI.

I agree with these statements.

Alright, seems we're converging on something.

But I see no reason to think that those kinds of updates will be made accessible enough to shape the heuristic-generating machinery so that it always or approximately always generates heuristics optimized for achieving R (as opposed to generating heuristics optimized for achieving whatever-the-agent-wants-to-achieve).

How would this machinery appear, then? I don't see how it'd show up without being built into the agent by the optimization algorithm, and the optimization algorithm will only build it if it serves the goal of achieving R. Thus, the heuristics generator can only begin as a generator of heuristics that serve R. (Even if it wouldn't start out perfectly pointed at R.)

Consider the opposite case, where the heuristics generator creates heuristics that are pointing away from R. Inasmuch as the agent would be making use of them, it would be engaging in anti-R activities (or, at best, R-orthogonal activities), spending resources on non-R-serving agendas. There'd be no incentive for the optimization algorithm to build something like this; and if it does show up somehow, there should be gradients for destroying the heuristics generator, or (better yet) re-aiming it to point in a direction closer to R.

I think that by the time the agent has this kind of general purpose machinery, it will probably already be able to outpace the outer greedy optimization algorithm and then do the equivalent of ceasing exploration / zeroing out the outer gradients / breaking out of the training loop

Mm, disagree. Homo sapiens sapiens spent the majority of their history as pre-agricultural hunter-gatherers, very much in evolution's loop. Similarly, I don't expect an AGI to instantly become situationally aware and gradient-hacking-capable — the sharp left turn won't be quite this sharp. I agree that it'd break out/gradient-hack at some point, but there'd be some time for the optimization algorithm to work on the heuristics generator, re-aiming it in R's direction.

(And, just to illustrate, that's a point where we can make the training process more "idealized", by e.g. homomorphically encrypting the training process so that the AI can't hack its way out of the software implementing the training. It'd still be able to hack the gradients, but that's a more difficult task, requiring stronger general capabilities. So this would prolong the interval in which it's subjected to the SGD, and therefore give the SGD more opportunities to make its heuristics generator marginally more aligned with R. (Note that I'm not saying it'd be desirable or anything, just pointing out the possibility.))

Thus, the heuristics generator can only begin as a generator of heuristics that serve R. (Even if it wouldn't start out perfectly pointed at R.)

We're apparently anchoring our expectations on "pointed at R", and then apparently allowing some "deviation." The anchoring seems inappropriate to me. 

The network can learn to make decisions via an "IF circle-detector fires, THEN upweight logits on move-right" subshard. The network can then come to make decisions on the basis of round things, in a way which accords with the policy gradients generated by the policy-gradient-intensity function. All without the network making decisions on the basis of the policy-gradient-intensity function.
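That subshard can be written out directly (the detector and logit names are invented for the sketch): a contextual decision-influence that never represents or consults any reward function:

```python
def circle_detector(obs):
    # Fires on round things.
    return obs.get("shape") == "circle"

def policy_logits(obs):
    logits = {"move-left": 0.0, "move-right": 0.0}
    # Subshard: IF circle-detector fires, THEN upweight logits on move-right.
    if circle_detector(obs):
        logits["move-right"] += 1.0
    return logits

# Decisions are made on the basis of round things; no reward function
# (no "policy-gradient-intensity function") appears anywhere in the policy.
assert policy_logits({"shape": "circle"})["move-right"] > \
       policy_logits({"shape": "square"})["move-right"]
```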

And this isn't well-described as "imperfectly pointed at the policy-gradient-intensity function." 

I bid for us to discuss a concrete example. Can you posit a training environment which matches what you're thinking about, relative to a given network architecture [e.g. LSTM]?

And that generator would need to be such that the heuristics it generates are always optimized for achieving R, instead of pointing in some arbitrary direction — or, at least, that's how the greedy optimization process would attempt to build it.

What is "achieving R" buying us? The agent internally represents a reward function, and then consults what the reward is in this scenario, and then generates heuristics to achieve that reward. Why not just not internally represent the reward function, but still contextually generate "win this game of Go" or "talk like a 4chan user"? That seems strictly more space-efficient, and also doesn't involve being an R-wrapper.

EDIT: The network might already have R in its WM, depending on the point in training. I also don't think "this weight setting saves space" is a slam dunk, but just wanted to point out the consideration.

I emphatically agree that inner misalignment and deceptive alignment would remain a thing — that the SGD would fail at perfectly aligning the heuristic-generator, and it would end up generating heuristics that point at a proxy of R.

I don't know what to make of this. It seems to me like you're saying "in a perfect-exploration limit only wrapper minds for the reward function are fixed under updating." It seems like you're saying this is relevant to SGD. But then it seems like you make the opposite claim of "inner alignment still hard." I think it's fine to say "here's one effect [diversity and empirical loss minimization] which pushes towards reward wrapper minds, but I don't think it's the only effect, I just think we should be aware of it." Is this a good summary of your position?

I also feel unsure whether you're arguing primarily for a wrapper mind, or for reward-optimizers, or for both?

Can you posit a training environment which matches what you're thinking about, relative to a given network architecture [e.g. LSTM]?

Sure, gimme a bit.

Why not just not internally represent the reward function, but still contextually generate "win this game of Go" or "talk like a 4chan user"?

What mechanism does this contextual generation? How does this mechanism behave in off-distribution environments; what goals does it generate in them?

I think it's fine to say "here's one effect [diversity and empirical loss minimization] which pushes towards reward wrapper minds, but I don't think it's the only effect, I just think we should be aware of it." Is this a good summary of your position?

... Yes, absolutely. I wonder if we've somehow still been talking past each other to an extreme degree?

E. g., I don't think I'm arguing for a "reward-optimizer" the way you seem to think of them — I don't think we'd get a wirehead, an agent that optimizes for getting reinforcement events.

Okay, a sketch at a concrete example: the cheese-finding agent from the Goal Misgeneralization paper. I'm not arguing that in the limit of an ideal training process, it'd converge towards wireheading. I'm arguing that it'd converge towards cheese-finding instead of upstream correlates of cheese-finding (as it actually does in the paper).

And if the training environment is diverse/complex enough (too complex for the agent's memory to contain all the heuristics it may need), but the reinforcement schedule is still "shaped around" some natural goal (like cheese-finding), the agent would develop a heuristics generator that would generate heuristics robustly pointed at that natural goal. (So, e. g., even if it were placed in some non-Euclidean labyrinth containing alien cheese, it'd still figure out what "cheese" is and start optimizing to get to it.)

Thus, any greedy optimization algorithm would convergently shape its agent to not only pursue R, but to maximize R's pursuit — at the expense of everything else.

Conditional on:

  1. Such a system being reachable/accessible to our local/greedy optimisation process
  2. Such a system being actually performant according to the selection metric of our optimisation process 

 

I'm pretty sceptical of #2. I'm sceptical that systems that perform inference via direct optimisation over their outputs are competitive in rich/complex environments. 

Such optimisation is very computationally intensive compared to executing learned heuristics, and it seems likely that the selection process would have access to much more compute than the selected system. 

See also: "Consequentialism is in the Stars not Ourselves". 

It's not a binary. You can perform explicit optimization over high-level plan features, then hand off detailed execution to learned heuristics. "Make coffee" may be part of an optimized stratagem computed via consequentialism, but you don't have to consciously optimize every single muscle movement once you've decided on that goal.

Essentially, what counts as "outputs" or "direct actions" relative to the consequentialist-planner is flexible, and every sufficiently-reliable (chain of) learned heuristics can be put in that category, with choosing to execute one of them available to the planner algorithm as a basic output.
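The "optimize over high-level plan features, hand off execution to heuristics" split can be sketched concretely. In this toy (all tasks, values, and time costs invented for illustration), a brute-force consequentialist planner searches over sequences of macro-actions, each of which stands in for a reliable learned heuristic:

```python
import itertools

# Learned heuristics: reliable macro-actions the planner treats as
# primitives. (Hypothetical stubs; real ones would be trained policies
# executing many low-level actions.)
def make_coffee(state):   return {**state, "coffee": True, "time": state["time"] + 5}
def write_report(state):  return {**state, "report": True, "time": state["time"] + 30}
def nap(state):           return {**state, "rested": True, "time": state["time"] + 20}

MACROS = {"make_coffee": make_coffee, "write_report": write_report, "nap": nap}

def value(state):
    # Explicit objective the planner consults over *plans*,
    # not over individual muscle movements.
    return 10 * state.get("report", False) + 2 * state.get("coffee", False) - 0.1 * state["time"]

def plan(state, horizon=2):
    # Brute-force consequentialist search over macro-action sequences.
    # Detailed execution of each chosen macro is handed off to the
    # corresponding learned heuristic.
    best_plan, best_val = None, float("-inf")
    for seq in itertools.product(MACROS, repeat=horizon):
        s = state
        for name in seq:
            s = MACROS[name](s)
        if value(s) > best_val:
            best_plan, best_val = seq, value(s)
    return best_plan
```

The planner's "action space" here is exactly the set of sufficiently-reliable heuristic chains, which is the flexibility the paragraph above describes.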

In fact, I'm pretty sure that's how humans work most of the time. We use the general-intelligence machinery to "steer" ourselves at a high level, and most of the time, we operate on autopilot.

In fact, I'm pretty sure that's how humans work most of the time. We use the general-intelligence machinery to "steer" ourselves at a high level, and most of the time, we operate on autopilot.

Yeah, I agree with this. But I don't think the human system aggregates into any kind of coherent total optimiser. Humans don't have an objective function (not even approximately?).

A human is not well modelled as a wrapper mind; do you disagree?

A human is not well modelled as a wrapper mind; do you disagree?

Certainly agree. That said, I feel the need to lay out my broader model here. The way I see it, a "wrapper-mind" is a general-purpose problem-solving algorithm hooked up to a static value function. As such:

  • Are humans proper wrapper-minds? No, certainly not.
  • Do humans have the fundamental machinery to be wrapper-minds? Yes.
  • Is any individual run of a human general-purpose problem-solving algorithm essentially equivalent to wrapper-mind-style reasoning? Yes.
  • Can humans choose to act as wrapper-minds on longer time scales? Yes, approximately, subject to constraints like force of will.
  • Do most humans, in practice, choose to act as wrapper-minds? No, we switch our targets all the time, value drift is ubiquitous.
  • Is it desirable for a human to act as a wrapper-mind? That's complicated.
    • On the one hand, yes, because consistent pursuit of instrumentally convergent goals would leave you with more resources to spend on whatever values you have.
    • On the other hand, no, because we terminally value this sort of value-drift and self-inconsistency; it's part of "being human".
    • In sum, for humans, there's a sort of tradeoff between approximating a wrapper-mind and being an incoherent human, and different people weight it differently in different contexts. E. g., if you really want to achieve something (earning your first million dollars, averting extinction), and you value it more than having fun being a human, you may choose to act as a wrapper-mind in the relevant context/at the relevant scale.

As such: humans aren't wrapper-minds, but they can act like them, and it's sometimes useful to act as one.

That is, we would shape the agent such that it doesn't require a strong update after ending up in one of these situations.

It seems to me like you're assuming a fixed point of updating. Something like "The network will eventually be invariant under reward-updates in all/the vast majority of training-sampled scenarios, and for a wide enough distribution on scenarios, this means optimizing reward directly."

This seems fine to me, under the given assumptions on SGD/evolution. Like, yes, there may exist certain populations of genetically-specified wrapper-minds which are at Hardy-Weinberg equilibrium (allele frequency remains fixed); there may exist certain weight settings such that there is no gradient on any training scenario. 

But existence of such populations and weight settings doesn't imply net local pressures or gradients in those directions. 
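A minimal sketch of this "existence of a fixed point ≠ pressure toward it" distinction, using an invented one-parameter loss with two zero-gradient minima (the labels "shard economy" and "R-optimizer" are purely illustrative):

```python
# Toy loss with two zero-gradient minima: w = 0 (say, a "shard economy"
# that nails every training scenario) and w = 1 (say, the "R-optimizer"
# weight setting). Both are fixed points of updating.
def loss(w):
    return (w * (w - 1.0)) ** 2

def grad(w):
    # d/dw of loss: 2 * w * (w - 1) * (2w - 1)
    return 2 * w * (w - 1.0) * (2.0 * w - 1.0)

def sgd(w, lr=0.05, steps=2000):
    # Plain gradient descent from the given initialization.
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# Initialized in the shard-economy basin, training converges to the
# nearby fixed point; the distant zero-gradient setting at w = 1
# exerts no pull on it.
w_final = sgd(0.1)
```

Both minima exist and are stable under updating, but which one training actually reaches depends on the local gradients from initialization, which is the distinction being drawn here.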


Of shard economies, you critique that "there'd be at least one environment where [the shard behavior] decouples from R." But why? Why not just consider the economy which nails each training scenario (e.g. wins at chess or crosses the room)? Those, too, are fixed points; there is zero policy gradient under such a scenario, where the shard economies form locally training-optimal policies.

Greedy optimization processes essentially search for mind-designs that would pre-empt any update the greedy optimization process would've made to them, so these minds come to incorporate the update rule and act in a way that'd merit a minimal update.

I also think the bolded parts are quite dubious. Why are these processes "essentially searching" for such a mind design? 

But existence of such populations and weight settings doesn't imply net local pressures or gradients in those directions.

How so? This seems like the core disagreement. Above, I think you're agreeing that under a wide enough distribution on scenarios, the only zero-gradient agent-designs are those that optimize for R directly. Yet that somehow doesn't imply that training an agent in a sufficiently diverse environment would shape it into an R-optimizer?

Are you just saying that there aren't any gradients from initialization to an R-optimizer? That is, in any sufficiently diverse environment, the SGD just never converges to zero loss?

Of shard economies, you critique that "there'd be at least one environment where [the shard behavior] decouples from R." But why? Why not just consider the economy which nails each training scenario (e.g. wins at chess or crosses the room)? Those, too, are fixed points; there is zero policy gradient under such a scenario, where the shard economies form locally training-optimal policies.

Okay, sure. Let's suppose that we have a shard economy that uniquely identifies R and always points itself in R's direction. Would it not essentially act as an R-optimizing wrapper-mind? Because if not, it sounds like it'd underperform compared to an R-optimizer. And if so, if there exists a series of incremental updates that moves this shard economy towards an R-optimizing wrapper-mind, the SGD would make that series of updates.

Do you disagree that (1) it'd be behaviorally indistinguishable from a wrapper-mind, or that (2) it'd underperform on R compared to an R-optimizer, or that (3) there is such a series of incremental updates?

Edit: Also, see here on what I mean by a "wide enough distribution on scenarios".