I'm worried that many AI alignment researchers and other LWers have a view of how human morality works that really only applies to a small fraction of all humans (notably moral philosophers and themselves). In this view, people know or at least suspect that they are confused about morality, and are eager or willing to apply reason and deliberation to find out what their real values are, or to correct their moral beliefs. Here's an example of someone who fits this view:

I’ve written, in the past, about a “ghost” version of myself — that is, one that can float free from my body; which can travel anywhere in all space and time, with unlimited time, energy, and patience; and which can also make changes to different variables, and play forward/rewind different counterfactual timelines (the ghost’s activity somehow doesn’t have any moral significance).

I sometimes treat such a ghost kind of like an idealized self. It can see much that I cannot. It can see directly what a small part of the world I truly am; what my actions truly mean. The lives of others are real and vivid for it, even when hazy and out of mind for me. I trust such a perspective a lot. If the ghost would say “don’t,” I’d be inclined to listen.

I'm currently reading The Status Game by Will Storr (highly recommended BTW), and found in it the following description of how morality works in most people, which matches my own understanding of history and my observations of humans around me:

The moral reality we live in is a virtue game. We use our displays of morality to manufacture status. It’s good that we do this. It’s functional. It’s why billionaires fund libraries, university scholarships and scientific endeavours; it’s why a study of 11,672 organ donations in the USA found only thirty-one were made anonymously. It’s why we feel good when we commit moral acts and thoughts privately and enjoy the approval of our imaginary audience. Virtue status is the bribe that nudges us into putting the interests of other people – principally our co-players – before our own.

We treat moral beliefs as if they’re universal and absolute: one study found people were more likely to believe God could change physical laws of the universe than he could moral ‘facts’. Such facts can seem to belong to the same category as objects in nature, as if they could be observed under microscopes or proven by mathematical formulae. If moral truth exists anywhere, it’s in our DNA: that ancient game-playing coding that evolved to nudge us into behaving co-operatively in hunter-gatherer groups. But these instructions – strive to appear virtuous; privilege your group over others – are few and vague and open to riotous differences in interpretation. All the rest is an act of shared imagination. It’s a dream we weave around a status game.

The dream shifts as we range across the continents. For the Malagasy people in Madagascar, it’s taboo to eat a blind hen, to dream about blood and to sleep facing westwards, as you’ll kick the sunrise. Adolescent boys of the Marind of South New Guinea are introduced to a culture of ‘institutionalised sodomy’ in which they sleep in the men’s house and absorb the sperm of their elders via anal copulation, making them stronger. Among the people of the Moose, teenage girls are abducted and forced to have sex with a married man, an act for which, writes psychologist Professor David Buss, ‘all concerned – including the girl – judge that her parents giving her to the man was a virtuous, generous act of gratitude’. As alien as these norms might seem, they’ll feel morally correct to most who play by them. They’re part of the dream of reality in which they exist, a dream that feels no less obvious and true to them than ours does to us.

Such ‘facts’ also change across time. We don’t have to travel back far to discover moral superstars holding moral views that would destroy them today. Feminist hero and birth control campaigner Marie Stopes, who was voted Woman of the Millennium by the readers of The Guardian and honoured on special Royal Mail stamps in 2008, was an anti-Semite and eugenicist who once wrote that ‘our race is weakened by an appallingly high percentage of unfit weaklings and diseased individuals’ and that ‘it is the urgent duty of the community to make parenthood impossible for those whose mental and physical conditions are such that there is well-nigh a certainty that their offspring must be physically and mentally tainted’. Meanwhile, Gandhi once explained his agitation against the British thusly: ‘Ours is one continual struggle against a degradation sought to be inflicted upon us by the Europeans, who desire to degrade us to the level of the raw Kaffir [black African] … whose sole ambition is to collect a certain number of cattle to buy a wife with and … pass his life in indolence and nakedness.’ Such statements seem obviously appalling. But there’s about as much sense in blaming Gandhi for not sharing our modern, Western views on race as there is in blaming the Vikings for not having Netflix. Moral ‘truths’ are acts of imagination. They’re ideas we play games with.

The dream feels so real. And yet it’s all conjured up by the game-making brain. The world around our bodies is chaotic, confusing and mostly unknowable. But the brain must make sense of it. It has to turn that blizzard of noise into a precise, colourful and detailed world it can predict and successfully interact with, such that it gets what it wants. When the brain discovers a game that seems to make sense of its felt reality and offer a pathway to rewards, it can embrace its rules and symbols with an ecstatic fervour. The noise is silenced! The chaos is tamed! We’ve found our story and the heroic role we’re going to play in it! We’ve learned the truth and the way – the meaning of life! It’s yams, it’s God, it’s money, it’s saving the world from evil big pHARMa. It’s not like a religious experience, it is a religious experience. It’s how the writer Arthur Koestler felt as a young man in 1931, joining the Communist Party:

‘To say that one had “seen the light” is a poor description of the mental rapture which only the convert knows (regardless of what faith he has been converted to). The new light seems to pour from all directions across the skull; the whole universe falls into pattern, like stray pieces of a jigsaw puzzle assembled by one magic stroke. There is now an answer to every question, doubts and conflicts are a matter of the tortured past – a past already remote, when one lived in dismal ignorance in the tasteless, colourless world of those who don’t know. Nothing henceforth can disturb the convert’s inner peace and serenity – except the occasional fear of losing faith again, losing thereby what alone makes life worth living, and falling back into the outer darkness, where there is wailing and gnashing of teeth.’

I hope this helps further explain why I think even solving (some versions of) the alignment problem probably won't be enough to ensure a future that's free from astronomical waste or astronomical suffering. A part of me is actually more scared of many futures in which "alignment is solved", than a future where biological life is simply wiped out by a paperclip maximizer.


You sound like you're positing the existence of two types of people: type I people whose morality is based on "reason" and type II people whose morality is based on the "status game". In reality, nearly everyone's morality is based on something like the status game (see also: 1 2 3). It's just that EAs and moral philosophers are playing the game in a tribe which awards status differently.

The true intrinsic values of most people do place a weight on the happiness of other people (that's roughly what we call "empathy"), but this weight is very unequally distributed.

There are definitely thorny questions regarding the best way to aggregate the values of different people in TAI. But, I think that given a reasonable solution, a lower bound on the future is imagining that the AI will build a private utopia for every person, as isolated from the other "utopias" as that person wants it to be. Probably some people's "utopias" will not be great, viewed in utilitarian terms. But, I still prefer that over paperclips (by far). And, I suspect that most people do (even if they protest it in order to play the game).

It’s just that EAs and moral philosophers are playing the game in a tribe which awards status differently.

Sure, I've said as much in recent comments, including this one. ETA: Related to this, I'm worried about AI disrupting "our" status game in an unpredictable and possibly dangerous way. E.g., what will happen when everyone uses AI advisors to help them play status games, including the status game of moral philosophy?

The true intrinsic values of most people do place a weight on the happiness of other people (that’s roughly what we call “empathy”), but this weight is very unequally distributed.

What do you mean by "true intrinsic values"? (I couldn't find any previous usage of this term by you.) How do you propose finding people's true intrinsic values?

These weights, if low enough relative to other "values", haven't prevented people from committing atrocities on each other in the name of morality.

There are definitely thorny questions regarding the best way to aggregate the values of different people in TAI. But, I think that given a reasonable solution, a lower bound on the future is imagining that the AI will build a private utopia for every person, as isolated from the other “utopias” as that person wants it to be.

This implies solving a version of the alignment problem that includes reasonable value aggregation between different people (or between AIs aligned to different people), but at least some researchers don't seem to consider that part of "alignment".

Given that playing status games and status competition between groups/tribes/status games constitute a huge part of people's lives, I'm not sure how private utopias that are very isolated from each other would work. Also, I'm not sure if your solution would prevent people from instantiating simulations of perceived enemies / "evil people" in their utopias and punishing them, or just simulating a bunch of low status people to lord over.

Probably some people’s “utopias” will not be great, viewed in utilitarian terms. But, I still prefer that over paperclips (by far).

I concede that a utilitarian would probably find almost all "aligned" futures better than paperclips. Perhaps I should have clarified that by "parts of me" being more scared, I meant the selfish and NU-leaning parts. The utilitarian part of me is just worried about the potential waste caused by many or most "utopias" being very suboptimal in terms of value created per unit of resource consumed.

What do you mean by "true intrinsic values"? (I couldn't find any previous usage of this term by you.) How do you propose finding people's true intrinsic values?

I mean the values relative to which a person seems most like a rational agent, arguably formalizable along these lines.
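To make this idea slightly more concrete, here is a toy sketch (my own construction, not the linked formalization): among a set of candidate utility functions, pick the one under which the person's observed choices look most like Boltzmann-rational behavior. All the names, options, and candidate values below are hypothetical illustration.

```python
import math

# Hypothetical choice data: the options available and the choices observed.
options = ["donate", "keep"]
observed = ["donate", "donate", "keep", "donate"]

# Candidate "intrinsic values": utility assigned to each option.
candidates = {
    "selfish":    {"donate": 0.0, "keep": 1.0},
    "altruistic": {"donate": 1.0, "keep": 0.0},
}

def log_likelihood(u, choices, beta=1.0):
    """Log-probability of the choices if the agent picks option a with
    probability proportional to exp(beta * u[a]) (Boltzmann rationality)."""
    z = sum(math.exp(beta * u[a]) for a in options)
    return sum(beta * u[c] - math.log(z) for c in choices)

# The "true intrinsic values" are then the candidate relative to which the
# person's behavior looks most like that of a rational agent.
best = max(candidates, key=lambda name: log_likelihood(candidates[name], observed))
```

This is of course far cruder than any serious proposal: it assumes a fixed candidate set, known options, and a known noise model, all of which are the hard parts in practice.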

These weights, if low enough relative to other "values", haven't prevented people from committing atrocities on each other in the name of morality.

Yes.

This implies solving a version of the alignment problem that includes reasonable value aggregation between different people (or between AIs aligned to different people), but at least some researchers don't seem to consider that part of "alignment".

Yes. I do think multi-user alignment is an important problem (and occasionally spend some time thinking about it), it just seems reasonable to solve single user alignment first. Andrew Critch is an example of a person who seems to be concerned about this.

Given that playing status games and status competition between groups/tribes/status games constitute a huge part of people's lives, I'm not sure how private utopias that are very isolated from each other would work.

I meant that each private utopia can contain any number of people created by the AI, in addition to its "customer". Ofc groups that can agree on a common utopia can band together as well.

Also, I'm not sure if your solution would prevent people from instantiating simulations of perceived enemies / "evil people" in their utopias and punishing them, or just simulating a bunch of low status people to lord over.

They are prevented from simulating other pre-existing people without their consent, but can simulate a bunch of low status people to lord over. Yes, this can be bad. Yes, I still prefer this (assuming my own private utopia) over paperclips. And, like I said, this is just a relatively easy to imagine lower bound, not necessarily the true optimum.

Perhaps I should have clarified that by "parts of me" being more scared, I meant the selfish and NU-leaning parts.

The selfish part, at least, doesn't have any reason to be scared as long as you are a "customer".

Yes, I still prefer this (assuming my own private utopia) over paperclips.

For a utilitarian, this doesn't mean much. What's much more important is something like, "How close is this outcome to an actual (global) utopia (e.g., with optimized utilitronium filling the universe), on a linear scale?" For example, my rough expectation (without having thought about it much) is that your "lower bound" outcome is about midway between paperclips and actual utopia on a logarithmic scale. In one sense, this is much better than paperclips, but in another sense (i.e., on the linear scale), it's almost indistinguishable from paperclips, and a utilitarian would only care about the latter and therefore be nearly as disappointed by that outcome as paperclips.

I want to add a little to my stance on utilitarianism. A utilitarian superintelligence would probably kill me and everyone I love, because we are made of atoms that could be used for minds that are more hedonic[1][2][3]. Given a choice between paperclips and utilitarianism, I would still choose utilitarianism. But, if there was a utilitarian TAI project along with a half-decent chance to do something better (by my lights), I would actively oppose the utilitarian project. From my perspective, such a project is essentially enemy combatants.


  1. One way to avoid it is by modifying utilitarianism to only place weight on currently existing people. But this is already not that far from my cooperative bargaining proposal (although still inferior to it, IMO). ↩︎

  2. Another way to avoid it is by postulating some very strong penalty on death (i.e. discontinuity of personality). But this is not trivial to do, especially without creating other problems. Moreover, from my perspective this kind of thing is a hack trying to work around the core issue, namely that I am not a utilitarian (along with the vast majority of people). ↩︎

  3. A possible counterargument is, maybe the superhedonic future minds would be sad to contemplate our murder. But, this seems too weak to change the outcome, even assuming that this version of utilitarianism mandates minds who would want to know the truth and care about it, and that this preference is counted towards "utility". ↩︎

A utilitarian superintelligence would probably kill me and everyone I love, because we are made of atoms that could be used for minds that are more hedonic

This seems like a reasonable concern about some types of hedonic utilitarianism. To be clear, I'm not aware of any formulation of utilitarianism that doesn't have serious issues, and I'm also not aware of any formulation of any morality that doesn't have serious issues.

But, if there was a utilitarian TAI project along with a half-decent chance to do something better (by my lights), I would actively oppose the utilitarian project. From my perspective, such a project is essentially enemy combatants.

Just to be clear, this isn't in response to something I wrote, right? (I'm definitely not advocating any kind of "utilitarian TAI project" and would be quite scared of such a project myself.)

Moreover, from my perspective this kind of thing is hacks trying to work around the core issue, namely that I am not a utilitarian (along with the vast majority of people).

So what are you (and them) then? What would your utopia look like?

Just to be clear, this isn't in response to something I wrote, right? (I'm definitely not advocating any kind of "utilitarian TAI project" and would be quite scared of such a project myself.)

No! Sorry if I gave that impression.

So what are you (and them) then? What would your utopia look like?

Well, I linked my toy model of partiality before. Are you asking about something more concrete?

Well, I linked my toy model of partiality before. Are you asking about something more concrete?

Yeah, I mean aside from how much you care about various other people, what concrete things do you want in your utopia?

I have low confidence about this, but my best guess personal utopia would be something like: A lot of cool and interesting things are happening. Some of them are good, some of them are bad (a world in which nothing bad ever happens would be boring). However, there is a limit on how bad something is allowed to be (for example, true death, permanent crippling of someone's mind and eternal torture are over the line), and overall "happy endings" are more common than "unhappy endings". Moreover, since it's my utopia (according to my understanding of the question, we are ignoring the bargaining process and acausal cooperation here), I am among the top along those desirable dimensions which are zero-sum (e.g. play an especially important / "protagonist" role in the events to the extent that it's impossible for everyone to play such an important role, and have high status to the extent that it's impossible for everyone to have such high status).

First, you wrote "a part of me is actually more scared of many futures in which alignment is solved, than a future where biological life is simply wiped out by a paperclip maximizer." So, I tried to assuage this fear for a particular class of alignment solutions.

Second... Yes, for a utilitarian this doesn't mean "much". But, tbh, who cares? I am not a utilitarian. The vast majority of people are not utilitarians. Maybe even literally no one is an (honest, not self-deceiving) utilitarian. From my perspective, disappointing the imaginary utilitarian is (in itself) about as upsetting as disappointing the imaginary paperclip maximizer.

Third, what I actually want from multi-user alignment is a solution that (i) is acceptable to me personally (ii) is acceptable to the vast majority of people (at least if they think through it rationally and are arguing honestly and in good faith) (iii) is acceptable to key stakeholders (iv) as much as possible, doesn't leave any Pareto improvements on the table and (v) sufficiently Schelling-pointy to coordinate around. Here, "acceptable" means "a lot better than paperclips and not worth starting an AI race/war to get something better".

Second… Yes, for a utilitarian this doesn’t mean “much”. But, tbh, who cares? I am not a utilitarian. The vast majority of people are not utilitarians. Maybe even literally no one is an (honest, not self-deceiving) utilitarian. From my perspective, disappointing the imaginary utilitarian is (in itself) about as upsetting as disappointing the imaginary paperclip maximizer.

I'm not a utilitarian either, because I don't know what my values are or should be. But I do assign significant credence to the possibility that something in the vicinity of utilitarianism is the right values (for me, or period). Given my uncertainties, I want to arrange the current state of the world so that (to the extent possible), whatever I end up deciding my values are, through things like reason, deliberation, doing philosophy, the world will ultimately not turn out to be a huge disappointment according to those values. Unfortunately, your proposed solution isn't very reassuring to this kind of view.

It's quite possible that I (and people like me) are simply out of luck, and there's just no feasible way to do what we want to do, but it sounds like you think I shouldn't even want what I want, or at least that you don't want something like this. Is it because you're already pretty sure what your values are or should be, and therefore think there's little chance that millennia from now you'll end up deciding that utilitarianism (or NU, or whatever) is right after all, and regret not doing more in 2021 to push the world in the direction of [your real values, whatever they are]?

I'm moderately sure what my values are, to some approximation. More importantly, I'm even more sure that, whatever my values are, they are not so extremely different from the values of most people that I should wage some kind of war against the majority instead of trying to arrive at a reasonable compromise. And, in the unlikely event that most people (including me) will turn out to be some kind of utilitarians after all, it's not a problem: value aggregation will then produce a universe which is pretty good for utilitarians.

I’m moderately sure what my values are, to some approximation. More importantly, I’m even more sure that, whatever my values are, they are not so extremely different from the values of most people [...]

Maybe you're just not part of the target audience of my OP then... but from my perspective, if I determine my values through the kind of process described in the first quote, and most people determine their values through the kind of process described in the second quote, it seems quite likely that the values end up being very different.

[...] that I should wage some kind of war against the majority instead of trying to arrive at a reasonable compromise.

The kind of solution I have in mind is not "waging war" but for example, solving metaphilosophy and building an AI that can encourage philosophical reflection in humans or enhance people's philosophical abilities.

And, in the unlikely possibility that most people (including me) will turn out to be some kind of utilitarians after all, it’s not a problem: value aggregation will then produce a universe which is pretty good for utilitarians.

What if you turn out to be some kind of utilitarian but most people don't (because you're more like the first group in the OP and they're more like the second group), or most people would eventually turn out to be some kind of utilitarian in a world without AI, but in a world with AI, this won't happen?

I don't think people determine their values through either process. I think that they already have values, which are to a large extent genetic and immutable. Instead, these processes determine what values they pretend to have for game-theory reasons. So, the big difference between the groups is which "cards" they hold and/or what strategy they pursue, not an intrinsic difference in values.

But also, if we do model values as the result of some long process of reflection, and you're worried about the AI disrupting or insufficiently aiding this process, then this is already a single-user alignment issue and should be analyzed in that context first. The presumed differences in moralities are not the main source of the problem here.

I don’t think people determine their values through either process. I think that they already have values, which are to a large extent genetic and immutable. Instead, these processes determine what values they pretend to have for game-theory reasons. So, the big difference between the groups is which “cards” they hold and/or what strategy they pursue, not an intrinsic difference in values.

This is not a theory that's familiar to me. Why do you think this is true? Have you written more about it somewhere or can link to a more complete explanation?

But also, if we do model values as the result of some long process of reflection, and you’re worried about the AI disrupting or insufficiently aiding this process, then this is already a single-user alignment issue and should be analyzed in that context first. The presumed differences in moralities are not the main source of the problem here.

This seems reasonable to me. (If this was meant to be an argument against something I said, there may have been another miscommunication, but I'm not sure it's worth tracking that down.)

This is not a theory that's familiar to me. Why do you think this is true? Have you written more about it somewhere or can link to a more complete explanation?

I've been considering writing about this for a while, but so far I don't feel sufficiently motivated. So, the links I posted upwards in the thread are the best I have, plus vague gesturing in the directions of Hansonian signaling theories, Jaynes' theory of consciousness and Yudkowsky's belief in belief.

This comment seems to be consistent with the assumption that the outcome 1 year after the singularity is locked in forever. But the future we're discussing here is one where humans retain autonomy (?), and in that case, they're allowed to change their mind over time, especially if humanity has access to a superintelligent aligned AI. I think a future where we begin with highly suboptimal personal utopias and gradually transition into utilitronium is among the more plausible outcomes. Compared with other outcomes where Not Everyone Dies, anyway. Your credence may differ if you're a moral relativist.

But the future we’re discussing here is one where humans retain autonomy (?), and in that case, they’re allowed to change their mind over time, especially if humanity has access to a superintelligent aligned AI.

What if the humans ask the aligned AI to help them be more moral, and part of what they mean by "more moral" is having fewer doubts about their current moral beliefs? This is what a "status game" view of morality seems to predict, for the humans whose status games aren't based on "doing philosophy", which seems to be most of them.

I don't have any reason why this couldn't happen. My position is something like "morality is real, probably precisely quantifiable; seems plausible that in the scenario of humans with autonomy and aligned AI, this could lead to an asymmetry where more people tend toward utilitronium over time". (Hence why I replied, you didn't seem to consider that possibility.) I could make up some mechanisms for this, but probably you don't need me for that. Also seems plausible that this doesn't happen. If it doesn't happen, maybe the people who get to decide what happens with the rest of the universe tend toward utilitronium. But my model is widely uncertain and doesn't rule out futures of highly suboptimal personal utopias that persist indefinitely.

I could make up some mechanisms for this, but probably you don’t need me for that.

I'm interested in your view on this, plus what we can potentially do to push the future in this direction.

I strongly believe that (1) well-being is objective, (2) well-being is quantifiable, and (3) Open Individualism is true (i.e., the concept of identity isn't well-defined, and you're subjectively no less continuous with the future self if any other person than your own future self).

If (1-3) are all true, then utilitronium is the optimal outcome for everyone even if they're entirely selfish. Furthermore, I expect an AGI to figure this out, and to the extent that it's aligned, it should communicate that if it's asked. (I don't think an AGI will therefore decide to do the right thing, so this is entirely compatible with everyone dying if alignment isn't solved.)

In the scenario where people get to talk to the AGI freely and it's aligned, two concrete mechanisms I see are (a) people just ask the AGI what is morally correct and it tells them, and (b) they get some small taste of what utilitronium would feel like, which would make it less scary. (A crucial piece is that they can rationally expect to experience this themselves in the utilitronium future.)

In the scenario where people don't get to talk to the AGI, who knows. It's certainly possible that we have a singleton scenario with a few people in charge of the AGI, and they decide to censor questions about ethics because they find the answers scary.

The only org I know of that works on this and shares my philosophical views is QRI. Their goal is to (a) come up with a mathematical space (probably a topological one, maybe a Hilbert space) that precisely describes the subjective experience of someone, (b) find a way to put someone in the scanner and create that space, and (c) find a property of that space that corresponds to their well-being in that moment. The flagship theory is that this property is symmetry. Their model is stronger than (1-3), but if it's correct, you could get hard evidence on this before AGI since it would make strong testable predictions about people's well-being (and they think it could also point to easy interventions, though I don't understand how that works). Whether it's feasible to do this before AGI is a different question. I'd bet against it, but I think I give it better odds than any specific alignment proposal. (And I happen to know that Mike agrees that the future is dominated by concerns about AI and thinks this is the best thing to work on.)

So, I think their research is the best bet for getting more people on board with utilitronium since it can provide evidence on (1) and (2). (Also has the nice property that it won't work if (1) or (2) are false, so there's low risk of outrage.) Other than that, write posts arguing for moral realism and/or for Open Individualism.

Quantifying suffering before AGI would also plausibly help with alignment, since at least you can formally specify a broad space of outcomes you don't want, though it certainly doesn't solve it, e.g. because of inner optimizers.

They are prevented from simulating other pre-existing people without their consent

Why do you think this will be the result of the value aggregation (or a lower bound on how good the aggregation will be)? For example, if there is a big block of people who all want to simulate person X in order to punish that person, and only X and a few other people object, why won't the value aggregation be "nobody pre-existing except X (and Y and Z etc.) can be simulated"?

Given some assumptions about the domains of the utility functions, it is possible to do better than what I described in the previous comment. Let $X_i$ be the space of possible experience histories[1] of user $i$, and let $Z$ be the space of everything else the utility functions depend on (things that nobody can observe directly). Suppose that the domain of the utility functions is $\prod_i X_i \times Z$. Then, we can define the "denosing[2] operator" $D_i$ for user $i$ by

$$(D_i u)(x_i, x_{-i}, z) := \max_{x'_{-i} \in \prod_{j \neq i} X_j} u(x_i, x'_{-i}, z)$$

Here, $x_i$ is the argument of $u$ that ranges in $X_i$, $x_{-i}$ stands for the arguments that range in $X_j$ for $j \neq i$, and $z$ is the argument that ranges in $Z$.

That is, $D_i$ modifies a utility function by having it "imagine" that the experiences of all users other than $i$ have been optimized, for the experiences of user $i$ and the unobservables held constant.

Let $u_i$ be the utility function of user $i$, and let $d_0 \in \mathbb{R}^n$ be the initial disagreement point (everyone dying), where $n$ is the number of users. We then perform cooperative bargaining on the denosed utility functions $D_i u_i$ with disagreement point $d_0$, producing some outcome $\alpha$. Define $d_1 \in \mathbb{R}^n$ by $(d_1)_i := u_i(\alpha)$. Now we do another cooperative bargaining with $d_1$ as the disagreement point and the original utility functions $u_i$. This gives us the final outcome $\beta$.

Among other benefits, there is now much less need to remove outliers. Perhaps, instead of removing them, we still want to mitigate them by applying "amplified denosing" to them, which also removes the dependence on $z$.

For this procedure, there is a much better case that the lower bound will be met.
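The two-stage procedure can be sketched on a toy finite example. Everything below is my own illustration, not part of the comment: two users with binary experience histories, a binary unobservable, Nash bargaining as the cooperative bargaining solution, and made-up utilities where user 1 has a "nosy" preference about user 2's experiences.

```python
from itertools import product as cartesian
from math import prod

# Two users, binary experience histories X_1 = X_2 = {0, 1}, binary Z.
X, Z = [0, 1], [0, 1]
outcomes = list(cartesian(X, X, Z))        # tuples (x_1, x_2, z)

# Hypothetical utilities. The (1 - o[1]) term is user 1's "nosy"
# preference that user 2's history be 0.
u1 = lambda o: 2 * o[0] + (1 - o[1])
u2 = lambda o: 2 * o[1] + o[0]
utils = [u1, u2]

def denose(u, i, o):
    """(D_i u)(o): maximise u over the other users' experience coordinates,
    holding user i's own coordinate o[i] and the unobservable o[-1] fixed."""
    return max(u(alt) for alt in outcomes
               if alt[i] == o[i] and alt[-1] == o[-1])

def bargain(fs, d):
    """Nash bargaining on a finite set: among outcomes acceptable to
    everyone, maximise the product of gains over the disagreement point d."""
    ok = [o for o in outcomes if all(f(o) >= di for f, di in zip(fs, d))]
    return max(ok, key=lambda o: prod(f(o) - di for f, di in zip(fs, d)))

# Stage 1: bargain with denosed utilities; d_0 = "everyone dies",
# represented here as a payoff of -1 for both users (worse than anything).
denosed = [lambda o, i=i: denose(utils[i], i, o) for i in range(2)]
alpha = bargain(denosed, [-1, -1])

# Stage 2: re-bargain with the ORIGINAL (nosy) utilities, using alpha's
# utility profile as the new disagreement point d_1.
d1 = [f(alpha) for f in utils]
beta = bargain(utils, d1)

# In this example both stages settle on x_1 = x_2 = 1: user 1's nosy wish
# that x_2 = 0 can only matter as a Pareto improvement over the denosed
# baseline, so it cannot drag user 2 below that baseline.
```

The design point the toy makes concrete: stage 1 fixes a baseline as if nosy preferences didn't exist, and stage 2 lets them back in only where satisfying them hurts no one relative to that baseline.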


  1. In the standard RL formalism this is the space of action-observation sequences $(\mathcal{A} \times \mathcal{O})^\omega$. ↩︎

  2. From the expression "nosy preferences", see e.g. here. ↩︎

This is very interesting (and "denosing operator" is delightful).

Some thoughts:

If I understand correctly, I think there can still be a problem where user $i$ wants an experience history such that part of the history is isomorphic to a simulation of user $j$ suffering ($i$ wants to fully experience $j$ suffering in every detail).

Here a fixed $x_i$ may entail some fixed $x_j$ for (some copy of) some user $j$.

It seems the above approach can't then avoid leaving one of $i$ or $j$ badly off:
If $i$ is permitted to freely determine the experience of the embedded $j$-copy, the disagreement point in the second bargaining will bake this in: $j$ may be horrified to see that $i$ wants to experience its copy suffer, but will be powerless to stop it (if $i$ won't budge in the bargaining).

Conversely, if the embedded $j$-copy is treated as a user whose experience $i$ will imagine is exactly to $i$'s liking, but who actually gets what $j$ wants, then the selected $x_i$ will be horrible for $i$ (e.g. perhaps $i$ wants to fully experience Hitler suffering, and instead gets to fully experience Hitler's wildest fantasies being realized).

I don't think it's possible to do anything like denosing to avoid this.

It may seem like this isn't a practical problem, since we could reasonably disallow such embedding. However, I think that's still tricky, since there's a less exotic version of the issue: my experiences likely already are a collection of subagents' experiences. Presumably my maximisation over $x_{\mathrm{joe}}$ is permitted to determine all the $x_{\mathrm{subjoe}}$.

It's hard to see how you draw a principled line here: the ideal future for most people may easily be transhumanist to the point where today's users are tomorrow's subpersonalities (and beyond).

A case that may have to be ruled out separately is where $i$ wants to become a suffering $j$. Depending on what I consider 'me', I might be entirely fine with it if 'I' wake up tomorrow as a suffering $j$ (if I'm done living and think $j$ deserves to suffer).
Or perhaps I want to clone myself $k$ times, and then have all copies convert themselves to suffering $j$s after a while. [in general, it seems there has to be some mechanism to distribute resources reasonably - but it's not entirely clear what that should be]

I think that a rigorous treatment of such issues will require some variant of IB physicalism (in which the monotonicity problem has been solved, somehow). I am cautiously optimistic that a denosing operator exists there which dodges these problems. This operator will declare both the manifesting and evaluation of the source codes of other users to be "out of scope" for a given user. Hence, a preference of $i$ to observe the suffering of $j$ would be "satisfied" by observing nearly anything, since the maximization can interpret anything as a simulation of $j$.

The "subjoe" problem is different: it is irrelevant because "subjoe" is not a user, only Joe is a user. All the transhumanist magic that happens later doesn't change this. Users are people living during the AI launch, and only them. The status of any future (trans/post)humans is determined entirely according to the utility functions of users. Why? For two reasons: (i) the AI can only have access and stable pointers to existing people (ii) we only need the buy-in of existing people to launch the AI. If existing people want future people to be treated well, then they have nothing to worry about since this preference is part of the existing people's utility functions.

Ah - that's cool if IB physicalism might address this kind of thing (still on my to-read list).

Agreed that the subjoe thing isn't directly a problem. My worry is mainly whether it's harder to rule out $i$ experiencing a simulation of sub-$j$, since sub-$j$ isn't a user. However, if you can avoid the suffering $j$s by limiting access to information, the same should presumably work for relevant sub-$j$s.

If existing people want future people to be treated well, then they have nothing to worry about since this preference is part of the existing people's utility functions.

This isn't so clear (to me at least) if:

  1. Most, but not all current users want future people to be treated well.
  2. Part of being "treated well" includes being involved in an ongoing bargaining process which decides the AI's/future's trajectory.

For instance, suppose initially 90% of people would like to have an iterated bargaining process that includes future (trans/post)humans as users, once they exist. The other 10% are only willing to accept such a situation if they maintain their bargaining power in future iterations (by whatever mechanism).

If you iterate this process, the bargaining process ends up dominated by users who won't relinquish any power to future users. 90% of initial users might prefer drift over lock-in, but we get lock-in regardless (the disagreement point also amounting to lock-in).

Unless I'm confusing myself, this kind of thing seems like a problem. (not in terms of reaching some non-terrible lower bound, but in terms of realising potential)
Wherever there's this kind of asymmetry/degradation over bargaining iterations, I think there's an argument for building in a way to avoid it from the start - since anything short of 100% just limits to 0 over time. [it's by no means clear that we do want to make future people users on an equal footing to today's people; it just seems to me that we have to do it at step zero or not at all]

Ah - that's cool if IB physicalism might address this kind of thing

I admit that at this stage it's unclear because physicalism brings in the monotonicity principle that creates bigger problems than what we discuss here. But maybe some variant can work.

For instance, suppose initially 90% of people would like to have an iterated bargaining process that includes future (trans/post)humans as users, once they exist. The other 10% are only willing to accept such a situation if they maintain their bargaining power in future iterations (by whatever mechanism).

Roughly speaking, in this case the 10% preserve their 10% of the power forever. I think it's fine because I want the buy-in of this 10% and the cost seems acceptable to me. I'm also not sure there is any viable alternative which doesn't have even bigger problems.

Sure, I'm not sure there's a viable alternative either. This kind of approach seems promising - but I want to better understand any downsides.

My worry wasn't about the initial 10%, but about the possibility of the process being iterated such that you end up with almost all bargaining power in the hands of power-keepers.

In retrospect, this is probably silly: if there's a designable-by-us mechanism that better achieves what we want, the first bargaining iteration should find it. If not, then what I'm gesturing at must either be incoherent, or not endorsed by the 10% - so hard-coding it into the initial mechanism wouldn't get the buy-in of the 10% to the extent that they understood the mechanism.

In the end, I think my concern is that we won't get buy-in from a large majority of users:
In order to accommodate some proportion with odd moral views it seems likely you'll be throwing away huge amounts of expected value in others' views - if I'm correctly interpreting your proposal (please correct me if I'm confused).

Is this where you'd want to apply amplified denosing?
So, rather than filtering out the undesirable $u_i$, for these $i$ you use:

$$(D_i^+ u_i)(x_1, \dots, x_n, y) := \max_{\{x'_j \in X_j\}_{j \neq i},\; y' \in Y} u_i(x'_1, \dots, x'_{i-1}, x_i, x'_{i+1}, \dots, x'_n, y')$$

[i.e. ignoring $y$ and imagining it's optimal]

However, it's not clear to me how we'd decide who gets strong denosing (clearly not everyone, or we don't pick a $y$). E.g. if you strong-denose anyone who's too willing to allow bargaining failure [everyone dies], you might end up filtering out altruists who worry about suffering risks.
Does that make sense?

My worry wasn't about the initial 10%, but about the possibility of the process being iterated such that you end up with almost all bargaining power in the hands of power-keepers.

I'm not sure what you mean here, but also the process is not iterated: the initial bargaining is deciding the outcome once and for all. At least that's the mathematical ideal we're approximating.

In the end, I think my concern is that we won't get buy-in from a large majority of users: In order to accommodate some proportion with odd moral views it seems likely you'll be throwing away huge amounts of expected value in others' views

I don't think so? The bargaining system does advantage large groups over small groups.

In practice, I think that for the most part people don't care much about what happens "far" from them (for some definition of "far", not physical distance) so giving them private utopias is close to optimal from each individual perspective. Although it's true they might pretend to care more than they do for the usual reasons, if they're thinking in "far-mode".

I would certainly be very concerned about any system that gives even more power to majority views. For example, what if the majority of people are disgusted by gay sex and prefer it not to happen anywhere? I would rather accept things I disapprove of happening far away from me than allow other people to control my own life.

Ofc the system also mandates win-win exchanges. For example, if Alice's and Bob's private utopias each contain something strongly unpalatable to the other but not strongly important to the respective user, the bargaining outcome will remove both unpalatable things.

E.g. if you strong-denose anyone who's too willing to allow bargaining failure [everyone dies] you might end up filtering out altruists who worry about suffering risks.

I'm fine with strong-denosing negative utilitarians who would truly stick to their guns about negative utilitarianism (but I also don't think there are many).

Ah, I was just being an idiot on the bargaining system w.r.t. small numbers of people being able to hold it to ransom. Oops. Agreed that more majority power isn't desirable.
[re iteration, I only meant that the bargaining could become iterated if the initial bargaining result were to decide upon iteration (to include more future users). I now don't think this is particularly significant.]

I think my remaining uncertainty (/confusion) is all related to the issue I first mentioned (embedded copy experiences). It strikes me that something like this can also happen where minds grow/merge/overlap.

This operator will declare both the manifesting and evaluation of the source codes of other users to be "out of scope" for a given user. Hence, a preference of $i$ to observe the suffering of $j$ would be "satisfied" by observing nearly anything, since the maximization can interpret anything as a simulation of $j$.

Does this avoid the problem if $i$'s preferences use indirection? It seems to me that a robust pointer to $j$ may be enough: with a robust pointer it may be possible to implicitly require something like source-code access without explicitly referencing it. E.g. where $i$ has a preference to "experience $j$ suffering in circumstances where there's strong evidence it's actually $j$ suffering, given that these circumstances were the outcome of this bargaining process".

If $i$ can't robustly specify things like this, then I'd guess there'd be significant trouble in specifying quite a few (mutually) desirable situations involving other users too. IIUC, this would only be a problem for the denosed bargaining's ability to find a good $o^1$: for the second bargaining on the true utility functions there's no need to put anything "out of scope" (right?), so win-wins are easily achieved.

I'm imagining cooperative bargaining between all users, where the disagreement point is everyone dying[1][2] (this is a natural choice assuming that if we don't build aligned TAI we get paperclips). This guarantees that every user will receive an outcome that's at least not worse than death.

With Nash bargaining, we can still get issues for (in)famous people that millions of people want to do unpleasant things to. Their outcome will be better than death, but maybe worse than in my claimed "lower bound".

With Kalai-Smorodinsky bargaining things look better, since essentially we're maximizing a minimum over all users. This should admit my lower bound, unless it is somehow disrupted by enormous asymmetries in the maximal payoffs of different users.

In either case, we might need to do some kind of outlier filtering: if e.g. literally every person on Earth is a user, then maybe some of them are utterly insane in ways that cause the Pareto frontier to collapse.
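The contrast between the two bargaining solutions can be seen in a toy example. The payoff numbers below are made up, "mob" stands for a large block of users and "victim" for the (in)famous person; the KS computation over a discrete option set is only a rough stand-in for the real solution concept:

```python
# Candidate joint payoffs (mob, victim); disagreement point (0, 0) ~ everyone dies.
# All numbers are invented for illustration.
options = [(30.0, 1.5),   # mob gets to torment the victim; victim barely above death
           (6.0, 5.0),    # compromise
           (1.0, 9.0)]    # victim's preferred world
d = (0.0, 0.0)

def nash(options, d):
    """Nash solution: maximize the product of gains over the disagreement point."""
    return max(options, key=lambda p: (p[0] - d[0]) * (p[1] - d[1]))

def kalai_smorodinsky(options, d):
    """KS solution (roughly): maximize the minimum gain, normalized by each side's ideal payoff."""
    ideal = tuple(max(p[i] for p in options) for i in range(2))
    return max(options, key=lambda p: min((p[i] - d[i]) / (ideal[i] - d[i]) for i in range(2)))

print(nash(options, d))                # (30.0, 1.5): victim better than death, but not by much
print(kalai_smorodinsky(options, d))   # (6.0, 5.0): the compromise
```

Here Nash happily trades the victim down to barely-above-death because the mob's gains multiply up, while maximizing the minimum normalized gain selects the compromise.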

[EDIT: see improved solution]

Bargaining assumes we can access the utility function. In reality, even if we solve the value learning problem in the single user case, once you go to the multi-user case it becomes a mechanism design problem: users have incentives to lie / misrepresent their utility functions. A perfect solution might be impossible, but I proposed mitigating this by assigning each user a virtual "AI lawyer" that provides optimal input on their behalf into the bargaining system. In this case they at least have no incentive to lie to the lawyer, and the outcome will not be skewed in favor of users who are better in this game, but we don't get the optimal bargaining solution either.

All of this assumes the TAI is based on some kind of value learning. If the first-stage TAI is based on something else, the problem might become easier or harder. Easier because the first-stage TAI will produce better solutions to the multi-user problem for the second-stage TAI. Harder because it can allow the small group of people controlling it to impose their own preferences.

For IDA-of-imitation, democratization seems like a hard problem because the mechanism by which IDA-of-imitation solves AI risk is precisely by empowering a small group of people over everyone else (since the source of AI risk comes from other people launching unaligned TAI). Adding transparency can entirely undermine safety.

For quantilized debate, adding transparency opens us to an attack vector where the AI manipulates public opinion. This significantly lowers the optimization pressure bar for manipulation, compared to manipulating the (carefully selected) judges, which might undermine the key assumption that effective dishonest strategies are harder to find than effective honest strategies.


  1. This can be formalized by literally having the AI consider the possibility of optimizing for some unaligned utility function. This is a weird and risky approach but it works to 1st approximation. ↩︎

  2. An alternative choice of disagreement point is maximizing the utility of a randomly chosen user. This has advantages and disadvantages. ↩︎

Bargaining assumes we can access the utility function. In reality, even if we solve the value learning problem in the single user case, once you go to the multi-user case it becomes a mechanism design problem: users have incentives to lie / misrepresent their utility functions. A perfect solution might be impossible, but I proposed mitigating this by assigning each user a virtual “AI lawyer” that provides optimal input on their behalf into the bargaining system. In this case they at least have no incentive to lie to the lawyer, and the outcome will not be skewed in favor of users who are better in this game, but we don’t get the optimal bargaining solution either.

Assuming each lawyer has the same incentive to lie as its client, it has an incentive to misrepresent that some preferable-to-death outcomes are "worse-than-death" (in order to force those outcomes out of the set of "feasible agreements" in hope of getting a more preferred outcome as the actual outcome), and this at equilibrium is balanced by the marginal increase in the probability of getting "everyone dies" as the outcome (due to feasible agreements becoming a null set) caused by the lie. So the probability of "everyone dies" in this game has to be non-zero.

(It's the same kind of problem as in the AI race or tragedy of commons: people not taking into account the full social costs of their actions as they reach for private benefits.)

Of course in actuality everyone dying may not be a realistic consequence of failure to reach agreement, but if the real consequence is better than that, and the AI lawyers know this, they would be more willing to lie since the perceived downside of lying would be smaller, so you end up with a higher chance of no agreement.
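The incentive described above shows up already in a minimal toy model (the surplus-splitting setup and the numbers are my own assumptions): a lawyer that overstates its client's "worse than death" threshold shifts the Nash solution in its favor, but mutual exaggeration can empty the feasible set, leaving everyone with the disagreement outcome.

```python
# Toy mechanism: split a surplus of 1 between lawyers A and B by Nash bargaining.
# Each lawyer reports a threshold below which its client is supposedly "worse than death";
# the reports act as both feasibility constraints and disagreement utilities.
def nash_split(claim_a, claim_b, grid=1000):
    feasible = [s / grid for s in range(grid + 1)
                if s / grid >= claim_a and 1 - s / grid >= claim_b]
    if not feasible:
        return None  # empty feasible set: no agreement, everyone gets the disagreement outcome
    return max(feasible, key=lambda s: (s - claim_a) * (1 - s - claim_b))

print(nash_split(0.0, 0.0))  # honest reports: even split, 0.5
print(nash_split(0.4, 0.0))  # A exaggerates: A's share rises to 0.7
print(nash_split(0.6, 0.6))  # both exaggerate: None (agreement fails)
```

At equilibrium the marginal gain from exaggerating is balanced against the added probability of landing in the no-agreement branch, which is why that probability ends up non-zero.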

Yes, it's not a very satisfactory solution. Some alternative/complementary solutions:

  • Somehow use non-transformative AI to do my mind uploading, and then have the TAI learn by inspecting the uploads. Would be great for single-user alignment as well.
  • Somehow use non-transformative AI to create perfect lie detectors, and use this to enforce honesty in the mechanism. (But, is it possible to detect self-deception?)
  • Have the TAI learn from past data which wasn't affected by the incentives created by the TAI. (But, is there enough information there?)
  • Shape the TAI's prior about human values in order to rule out at least the most blatant lies.
  • Some clever mechanism design I haven't thought of. The problem with this is, most mechanism designs rely on money, which doesn't seem applicable here, and when you don't have money there are many impossibility theorems.

In either case, we might need to do some kind of outlier filtering: if e.g. literally every person on Earth is a user, then maybe some of them are utterly insane in ways that cause the Pareto frontier to collapse.

This seems near guaranteed to me: a non-zero amount of people will be that crazy (in our terms), so filtering will be necessary.

Then I'm curious about how we draw the line on outlier filtering. What filtering rule do we use? I don't yet see a good principled rule (e.g. if we want to throw out people who'd collapse agreement to the disagreement point, there's more than one way to do that).

I think this post makes an important point -- or rather, raises a very important question, with some vivid examples to get you started. On the other hand, I feel like it doesn't go further, and probably should have -- I wish it e.g. sketched a concrete scenario in which the future is dystopian not because we failed to make our AGIs "moral" but because we succeeded, or e.g. got a bit more formal and complemented the quotes with a toy model (inspired by the quotes) of how moral deliberation in a society might work, under post-AGI-alignment conditions, and how that could systematically lead to dystopia unless we manage to be foresightful and set up the social conditions just right.

I recommend not including this post, and instead including this one and Wei Dai's exchange in the comments.

I'm leaning towards the more ambitious version of the project of AI alignment being about corrigible anti-goodharting, with the AI optimizing towards good trajectories within scope of relatively well-understood values, preventing overoptimized weird/controversial situations, even at the cost of astronomical waste. Absence of x-risks, including AI risks, is generally good. Within this environment, the civilization might be able to eventually work out more about values, expanding the scope of their definition and thus allowing stronger optimization. Here corrigibility is in part about continually picking up the values and their implied scope from the predictions of how they would've been worked out some time in the future.

I’m leaning towards the more ambitious version of the project of AI alignment being about corrigible anti-goodharting, with the AI optimizing towards good trajectories within scope of relatively well-understood values

Please say more about this? What are some examples of "relatively well-understood values", and what kind of AI do you have in mind that can potentially safely optimize "towards good trajectories within scope" of these values?

My point is that the alignment (values) part of AI alignment is least urgent/relevant to the current AI risk crisis. It's all about corrigibility and anti-goodharting. Corrigibility is hope for eventual alignment, and anti-goodharting makes inadequacy of current alignment and imperfect robustness of corrigibility less of a problem. I gave the relevant example of relatively well-understood values, preference for lower x-risks. Other values are mostly relevant in how their understanding determines the boundary of anti-goodharting, what counts as not too weird for them to apply, not in what they say is better. If anti-goodharting holds (too weird and too high impact situations are not pursued in planning and possibly actively discouraged), and some sort of long reflection is still going on, current alignment (details of what the values-in-AI prefer, as opposed to what they can make sense of) doesn't matter in the long run.

I include maintaining a well-designed long reflection somewhere into corrigibility, for without it there is no hope for eventual alignment, so a decision theoretic agent that has long reflection within its preference is corrigible in this sense. Its corrigibility depends on following a good decision theory, so that there actually exists a way for the long reflection to determine its preference so that it causes the agent to act as the long reflection wishes. But being an optimizer it's horribly not anti-goodharting, so can't be stopped and probably eats everything else.

An AI with anti-goodharting turned to the max is the same as AI with its stop button pressed. An AI with minimal anti-goodharting is an optimizer, AI risk incarnate. Stronger anti-goodharting is a maintenance mode, opportunity for fundamental change, weaker anti-goodharting makes use of more developed values to actually do the things. So a way to control the level of anti-goodharting in an AI is a corrigibility technique. The two concepts work well with each other.

This seems interesting and novel to me, but (of course) I'm still skeptical.

I gave the relevant example of relatively well-understood values, preference for lower x-risks.

Preference for lower x-risk doesn't seem "well-understood" to me, if we include in "x-risk" things like value drift/corruption, premature value lock-in, and other highly consequential AI-enabled decisions (potential existential mistakes) that depend on hard philosophical questions. I gave some specific examples in this recent comment. What do you think about the problems on that list? (Do you agree that they are serious problems, and if so how do you envision them being solved or prevented in your scenario?)

You may not be interested in mutually exclusive compression schemas, but mutually exclusive compression schemas are interested in you. One nice thing is that, since the schemas handshake using an arbitrary key, there is hope that they can be convinced to all converge on the same arbitrary key without loss of useful structure.