Intelligent Agent Foundations Forum
Open Problems Regarding Counterfactuals: An Introduction For Beginners
link by Alex Appel 63 days ago | Vadim Kosoy, Tsvi Benson-Tilsen, Vladimir Nesov and Wei Dai like this | 2 comments

Note that the problem with exploration already arises in ordinary reinforcement learning, without going into “exotic” decision theories. Regarding the question of why humans don’t seem to have this problem, I think it is a combination of

  • The universe is regular (which is related to what you said about “we can’t see any plausible causal way it could happen”), so a Bayes-optimal policy with a simplicity prior has something going for it. On the other hand, sometimes you do need to experiment, so this can’t be the only explanation.

  • Any individual human has parents that teach em things, including things like “touching a hot stove is dangerous.” Later in life, ey can draw on much of the knowledge accumulated by human civilization. This tunnels the exploration into safe channels, analogously to the role of the advisor in my recent posts.

  • One may say that the previous point only passes the recursive buck, since we can consider all of humanity to be the “agent”. From this perspective, it seems that the universe just happens to be relatively safe, in the sense that it’s pretty hard for an individual human to do something that will irreparably damage all of humanity… or at least it was the case during most of human history.

  • In addition, we have some useful instincts baked in by evolution (e.g. probably some notion of existing in a three dimensional space with objects that interact mechanically). Again, you could zoom further out and say evolution works because it’s hard to create a species that will wipe out all life.

reply


Typos on page 5:

  • “random explanation” should be “random exploration”
  • “Alpa” should be “Alpha”

reply


You don’t have to solve philosophy to deal with large guns, you just have to do some combination of (a) figure out how to wield them to do good with them, (b) get people to stop using them, (c) find strategies for fighting against them, or (d) defend against them.

Do you have ideas for how to do these things, for the specific “big gun” that I described earlier?

The threat is different in kind from that of a fully-automated autopoietic cognitive system, which is more like a big gun possessed by an alien soul.

If the big gun is being wielded by humans whose values and thought processes have been corrupted (by others using that big gun, or through some other way like being indoctrinated in bad ideas from birth), that doesn’t seem very different from a big gun possessed by an alien soul.


Do you have ideas for how to do these things, for the specific “big gun” that I described earlier?

Roughly, minimize direct contact with things that cause insanity, be the sanest people around, and as a result be generally more competent than the rest of the world at doing real things. At some point use this capacity to oppose things that cause insanity. I haven’t totally worked this out.

If the big gun is being wielded by humans whose values and thought processes have been corrupted (by others using that big gun, or through some other way like being indoctrinated in bad ideas from birth), that doesn’t seem very different from a big gun possessed by an alien soul.

It’s hard to corrupt human values without corrupting other forms of human sanity, such as epistemics and general ability to do things.

reply


figure out what my values actually are / should be

I think many human ideas are like low resolution pictures. Sometimes they show simple things, like a circle, so we can make a higher resolution picture of the same circle. That’s known as formalizing an idea. But if the thing in the picture looks complicated, figuring out a higher resolution picture of it is an underspecified problem. I fear that figuring out my values over all possible futures might be that kind of problem.

So apart from hoping to define a “full resolution picture” of human values, either by ourselves or with the help of some AI or AI-human hybrid, it might be useful to come up with approaches that avoid defining it. That was my motivation for this post, which directly uses our “low resolution” ideas to describe some particular nice future without considering all possible ones. It’s certainly flawed, but there might be other similar ideas.

Does that make sense?


I think I understand what you’re saying, but my state of uncertainty is such that I put a lot of probability mass on possibilities that wouldn’t be well served by what you’re suggesting. For example, the possibility that we can achieve most value not through the consequences of our actions in this universe, but through their consequences in much larger (computationally richer) universes simulating this one. Or that spreading hedonium is actually the right thing to do and produces orders of magnitude more value than spreading anything that resembles human civilization. Or that value scales non-linearly with brain size so we should go for either very large or very small brains.

While discussing the VR utopia post, you wrote “I know you want to use philosophy to extend the domain, but I don’t trust our philosophical abilities to do that, because whatever mechanism created them could only test them on normal situations.” I have some hope that there is a minimal set of philosophical abilities that would allow us to eventually solve arbitrary philosophical problems, and we already have this. Otherwise it seems hard to explain the kinds of philosophical progress we’ve made, like realizing that other universes probably exist, and figuring out some ideas about how to make decisions when there are multiple copies of us in this universe and others.

Of course it’s also possible that’s not the case, and we can’t do better than to optimize the future using our current “low resolution” values, but until we’re a lot more certain of this, any attempt to do this seems to constitute a strong existential risk.

reply


I agree that selection bias is a problem. I plan on discussing and writing about AI alignment somewhat in the future. Also note that Eliezer and Nate think the problem is pretty hard and unlikely to be solved.

You didn’t respond to my point that defending against this type of technology does seem to require solving hard philosophical problems. What are your thoughts on this?

Automation technology (in an adversarial context) is kind of like a very big gun. It projects a lot of force. It can destroy lots of things if you point it wrong. It might be hard to point at the right target. And you might kill or incapacitate yourself if you do something wrong. But it’s inherently stupid, and has no agency by itself. You don’t have to solve philosophy to deal with large guns, you just have to do some combination of (a) figure out how to wield them to do good with them, (b) get people to stop using them, (c) find strategies for fighting against them, or (d) defend against them. (Certainly, some of these things involve philosophy, but they don’t necessarily require fully formalizing anything). The threat is different in kind from that of a fully-automated autopoietic cognitive system, which is more like a big gun possessed by an alien soul.



reply


My confusion is the following:

Premises (*) and inferences (=>):

  • The primary way for the agent to avoid traps is to delegate to a soft-maximiser.

  • Any action with boundedly negative utility will be taken by a soft-maximiser with positive probability.

  • Actions leading to traps do not have infinitely negative utility.

=> The agent will fall into traps with positive probability.

  • If the agent falls into a trap with positive probability, then it will have linear regret.

=> The agent will have linear regret.

So when you say in the beginning of the post “a Bayesian DIRL agent is guaranteed to attain most of the value”, you must mean that in a different sense than a regret sense?


Your confusion is because you are thinking about regret in an anytime setting. In an anytime setting, there is a fixed policy \(\pi\), we measure the expected reward of \(\pi\) over a time interval \(t\) and compare it to the optimal expected reward over the same time interval. If \(\pi\) has probability \(p > 0\) to walk into a trap, regret has the linear lower bound \(\Omega(pt)\).

On the other hand, I am talking about policies \(\pi_t\) that explicitly depend on the parameter \(t\) (I call this a “metapolicy”). Both the advisor and the agent policies are like that. As \(t\) goes to \(\infty\), the probability \(p(t)\) of walking into a trap goes to \(0\), so \(p(t)t\) is a sublinear function.

A second difference with the usual definition of regret is that I use an infinite sum of rewards with geometric time discount \(e^{-1/t}\) instead of a step function time discount that cuts off at \(t\). However, this second difference is entirely inessential, and all the theorems work about the same with step function time discount.
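To make the metapolicy point concrete, here is a toy numerical sketch (my own illustration under simplified assumptions, not the formal setting of the post): entering a trap is modeled as forfeiting all remaining reward, the fixed policy has a constant trap probability, and the metapolicy’s trap probability shrinks as \(t^{-1/3}\).

```python
# Toy illustration (assumed model, not the formal DIRL setting): if entering
# a trap forfeits all remaining reward, a policy with trap probability p has
# expected regret on the order of p * t over a horizon of t rounds.

def trap_regret(p, t):
    return p * t

horizons = [10 ** k for k in range(2, 7)]

# Fixed (anytime) policy: constant p -> regret grows linearly, Omega(p * t).
fixed = [trap_regret(0.1, t) for t in horizons]

# Metapolicy: p(t) = t ** (-1/3) -> regret p(t) * t = t ** (2/3), sublinear.
meta = [trap_regret(t ** (-1 / 3), t) for t in horizons]

for t, f, m in zip(horizons, fixed, meta):
    # per-round regret m / t vanishes for the metapolicy as t grows
    print(t, f, round(m, 2), round(m / t, 4))
```

Dividing by \(t\) shows the per-round regret of the metapolicy going to zero, while the fixed policy’s stays at \(0.1\).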

reply


The only assumptions about the prior are that it is supported on a countable set of hypotheses, and that in each hypothesis the advisor is \(\beta\)-rational (for some fixed \(\beta(t)=\omega(t^{2/3})\)).

There is no such thing as infinitely negative value in this framework. The utility function is bounded because of the geometric time discount (and because the momentary rewards are assumed to be bounded), and in fact I normalize it to lie in \([0,1]\) (see the equation defining \(\mathrm{U}\) in the beginning of the Results section).
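As a quick sanity check of the normalization (a sketch that assumes the utility has the usual normalized discounted-sum form \(\mathrm{U}=(1-\gamma)\sum_{n}\gamma^{n}r_{n}\) with \(\gamma=e^{-1/t}\); see the Results section for the actual definition):

```python
import math

# Assumed form of the normalized utility: U = (1 - gamma) * sum_n gamma^n * r_n,
# with gamma = exp(-1/t) and each momentary reward r_n in [0, 1].
# The (1 - gamma) prefactor keeps U in [0, 1] for every t.

def normalized_utility(rewards, t):
    gamma = math.exp(-1 / t)
    return (1 - gamma) * sum(r * gamma ** n for n, r in enumerate(rewards))

# All-ones rewards over a long horizon push U close to its upper bound of 1.
u = normalized_utility([1.0] * 10_000, t=100)
print(0.0 <= u <= 1.0)  # True
```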

Falling into a trap is an event associated with \(\Omega(1)\) loss (i.e. loss that remains constant as \(t\) goes to \(\infty\)). Therefore, we can risk such an event, as long as the probability is \(o(1)\) (i.e. goes to \(0\) as \(t\) goes to \(\infty\)). This means that as \(t\) grows, the agent will spend more rounds delegating to the advisor, but for any given \(t\), it will not delegate on most rounds (even on most of the important rounds, i.e. during the first \(O(t)\)-length “horizon”). In fact, you can see in the proof of Lemma A that the policy I construct delegates on \(O(t^{2/3})\) rounds.

As a simple example, consider again the toy environment from before. Consider also the environments you get from it by applying a permutation to the set of actions \(\mathcal{A}\). Thus, you get a hypothesis class of 6 environments. Then, the corresponding DIRL agent will spend \(O(t^{2/3})\) rounds delegating, observe which action is chosen by the advisor most frequently, and perform this action forevermore. (The phenomenon that all delegations happen in the beginning is specific to this toy example, because it only has 1 non-trap state.)
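The toy behaviour can be sketched in a few lines of code (hypothetical parameters of my own: 3 actions, an advisor that picks the good action 80% of the time, and a delegation budget of exactly \(\lfloor t^{2/3}\rfloor\) rounds; the actual construction in the post is more subtle):

```python
import random

# Hypothetical sketch of the toy DIRL behaviour described above: delegate for
# about t^(2/3) rounds, then commit forever to the action the advisor chose
# most frequently. Advisor accuracy and the schedule are illustrative assumptions.

def dirl_toy(t, good_action, advisor_accuracy=0.8, seed=0):
    rng = random.Random(seed)
    actions = [0, 1, 2]
    counts = {a: 0 for a in actions}
    n_delegate = int(t ** (2 / 3))  # O(t^(2/3)) delegation rounds
    for _ in range(n_delegate):
        if rng.random() < advisor_accuracy:
            counts[good_action] += 1  # advisor picks the good action
        else:
            counts[rng.choice([a for a in actions if a != good_action])] += 1
    committed = max(actions, key=counts.get)  # performed forevermore
    return n_delegate, committed

n, a = dirl_toy(t=10_000, good_action=2)
print(n, a)  # 464 delegation rounds; the committed action matches the advisor's
```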

If you mean this paper: yes, I saw it.



reply


It seems quite plausible that we’ll live to see a world where it’s considered dicey for your browser to uncritically display sentences written by an untrusted party.

How would your browser know who can be trusted, if any of your friends and advisers could be corrupted at any given moment (or just their accounts taken over by malware and used to spread optimized disinformation)?

The case you describe seems clearly contrary to my preferences about how I should reflect.

How would an automated system help you avoid it, aside from blocking off all outside contact? (I doubt I’d be able to ever figure out what my values actually are / should be, if I had to do it without talking to other humans.) If you’re thinking of some sort of meta-execution-style system to help you analyze arguments and distinguish between correct arguments and merely convincing ones, I think that involves solving hard philosophical problems. My understanding is that Jessica agrees with me on that, so I was asking why she doesn’t think the same problem applies in the non-autopoietic automation scenario.



reply


I hope you stay engaged with the AI risk discussions and maintain your credibility. I’m really worried about the self-selection effect where people who think AI alignment is really hard end up quitting or not working in the field in the first place, and then it appears to outsiders that all of the AI safety experts don’t think the problem is that hard.

I’m also worried about something like this, though I would state the risk as “mass insanity” rather than “value drift”. (“Value drift” brings to mind an individual or group trying to preserve their current object-level values, rather than trying to preserve somewhat-universal human values and sane reflection processes)

I’m envisioning that in the future there will also be systems where you can input any conclusion that you want to argue (including moral conclusions) and the target audience, and the system will give you the most convincing arguments for it. At that point people won’t be able to participate in any online (or offline for that matter) discussions without risking their object-level values being hijacked.

You didn’t respond to my point that defending against this type of technology does seem to require solving hard philosophical problems. What are your thoughts on this?



reply


defending against this type of technology does seem to require solving hard philosophical problems

Why is this?

The case you describe seems clearly contrary to my preferences about how I should reflect. So a system which helped me implement my preferences would help me avoid this situation (in the same way that it would help me avoid being shot, or giving malware access to valuable computing resources).

It seems quite plausible that we’ll live to see a world where it’s considered dicey for your browser to uncritically display sentences written by an untrusted party.



reply




  1. That makes sense.

  2. OK, it seems like I misinterpreted your comment on philosophy. But in this post you seem to be saying that we might not need to solve philosophical problems related to epistemology and agency?

  3. That concept also seems useful and different from autopoiesis as I understand it (since it requires continual human cognitive work to run, though not very much).


  1. I think that we can avoid coming up with a good decision theory or priors or so on—there are particular reasons that we might have had to solve philosophical problems, which I think we can dodge. But I agree that we need or want to solve some philosophical problems to align AGI (e.g. defining corrigibility precisely is a philosophical problem).

reply


I’m curious what initially triggered this.

I tried to solve the problem and found that I thought it was very hard to make the sort of substantial progress that would meaningfully bridge the gap from our current epistemic/philosophical state to the state where the problem is largely solved. I did make incremental progress, but not the sort of incremental progress I saw as attacking the really hard problems. Towards the later parts of my work at MIRI, I was doing research that seemed to be largely overlapping with complex systems theory (in order to reason about how to align autopoietic systems similar to evolution) in a way that made it hard to imagine that I’d come up with useful crisp formal definitions/proofs/etc.

This seems a bit low, given that there’s a number of disjunctive ways that it could happen.

I feel like saying 2% now. Not sure what caused the update.

I’m pretty worried that such technology will accelerate value drift within the current autopoietic system.

I’m also worried about something like this, though I would state the risk as “mass insanity” rather than “value drift”. (“Value drift” brings to mind an individual or group trying to preserve their current object-level values, rather than trying to preserve somewhat-universal human values and sane reflection processes)



reply


  1. A competitive system can use a very large number of human hours in the future, as long as it uses relatively few human hours today.

  2. By “lack of philosophical understanding isn’t a big risk” I meant: “getting object-level philosophy questions wrong in the immediate future, like how to trade off speed vs. safety or how to compromise amongst different values, doesn’t seem to destroy too much value in expectation.” We may or may not need to solve philosophical problems to build aligned AGI. (I think Wei Dai believes that object-level philosophical errors destroy a lot of value in expectation.)

  3. I think autopoietic is a useful category and captures half of what is interesting about “recursively self-improving AGI.” There is a slightly different economic concept, of automation that can be scaled up using fixed human inputs, without strongly diminishing returns. This would be relevant because it changes the character and pace of economic growth. It’s not clear whether this is equivalent to autopoiesis. For example, Elon Musk seems to hope for technology which is non-autopoietic but has nearly the same transformative economic impact. (Your view in this post is similar to my best guess at Elon Musk’s view, though more clearly articulated / philosophically crisp.)



reply


I have recently come to the opinion that AGI alignment is probably extremely hard.

I’m curious what initially triggered this.

My snap judgment is to assign about 1% probability to humanity solving this problem in the next 20 years.

This seems a bit low, given that there’s a number of disjunctive ways that it could happen. Besides MIRI and Paul’s approaches, there’s IRL (and related ideas), and using ML to directly imitate humans (including long-term behaviors and thought processes). The last one doesn’t seem to necessarily require solving many philosophical problems. Oh, there’s also whole brain emulation.

But in an important sense, non-autopoietic cognitive systems are “just another technology” contiguous with other automation technology, and managing them doesn’t require doing anything like wrapping up large parts of philosophy.

I’m pretty worried that such technology will accelerate value drift within the current autopoietic system. Considering that we already have things like automation-mediated social media addiction / echo chambers and automation-enhanced propaganda/disinformation, the situation seems likely to get worse as technology keeps improving. The underlying problem here appears to be that it’s easier to apply automation technology when the goal can be clearly defined and measured. We know how to define and measure things like engagement and making someone believe something; we don’t know how to define and measure normative correctness. We seem to need that to help defend against those offensive technologies and prevent value drift.



reply

Autopoietic systems and difficulty of AGI alignment
post by Jessica Taylor 33 days ago | Ryan Carey, Owen Cotton-Barratt and Paul Christiano like this | 13 comments

I have recently come to the opinion that AGI alignment is probably extremely hard. But it’s not clear exactly what AGI or AGI alignment are. And there are some forms of alignment of “AI” systems that are easy. Here I operationalize “AGI” and “AGI alignment” in some different ways and evaluate their difficulties.

continue reading »



From my perspective, I don’t think it’s been adequately established that we should prefer updateless CDT to updateless EDT

I agree with this.

It would be nice to have an example which doesn’t arise from an obviously bad agent design, but I don’t have one.

I’d also be interested in finding such a problem.

I am not sure whether your smoking lesion steelman actually makes a decisive case against evidential decision theory. If an agent knows about their utility function on some level, but not on the epistemic level, then this can just as well be made into a counter-example to causal decision theory. For example, consider a decision problem with the following payoff matrix:

Smoke-lover:

  • Smokes:
    • Killed: 10
    • Not killed: -90
  • Doesn’t smoke:
    • Killed: 0
    • Not killed: 0

Non-smoke-lover:

  • Smokes:
    • Killed: -100
    • Not killed: -100
  • Doesn’t smoke:
    • Killed: 0
    • Not killed: 0

For some reason, the agent doesn’t care whether they live or die. Also, let’s say that smoking makes a smoke-lover happy, but afterwards, they get terribly sick and lose 100 utilons. So they would only smoke if they knew they were going to be killed afterwards. The non-smoke-lover doesn’t want to smoke in any case.

Now, smoke-loving evidential decision theorists rightly choose smoking: they know that robots with a non-smoke-loving utility function would never have any reason to smoke, no matter which probabilities they assign. So if they end up smoking, then this means they are certainly smoke-lovers. It follows that they will be killed, and conditional on that state, smoking gives 10 more utility than not smoking.

Causal decision theory, on the other hand, seems to recommend a suboptimal action. Let \(a_1\) be smoking, \(a_2\) not smoking, \(S_1\) being a smoke-lover, and \(S_2\) being a non-smoke-lover. Moreover, say the prior probability \(P(S_1)\) is \(0.5\). Then, for a smoke-loving CDT bot, the expected utility of smoking is just

\(\mathbb{E}[U|a_1]=P(S_1)\cdot U(S_1\wedge a_1)+P(S_2)\cdot U(S_2\wedge a_1)=0.5\cdot 10 + 0.5\cdot (-90) = -40\),

which is less than the certain \(0\) utilons for \(a_2\). Assigning a credence of around \(1\) to \(P(S_1|a_1)\), a smoke-loving EDT bot calculates

\(\mathbb{E}[U|a_1]=P(S_1|a_1)\cdot U(S_1\wedge a_1)+P(S_2|a_1)\cdot U(S_2\wedge a_1)\approx 1 \cdot 10 + 0\cdot (-90) = 10\),

which is higher than the expected utility of \(a_2\).
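The two calculations are easy to check numerically. A minimal sketch (the dictionary and variable names are mine, not from the post); the utilities are the smoke-lover’s, and being killed coincides with being a smoke-lover (\(S_1\)):

```python
# Numerical check of the CDT and EDT expected utilities computed above.
u_smoke = {"S1": 10, "S2": -90}    # U(S1 ∧ a1), U(S2 ∧ a1) for a smoke-lover
P_prior = {"S1": 0.5, "S2": 0.5}   # prior over the agent's own type

# CDT weights the outcomes by the prior, unmoved by the action:
eu_cdt_smoke = sum(P_prior[s] * u_smoke[s] for s in u_smoke)
# 0.5 * 10 + 0.5 * (-90) = -40.0, worse than the certain 0 of not smoking.

# EDT conditions the type on the action, so P(S1 | a1) ≈ 1:
P_given_smoke = {"S1": 1.0, "S2": 0.0}
eu_edt_smoke = sum(P_given_smoke[s] * u_smoke[s] for s in u_smoke)
# 1.0 * 10 + 0.0 * (-90) = 10.0, so the EDT bot smokes.
```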

The reason CDT fails here doesn’t seem to lie in a mistaken causal structure. Also, I’m not sure whether the problem for EDT in the smoking lesion steelman is really that it can’t condition on all its inputs. If EDT can’t condition on something, then EDT doesn’t account for this information, but this doesn’t seem to be a problem per se.

In my opinion, the problem lies in an inconsistency in the expected utility equations. Smoke-loving EDT bots calculate the probability of being a non-smoke-lover, but then the utility they get is actually the one from being a smoke-lover. For this reason, they can get some “back-handed” information about their own utility function from their actions. The agents basically fail to condition two factors of the same product on the same knowledge.

Say we don’t know our own utility function on an epistemic level. Ordinarily, we would calculate the expected utility of an action, both as smoke-lovers and as non-smoke-lovers, as follows:

\(\mathbb{E}[U|a]=P(S_1|a)\cdot \mathbb{E}[U|S_1, a]+P(S_2|a)\cdot \mathbb{E}[U|S_2, a]\),

where, if \(U_{1}\) (\(U_{2}\)) is the utility function of a smoke-lover (non-smoke-lover), \(\mathbb{E}[U|S_i, a]\) is equal to \(\mathbb{E}[U_{i}|a]\). In this case, we don’t get any information about our utility function from our own action, and hence, no Newcomb-like problem arises.
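With the payoff matrices above, this consistent version can be sketched as follows (the `eu` helper and dictionaries are my own notation; being killed coincides with being a smoke-lover):

```python
# u1: smoke-lover's utilities (a smoke-lover is killed, so smoking pays +10);
# u2: non-smoke-lover's utilities (-100 for smoking either way).
u1 = {"smoke": 10, "no_smoke": 0}
u2 = {"smoke": -100, "no_smoke": 0}

def eu(a, p_s1_given_a):
    """E[U|a] = P(S1|a)·E[U1|a] + P(S2|a)·E[U2|a]: each type's probability
    multiplies that same type's utility function, so the action carries no
    back-handed information about the agent's own utility function."""
    return p_s1_given_a * u1[a] + (1 - p_s1_given_a) * u2[a]
```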

I’m unsure whether there is any causal decision theory derivative that gets my case (or all other possible cases in this setting) right. It seems like as long as the agent isn’t certain to be a smoke-lover from the start, there are still payoffs for which CDT would (wrongly) choose not to smoke.

by Abram Demski 32 days ago | link | on: Smoking Lesion Steelman

Excellent example.

It seems to me, intuitively, that we should be able to get both the CDT feature of not thinking we can control our utility function through our actions and the EDT feature of taking the information into account.

Here’s a somewhat contrived decision theory which I think captures both effects. It only makes sense for binary decisions.

First, for each action, you compute the posterior probability of the causal parents conditional on taking that action. So, depending on details of the setup, smoking tells you that you’re likely to be a smoke-lover, and refusing to smoke tells you that you’re more likely to be a non-smoke-lover.

Then you take the action with the best “gain”: how much better you do, in comparison to the other action, keeping the parent probabilities the same:

\[\texttt{Gain}(a) = \mathbb{E}(U|a) - \mathbb{E}(U|a, \texttt{do}(\bar a))\]

(\(\mathbb{E}(U|a, \texttt{do}(\bar a))\) stands for the expectation on utility which you get by first Bayes-conditioning on \(a\), then causal-conditioning on its opposite.)

The idea is that you only want to compare each action to the relevant alternative. If you were to smoke, it means that you’re probably a smoker; you will likely be killed, but the relevant alternative is one where you’re also killed. In my scenario, the gain of smoking is +10. On the other hand, if you decide not to smoke, you’re probably not a smoker. That means the relevant alternative is smoking without being killed. In my scenario, the smoke-lover computes the gain of this action as -10. Therefore, the smoke-lover smokes.

(This only really shows the consistency of an equilibrium where the smoke-lover smokes: my argument contains the unjustified assumption that smoking is good evidence for being a smoke-lover and refusing to smoke is good evidence for not being one, which is only justified circularly by the conclusion.)

In your scenario, the smoke-lover computes the gain of smoking at +10, and the gain of not smoking at 0. So, again, the smoke-lover smokes.

The solution seems too ad hoc to really be right, but it does appear to capture something about the kind of reasoning required to do well on both problems.

reply

Cooperative Oracles: Introduction
post by Scott Garrabrant 130 days ago | Abram Demski, Jessica Taylor and Patrick LaVictoire like this | 1 comment

This is the first in a series of posts introducing a new tool called a Cooperative Oracle. All of these posts are joint work with Sam Eisenstat, Tsvi Benson-Tilsen, and Nisan Stiennon.

Here is my plan for posts in this sequence. I will update this as I go.

  1. Introduction
  2. Nonexploited Bargaining
  3. Stratified Pareto Optima and Almost Stratified Pareto Optima
  4. Definition and Existence Proof
  5. Alternate Notions of Dependency
continue reading »

I have stopped working on this sequence, because a coauthor is trying to write it up as a more formal paper instead.

reply


So this requires the agent’s prior to incorporate information about which states are potentially risky?

Because if there is always some probability of there being a risky action (with infinitely negative value), then regardless of how small the probability is and how large the penalty is for asking, the agent will always be better off asking.

(Did you see Owain Evans’s recent paper about trying to teach the agent to detect risky states?)


The only assumptions about the prior are that it is supported on a countable set of hypotheses, and that in each hypothesis the advisor is \(\beta\)-rational (for some fixed \(\beta(t)=\omega(t^{2/3})\)).

There is no such thing as infinitely negative value in this framework. The utility function is bounded because of the geometric time discount (and because the momentary rewards are assumed to be bounded), and in fact I normalize it to lie in \([0,1]\) (see the equation defining \(\mathrm{U}\) in the beginning of the Results section).
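For concreteness: with geometric discount parameter \(\gamma\) and momentary rewards \(r_n \in [0,1]\), the standard normalization of this kind (which I believe matches the one in the Results section, up to notation) is

\[\mathrm{U} = (1-\gamma)\sum_{n=0}^{\infty} \gamma^n r_n \in [0,1].\]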

Falling into a trap is an event associated with \(\Omega(1)\) loss (i.e. loss that remains constant as \(t\) goes to \(\infty\)). Therefore, we can risk such an event, as long as the probability is \(o(1)\) (i.e. goes to \(0\) as \(t\) goes to \(\infty\)). This means that as \(t\) grows, the agent will spend more rounds delegating to the advisor, but for any given \(t\), it will not delegate on most rounds (even on most of the important rounds, i.e. during the first \(O(t)\)-length “horizon”). In fact, you can see in the proof of Lemma A, that the policy I construct delegates on \(O(t^{2/3})\) rounds.

As a simple example, consider again the toy environment from before. Consider also the environments you get from it by applying a permutation to the set of actions \(\mathcal{A}\). Thus, you get a hypothesis class of 6 environments. Then, the corresponding DIRL agent will spend \(O(t^{2/3})\) rounds delegating, observe which action is chosen by the advisor most frequently, and perform this action forevermore. (The phenomenon that all delegations happen in the beginning is specific to this toy example, because it only has 1 non-trap state.)
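The behavior described in this toy example can be sketched in code. This is only the described behavior (delegate, tally, commit), not the actual DIRL algorithm, and the advisor’s probabilities are exaggerated for the demo:

```python
import random

def toy_dirl_agent(t, advisor_probs, seed=0):
    # Delegate for ~t^(2/3) rounds, then commit forever to the action
    # the advisor chose most often during those rounds.
    rng = random.Random(seed)
    actions = list(advisor_probs)
    weights = [advisor_probs[a] for a in actions]
    counts = {a: 0 for a in actions}
    for _ in range(int(t ** (2 / 3))):
        counts[rng.choices(actions, weights)[0]] += 1
    return max(counts, key=counts.get)

# Trap action 0 is (almost) never advised; action 2 is favored over action 1.
committed = toy_dirl_agent(t=10_000, advisor_probs={0: 1e-9, 1: 0.3, 2: 0.7})
# committed == 2: the agent learns the optimal action from ~464 delegations.
```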

If you mean this paper: yes, I saw it.

reply


If the agent always delegates to the advisor, it loses a large fraction of the value. Returning again to the simple example above, the advisor on its own is only guaranteed to get expected utility \(1/2 + \omega(t^{-1/3})\) (because it often takes the suboptimal action 1). On the other hand, for any prior over a countable set of environments that includes this one, the corresponding DIRL agent gets expected utility \(1 - o(1)\) on this environment (because it will learn to only take action 2). You can also add an external penalty for each delegation; adjusting the proof is straightforward.

So, the agent has to exercise judgement about whether to delegate, using its prior + past observations. For example, the policy I construct in Lemma A delegates iff there is no action whose expected loss (according to current beliefs) is less than \(\beta(t)^{-1}t^{-1/3}\).
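The quoted delegation rule from Lemma A can be written directly. Here \(\beta(t)\) is just a numeric parameter; the particular value in the usage lines is my own choice satisfying \(\beta(t)=\omega(t^{2/3})\):

```python
def should_delegate(expected_losses, beta_t, t):
    # Delegate iff no action's expected loss (under current beliefs)
    # is below the threshold beta(t)^(-1) * t^(-1/3).
    threshold = t ** (-1 / 3) / beta_t
    return all(loss >= threshold for loss in expected_losses)

t, beta_t = 1000, 1000 ** 0.7             # beta(t) = t^0.7 = omega(t^(2/3))
should_delegate([0.5, 0.3], beta_t, t)    # True: every action looks costly
should_delegate([0.5, 1e-6], beta_t, t)   # False: one action is near-optimal
```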



reply


Hi Vadim!

So basically the advisor will be increasingly careful as the cost of falling into the trap goes to infinity? Makes sense I guess.

What is the incentive for the agent not to always let the advisor choose? Is there always some probability that the advisor saves them from infinite loss, or only in certain situations that can be detected by the agent?



reply


Hi Tom!

There is a positive probability that the advisor falls into the trap, but this probability goes to \(0\) as the time discount parameter \(t\) goes to \(\infty\) (which is the limit I study here). This follows from the condition \(\beta(t)=\omega(t^{2/3})\) in the Theorem. To give a simple example, suppose that \(\mathcal{A}=\{0,1,2\}\) and the environment is s.t.:

  • When you take action 0, you fall into a trap and get reward 0 forever.

  • When you take action 1, you get reward 0 for the current round and remain in the same state.

  • When you take action 2, you get reward 1 for the current round (unless you are in the trap) and remain in the same state.

In this case, our advisor would have to take action 0 with probability \(\exp\left(-\omega\left(t^{2/3}\right)\right)\), and take action 2 more often than action 1 by a factor of \(\exp\left(\omega\left(t^{-1/3}\right)\right) \approx 1 + \omega\left(t^{-1/3}\right)\).
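One concrete family of \(\beta\)-rational advisors (my assumption for illustration; the post only fixes the asymptotics of \(\beta\)) is a softmax policy over normalized expected utilities. It reproduces both factors above: the trap action gets probability \(\exp(-\Omega(\beta(t)))\), and action 2 beats action 1 by a factor of \(\exp(\beta(t)/t)\):

```python
import math

def softmax_advisor(beta, q):
    # P(a) ∝ exp(beta * q[a]); subtract the max for numerical stability.
    m = max(q.values())
    w = {a: math.exp(beta * (v - m)) for a, v in q.items()}
    z = sum(w.values())
    return {a: x / z for a, x in w.items()}

t = 1000.0
beta = t ** 0.7   # one choice satisfying beta(t) = omega(t^(2/3))
# Normalized utility of each action: the trap loses Omega(1); actions 1 and 2
# differ by about one round's reward, i.e. ~1/t.
p = softmax_advisor(beta, {0: 0.0, 1: 0.5, 2: 0.5 + 1 / t})
# p[0] is vanishingly small, and p[2]/p[1] = exp(beta/t) ≈ 1 + beta/t.
```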



reply
