Intelligent Agent Foundations Forum
The odd counterfactuals of playing chicken
post by Benja Fallenstein | Kaya Stechly, Nate Soares, Patrick LaVictoire and Stuart Armstrong like this

In this post, I examine an odd consequence of “playing chicken with the universe”, as used in proof-based UDT. Let’s say that our agent uses PA, and that it has a provability oracle, so that if it doesn’t find a proof, there really isn’t one. In this case, one way of looking at UDT is to say that it treats the models of PA as impossible possible worlds: UDT thinks that taking action \(a\) leads to utility \(u\) iff the universe program \(U()\) returns \(u\) in all models \(\mathcal{M}\) in which \(A()\) returns \(a\). The chicken step ensures that for every \(a\), there is at least one model \(\mathcal{M}\) in which \(A()\) returns \(a\). But how? Well, even though in the “real world”, \(\mathbb{N}\), the sentence \(\ulcorner{\bar A() \neq \bar a}\urcorner\) isn’t provable—that is, \(\mathbb{N}\vDash\neg\square\ulcorner{\bar{\bar A}() \neq \bar{\bar a}}\urcorner\)—there are other models \(\mathcal{M}\) such that \(\mathcal{M}\vDash\square\ulcorner{\bar{\bar A}() \neq \bar{\bar a}}\urcorner\), and in these models, the chicken step can make \(A()\) output \(a\).

In general, the only “impossible possible worlds” in which \(A() = a\) are models \(\mathcal{M}\) according to which it is provable that \(A() \neq a\). In this post, I show that this odd way of constructing the counterfactual “what would happen if I did \(a\)” can cause problems for modal UDT and the corresponding notion of third-party counterfactuals.

To simplify things, I’ll consider modal UDT in this post. (The fact that our definition of modal UDT doesn’t have an explicit chicken-playing step will make some later arguments a little subtler than they otherwise would have been, but it’s not really a problem since it still does something very similar implicitly.)

I’ll consider a version of the 5-and-10 problem, which has two actions (\(1,2\)) and three outcomes (\(1,2,3\), ordered from best to worst by the convention I’ve adopted in my posts on modal UDT). If the agent takes action \(2\), it always gets outcome \(2\). If the agent takes action \(1\), it gets the optimal outcome \(1\), unless it’s provable that it doesn’t take action \(1\), in which case it gets the worst outcome, \(3\).

Of course, in the real world, if it’s provable that the agent doesn’t take a particular action, then it really doesn’t take that action. Hence, if the agent takes action \(1\) in the real world, it will receive the optimal outcome \(1\), so intuitively, it’s best to take action \(1\).

But modal UDT, as we’ll see, takes action \(2\) instead. This is self-consistent, because in the only “impossible possible worlds” in which it takes action \(1\), it does so because of (implicitly) playing chicken; that is, it only takes action \(1\) because, in these worlds, it’s “provable” that it doesn’t take action \(1\), implying that it receives the worst possible outcome in all of these worlds.

We model the above set-up as a modal decision problem, that is, as three formulas \(P_1(a_1,a_2)\), \(P_2(a_1,a_2)\) and \(P_3(a_1,a_2)\), such that \(P_j(a_1,a_2)\) is true iff the agent obtains outcome \(j\). Here, \(a_1\) and \(a_2\) are propositional variables; \(a_i\) is interpreted as meaning that the agent takes action \(i\). Thus, we can define \(\vec P(\vec a)\) as follows:

\[\begin{aligned} P_1(a_1,a_2) & \;\;{:}{{\leftrightarrow}}\;\; a_1 \,\wedge\, \neg\square\neg a_1 \\ P_2(a_1,a_2) & \;\;{:}{{\leftrightarrow}}\;\; a_2 \\ P_3(a_1,a_2) & \;\;{:}{{\leftrightarrow}}\;\; a_1 \,\wedge\, \phantom{\neg}\square\neg a_1 \\ \end{aligned}\]

Here’s a different possible reading. You may know the diamond operator, \(\Diamond\varphi\), which is an abbreviation for \(\neg\square\neg\varphi\). We have \(\mathbb{N}\vDash\square\varphi\) iff \(\varphi\) is provable, that is, iff it is true in all models of PA; we have \(\mathbb{N}\vDash\Diamond\varphi\) iff it isn’t provable that not \(\varphi\), i.e., iff there is some model of PA in which \(\varphi\) is true. (The relation between \(\square\) and \(\Diamond\) is like the relation between \(\forall\) and \(\exists\): we have \(\mathbb{N}\vDash\square\varphi\) iff \(\mathcal{M}\vDash\varphi\) for all models \(\mathcal{M}\), and we have \(\mathbb{N}\vDash\neg\square\neg\varphi\), i.e., \(\mathbb{N}\vDash\Diamond\varphi\), iff it is not the case that \(\mathcal{M}\vDash\neg\varphi\) for all models \(\mathcal{M}\), which is equivalent to saying that there exists a model \(\mathcal{M}\) such that \(\mathcal{M}\vDash\varphi\).) So the box can be read as “true in all models”, whereas the diamond can be read as “true in some model”, or “possible”. Using the diamond operator, we can rewrite the decision problem as

\[\begin{aligned} P_1(a_1,a_2) & \;\;{{\leftrightarrow}}\;\; a_1 \,\wedge\, \phantom{\neg}\Diamond a_1 \\ P_2(a_1,a_2) & \;\;{{\leftrightarrow}}\;\; a_2 \\ P_3(a_1,a_2) & \;\;{{\leftrightarrow}}\;\; a_1 \,\wedge\, \neg\Diamond a_1 \\ \end{aligned}\]

So the agent gets the good outcome \(1\) if it chooses action \(1\) and it “was possible” for it to choose action \(1\), whereas it gets the bad outcome \(3\) if it chooses action \(1\) but it “wasn’t possible” for it to choose this action.
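As a toy illustration (not part of the original formalism), the outcome table above can be sketched in Python, with a boolean flag standing in for the modal condition \(\square\neg a_1\):

```python
# Hypothetical sketch of the decision problem above: the outcome depends on
# the chosen action and on whether "the agent provably avoids action 1".
# `provably_not_1` stands in for the modal formula □¬a1; in the real world
# (the standard model N) it can only be true if the agent really avoids action 1.

def outcome(action: int, provably_not_1: bool) -> int:
    """Return the outcome (1 best, 3 worst) of the modal decision problem."""
    if action == 2:
        return 2          # P2: taking action 2 always yields outcome 2
    if provably_not_1:
        return 3          # P3: a1 ∧ □¬a1 yields the worst outcome
    return 1              # P1: a1 ∧ ¬□¬a1 yields the best outcome

# The agent that simply takes action 1: since PA is consistent, ¬□¬a1 holds
# in the real world, so this agent gets the best outcome.
print(outcome(1, False))  # → 1
```

This also mirrors the \((\top,\bot)\) example below: with `provably_not_1=False` (PA is consistent), taking action 1 yields outcome 1.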

For example, let’s consider the agent which just takes action \(1\) without any further reasoning, which we represent by substituting the pair \((\top,\bot)\) for the pair of variables \((a_1,a_2)\). By substitution, \(P_1(\top,\bot)\) is equivalent to \(\top\wedge\neg\square\neg\top\), which is equivalent to \(\neg\square\bot\), which states that PA is consistent (“no contradiction is provable”). Since \(\mathbb{N}\vDash\neg\square\bot\), it follows that “in the real world”, the agent \((\top,\bot)\) obtains the best possible outcome \(1\).

It turns out that modal UDT, on the other hand, will choose action \(2\) and receive a suboptimal payoff. Moreover, this is true even if we consider modal UDT with a stronger proof system than PA, as described in An optimality result for modal UDT. This means, of course, that the assumptions of the optimality theorem aren’t satisfied; this isn’t too surprising, because the decision problem refers to the agent’s action inside boxes (see Patrick’s recent post for a much simpler example of non-optimality when the decision problem refers to the agent inside boxes).

Let’s write \((A_1,A_2)\), or \(\vec A\) for short, for the two formulas describing the action of modal UDT on our decision problem, and let’s write \((U_1,U_2,U_3)\) for the formulas describing the outcome it obtains; that is, \(U_j\) is equivalent to \(P_j(A_1,A_2)\). (Technically, this means that \((\vec A,\vec U)\) is the fixed point of \(\vec P(\vec a)\) with \(\vec{\mathrm{UDT}}(\vec u)\); refer to the optimality post and the references therein for technical details.)

I claim that \(\mathrm{GL}\vdash A_1\to\square\neg A_1\); that is: we can show that if UDT takes action \(1\), then it isn’t provable that it takes action \(1\). Which, of course, implies that \(\mathrm{GL}\vdash A_1\to U_3\), since \(U_3\) is equivalent to \(A_1\wedge\square\neg A_1\); UDT concludes that taking action \(1\) will lead to the worst possible outcome, \(3\). We can also think of this in terms of possible worlds: In all models \(\mathcal{M}\) of PA (all “impossible possible worlds”, the way that modal UDT evaluates counterfactuals) in which our agent takes action \(1\), i.e., in all models satisfying \(\mathcal{M}\vDash A_1\), we also have \(\mathcal{M}\vDash U_3\).

All of the above implies that “in the real world”, UDT must take action \(2\), i.e., \(\mathbb{N}\vDash A_2\). The most direct way to conclude this, once I’ve proven my claim that \(\mathrm{GL}\vdash A_1\to\square\neg A_1\), is by a quick proof of contradiction: Suppose that the agent took action \(1\) (\(\mathbb{N}\vDash A_1\)). Then by my claim, it would be provable that it doesn’t take action \(1\) (\(\mathbb{N}\vDash\square\neg A_1\)). But this would mean that PA is inconsistent; contradiction.

I’ll now proceed to prove my claim. I guess you may not be surprised if I tell you I’ll do so by Löb’s theorem; that is, I’ll actually prove that \(\mathrm{GL}\vdash\square(A_1\to\square\neg A_1)\to(A_1\to\square\neg A_1)\).

This may look like an opaque collection of boxes, but actually there’s some interpretation. First of all, \(\square(A_1\to\square\neg A_1)\) implies \(\square(A_1\to U_3)\), by taking the argument I made in the previous section and carrying it out inside the boxes: If we have \(A_1\) and \(A_1\to\square\neg A_1\), then we have \(A_1\wedge\square\neg A_1\), which is the same as \(P_3(A_1,A_2)\), which is equivalent to \(U_3\).

So it’s enough if we can prove \(\mathrm{GL}\vdash\square(A_1\to U_3)\to(A_1\to\square\neg A_1)\), or equivalently, \(\mathrm{GL}\vdash[A_1\wedge\square(A_1\to U_3)]\to\square\neg A_1\). Now consider the way modal UDT operates:

  • For all outcomes \(j\) from 1 to 3:
    • For all actions \(i\) from 1 to 2:
      • If you can prove \(A_i\to U_j\), return \(i\).
  • If you’re still here, return a default action.
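The search loop above can be sketched in Python. The proof search is the whole difficulty, of course, so here `provable` is just a stand-in callback for “GL proves \(A_i\to U_j\)”:

```python
# A minimal sketch of modal UDT's search loop. `provable(i, j)` stands in for
# the agent's proof search for A_i → U_j; outcomes are ordered best to worst.

from typing import Callable

def modal_udt(num_outcomes: int, num_actions: int,
              provable: Callable[[int, int], bool],
              default_action: int) -> int:
    """Return the first action i (searching best outcomes j first) with a
    'proof' of A_i → U_j, or the default action if none is found."""
    for j in range(1, num_outcomes + 1):      # outcomes, best to worst
        for i in range(1, num_actions + 1):   # actions, in order
            if provable(i, j):
                return i
    return default_action

# In this post's decision problem, the only provable implications are
# A_1 → U_3 and A_2 → U_2, so the search stops at i = j = 2:
provable = lambda i, j: (i, j) in {(1, 3), (2, 2)}
print(modal_udt(3, 2, provable, default_action=1))  # → 2
```

The usage example at the bottom reproduces the behavior shown in this post: since \(A_1\to U_1\) and \(A_1\to U_2\) are not provable, the loop reaches \(i=j=2\) and returns action \(2\).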

Consider, furthermore, that we clearly have \(\mathrm{GL}\vdash\square(A_2\to U_2)\) (by definition, we have \(\mathrm{GL}\vdash U_2{\leftrightarrow}A_2\), which yields \(\mathrm{GL}\vdash A_2\to U_2\) and hence \(\mathrm{GL}\vdash\square(A_2\to U_2)\)). Thus, the search will stop at \(i=j=2\) at the latest, and by using this fact, GL can show that the only way \(A_1\) could be true is if we have \(\square(A_1\to U_1)\) or \(\square(A_1\to U_2)\). But if either of these is true, and we also have \(\square(A_1\to U_3)\), then it follows that \(\square\neg A_1\) (since \((U_1,U_2,U_3)\) are provably mutually exclusive, i.e., only one of them can be true).

This proves \(\mathrm{GL}\vdash[A_1\wedge\square(A_1\to U_3)]\to\square\neg A_1\), which concludes the proof of my claim.

But is this really so bad? Sure, modal UDT performs suboptimally on this decision problem, but we already know that every agent behaves badly on its “evil” decision problems, for example, and these problems are a lot simpler to reason about than what I’ve discussed here.

Moreover, the proof in this post only works for modal UDT using PA as its proof system, not for variants using proof systems stronger than PA. (You could define an analogous decision problem for these stronger proof systems, but without changing the decision problem, the proof only works for \(\mathrm{PA}\)-based UDT.) Now, the optimality result for modal UDT deals with the “evil” decision problems by saying that for every provably extensional decision problem, there’s some \(n_0\) such that modal UDT using \(\mathrm{PA}+ n\), \(n\ge n_0\), performs optimally on this decision problem. This result probably doesn’t apply to the decision problem I’ve discussed in this post, because that’s probably not provably extensional (though I haven’t actually tried to prove this); but we might still say, why care if \(\mathrm{PA}\)-based UDT fails on some specific decision problem? We already knew that you need to use stronger proof systems sometimes.

The real reason to consider this problem becomes clear when we consider the logical third-person counterfactuals corresponding to modal UDT.

First of all, note that the optimality picture looks quite different when using logical counterfactuals, which ask “what would have happened if this agent had chosen a different action”, rather than the physical counterfactuals from the optimality result, which ask “what would happen if you replaced the agent by a different agent”.

Consider the “evil” decision problems: Fix a two-action agent \(\vec A' \equiv (A_1',A_2')\). The corresponding evil decision problem, \(\vec P'(\vec a) \equiv (P'_1(a_1,a_2),P'_2(a_1,a_2))\), compares the action the agent \(\vec a\) is taking against the action taken by \(\vec A'\); if they’re different, it rewards the agent \(\vec a\); if they’re the same, it punishes \(\vec a\). Clearly, every agent gets punished on its own evil decision problem, even though there’s a very simple agent which gets rewarded on the same problem.
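As a toy sketch (with the modal details elided, and the agents simplified to plain action values), an evil decision problem for a fixed reference agent looks like this:

```python
# Sketch of an "evil" decision problem for a fixed reference agent: it punishes
# any agent whose action matches the reference agent's, and rewards any agent
# whose action differs. Agents are reduced to the action they take.

def evil_problem(reference_action: int):
    """Build a decision problem that punishes exactly the reference action."""
    def run(agent_action: int) -> str:
        return "punished" if agent_action == reference_action else "rewarded"
    return run

# The reference agent is always punished on its own evil problem, while a
# different (very simple) agent taking the other action is rewarded:
problem = evil_problem(reference_action=1)
print(problem(1))  # → punished
print(problem(2))  # → rewarded
```

This makes the physical-vs-logical distinction concrete: physical counterfactuals compare `problem(1)` against `problem(2)` run by a *different* agent, while logical counterfactuals ask what *this* agent would have gotten by outputting the other action, which is still `"punished"` from its own perspective, as the next paragraph explains.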

But that’s judging the agent through physical counterfactuals. When we consider logical counterfactuals, we don’t compare the agent against other agents; rather, we ask whether this agent could have done better by taking a different action. From that perspective, the evil problems don’t seem so problematic: It’s intuitively clear, and our proof-based third-person counterfactuals agree, that if an agent returned a different action, it would still be punished by its evil decision problem.

It’s true that we can’t expect PA-based UDT to be optimal, even from the perspective of logical third-party counterfactuals. After all, consider a decision problem that rewards action \(1\) iff PA is consistent, and action \(2\) otherwise. In this case, modal UDT will not be able to prove anything of the form \(A_i\to U_j\), and will end up taking its default action, which could end up being the bad action \(2\).

However, from the perspective of proof-based third-party counterfactuals, this is acceptable. In the language of my previous post, we have “ambiguous counterfactuals”: we can’t figure out what outcome each action leads to, and so we throw up our hands and make peace with the fact that our agent may not behave optimally in any sense. I showed in the previous post that modal UDT is “optimal” in a certain sense whenever a universe is “fully informative”, in the sense that for every action \(i\) there’s a unique \(j\) for which it can prove \(A_i\to U_j\), or at least “sufficiently informative”, which is a slight weakening of this condition. We only have reason to expect modal UDT to behave well when it’s in a sufficiently informative universe; the decision problem that rewards \(1\) iff PA is consistent doesn’t satisfy this condition (if we use PA as our proof system), and so we don’t expect PA-based modal UDT to do well on it in the first place. The fact that the decision problem fails to be sufficiently informative alerts us that we shouldn’t expect optimality.

But now consider the decision problem discussed in this post! This decision problem is fully informative—we obviously have \(\mathrm{GL}\vdash A_2\to U_2\); we have \(\mathrm{GL}\nvdash\neg A_2\), because we’ve shown that the agent actually takes action \(2\) (i.e., \(\mathbb{N}\vDash A_2\)), and \(\mathrm{GL}\) is sound; we’ve shown \(\mathrm{GL}\vdash A_1\to U_3\); and we have \(\mathrm{GL}\nvdash\neg A_1\), because otherwise, \(A_1\to U_1\) would be provable and our agent would take action \(1\). By the proof in the previous post, this implies that modal UDT performs “optimally”, according to the notion of optimality defined there—that is, it acts optimally if you believe that taking action \(1\) leads to the bad outcome \(3\), as examining provability in PA would lead you to believe. So unlike the problem discussed in the previous paragraph, in this case the third-party counterfactuals don’t alert us to the fact that something is going wrong, and instead lead us to expect optimality—and yet we get arguably suboptimal behavior.

So the problem I want to point out in this post is that both modal UDT and the proof-based third-party counterfactuals defined in my previous post use a problematic notion of counterfactuals, related to the implicit chicken-playing step; in this case, this leads them to believe that taking action \(1\) leads to the bad outcome \(3\), and to believe that they know what they’re talking about (we’re in the fully informative case, where for every action we seem to know exactly what outcome it would lead to), even though intuitively, it seems like taking action \(1\) should be thought of as leading to the optimal outcome \(1\).




