
I (just yesterday) found a counterexample to this. The universe is a 5-and-10 variant that uses the unprovability of consistency:

def U():
  if A() == 2:
    if PA is consistent:  # shorthand for the arithmetized statement Con(PA)
      return 10
    else:
      return 0
  else:
    return 5

The agent can be taken to be modal UDT, using PA as its theory. (The example will still work for other theories extending PA; we just need the universe’s theory to include the agent’s. Also, to simplify some later arguments, we suppose that the agent uses the chicken rule and that it checks action 1 first, then action 2.) Since the agent cannot prove the consistency of its theory, it will not be able to prove \(\tt{A() = 2} \to \tt{U() = 10}\), so the first implication that it can prove is \(\tt{A() = 1} \to \tt{U() = 5}\). Thus, it will end up taking action 1.
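
For concreteness, the proof search I have in mind looks roughly like the following sketch, in the same style as the universe above. Here provable is just a stand-in for a search for a PA-proof of the given sentence (not something we can actually run), and the sentence strings are schematic.

def provable(sentence):
  # Stand-in for a search for a PA-proof of `sentence`; in modal UDT this is
  # handled by provability logic rather than by an actual proof search.
  raise NotImplementedError

def A():
  actions = [1, 2]        # action 1 is checked before action 2
  utilities = [10, 5, 0]  # possible utilities, in decreasing order
  # Chicken rule: if PA proves that some action is not taken, take that action.
  for a in actions:
    if provable("A() != %d" % a):
      return a
  # Otherwise, look for proofs of "A() = a -> U() = u", best utility first,
  # action 1 before action 2.
  for u in utilities:
    for a in actions:
      if provable("A() = %d -> U() = %d" % (a, u)):
        return a
  return actions[0]  # default action if no implication is found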

Now, we work in PA and try to show \(\tt{A() = 2} \to \tt{U() = 0}\). If PA is inconsistent (we have to account for this case since we are working in PA), then \(\tt{A() = 2} \to \tt{U() = 0}\) follows straightforwardly. Next, we consider the case that PA is consistent and work through the agent’s decision. PA can’t prove \(\tt{A() \ne 1}\), since we used the chicken rule, and since the sentence \(\tt{A() = 1} \to \tt{U() = 5}\) is easily provable, the sentence \(\tt{A() = 1} \to \tt{U() = 10}\) (i.e. the first sentence that the agent checks for proofs of) must be unprovable, as otherwise PA would prove \(\tt{A() \ne 1}\).

The next sentence we check is \(\tt{A() = 2} \to \tt{U() = 10}\). If the agent finds a proof of this, then it takes action 2. Otherwise, it moves on to the sentence \(\tt{A() = 1} \to \tt{U() = 5}\), which is easily provable as mentioned above, and it takes action 1. Hence, the agent takes action 2 iff it can prove \(\tt{A() = 2} \to \tt{U() = 10}\), so \[\tt{A() = 2} \leftrightarrow \square(\tt{A() = 2} \to \tt{U() = 10}).\] Löb’s theorem tells us that \[\square(\tt{U() = 10}) \leftrightarrow \square(\square(\tt{U() = 10}) \to \tt{U() = 10}),\] so, by the uniqueness of fixed points, it follows that \(\tt{A() = 2} \leftrightarrow \square(\tt{U() = 10})\). Then, we get \(\tt{A() = 2} \rightarrow \square(\tt{U() = 10})\), so \(\tt{A() = 2} \rightarrow \square(\neg \square \bot)\) by the definition of the universe, and so \(\tt{A() = 2} \rightarrow \square \bot\) by the formalized second incompleteness theorem. Thus, if the agent takes action 2, then PA is inconsistent, so \(\tt{U() = 0}\) as desired.
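
To collect the steps of this case in one place (schematically; all of this reasoning happens inside PA, under the assumption that PA is consistent):

\[
\begin{aligned}
&\tt{A() = 2} \leftrightarrow \square(\tt{A() = 2} \to \tt{U() = 10}) && \text{(the proof search, as above)}\\
&\square(\tt{U() = 10}) \leftrightarrow \square(\square(\tt{U() = 10}) \to \tt{U() = 10}) && \text{(Löb’s theorem)}\\
&\tt{A() = 2} \leftrightarrow \square(\tt{U() = 10}) && \text{(uniqueness of fixed points)}\\
&\tt{A() = 2} \to \square(\neg \square \bot) && \text{(}\tt{U() = 10}\text{ requires that PA is consistent)}\\
&\tt{A() = 2} \to \square \bot && \text{(formalized second incompleteness theorem)}\\
&\tt{A() = 2} \to \tt{U() = 0} && \text{(definition of }\tt{U}\text{).}
\end{aligned}
\]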

This tells us that \(\rm{PA} \vdash \tt{A() = 2} \to \tt{U() = 0}\). Also, \(\rm{PA} \nvdash \tt{A() \ne 2}\) by the chicken rule, so \(\rm{PA} \nvdash \tt{A() = 2} \to \tt{U() \ne 0}\), since a proof of the latter, combined with the above, would give a proof of \(\tt{A() \ne 2}\). Since PA does not prove \(\tt{A() = 2} \to \tt{U() \ne 0}\) at all, the shortest proof of \(\tt{A() = 2} \to \tt{U() = 0}\) is much shorter than the shortest proof of \(\tt{A() = 2} \to \tt{U() \ne 0}\) for any definition of “much shorter”. (One can object here that there is no shortest proof, but (a) it seems natural to define the “length of the shortest proof” to be infinite when there is no proof, and (b) it is probably straightforward but tedious to modify the agent and universe so that there is a proof of \(\tt{A() = 2} \to \tt{U() \ne 0}\), but only a very long one.)

However, it is clear that \(\tt{U() = 0}\) is not a legitimate counterfactual consequence of \(\tt{A() = 2}\). Informally, if the agent had chosen action 2, it would have received utility 10, since PA is in fact consistent. Thus, we have a counterexample.

One issue we discussed during the workshop is whether counterfactuals should be defined with respect to a state of knowledge. We may want to say here that we, who know a lot, are in a state of knowledge with respect to which \(\tt{A() = 2}\) would counterfactually result in \(\tt{U() = 10}\), but that someone who reasons in PA is in a state of knowledge with respect to which it would result in \(\tt{U() = 0}\). One way to think about this is that we know that PA is consistent irrespective of how the agent acts, whereas PA does not know that it is consistent, which allows an agent using PA to think of itself as counterfactually controlling PA’s consistency. Indeed, this is roughly how the argument above proceeds.

I’m not sure that this is a good way of thinking about it, though. The agent goes through some weird steps, most notably a rather opaque application of the fixed point theorem, so I don’t have a good feel for why it is reasoning this way. I want to unwrap that argument before I can say whether it is doing something that, on an intuitive level, constitutes legitimate counterfactual reasoning.

More worryingly, the perspective that counterfactuals are defined with respect to states of knowledge seems to be at odds with PA believing a wrong counterfactual here. It would make sense for PA not to have enough information to make any statement about the counterfactual consequences of \(\tt{A() = 2}\), but that is not what happens if we think of PA’s counterfactuals as obeying this conjecture; instead, PA postulates a causal mechanism by which the agent controls the consistency of PA, which we did not expect to be there at all. Maybe it would all make sense if I had a deeper understanding of the proof I gave, but right now it is very odd.

(This is rather long; would anyone prefer that I clean up a few things and turn it into a post? I’ll also expand on the issue I mention at the end when I have more time to think about it.)



by Benja Fallenstein 945 days ago | Patrick LaVictoire likes this

Next, we consider the case that PA is consistent and work through the agent’s decision. PA can’t prove \(A()\neq1\), since we used the chicken rule, and since the sentence \(A()=1\to U()=5\) is easily provable, the sentence \(A()=1\to U()=10\) (i.e. the first sentence that the agent checks for proofs of) must be unprovable.

It seems like this argument needs soundness of PA, not just consistency of PA. Do you see a way to prove in PA that if \(\mathrm{PA}\vdash A()\neq 1\), then PA is inconsistent?

[edited to add:] However, your idea reminds me of my post on the odd counterfactuals of playing chicken, and I think the example I gave there makes your idea go through:

The scenario is that you get 10 if you take action 1 and it’s not provable that you don’t take action 1; you get 5 if you take action 2; and you get 0 if you take action 1 and it’s provable that you don’t. Clearly you should take action 1, but I prove that modal UDT actually takes action 2. To do so, I show that PA proves \(A() = 1 \to \neg\square\ulcorner A() = 1\urcorner\). (Then, from the outside, \(A() = 2\) follows by soundness of PA.)
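
In the same pseudocode style as your universe, the scenario is (where “PA proves …” is shorthand in the same way that “PA is consistent” is above):

def U():
  if A() == 1:
    if PA proves "A() != 1":
      return 0
    else:
      return 10
  else:
    return 5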

This seems to make your argument go through if we can also show that PA doesn’t prove \(A() \neq 1\). But if it did, then modal UDT would take action 1, because this comes first in its proof search, a contradiction.

Thus, PA proves \(A() = 1 \to U() = 0\) (because this follows from \(A() = 1 \to \neg\square\ulcorner A() = 1\urcorner\)), and also PA doesn’t prove \(A() = 1 \to U() = 10\). As in your argument, then, the trolljecture implies that we should think “if the agent takes action 1, it gets utility 0” is a good counterfactual, and we don’t think that’s true.

Still interested in whether you can make your argument go through in your case as well, especially if you can use the chicken step in a way I’m not seeing yet. Like Patrick, I’d encourage you to develop this into a post.


by Sam Eisenstat 944 days ago | Benja Fallenstein and Patrick LaVictoire like this

The argument that I had in mind was that if \(\rm{PA} \vdash \tt{A()} \ne 1\), then \(\rm{PA} \vdash \square \ulcorner \tt{A()} \ne 1 \urcorner\), so \(\rm{PA} \vdash \tt{A()} = 1\), since PA knows how the chicken rule works. This gives us \(\rm{PA} \vdash \bot\), and since this reasoning can itself be carried out inside PA, PA can prove that if \(\rm{PA} \vdash \tt{A()} \ne 1\), then PA is inconsistent. I’ll include this argument in my post, since you’re right that this was too big a jump.
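
Spelled out inside PA: assume \(\square \ulcorner \tt{A()} \ne 1 \urcorner\). Then we get
\[
\begin{aligned}
&\square \ulcorner \square \ulcorner \tt{A()} \ne 1 \urcorner \urcorner && \text{(provable sentences are provably provable)}\\
&\square \ulcorner \tt{A()} = 1 \urcorner && \text{(PA knows that the chicken rule acts on such a proof)}\\
&\square \bot && \text{(combining with the assumption)},
\end{aligned}
\]
so \(\rm{PA} \vdash \square \ulcorner \tt{A()} \ne 1 \urcorner \to \square \bot\), and in particular, under the assumption that PA is consistent, PA can conclude \(\neg \square \ulcorner \tt{A()} \ne 1 \urcorner\).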

Edit: We also need to use this argument to show that the modal UDT agent gets to the part where it iterates over utilities, rather than taking an action at the chicken rule step. I didn’t mention this explicitly, since I felt like I had seen it often enough before, but now I realize it is nontrivial enough to be worth pointing out.


by Patrick LaVictoire 945 days ago

Nice! Yes, I encourage you to develop this into a post.


by Eliezer Yudkowsky 945 days ago | Jessica Taylor and Patrick LaVictoire like this

I can’t see the grandparent, so posting here:

It occurs to me that maybe we could regard the agent as consistently reasoning, “If I choose of my own free will to output 2, that thereby causes Peano Arithmetic to be inconsistent, causing me to get 0 points.”

I mostly don’t buy this, but it slightly defends the legitness of the counterfactual.



