Previous: An Informal Conjecture on Proof Length and Logical Counterfactuals

This is a simplified and more complete presentation of my previous counterexample to Scott Garrabrant's conjecture on logical counterfactuals. I present an example of two statements φ and ψ such that PA ⊢ φ → ψ and PA ⊬ ¬φ, but ψ is not "really" a counterfactual consequence of φ, in an intuitive, informal sense. I also argue, based on the proof of φ → ψ, that we should trust our intuition that ψ is not a real counterfactual consequence of φ, rather than believing that our intuitions are being stretched too far and misleading us.

Consider the following universe and agent.

def U():
  if A() = 1:
    if PA is consistent:
      return 10
    else:
      return 0
  else:
    return 5

def A():
  if PA ⊢ A() = 1 → U() = 10:
    return 1
  else:
    return 2

Note that this agent reasons very similarly to modal UDT. It is simpler, but that's because it's clear that action 2 will lead to utility 5, so if the agent cannot get utility 10 by taking action 1, there is no reason for it to continue its proof search.
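
To make these dependencies concrete, here is a small runnable sketch in Python. The proof search and the consistency of PA are not computable from inside the program, so they are stubbed out as boolean flags; make_world, proof_search_succeeds, and pa_consistent are hypothetical names introduced here, not part of the original construction.

def make_world(proof_search_succeeds, pa_consistent):
  def A():
    # Stand-in for the check "PA ⊢ A() = 1 → U() = 10".
    return 1 if proof_search_succeeds else 2
  def U():
    if A() == 1:
      return 10 if pa_consistent else 0
    return 5
  return A, U

for search in (True, False):
  for con in (True, False):
    A, U = make_world(search, con)
    print(f"search={search}, Con(PA)={con} -> A()={A()}, U()={U()}")

The table this prints shows the three outcomes that matter below: utility 10 when the search succeeds and PA is consistent, utility 0 when the search succeeds and PA is inconsistent, and utility 5 whenever the search fails.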

Consider the statements φ ≡ [A() = 1] and ψ ≡ [U() = 0]. We first show that PA ⊢ φ → ψ. Work in PA and suppose for contradiction that ¬(φ → ψ), i.e. A() = 1 ∧ U() ≠ 0. If PA were inconsistent, the agent would get 0 utility, so it must be the case that PA is consistent. Also, since A() = 1, we know by looking at the agent's code that the proof search succeeded, so we have □(A() = 1 → U() = 10). PA knows that it can prove this, so it knows that the agent takes action 1, i.e. we have □(A() = 1). Putting this all together, consistency tells us that ¬□¬(A() = 1 ∧ U() ≠ 0). This is logically equivalent to Con(PA + (A() = 1 ∧ U() ≠ 0)). Thus, stepping back to the metalanguage, we see that the theory PA + (A() = 1 ∧ U() ≠ 0) asserts its own consistency, so it is inconsistent. (Gödel's second incompleteness theorem is used here, but the same argument can be carried out using Löb's theorem.)
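
Since several provability steps are compressed into that paragraph, here is the same argument unwound line by line (□ abbreviates provability in PA; steps 1 through 7 all take place inside PA):

  1. Assume A() = 1 ∧ U() ≠ 0, for contradiction.
  2. Con(PA), since otherwise U's code gives U() = 0 in the A() = 1 branch.
  3. □(A() = 1 → U() = 10), since A() = 1 means the proof search succeeded.
  4. □□(A() = 1 → U() = 10), by provable Σ1-completeness applied to step 3.
  5. □(A() = 1), since, provably, a successful proof search makes the agent return 1.
  6. □(U() ≠ 0), by modus ponens under the box from steps 3 and 5, as U() = 10 entails U() ≠ 0.
  7. ¬□¬(A() = 1 ∧ U() ≠ 0), from steps 2, 5, and 6: PA proves A() = 1 ∧ U() ≠ 0, and a consistent theory cannot also prove the negation of something it proves.
  8. Step 7 just is Con(PA + (A() = 1 ∧ U() ≠ 0)), so this theory proves its own consistency and is therefore inconsistent by the formalized second incompleteness theorem.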

Since PA + (A() = 1 ∧ U() ≠ 0) is inconsistent, PA proves ¬(A() = 1 ∧ U() ≠ 0), i.e. PA ⊢ φ → ψ. (By soundness of PA, we can see at this point that the agent takes action 2, but this is unnecessary for the present argument.) It remains only to show that PA ⊬ ¬φ. Since PA does not prove its own consistency, there is a model of PA in which PA is inconsistent. In such a model the agent takes action 1, since its proof search trivially succeeds, and it receives utility 0, so there is at least one model of PA where A() = 1 and U() = 0, establishing the result.
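
The model-theoretic step can be unwound the same way:

  1. PA ⊬ Con(PA), by the second incompleteness theorem, so PA + ¬Con(PA) is consistent and has a model M.
  2. M ⊨ ¬Con(PA), so in M some (possibly nonstandard) number codes a proof of A() = 1 → U() = 10, and the agent's proof search succeeds.
  3. Hence M ⊨ A() = 1, and by U's code M ⊨ U() = 0.
  4. So M ⊨ φ ∧ ψ, and in particular PA ⊬ ¬φ.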

We now have both PA ⊢ φ → ψ and PA ⊬ ¬φ, so Garrabrant's conjecture claims that in the counterfactual world where the agent takes action 1, it receives utility 0. By the structure of U, in the A() = 1 case U() = 0 holds exactly when PA is inconsistent, so we also have PA ⊢ φ → ¬Con(PA), and Garrabrant's conjecture similarly claims that the inconsistency of PA holds in this world. Both of these seem intuitively wrong; we would expect that the agent receives utility 10 in this world, and that PA is still consistent. Even if we do not expect these things very strongly (for example, we may think that our beliefs about this counterfactual world are best modeled by a probability distribution that places weight on both U() = 10 and U() = 0), it is surprising that a notion of counterfactual would be certain that the less intuitive option, U() = 0, holds in this counterfactual world.

We can obtain further evidence that our intuition is correct by examining the structure of the argument that PA ⊢ φ → ψ. Central to this argument is a step where we reason from the assumption that A() = 1 to the conclusion that this happened because the agent's proof search succeeded, and thus that □(A() = 1 → U() = 10) holds. This reasoning is causally backwards; we regard the proof search as the cause of the agent's action, and we reason backward from the effect to the cause. This is valid logically, but it is not what we mean by the counterfactual world where A() = 1. We can draw an analogy to graph surgery on Bayesian networks. There, in order to perform causal reasoning, we sever the links connecting a node to its causal parents, and we allow the counterfactual to change our probability distribution only through the node's causal children. This is a different kind of reasoning, and this example shows that we do not yet have a good analogue of it in a logical context.
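
The contrast can be seen in a toy Bayesian network with three nodes, Search → Action → Utility, loosely modeled on this problem. The priors below are invented purely for illustration, and the network is only an analogy, not a model of the proof-theoretic argument; joint, action, and utility are hypothetical names. Conditioning on Action = 1 reasons backward to the cause and concludes that the search succeeded, while graph surgery severs Action from Search, leaves the priors alone, and propagates the intervention only forward.

def joint():
  # Made-up priors: the search rarely succeeds; PA is probably consistent.
  for search, p_s in [(True, 0.1), (False, 0.9)]:
    for con, p_c in [(True, 0.95), (False, 0.05)]:
      yield search, con, p_s * p_c

def action(search):
  return 1 if search else 2

def utility(a, con):
  if a == 1:
    return 10 if con else 0
  return 5

# Conditioning: observing Action = 1 reasons backward to its cause.
num = sum(p for s, c, p in joint() if action(s) == 1 and s)
den = sum(p for s, c, p in joint() if action(s) == 1)
print("P(search succeeded | Action = 1) =", num / den)  # 1.0

# Graph surgery: set Action = 1 by fiat. The prior over Search (and over
# Con(PA)) is untouched; the intervention propagates only to Utility.
eu = sum(p * utility(1, c) for s, c, p in joint())
print("E[U | do(Action = 1)] =", eu)  # 10 * P(Con(PA)) = 9.5

Under surgery the expected utility is 10 · P(Con(PA)), matching the intuition that the agent would have gotten 10; under backward conditioning we instead learn that the proof search succeeded, which is exactly the step the logical argument exploits.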

Comment:

We can also construct an example where PA ⊢ φ → ψ with a short proof and PA also proves φ → ¬ψ, but any such proof is much longer. We only need to put a bound n on the proof length in A's proof search. Then, the argument that PA + (φ ∧ ¬ψ) proves its own consistency still works, and is rather short: its length is O(log n) as the proof length bound increases, since the bound enters the proof only through the numeral n. However, there cannot be a proof of φ → ¬ψ within A's proof length bound: by U's code, utility 10 is the only nonzero utility available when A() = 1, so such a proof would convert, with constant overhead, into a proof of A() = 1 → U() = 10, and if the agent found one it would immediately take action 1, which the soundness argument above shows it does not. In this case PA can still prove φ → ¬ψ simply by running the agent, but this argument shows that any such proof must be long.
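
For concreteness, the bounded agent would look something like this sketch; only the length bound n is new relative to the agent above, and the phrasing of the bounded proof search is mine:

def A():
  if PA ⊢ A() = 1 → U() = 10 by a proof of length ≤ n:
    return 1
  else:
    return 2

As n grows, the Löbian argument above only needs to mention the numeral n, giving the O(log n) bound, while the natural proof that the search fails, which checks every candidate proof of length at most n, is exponentially long in n.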