It seems like logically updateless reasoning is what we would want in order to solve many decision-theory problems. I show that several of the problems which seem to require updateless reasoning can instead be solved by selecting a policy with a logical inductor that’s run a small amount of time. The policy specifies how to make use of knowledge from a logical inductor which is run longer. This addresses the difficulties which seem to block logically updateless decision theory in a fairly direct manner. On the other hand, it doesn’t seem to hold much promise for the kind of insights which we would want from a real solution.
LI Policy Selection
Rather than running a logical inductor all the way to \(\mathbb{P}_n\) and making a decision via the expected utilities implied by that distribution, we want to first run it to \(\mathbb{P}_{f(n)}\) for \(f(n) \ll n\), and use that distribution to make a decision about how to utilize \(\mathbb{P}_n\) to determine which action to take. (For example, we could choose \(f(n)=\log(n)\).)
For simplicity, I will assume logical induction over exponential-time traders rather than the usual poly-time assumption. I also assume that numbers can be represented efficiently in the language of the logical inductor, i.e., the written length is logarithmic in the number.
We can select from a set \(\mathcal{P}_n\) of policies, each of which is a program taking the market state \(\mathbb{P}_n\) and outputting an action. With \(m=f(n)\) in the following:
\[best_n := \operatorname{argmax}_{p \in \mathcal{P}_n} \mathbb{E}_{m} \left( U(x) \mid f(x)=m \wedge policy(x)=p \right)\]
\[policy_n := \texttt{if } exploration_n \texttt{ then } random(\mathcal{P}_n) \texttt{ else } best_n\]
\[A(n) := policy_n(\mathbb{P}_n)\]
The utility \(U(n)\) is a function of \(n\) because it points to the universe where \(A(n)\) lives. The argmax is taken over \(U(x)\), where \(x\) is uncertain, because \(n\) may be too large for \(\mathbb{P}_{m}\) to have good beliefs about it; the sentences involving \(n\) can be so large that they aren’t being traded on the market yet. However, since we are using a market which can’t be exploited by exp-time traders, \(\mathbb{P}_m\) must have good beliefs about sentences whose length is asymptotically a multiple of the length of \(m\). (The uncertainty should not harm us much, since the policy can look at sentences involving the true \(n\) and act accordingly.)
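As a toy sketch of the scheme above (with finite dictionaries standing in for market states; all names here are illustrative assumptions, not part of the formalism):

```python
def select_action(early_market, late_market, policies, expected_utility,
                  explore=False, rng=None):
    """Two-stage scheme: choose a policy by its early-stage expected
    utility, then let that policy act on the late market state."""
    best = max(policies, key=lambda p: expected_utility(early_market, p))
    policy = rng.choice(policies) if explore else best
    return policy(late_market)

# Toy instance: policies are functions of the late market state.
lidt = lambda market: max(market["actions"], key=market["ev"].get)  # max-EU action
always_a = lambda market: "a"
policies = [lidt, always_a]

def expected_utility(early_market, policy):
    # the early market's (crude) estimate of each policy's value
    return early_market["policy_ev"][policy]

early = {"policy_ev": {lidt: 0.9, always_a: 0.5}}
late = {"actions": ["a", "b"], "ev": {"a": 5.0, "b": 10.0}}

print(select_action(early, late, policies, expected_utility))  # b
```

The early market never sees the action-level expected utilities; it only ranks policies, and the chosen policy is what consults \(\mathbb{P}_n\).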
The values of \(exploration_n\) and \(random_n\) are made pseudorandom relative to the market state at \(f(n)\) by diagonalizing against it, like the exploration defined here. The subscript \(n\) is on \(\mathcal{P}_n\) because we need to grow the policy set, to ensure that the optimal policy is eventually included. This must be done somewhat carefully. If \(\mathcal{P}_n\) grows too quickly, then it could be that at any \(n\) the majority of policies have very inaccurate expected-utility estimates, even though the market converges to an accurate estimate for each individual policy. This is easily prevented, however. If \(\mathcal{P}_n\) did indeed grow too quickly, then it could be predicted that the expected value of the strategy chosen by the argmax would be systematically lower than its market estimate. Suppose we knew it was systematically overestimated by \(\delta\). Although the traders at \(f(n)\) might not be able to predict precisely which policy will be chosen by the argmax (due to imprecision in the continuous-function trading strategies), a trader can find the set of policies whose expected values are within some \(\epsilon\) of the maximum. If \(\epsilon < \delta\), that trader can make money by betting that all of the top policies are overestimated.
So, we just need to grow \(\mathcal{P}_n\) slowly enough that a trader can implement such a strategy in poly time (\(f(poly(n))\), that is, since we’re doing this at the early market time), with shrinking \(\epsilon\) as \(n\) gets larger. The details of what speed this implies seem a bit tedious to work out, since I’d have to dig into the formalism for expressing trading strategies and the runtimes involved. But, it seems quite likely that there’s some rate of growth at which this works.
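The exploit sketched above can be made concrete. Assuming the trader has direct access to the market’s expected-utility estimates for each policy (a large simplification of the actual trading-strategy formalism), its key subroutine is just:

```python
def overestimated_top_set(estimates, epsilon):
    """Identify every policy whose estimate is within epsilon of the max.
    The trader may not know which of these the argmax will pick, but the
    argmax must lie in this set; so if the chosen policy is overestimated
    by delta > epsilon, shorting the whole set is profitable."""
    top = max(estimates.values())
    return {p for p, v in estimates.items() if v >= top - epsilon}

estimates = {"p1": 0.80, "p2": 0.79, "p3": 0.40}
print(sorted(overestimated_top_set(estimates, epsilon=0.05)))  # ['p1', 'p2']
```

The growth-rate condition on \(\mathcal{P}_n\) is exactly what makes this set small enough to enumerate in the trader’s time budget.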
This is not quite just a reformulation of son-of-X. It’s true that if we’re letting the early market state choose any program to run, it doesn’t matter much that we feed \(\mathbb{P}_n\) to that program – it could recreate \(\mathbb{P}_n\) itself, or any number of alternate belief structures which might be preferred. However, most of the interesting policies in cases we’ve thought about seem to in fact be fairly restricted in runtime themselves, letting \(\mathbb{P}_n\) do most of the thinking. For example, implementing the LIDT policy of taking the max-expected-utility action just requires comparing the expected value of each action. Implementing FairBot in the prisoner’s dilemma just requires looking at the probability that the other player cooperates, and the probability of a self-diagonalization sentence for pseudorandomness. And so on. (Although it makes sense to put runtime restrictions on \(\mathcal{P}_n\), I won’t specify any here, since it wouldn’t really help achieve any interesting properties. Ideally we would want some sort of structure to the good policies to render them legible, rather than just blindly choosing programs.)
On the positive side, this sort of policy selection is similar to the idea of observation-counterfactuals mentioned in the happy dance problem; the market state \(\mathbb{P}_n\) is the observation. It’s also an implementation of predictable exploration, so there’s some intuitive grounds to think that the counterfactuals will make more sense than those of regular epsilon-exploration, and be more useful for game theory (at least, I think so).
I also think there’s a sense in which this approach recovers the hopes of ADT, providing a way to get Agent Simulates Predictor right without getting game theory horribly wrong. In effect, the approach here uses the early-stage logical inductor as the embedding function; its state of ignorance allows it to believe in the correlations needed to see the implications of a policy choice, unlike LIDT. Certainly it is a better approach to solving the problems asymptotic decision theory was trying to solve, since it produces “updateless-like” behavior without an arbitrary placement of the decision problem in a sequence of decision problems.
On the other hand, policy selection is just a dirty trick which doesn’t provide any insight, and which in practice would depend on careful choice of the function \(f(n)\): the policy selection must happen after beliefs have stabilized to a “sensible” degree, but before the critical information which we’d like to behave updatelessly about arrives. No principled solution to this problem is offered here; only asymptotic results. I will be speaking of what strategy policy selection “will converge to”; but note, of course, that we’re slowing down convergence by the function \(f(n)\).
Nonetheless, a dirty hack that cleanly solves more problems at once than any other dirty hack we know of is worth recognizing as such. So, let’s get on with the show.
5 and 10
This one barely merits mentioning, but, it solves the 5 and 10 problem: if there is an option where the agent gets utility 5, and an option where it gets utility 10, the agent converges to taking the utility-10 option. This is because its exploration ensures that it eventually has correct beliefs about the result of using different policies; it can’t think taking the 10 gives 10 utility, never take the 10, and then never learn better.
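The learning dynamic can be illustrated with a crude simulation (epsilon-exploration with running-average estimates stands in for the inductor’s convergence; all parameters here are arbitrary illustrations):

```python
import random

def run_5_and_10(rounds=2000, explore_prob=0.1, seed=0):
    """Even with a bad prior that thinks the 10 option is worthless,
    exploration eventually forces the estimates toward the truth."""
    rng = random.Random(seed)
    true_utility = {"five": 5.0, "ten": 10.0}
    est = {"five": 5.0, "ten": 0.0}     # bad prior about the 10 option
    counts = {"five": 0, "ten": 0}
    for _ in range(rounds):
        if rng.random() < explore_prob:
            a = rng.choice(["five", "ten"])
        else:
            a = max(est, key=est.get)   # greedy action
        counts[a] += 1
        est[a] += (true_utility[a] - est[a]) / counts[a]  # running average
    return max(est, key=est.get)

print(run_5_and_10())  # ten
```

Without the exploration clause, the agent would act greedily forever, never observe the 10, and its bad prior would go uncorrected.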
Counterfactual Mugging (with a logical coin)
This one is also pretty straightforward. In counterfactual mugging with a logical coin, policy selection converges to giving Omega the money so long as the difficulty of computing the coin exceeds the power of the market at \(f(n)\) time. This is because, before we know which way the coin lands, the expected value of giving Omega the money is higher than that of not doing so.
One might object that the cases where the coin is computable at \(f(n)\) time aren’t solved by this approach. However, you can always choose a slower-growing \(f\) to include more cases. The tradeoff is that slower-growing \(f\) will take longer to converge to reasonable policies. As far as I’m concerned, this is a real tradeoff, reflecting the fact that to accept counterfactual mugging for easier-to-compute logical coins, you have to accept crazier possibilities. The problem with policy selection is more that it provides no guidance with respect to choice of \(f\).
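A worked example with assumed payoffs (the 10000 reward and 100 cost are illustrative numbers, not part of the problem statement above): evaluated before the coin is known, the paying policy has the higher expected value.

```python
def ev(policy_pays, p_heads=0.5, reward=10000, cost=100):
    """Expected utility of a policy, evaluated before the coin is known.
    Omega pays `reward` on heads iff the policy would pay `cost` on tails."""
    gain = p_heads * (reward if policy_pays else 0)
    loss = (1 - p_heads) * (cost if policy_pays else 0)
    return gain - loss

print(ev(True))   # 4950.0
print(ev(False))  # 0.0
```

This is exactly the comparison \(\mathbb{P}_{f(n)}\) makes at policy-selection time, provided the coin is still uncertain to it.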
Agent Simulates Predictor
In proof-based DT, ASP is set up by giving the agent more processing power than the predictor, but giving the predictor more proof-theoretic strength than the agent. This allows the predictor to prove what the agent will do, while allowing the agent to find the output of the predictor by simulating it accurately.
In the realm of logical induction, the two dimensions collapse into one, since a logical inductor with enough processing power will learn to trust the outputs of stronger proof systems. However, we can still set up a scenario similar to ASP. The agent has time to compute \(\mathbb{P}_n\), while the predictor only has time to run \(\mathbb{P}_{g(n)}\), for \(g(n)<n\). The predictor then fills the box based on its expectation that the agent will one-box. It could have some cutoff, filling the box if the agent one-boxes with 95% probability or greater, say. Or, it could fill the box with money proportional to its belief that the agent will one-box.
Although the predictor may not be exactly accurate, it will be a good predictor in an asymptotic sense: if the agent converges to either one-boxing or two-boxing, the predictor eventually predicts this correctly all the time. On the other hand, as long as \(g(n)\) is small enough, the agent can see exactly what the predictor will do.
The idea, of course, is that policy selection does well here as long as \(f(n)\) is sufficiently small compared to \(g(n)\). If the predictor can simulate the computation which selects the policy, then it will respond to that policy. So, the policy selection can get the best score by predictably one-boxing. If the predictor uses a cutoff like the 95% mentioned earlier, the policy selection can one-box with probability just slightly higher than that. Or, if the predictor fills the box with money proportional to its trust in the agent, the agent will select the policy which one-boxes all the way. In other words, policy selection seems to just do the right thing.
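For the cutoff version, the optimal mixed policy can be read off a one-line expected-payout calculation (the box values 1000 and 10 are illustrative assumptions):

```python
def expected_payout(p_onebox, cutoff=0.95, box_b=1000, box_a=10):
    """Predictor fills box B iff it predicts one-boxing with prob >= cutoff.
    One-boxing gets B alone; two-boxing gets B plus the small box A."""
    b = box_b if p_onebox >= cutoff else 0
    return p_onebox * b + (1 - p_onebox) * (b + box_a)

# One-boxing just above the cutoff beats both pure strategies:
print(expected_payout(0.951))  # ~1000.49
print(expected_payout(1.0))    # 1000.0
print(expected_payout(0.0))    # 10.0
```

The small extra payout from occasionally two-boxing is free, so long as the one-boxing probability stays above the predictor’s cutoff.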
Planting Flowers
Suppose that the agent is created for the purpose of planting flowers, but is given a utility function in some arbitrary way. However, the creator examined the code very carefully, and only ran the agent if it would predictably plant flowers. Planting flowers is costly, but not more costly than never having existed. Should the agent plant flowers?
This is very similar to the previous problem, except the agent is reasoning about whether it exists, rather than whether a box has some money in it. \(f(n)\) must be chosen to be small enough that the agent doesn’t know yet whether it exists, at policy-selection time. Whether this is feasible depends on the problem statement. If the entire problem is packed into \(U(n)\), like we’ve been doing so far, then the agent’s existence isn’t implied; \(U(n)\) must also evaluate the utility of worlds where the agent isn’t created. Then, the problem seems just like Agent Simulates Predictor: it can be solved by choosing \(f(n)\) slow enough that the creator’s decision is not known at policy-selection time. On the other hand, it might seem very reasonable to put information in the deductive state from which the agent can infer its own existence, such as axioms about itself, or sense-data. This would prevent policy selection from solving the problem so easily, since the policy must be selected before self-existence can be inferred.
XOR Blackmail
Setting up XOR blackmail is a bit tricky, especially the case with a perfect predictor (or any predictor who knows the agent better than the agent knows itself, really).
In order for the agent to know whether it got a letter, the letter has to be written directly to the deductive state of \(\mathbb{P}_n\). But, also, the predictor needs to know what the agent’s response to that is in order to send the letter or not. So, we have a simulated agent \(A'(n)\) whose deductive state includes the letter, whether the letter is really sent or not. The response of the simulated agent is observed by the predictor, as is the information concerning whether the rare disaster has occurred; then, the predictor sends the letter in the case that the disaster will occur xor the agent will pay up in response to the letter.
If the policy selection chooses not to respond to the letter, the predictor would expect this, and not send the letter except in the case of disaster. On the other hand, if the policy does respond to the letter, then the letter is sent except in case of disaster. Either way, the probability of disaster does not change. So, policy selection will converge to choosing a policy which doesn’t respond to the letter.
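The expected-loss comparison behind that convergence can be made explicit with assumed numbers (the disaster probability, damage, and demand below are illustrative, not from the problem statement):

```python
def expected_loss(pays, p_disaster=1e-4, damage=1_000_000, demand=1_000):
    """Letter is sent iff (disaster XOR agent-would-pay).  A paying policy
    therefore receives the letter exactly when there is NO disaster."""
    if pays:
        # no disaster -> letter arrives -> pay demand; disaster -> no letter
        return (1 - p_disaster) * demand + p_disaster * damage
    # letter arrives only alongside disaster; the demand is refused anyway
    return p_disaster * damage

print(expected_loss(True))   # ~1099.9
print(expected_loss(False))  # ~100.0
```

Paying never changes the disaster probability; it only adds the demand in the disaster-free worlds, so the non-paying policy wins by exactly \((1-p)\cdot demand\).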
Prisoner’s Dilemma
I don’t know what space of behaviors is really possible in this case. But, a plausible outcome is the NicerBot strategy: cooperate with probability slightly higher than your \(\mathbb{P}_n\) belief that the other player will cooperate. Cooperating with the probability that they do incentivises cooperation, and adding a little probability of cooperation on top of this ensures convergence to cooperation rather than other fixed points when both players play NicerBot. (This isn’t actually an equilibrium; if one player plays NicerBot, the other is incentivised to cooperate with slightly less probability. But, it’s plausible that it bounces around near NicerBot.) The randomness for this can come from self-diagonalizing sentences, like the randomness for exploration.
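NicerBot itself is a one-line policy (the 0.05 bonus is an arbitrary illustration of “slightly higher”):

```python
def nicer_bot(p_opponent_cooperates, bonus=0.05):
    """Cooperate with probability slightly above the market's estimate
    that the opponent cooperates, capped at 1."""
    return min(1.0, p_opponent_cooperates + bonus)

# Two NicerBots ratchet each other up toward mutual cooperation:
p = 0.5
for _ in range(12):
    p = nicer_bot(p)
print(p)  # 1.0
```

The ratcheting illustrates why the bonus matters: matching the opponent’s probability exactly would leave any starting point a fixed point, while the bonus makes full cooperation the only stable outcome of mutual NicerBot play.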
In general, there could be strategies which access a lot of other sentences in the market state, rather than just the ones relevant to the game. NicerBot is a policy which is a function of the other player’s probability of cooperation. I could also have a policy which is a function of some features of the other player’s policy. There’s some resemblance to the meta-threat hierarchy, though it likely differs greatly in detail and would need modifications (such as some kind of multi-stage meta-policy selection) in order to mimic meta-threat.
There are a lot of other logical propositions which the agent’s strategy can utilize, too, beyond anything about the other player’s strategy or even anything going on in the game. This makes it seem like the situation will resemble correlated equilibria more than Nash equilibria.
The situation will also likely shift dramatically if the agents have different \(f(n)\). Overall, I’m quite curious what the game theory between policyselection agents will look like. (Granted, we don’t even know very well what the game theory between LIDT agents looks like.)
Troll Bridge
This deserves its own more detailed writeup. I have not checked the proof sketch in detail. However, I’m fairly sure policy selection does not solve Troll Bridge. (Ideas in this section are mainly due to Sam. Oversimplifications in this section are mainly due to me.)
Suppose that the agent wants to cross a bridge, but there is a troll who blows up the bridge exactly when the exploration clause in the agent is active. In our case, \(exploration_n\). Obviously, this poses a problem for learning the right counterfactuals. The agent could easily start (due to a poor choice of prior) thinking that it is a bad idea to cross the bridge. Early policies would refuse to cross the bridge except on exploration steps. Experience would seem to reinforce the bad prior, ensuring that the agent always thinks crossing the bridge is a bad idea.
However, the situation is actually much worse than this. Even with a “good” prior which initially thinks it can safely cross the bridge on non-exploration rounds, it seems we will converge to refusing to cross the bridge.
Let \(A(n)=1\) indicate crossing the bridge, and \(A(n)=0\) indicate not crossing. Let \(\Box_n \phi\) mean that the logical inductor’s deductive state has a proof of \(\phi\) by stage \(n\). \(U(n)\) is defined as follows:
\[U(n) := \texttt{if } A(n)=0 \texttt{ then } 5 \texttt{ else if } exploration_n \texttt{ then } 0 \texttt{ else } 10\]
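The payoff structure, written out as code (the boolean `exploring` stands in for \(exploration_n\)):

```python
def U(action, exploring):
    """Troll Bridge payoff: the troll blows up the bridge exactly when
    the agent crosses on an exploration round."""
    if action == 0:    # don't cross the bridge
        return 5
    if exploring:      # crossing via exploration: bridge blown up
        return 0
    return 10          # deliberate crossing succeeds

print(U(0, False), U(1, True), U(1, False))  # 5 0 10
```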
Here’s the proof sketch, using indentation to notate reasoning under a supposition, and de-indentation to come back out:

- Suppose \(\Box_{f(n)} (A(n)=1 \to U(n)=0)\) for some \(n\).
    - Further suppose that \(A(n)=1\).
        - Either \(exploration_n\) or \(best_n(\mathbb{P}_n)=1\).
        - But, if \(\Box_{f(n)} (A(n)=1 \to U(n)=0)\), then we should have \(best_n(\mathbb{P}_n)=0\), for \(n\) large enough that the logical inductor at \(f(n)\) is approximately consistent with respect to this kind of thing.
        - So, for \(n\) sufficiently large, we must have \(exploration_n\). But then, \(U(n)=0\).
    - Therefore, \(A(n)=1 \to U(n)=0\) for \(n\) sufficiently large.
- Therefore, \(\Box_{f(n)} (A(n)=1 \to U(n)=0) \to (A(n)=1 \to U(n)=0)\) for \(n\) sufficiently large.
- The agent can reason through all this. So, by an application of Löb’s theorem, the agent concludes \(A(n)=1 \to U(n)=0\) for \(n\) sufficiently large.
- The logical inductor at \(f(n)\) can know whether \(n\) is sufficiently large by checking whether its beliefs are close enough to consistent with respect to “that kind of thing”.
- Therefore, \(best_n(\mathbb{P}_n)=0\), so \(A(n)=0\) except on exploration rounds. \(\Box\)
So, even if the agent starts with a “good” distribution on traders which predicts that crossing the bridge on non-exploration rounds is the best policy, at some point the agent finds the above proof. From then on, it avoids the bridge.
It might be tempting to say that policy selection improves the situation somewhat: small enough \(f(n)\) can prevent the proof from going through, so isn’t it just a trade-off, with slow-growing \(f\) solving more decision problems asymptotically but having worse beliefs at small runtimes, as with the decision problems considered earlier? No. No matter what \(f\) you choose, so long as it keeps growing, it’ll eventually start seeing the above proof.
Reflective Consistency
In order to talk about whether a policyselection agent would try to replace itself with a different kind of agent, we have to decide how to represent sequential decisions. The sequence of decision problems \(U(n)\) is not supposed to represent a sequential decision problem: the agent \(A(n)\) is only supposed to care about optimizing the single instance \(U(n)\), not any kind of aggregation over the sequence.
We could select a single policy for a whole sequential decision problem. Rather than the agent \(A(n)\) determining a single action, we introduce \(A(n,i)\), and also a sequence of observations \(O_{n,i}\), so \(A(n,i)=policy_n(O_{n,i})\). \(O_{n,i}\) could include a logical inductor state which has been run for \(n_i\) steps, plus sensory observations (although we could and probably should fold the sensory observations into the inductive state). Even this case is difficult to analyse. We need to set aside the question of self-modification for simple code-optimization reasons, and situations where Omega offers a large reward for self-modification. Could there be cases where the policy initially selected would want to replace itself with a different policy? It seems likely that some kind of temporal consistency result could be proved, because any policy which would want to replace itself with a similarly fast-running policy could instead look at the logical inductor’s beliefs about what that policy would do, and do that. (Note that this result would likely need to make use of the resource bound on policies, which I mentioned in the beginning as being intuitively appealing but not important to anything I show in this post.)
This might not be very satisfying, though, because it may not represent a solution to the problem of allowing a system to think longer about what to do while remaining reflectively consistent. The policy is locked in. The policy may be clever enough to think about what other policies might do better and strategy-steal from them, but it might not – one would want an analysis of how it decides to do that, to see that it’s reasonable. It would be more compelling to have something that could keep thinking longer about what policy to use, and still have some kind of reflective consistency. This seems unlikely to fall out of the approach.
Counterfactuals
(Edited to add this section.)
The problem of counterfactual reasoning seems almost impossible to solve, because our intuitions about “what would happen if only the agent did X” largely come from what we can see standing outside of the decision problem. From our perspective, a decision problem actually does have a functional form: we can substitute in different agents to see what happens. We can design a solution. From an agent’s perspective, situated inside a decision problem, it does not have a functional form; things are just the way they are, so it makes significantly less sense to talk about how things could be if only the agent acted differently.
From that perspective, CDT-like proposals to solve decision theory problems with revised notions of counterfactuals feel fake: using the word “counterfactual” gives you enough freedom to claim just about anything, so that you can pack what you think the agent should be thinking into the counterfactuals, and have some false hope that one day you’ll figure out how to formally specify counterfactuals which have all the properties you claim.
The policy-selection perspective is that EDT-like conditioning is all you’ll have, but we can in fact get the “functional form” of the problem mostly as we want it by backing up to an early state and reasoning about what to do from there. From this perspective, there is not one correct “functional form” to be discovered and embodied in the right theory of counterfactuals. Rather, there’s a series of functional forms which you get by viewing the problem from different levels of processing power. Unfortunately, none of these have a distinguishing character of “correctness”. Nonetheless, certain stages do seem to have most of the properties we would intuitively want of a notion of counterfactual.
Conclusion
Policy selection seems to solve the decision problems which call for updateless reasoning in a direct way, so long as \(f(n)\) is chosen well. However, it does fail trickier problems like Troll Bridge, and it doesn’t seem to provide as much insight into reflective consistency as we would want.
It’s still worth thinking more about how much reflective consistency can be gotten – the story is far from clear to me at the moment, and it could be that nice results actually can come from it.
I’m also interested in thinking about what the notion of “carefully chosen \(f(n)\)” actually means. Can anything formal be said about where the sweet spot is? Is there a principled way to avoid the chaos of a tooearly market state while also steering clear of knowledge we need to be updateless toward? It would be surprising if there were, but it would also be surprising if there were nothing formal to say about it.
I’m more excited about approaches which somehow break the concepts used here.
