# Training Garrabrant inductors to predict counterfactuals

discussion post by Tsvi Benson-Tilsen | Abram Demski, Jessica Taylor, and Scott Garrabrant like this

The ideas in this post are due to Scott, me, and possibly others. Thanks to Nisan Stiennon for working through the details of an earlier version of this post with me.

We will use the notation and definitions given in https://github.com/tsvibt/public-pdfs/blob/master/decision-theory/notation/main.pdf. Let $\overline{\mathbb{P}}$ be a universal Garrabrant inductor (UGI) and let $\overline{U}: \mathbb{N}^+ \to \textrm{Expr}(2^\omega \to \mathbb{R})$ be a sequence of utility function machines. We will define an agent schema $(A^{U_n}_n)$ in which each agent selects a single action and makes no observations.

Roughly, $A^{U_n}_n$ learns how to get what it wants by computing what the $A^{U_i}_i$ with $i < n$ did, and also what various traders predicted would happen given each action that the $A^{U_i}_i$ could have taken. The traders are rewarded for predicting what (counterfactually) would be the case, in terms of bitstrings, and their predictions are then used to evaluate the expected utilities of the actions currently under consideration. This requires modifying our UGI and the traders involved to take a possible action as input, so that we get a prediction (a "counterfactual distribution over worlds") for each action.

More precisely, define
$$\begin{aligned}
A^{U_n}_n := \ &\textrm{let } \hat{\mathbb{P}}_n := \textrm{Counterfactuals}(n)\\
&\textrm{return } \operatorname{arg\,max}_{a \in \textrm{Act}} \hat{\mathbb{E}}_n[a](U_n)
\end{aligned}$$
where
$$\hat{\mathbb{E}}_n[a](U_n) := \sum_{\sigma \in 2^n} \hat{\mathbb{P}}_n[a](\sigma) \cdot U_n(\sigma).$$

Here $\hat{\mathbb{P}}_n$ is a dictionary of belief states, one for each action, defined recursively by the function $\textrm{Counterfactuals}: \mathbb{N}^+ \to (\textrm{Act} \to \Delta(2^\omega))$ as follows:

- **input:** $n \in \mathbb{N}^+$
- **output:** a dictionary of belief states $\mathbb{P}: \textrm{Act} \to \Delta(2^\omega)$
- **initialize:** $\textrm{hist}_{n-1} \leftarrow$ array of belief states of length $n-1$
- **for** $i \leq n-1$:
    - $\hat{\mathbb{P}}_i \leftarrow \textrm{Counterfactuals}(i)$
    - $a_i \leftarrow \operatorname{arg\,max}_{a \in \textrm{Act}} \sum_{\sigma \in 2^i} \hat{\mathbb{P}}_i[a](\sigma) \cdot U_i(\sigma)$
    - $\textrm{hist}_{n-1}[i] \leftarrow \hat{\mathbb{P}}_i[a_i]$
- **for** $(a : \textrm{Act})$:
    - $\mathbb{P}[a] \leftarrow \textrm{MarketMaker}(\textrm{hist}_{n-1}, \textrm{TradingFirm}'(a, a_{\leq n-1}, \textrm{hist}_{n-1}))$
- **return** $\mathbb{P}$
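To make the recursion concrete, here is a minimal runnable sketch in Python. All of the names here are hypothetical stand-ins: `ACTIONS`, `U`, `market_maker`, and `trading_firm` are stubs for $\textrm{Act}$, the $U_n$, $\textrm{MarketMaker}$, and $\textrm{TradingFirm}'$ (the real versions are the ones from the logical induction paper, modified as described in this post), so only the recursive structure, not the market behavior, is meaningful.

```python
import itertools
from functools import lru_cache

# Hypothetical two-element action space standing in for Act.
ACTIONS = ("a0", "a1")


def U(n):
    """Stub utility machine U_n: here, the fraction of 1s in a length-n
    bitstring. The real U_n is an arbitrary expression 2^omega -> R."""
    return lambda sigma: sigma.count("1") / n


def trading_firm(a, past_actions, hist):
    """Stub for the modified TradingFirm': the real version aggregates all
    traders' strategies given the candidate action a, the actual past
    actions (needed by the budgeter), and the history of belief states."""
    return (a, tuple(past_actions), len(hist))


def market_maker(hist, firm_output):
    """Stub for MarketMaker: returns a belief state over length-n bitstrings
    (here just uniform; the real one prices against the trading firm)."""
    n = len(hist) + 1
    sigmas = ["".join(bits) for bits in itertools.product("01", repeat=n)]
    return {s: 1.0 / len(sigmas) for s in sigmas}


def expected_utility(belief_state, U_n):
    """hat-E_n[a](U_n) = sum over sigma of P_n[a](sigma) * U_n(sigma)."""
    return sum(p * U_n(sigma) for sigma, p in belief_state.items())


@lru_cache(maxsize=None)
def counterfactuals(n):
    """Counterfactuals(n): a dict mapping each action to a belief state."""
    hist, past_actions = [], []
    for i in range(1, n):
        P_i = counterfactuals(i)  # recursively recover day i's markets
        # Recompute the action the day-i agent actually took.
        a_i = max(ACTIONS, key=lambda a: expected_utility(P_i[a], U(i)))
        past_actions.append(a_i)
        hist.append(P_i[a_i])  # only the taken action's market enters history
    # One market per candidate action a.
    return {a: market_maker(hist, trading_firm(a, past_actions, hist))
            for a in ACTIONS}


def agent(n):
    """A^{U_n}_n: take the action whose counterfactual market assigns
    the highest expected utility."""
    P_n = counterfactuals(n)
    return max(ACTIONS, key=lambda a: expected_utility(P_n[a], U(n)))
```

Note the shape of the recursion: day $n$ must replay every earlier day's choice exactly (hence the memoization), and only the belief state for the action actually taken on day $i$ enters the shared history.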
Here we use a modified form of traders and of the $\textrm{TradingFirm}'$ function from the $LIA$ algorithm given in the logical induction paper. In detail, traders now have the type
$$\mathbb{N}^+ \times \textrm{Act} \to \textrm{trading strategy}.$$
On day $n$, each trader is passed a possible action $a \in \textrm{Act}$, which we interpret as "an action that $A^{U_n}_n$ might take"; it returns a trading strategy, and those trading strategies are used as usual to construct the belief state $\mathbb{P}[a]$.

We pass to $\textrm{TradingFirm}'$ the full history $a_{\leq n-1}$ of the actions taken by the previous $A^{U_i}_i$, since $\textrm{TradingFirm}'$ calls the Budgeter function; that function must recompute the traders' previous trading strategies, which in turn requires passing the $a_i$ as arguments. Thus, traders are evaluated based on the predictions they made about logic when given the actual action $a_n$ as input (a type-level sketch of this follows below). In particular, the sequence $(\mathbb{P}_n[a_n])$ is a UGI over the class of efficient traders given access to the actual actions taken by the agents $A^{U_n}_n$.

This scheme probably suffers from spurious counterfactuals, but it feels like a natural baseline proposal.
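For concreteness, here is a type-level sketch of the action-indexed trader and of the accountability point above, in the same hypothetical Python vocabulary as the earlier sketch; `Trader` and `realized_strategies` are illustrative names, not anything from the paper.

```python
from typing import Any, Callable

# Placeholder for the trading-strategy type of the logical induction paper.
TradingStrategy = Any

# Modified trader type: N^+ x Act -> trading strategy.
Trader = Callable[[int, str], TradingStrategy]


def realized_strategies(trader: Trader, actual_actions: list) -> list:
    """The strategies a trader is held accountable for: its outputs on the
    action a_i actually taken on each past day i. Recomputing these is why
    TradingFirm' (via Budgeter) needs the full history a_{<=n-1}."""
    return [trader(i, a_i) for i, a_i in enumerate(actual_actions, start=1)]
```

A trader's budget is thus a function of the realized action sequence only; its strategies on counterfactual actions shape the markets $\mathbb{P}[a]$ but are never directly scored.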
