Intelligent Agent Foundations Forum
An optimality result for modal UDT
discussion post by Benja Fallenstein 1196 days ago | Kaya Stechly, Abram Demski, Nate Soares and Patrick LaVictoire like this

Yesterday, I gave a definition of “decision theory” and “fair decision problem” in the context of provability logic, and gave a formal version of drnickbone’s argument that no decision theory can perform optimally on every “fair” decision problem. Today, I prove that for any provably “fair” decision problem \(\vec P(\vec u)\), there is a sound extension of \(\mathrm{PA}\) by a finite set of formulas, expressible in the language of provability logic, such that Vladimir Slepnev’s modal logic version of UDT performs optimally on \(\vec P(\vec u)\) if we have it search for proofs in this extended proof system.

This is a version of a result that Kenny Easwaran, Nate Soares and I proved in the context of UDT with bounded proof length when Kenny visited MIRI for two days earlier this year. (I’ll post about the bounded proof length version some other time; it’s more complicated, as you’d expect.)

Prerequisites: A primer on provability logic, “Evil” decision problems in provability logic.

Provability in extensions of PA

In order to carry out the idea above, we need to talk about provability in extensions of \(\mathrm{PA}\), not just about the provability in \(\mathrm{PA}\) expressed by the \({\square}(\cdot)\) operator of Gödel-Löb provability logic. Fortunately, the systems we’ll need in this post only extend \(\mathrm{PA}\) by a finite number of axioms which can be expressed as closed formulas in the language of \(\mathrm{GL}\), and there’s a simple trick to reason about such extensions in \(\mathrm{GL}\) itself:

Suppose that we extend \(\mathrm{PA}\) by a single axiom, \(\varphi\). Then, by the deduction theorem for first-order logic, \(\mathrm{PA}+ \varphi\vdash\psi\) if and only if \(\mathrm{PA}\vdash\varphi\to\psi\). Thus, if both \(\varphi\) and \(\psi\) can be expressed as modal formulas, then we can express the proposition that \(\psi\) is provable in \(\mathrm{PA}+\varphi\) by the modal formula \({\square}(\varphi\to\psi)\). The same trick works, of course, if we want to add finitely many different modal formulas to \(\mathrm{PA}\), since we can simply let \(\varphi\) be their conjunction.
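As a concrete illustration (my own toy encoding, not anything from the post), we can represent modal formulas as nested tuples and define \({\square}_\varphi(\cdot)\) as the abbreviation described above:

```python
# Toy symbolic encoding of GL formulas as nested tuples:
# '->' for implication, 'and' for conjunction, 'box' for PA-provability.

def implies(phi, psi):
    return ('->', phi, psi)

def box(psi):
    return ('box', psi)

def box_ext(phi, psi):
    """Provability of psi in PA + phi, expressed inside GL via the
    deduction theorem: box_phi(psi) abbreviates box(phi -> psi)."""
    return box(implies(phi, psi))

def conjoin(formulas):
    """Extending PA by finitely many modal axioms reduces to the
    single-axiom case by conjoining them first."""
    result = formulas[0]
    for f in formulas[1:]:
        result = ('and', result, f)
    return result
```

So adding the axioms \(\varphi_1,\dotsc,\varphi_k\) amounts to using `box_ext(conjoin([...]), psi)`.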

In this post, when I’m using this trick to talk about \(\mathrm{PA}+ \varphi\), I’ll emphasize this fact by abbreviating \({\square}(\varphi\to\psi)\) to \({\square}_\varphi(\psi)\) (since we often subscript \({\square}\) by the proof system when it’s not clear which system we are referring to).

Modal UDT

The version of modal UDT I’m using in this article is equivalent to the one in Vladimir’s article, except for using \({\square}_\varphi(\cdot)\) instead of \({\square}(\cdot)\), for an appropriate \(\varphi\). Nevertheless, let me give a quick recap, which puts a slightly different spin on the system, and check that this does indeed meet the specification of a “modal decision theory”.

As a modal decision theory, \(m\)-action, \(n\)-preference-level modal UDT is a sequence \(\vec{\mathrm{UDT}}(\vec u)\) of \(m\) formulas in \(n\) parameters, such that \({\mathrm{UDT}}_i(\vec U)\) holds iff UDT chooses action \(i\) in the modal universe \(\vec U\). Let’s write \(\vec{\mathrm{UDT}}^{(\varphi)}(\vec u)\) for the version which uses provability in \(\mathrm{PA}+ \varphi\).

I think the easiest way to specify \(\vec{\mathrm{UDT}}^{(\varphi)}(\vec u)\) is to say that it’s a sequence of modal formulas which describe the behavior of the following algorithm (which needs a halting oracle):

  • For each \(j = 1,\dotsc,n\):
    • For each \(i = 1,\dotsc,m\):
      • If it’s provable in \(\mathrm{PA}+ \varphi\) that “I take action \(i\)” implies “I obtain outcome \(j\)”, then return \(i\).
  • If still here, return \(1\).

In other words, UDT starts with the most preferred outcome, and tries to find an action that provably implies that outcome; if there is such an action, it returns it, breaking ties lexicographically. If there isn’t, it tries the second-most-preferred outcome, and so on down to the least preferred one. If no actions imply any outcomes, it returns an arbitrary default action.
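The search loop above can be sketched in code (a hedged sketch: the real algorithm needs a halting oracle, so the `provable` argument here is an abstract stand-in for the oracle's answers about provability in \(\mathrm{PA}+\varphi\)):

```python
def modal_udt(m, n, provable):
    """Sketch of the modal UDT search loop. Outcomes are numbered
    1..n from most to least preferred; provable(i, j) answers whether
    PA + phi proves "I take action i" -> "I obtain outcome j"."""
    for j in range(1, n + 1):          # most preferred outcome first
        for i in range(1, m + 1):      # break ties lexicographically
            if provable(i, j):
                return i
    return 1                           # arbitrary default action
```

For example, with an oracle that only affirms that action 2 implies outcome 1, `modal_udt(3, 2, lambda i, j: (i, j) == (2, 1))` returns 2; an oracle that affirms nothing yields the default action 1.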

To actually write out \({\mathrm{UDT}}^{(\varphi)}_i\), we need to find a formula that describes when the above algorithm returns the action \(i\). Clearly, for actions other than the default action, the condition is that there’s some outcome \(j\) such that (a) it’s provable that \(i\) implies \(j\); (b) there’s no action \(i' \lt i\) which provably implies \(j\); and (c) there isn’t any outcome \(j' \lt j\) which is provably implied by any action \(i'\). The default action \(1\) is taken if the analogous condition holds for \(i = 1\), or if no action provably implies any outcome.

Let’s agree to order pairs \((j,i)\) lexicographically, i.e., \((j,i) < (j',i')\) if \(j \lt j'\), or \(j = j'\) and \(i \lt i'\). Then, we can write \({\mathrm{UDT}}^{(\varphi)}_i(\vec u)\) as follows, for \(i > 1\): \[{\mathrm{UDT}}^{(\varphi)}_i(\vec u) \,:\equiv\, \bigvee_{j=1}^n\left({\square}_\varphi\big({\mathrm{UDT}}^{(\varphi)}_i(\vec u)\to u_j\big) \wedge \bigwedge_{(j',i')<(j,i)} \neg{\square}_\varphi\big({\mathrm{UDT}}^{(\varphi)}_{i'}(\vec u)\to u_{j'}\big)\right),\] and for \(i = 1\): \[\begin{aligned}{\mathrm{UDT}}^{(\varphi)}_1(\vec u) & \,:\equiv\, \bigvee_{j=1}^n\left({\square}_\varphi\big({\mathrm{UDT}}^{(\varphi)}_1(\vec u)\to u_j\big) \wedge \bigwedge_{(j',i')<(j,1)} \neg{\square}_\varphi\big({\mathrm{UDT}}^{(\varphi)}_{i'}(\vec u)\to u_{j'}\big)\right) \\ & \,\vee\, \bigwedge_{(j',i')=(1,1)}^{(n,m)} \neg{\square}_\varphi\big({\mathrm{UDT}}^{(\varphi)}_{i'}(\vec u)\to u_{j'}\big).\end{aligned}\] From this definition, it’s straightforward to see that \(\vec{\mathrm{UDT}}^{(\varphi)}\) is provably mutually exclusive and exhaustive (p.m.e.e.); we work in \(\mathrm{GL}\) and distinguish the following cases: either there is no pair \((j,i)\) such that \({\square}_\varphi\big({\mathrm{UDT}}^{(\varphi)}_i(\vec u)\to u_j\big)\), or some pair \((j,i)\) is the first one. If there is no such pair, then \({\mathrm{UDT}}^{(\varphi)}_1(\vec u)\) is true, and all other formulas in the sequence are false. If \((j,i)\) is the first such pair, then \({\mathrm{UDT}}^{(\varphi)}_i(\vec u)\) is true, and all the other formulas are false.
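To double-check this case analysis, here is a small sketch (again a toy model of mine, with `provable` standing in for the oracle) that computes which of the formulas \({\mathrm{UDT}}^{(\varphi)}_i\) comes out true, and confirms that exactly one does:

```python
def udt_outputs(m, n, provable):
    """Truth values of the formulas UDT_1 .. UDT_m under a given oracle:
    the lexicographically first provable pair (j, i) fixes the action;
    if there is none, the default action 1 is taken. Exactly one entry
    is True, mirroring the p.m.e.e. case analysis in the text."""
    pairs = [(j, i) for j in range(1, n + 1) for i in range(1, m + 1)]
    first = next(((j, i) for (j, i) in pairs if provable(i, j)), None)
    chosen = first[1] if first is not None else 1
    return [i == chosen for i in range(1, m + 1)]
```

Whatever oracle we plug in, `sum(udt_outputs(m, n, provable)) == 1`.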

Provably extensional problems

Let’s call a closed formula in the language of \(\mathrm{GL}\) sound if its translation into the language of arithmetic is true on the natural numbers; unsurprisingly, we’ll write this as \(\mathbb{N}\vDash\varphi\) for short. My claim is that for every provably extensional decision problem \(\vec P(\vec a)\), there is a sound formula \(\varphi\) such that \(\vec{\mathrm{UDT}}^{(\varphi)}(\vec u)\) performs optimally on \(\vec P(\vec a)\), that is, obtains the best outcome that any decision theory can achieve on this problem.

Recall that a decision problem \(\vec P(\vec a)\) is defined to be provably extensional if \[\mathrm{GL}\vdash\big(\vec a\leftrightarrow \vec b\big)\to\big(\vec P(\vec a)\leftrightarrow \vec P(\vec b)\big),\] where \(\vec\varphi\leftrightarrow \vec\psi\) stands for the conjunction \((\varphi_1\leftrightarrow \psi_1)\wedge\cdots\wedge(\varphi_n\leftrightarrow \psi_n)\). (Of course, the two sequences need to be of the same length.)

Intuitively, such a decision problem assigns a single outcome \(f(i)\) to every action \(i\), and every agent which chooses action \(i\) will obtain outcome \(f(i)\); this is a formalization of the idea that we should consider a decision problem “fair” if it only rewards or punishes you for the actions you decide to take, not for the decision process you used to arrive at these decisions.

This doesn’t mean that the mapping \(f(i)\) is all that matters about the decision problem—for example, any particular version of modal UDT fails on its “evil” decision problem, despite the fact that these “evil” problems are provably extensional. That’s because it can be difficult to determine which mapping \(f(\cdot)\) a particular \(\vec P(\vec a)\) corresponds to.

The idea of the optimality proof in this post is that for any particular \(\vec P(\vec u)\), there is some truth about what the corresponding mapping \(f(\cdot)\) is, even if it’s hard to determine, and if we run UDT with a proof system strong enough to handle this problem, this version of UDT will simply be able to figure out which action is best and then take it.

How do we find a proof system strong enough to determine the action/outcome mapping \(f(\cdot)\) for a given \(\vec P(\vec a)\)? The simplest possibility is to just take \(\mathrm{PA}\) and add to it the information about the true action/outcome mapping.

Let \(\chi_{i}^{(i)} :\equiv \top\) and \(\chi_{i'}^{(i)} :\equiv \bot\) for \(i'\neq i\), and write \(\vec\chi^{(i)}\) for the corresponding sequence of length \(m\) or \(n\) (depending on the context). By the true mapping, I mean the mapping \(f(\cdot)\) such that for all \(i\in\{1,\dotsc,m\}\), we have \(\mathbb{N}\vDash P_{f(i)}(\vec\chi^{(i)})\). This mapping is unique, because \(\vec P(\vec a)\) is a decision problem, which by definition implies that the formulas in this sequence are p.m.e.e. (i.e., exactly one of them is true, for any particular value of the \(\vec a\)).
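In code, the indicator sequences \(\vec\chi^{(i)}\) look like this (a trivial sketch, with `True`/`False` standing in for \(\top\)/\(\bot\)):

```python
def chi(i, length):
    """The sequence chi^(i): Top (True) in position i and Bot (False)
    everywhere else, using 1-based positions as in the post."""
    return [k == i for k in range(1, length + 1)]
```

An agent that takes action \(i\) is, up to equivalence on \(\mathbb{N}\), the sequence `chi(i, m)`; exactly one entry is true, matching p.m.e.e.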

Now it’s clear how we can encode the fact that this is the true mapping as a formula of \(\mathrm{GL}\): we simply set \(\varphi :\equiv \bigwedge_{i=1}^m P_{f(i)}(\vec\chi^{(i)})\).


It’s now clear that the best outcome any agent can obtain on the provably extensional decision problem \(\vec P(\vec a)\) is \(\min_i f(i)\): An agent \(\vec A\) is a sequence of closed p.m.e.e. formulas, which means that for any agent \(\vec A\) there is some \(i\) such that \(\mathbb{N}\vDash\vec A\leftrightarrow \vec\chi^{(i)}\), which by extensionality implies \(\mathbb{N}\vDash\vec P(\vec A)\leftrightarrow \vec P(\vec\chi^{(i)})\), so the outcome obtained by \(\vec A\) is \(f(i)\).

Write \((\vec A,\vec U)\) for the fixed point of \(\vec{\mathrm{UDT}}^{(\varphi)}(\vec u)\) and \(\vec P(\vec a)\), with \(\varphi\) defined as in the previous section. Then, in order to show that \(\vec{\mathrm{UDT}}^{(\varphi)}(\vec u)\) performs optimally on this decision problem, we need to show that \(\mathbb{N}\vDash\vec A\leftrightarrow \vec\chi^{(i^*)}\) for some \(i^*\) such that \(f(i^*) = j^* := \min_{i} f(i)\).

I claim that this holds for \(i^*\) the smallest \(i\) such that \(f(i) = j^*\). Given the way \(\vec{\mathrm{UDT}}^{(\varphi)}(\vec u)\) is constructed, in order to show this I need to show that \(\mathbb{N}\vDash{\square}_\varphi\big({\mathrm{UDT}}^{(\varphi)}_{i^*}(\vec U)\to U_{j^*}\big)\), plus \(\mathbb{N}\vDash\neg{\square}_\varphi\big({\mathrm{UDT}}^{(\varphi)}_{i'}(\vec U)\to U_{j'}\big)\) for all \((j',i')<(j^*,i^*)\).
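As a toy sanity check of this claim (not a substitute for the proof: the oracle below answers exactly according to the true mapping \(f\), and showing that the actual proof search behaves this way is precisely the work of the two statements above), the lexicographic search does return the least \(i\) with \(f(i) = j^*\):

```python
def modal_udt(m, n, provable):
    """The UDT search loop from earlier in the post."""
    for j in range(1, n + 1):
        for i in range(1, m + 1):
            if provable(i, j):
                return i
    return 1

def optimal_action(f, m):
    """i*: the least action achieving the best outcome j* = min_i f(i)."""
    best = min(f(i) for i in range(1, m + 1))
    return min(i for i in range(1, m + 1) if f(i) == best)

# Toy mapping on 3 actions / 3 outcomes: f(1) = 2, f(2) = 1, f(3) = 1.
f = {1: 2, 2: 1, 3: 1}.get
assert modal_udt(3, 3, lambda i, j: f(i) == j) == optimal_action(f, 3) == 2
```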

The first of these statements is equivalent to \[\mathrm{GL}\,\vdash\,\varphi\to\big({\mathrm{UDT}}^{(\varphi)}_{i^*}(\vec U)\to U_{j^*}\big),\] which, given the properties \(\mathrm{GL}\vdash\vec A\leftrightarrow \vec{\mathrm{UDT}}^{(\varphi)}(\vec U)\) and \(\mathrm{GL}\vdash\vec U\leftrightarrow \vec P(\vec A)\) defining \(\vec A\) and \(\vec U\), we can rewrite as \[\mathrm{GL}\,\vdash\,\varphi\to\big(A_{i^*}\to P_{j^*}(\vec A)\big).\] Since \(\vec A\) is p.m.e.e., \(A_{i^*}\) is equivalent to \(\vec A\leftrightarrow \vec\chi^{(i^*)}\), so by provable extensionality of \(\vec P(\vec a)\), we have \(\mathrm{GL}\vdash A_{i^*}\to\big(\vec P(\vec A)\leftrightarrow \vec P(\vec\chi^{(i^*)})\big)\). The definition of \(\varphi\) implies \(\mathrm{GL}\vdash\varphi\to\big(\vec P(\vec\chi^{(i^*)})\leftrightarrow \vec\chi^{(j^*)}\big)\), so we obtain \(\mathrm{GL}\vdash (\varphi\wedge A_{i^*})\to\big(\vec P(\vec A)\leftrightarrow \vec\chi^{(j^*)}\big)\) and hence \(\mathrm{GL}\vdash (\varphi\wedge A_{i^*})\to P_{j^*}(\vec A)\), which is equivalent to the desired result.

For the second statement, we need to show that \({\mathrm{UDT}}^{(\varphi)}_{i'}(\vec U)\to U_{j'}\) is not provable in \(\mathrm{PA}+ \varphi\) for any \((j',i')<(j^*,i^*)\). So suppose it were provable for some such pair; without loss of generality, let \((j',i')\) be the smallest one. Then \(\vec{\mathrm{UDT}}^{(\varphi)}(\vec U)\) would take the action \(i'\); in other words, we would have \(\mathbb{N}\vDash {\mathrm{UDT}}^{(\varphi)}_{i'}(\vec U)\). We can again use the defining properties of \(\vec A\) and \(\vec U\) to rewrite this as \(\mathbb{N}\vDash A_{i'}\).

Since \(\mathrm{PA}\) is sound and \(\mathbb{N}\vDash\varphi\), the assumption that \({\mathrm{UDT}}^{(\varphi)}_{i'}(\vec U)\to U_{j'}\) is provable in \(\mathrm{PA}+ \varphi\) implies that it is true on \(\mathbb{N}\). We can rewrite this as well, to \(\mathbb{N}\vDash A_{i'}\to P_{j'}(\vec A)\). Hence, we have \(\mathbb{N}\vDash A_{i'}\wedge P_{j'}(\vec A)\).

If \(j' \lt j^*\), this says that \(\vec A\) achieves an outcome better than \(j^*\), which we’ve shown above to be the best outcome that any agent can possibly achieve—contradiction. If \(j'=j^*\) and \(i'\lt i^*\), then we have found an \(i'\lt i^*\) such that \(f(i') = j^*\) (since by extensionality, \(\mathbb{N}\vDash A_{i'}\) implies \(\mathbb{N}\vDash P_{f(i')}(\vec A)\), and hence \(\mathbb{N}\vDash \neg P_{j'}(\vec A)\) for \(j'\neq f(i')\), as \(\vec P(\vec a)\) is p.m.e.e.), contradicting the definition of \(i^*\).




