Intelligent Agent Foundations Forum
A failed attempt at Updatelessness using Universal Inductors
discussion post by Scott Garrabrant 505 days ago | Jessica Taylor and Patrick LaVictoire like this | 1 comment

Here, I present a failed attempt to build an updateless decision theory out of universal inductors. It fails because it mistakes updatelessness about which logical theory it is in for true logical updatelessness about computations. I will use Tsvi’s notation.

Fix a UGI, \((\mathbb{P}_n)\). Fix a sequence of utility functions \((U_n:2^\omega\rightarrow \mathbb{R})\), each of which assigns a utility to every propositionally consistent world (represented by an infinite bit string). We assume that \(U_n(W)\) is a computable function of \(n\) and the first \(k(n)\) bits of \(W\), for some computable function \(k\). In the simplest example, \(U_n\) is just equal to a single bit of the string.
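As a concrete illustration, here is a minimal Python sketch of a utility function of this form; the names \(k\) and \(U\) are hypothetical, and the body implements the single-bit example.

```python
# Minimal sketch (hypothetical names): a utility U_n that reads only the
# first k(n) bits of the world, in the simplest case returning one bit.

from typing import Sequence

def k(n: int) -> int:
    """A computable bound on how many bits U_n reads."""
    return n + 1

def U(n: int, world: Sequence[int]) -> float:
    """Utility of a world, depending only on its first k(n) bits."""
    prefix = world[:k(n)]
    return float(prefix[-1])  # simplest example: utility is a single bit
```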

We define a sequence of agents \((A_n)\), each of which outputs a single bit, \(0\) or \(1\). Each agent is broken into two pieces: a deductive process, which outputs a collection of logical facts, and a decision process, which chooses a policy in the form of a function from the possible outputs of the deductive process to \(\{0,1\}\).

Let \(P_n\) denote the set of policies that the decision process can output. There is a computable partition of worlds according to which policy is output, \(S_n:2^\omega\rightarrow P_n\). For each \(p\in P_n\), we can compute the expectation \(\mathbb{E}(U_n(W)|S_n(W)=p)\), where \(W\) is sampled according to \(\mathbb{P}_n\). The decision process outputs the policy \(p\) which maximizes \(\mathbb{E}(U_n(W)|S_n(W)=p)\), and the agent \(A_n\) outputs the result of applying that policy to the output of the deductive process.
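Here is a sketch of the decision process in Python, with all names hypothetical: it assumes access to a sampler for \(\mathbb{P}_n\), the partition map \(S_n\), and the utility \(U_n\), and it replaces the exact conditional expectations with Monte Carlo estimates.

```python
# Sketch of the decision process (hypothetical names). Each conditional
# expectation E[U_n(W) | S_n(W) = p] is estimated from sampled worlds.

from collections import defaultdict

def choose_policy(policies, sample_world, S_n, U_n, num_samples=10_000):
    """Return the policy p maximizing the estimated E[U_n(W) | S_n(W) = p]."""
    totals = defaultdict(float)   # sum of utilities over each policy's cell
    counts = defaultdict(int)     # number of sampled worlds in each cell
    for _ in range(num_samples):
        w = sample_world()        # W sampled according to P_n
        p = S_n(w)                # the cell of the partition that w falls in
        totals[p] += U_n(w)
        counts[p] += 1

    def estimate(p):
        # Conditional expectation estimate; unsampled policies rank last.
        return totals[p] / counts[p] if counts[p] else float("-inf")

    return max(policies, key=estimate)
```

The agent \(A_n\) then outputs the chosen policy applied to whatever facts the deductive process actually produced.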

There are actually many things wrong with the above proposal, and many similar proposals that fail in similar or different ways. However, I want to focus on the one problem that proposals like this have in common:

Universal Induction is a model for uncertainty about what theory/model you are in; it is not a model for uncertainty about the output of computations.

It is easiest to see why this is a problem using the counterfactual mugging problem. We would like to use a universal inductor to be uncertain about a digit of \(\pi\), and thus reason about the world in which that digit went the other way. The problem is that a powerful universal inductor already has the digits of \(\pi\) built into its probabilities, even if it does not know that it is in PA. This is because the Kolmogorov complexity of the infinite string of digits of \(\pi\) is very low, while the Kolmogorov complexity of a string that looks like \(\pi\) for a very long time and then deviates is high. We do not have to direct our UGI at PA for it to have good beliefs about late bits of a string that starts out looking like \(\pi\).
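To spell out the complexity comparison (a standard, slightly informal Solomonoff-style estimate): let \(\pi^{(n)}\) be a string that agrees with the digits of \(\pi\) up to position \(n\) and then deviates. Since \(\pi\) is computable, \(n\) can be recovered from \(\pi^{(n)}\) by finding the first deviation, so

\[ K(\pi) = O(1), \qquad K(\pi^{(n)}) \geq K(n) - O(1) \approx \log n \text{ for typical } n. \]

A universal prior therefore gives the deviating world only on the order of \(2^{-K(n)}\) times the weight of the true one, and the inductor's credence that the string continues like \(\pi\) tends to \(1\).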

I will use the phrase “logical updatelessness” to refer to updatelessness about computations. I think updatelessness about the logical system is mostly a distraction from the more important concept of logical updatelessness. (Similarly, I believe that early work in logical uncertainty about distributions over complete theories was mostly a distraction from the later work that focused on uncertainty about computations.)

by Paul Christiano 503 days ago

From my perspective, the point of reasoning about complete theories isn’t that we actually care about them, it’s that “what does this halting oracle output?” might be a useful analogy for “what does this long-running computation output?” I still think it is/was a useful analogy, though the time eventually came to move on to smaller and better things.





