Intelligent Agent Foundations Forum
by Janos Kramar

To understand what the measure \(\mu\) constructed from \(d\) rewards, here is the sort of machine that comes close to \(\sup_M\mu(M)=3\):

Let \(M_0\) be an arbitrary UTM. Now consider the function \(r(n)=n-2^{\lfloor \lg n \rfloor}\) (or, really, any function \(r:\mathbb{N}^+\rightarrow\mathbb{N}_0\) with \(r(n)<n\) that visits every nonnegative integer infinitely many times), and let \(L=\{x\in\{0,1\}^*:|x|>2,x_{|x|-1}=x_{r(|x|-1)},x_{|x|-2}=x_{r(|x|-2)}\}\). (The indices here are zero-based.) Choose \(x_0\in L\) such that \(x_0\) has no proper prefix in \(L\). Then construct the UTM \(M\) that does:

repeat:
    # read one block: characters are consumed until the string read
    # since the last reset first lands in L
    s := ""
    while s not in L:
        # if there is no next character, halt
        s := s + readchar()
    # a block equal to x0 hands control to M0; any other block in L resets
    if s == x0:
        break
M0()
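
For concreteness, here is a minimal Python sketch of \(r\), the membership test for \(L\), and a brute-force search for members of \(L\) with no proper prefix in \(L\) (i.e. candidates for \(x_0\)); the function names are mine, and strings stand in for elements of \(\{0,1\}^*\):

def r(n):
    # r(n) = n - 2^floor(lg n): clear the highest set bit of n
    return n - (1 << (n.bit_length() - 1))

def in_L(x):
    # x is a string over {'0','1'}; indices are zero-based, as in the post
    n = len(x)
    if n <= 2:
        return False
    return x[n - 1] == x[r(n - 1)] and x[n - 2] == x[r(n - 2)]

def prefix_free_members(max_len):
    # members of L none of whose proper prefixes are in L (candidates for x0)
    out = []
    def grow(x):
        if in_L(x):
            out.append(x)  # stop here: any extension would have a prefix in L
        elif len(x) < max_len:
            grow(x + '0')
            grow(x + '1')
    grow('')
    return out

For example, prefix_free_members(3) returns ['000', '111'], so the shortest admissible choices of \(x_0\) are 000 and 111.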

This \(M\) will have \(\mu(M)>3-2^{-|x_0|}+d(M_0,M)2^{-|x_0|-d(M_0,M)}\).

\(M\) here is optimized for building up internal states (which are themselves UTMs that \(M\) encodes efficiently) while also being very easy to reset from those internal states; in other words, \(M\) is easy to “encode” from the UTMs it efficiently encodes, using at most 2 bits (an average of \(\frac{1+\sqrt{5}}{2}\) bits). This is somewhat interesting, but it clearly doesn’t capture the kind of computational expressivity we’re primarily interested in.
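
As a quick sanity check of the “at most 2 bits” part of this claim (the golden-ratio average depends on the distribution over internal states and is not checked here), one can verify by brute force that any nonempty partial block can be completed into a member of \(L\) within two further bits, so \(M\) is always at most two characters away from finishing its current block and, unless that block is \(x_0\), resetting. A sketch reusing in_L from above:

from itertools import product

def bits_to_finish_block(s, limit=8):
    # smallest k for which some k-bit extension of the partial block s lands
    # in L; feeding M those bits ends its current block after exactly k more
    # characters
    for k in range(1, limit + 1):
        if any(in_L(s + ''.join(t)) for t in product('01', repeat=k)):
            return k
    return None

# every nonempty partial block can be finished within 2 further bits
assert all(
    bits_to_finish_block(''.join(s)) <= 2
    for n in range(1, 10)
    for s in product('01', repeat=n)
)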


