An implementation of modal UDT
post by Benja Fallenstein

One of the great advantages of working with Gödel-Löb provability logic is that it’s possible to implement an evaluator which efficiently checks whether a sentence in the language of GL is true. Mihaly and Marcello used this to write a program that checks whether two modal agents cooperate or defect against each other. Today, Nate and I extended this with an implementation of modal UDT, which allows us to check what UDT does on different decision problems; see Program.hs in the GitHub repository. No guarantees of correctness, since this was written rather quickly; if anybody is able to take the time to check the code, that would be very much appreciated!
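
To give a flavor of how such an evaluator can work, here is a minimal, self-contained sketch for the special case of closed, letterless sentences (my own illustration; the names Formula, depth, holds, and evalGL are made up, and this is not the repository’s code). For letterless sentences, truth under the arithmetical interpretation can be read off from a single world of the linear Kripke frame \(0 < 1 < 2 < \dots\): evaluate at a world at least as deep as the sentence’s modal nesting depth, with \(\Box\varphi\) true at world \(n\) iff \(\varphi\) is true at all worlds \(m < n\).

-- Toy evaluator for closed (letterless) GL sentences; all names here are
-- invented for this sketch, not taken from the repository.
data Formula = Bot | Neg Formula | And Formula Formula
             | Or Formula Formula | Imp Formula Formula | Box Formula

-- Modal depth: maximum nesting of Box.
depth :: Formula -> Int
depth Bot       = 0
depth (Neg p)   = depth p
depth (And p q) = max (depth p) (depth q)
depth (Or p q)  = max (depth p) (depth q)
depth (Imp p q) = max (depth p) (depth q)
depth (Box p)   = 1 + depth p

-- Truth at world n of the linear frame 0 < 1 < 2 < ...,
-- where Box p holds at n iff p holds at every world m < n.
holds :: Int -> Formula -> Bool
holds _ Bot       = False
holds n (Neg p)   = not (holds n p)
holds n (And p q) = holds n p && holds n q
holds n (Or p q)  = holds n p || holds n q
holds n (Imp p q) = not (holds n p) || holds n q
holds n (Box p)   = all (\m -> holds m p) [0 .. n - 1]

-- For letterless sentences, truth in the standard model coincides with
-- truth at any world at or beyond the sentence's modal depth.
evalGL :: Formula -> Bool
evalGL p = holds (depth p) p

For example, evalGL (Neg (Box Bot)) is True (PA does not prove a contradiction), while evalGL (Box (Neg (Box Bot))) is False (by the second incompleteness theorem, PA does not prove its own consistency). The repository’s evaluator also has to handle sentences containing variables, such as the fixed points that define modal agents, which this sketch does not attempt.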


The implementation of UDT is rather pleasing, I think. Here’s the informal definition of modal UDT, using PA + \(\ell\):

  • For every possible outcome \(j\), from best to worst:
    • For every possible action \(i\), in order:
      • If it’s provable in PA + \(\ell\) that “UDT takes action \(i\)” implies “the universe returns outcome \(j\)”, then take action \(i\).
  • If you’re still here, return a default action.

Here is the corresponding Haskell code:

-- level: the ℓ in "PA + ℓ", i.e. which proof system proofs are searched in;
-- univ:  the universe, a modal program taking the agent's action (type b)
--        to an outcome (type a); dflt: the default action.
udt :: (Enum a,Ord b,Show b,Enum b)
    => Int -> ModalProgram b a -> b -> ModalProgram b b
udt level univ dflt = modalProgram dflt $
  mFor $ \a ->      -- for every outcome a, from best to worst
    mFor $ \b ->    -- for every action b, in order
      -- if PA + ℓ proves that taking action b implies outcome a, take b
      mIf (boxk level (Var b %> univ a)) (mReturn b)

Being able to write modal UDT this concisely makes it very easy to implement and try out small variations on the code.
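
For instance, here is one hypothetical variation (my own example, not something from the repository): swapping the two loops gives an agent that takes the first action about which it can prove anything, rather than searching for the best provable outcome. Whether that agent is sensible is beside the point; the point is that the change is a one-line edit.

-- Hypothetical variant: actions in the outer loop, outcomes in the inner
-- loop. This changes the agent's behavior and is shown only to illustrate
-- how easily variants can be written down.
udtSwapped :: (Enum a,Ord b,Show b,Enum b)
    => Int -> ModalProgram b a -> b -> ModalProgram b b
udtSwapped level univ dflt = modalProgram dflt $
  mFor $ \b ->
    mFor $ \a ->
      mIf (boxk level (Var b %> univ a)) (mReturn b)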


We used this code to check what modal UDT does in a version of Newcomb’s problem where Omega uses proofs in PA (rather than simulations) to decide whether to put the money in the box; that is, Omega puts a million dollars in the first box if and only if it can prove that you will one-box. If our code is correct, it turns out that in this case, modal UDT takes whatever its default action is.
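
To spell out the payoff structure, here is a toy encoding of the setup just described (my own names and types, not the encoding used in Program.hs); the Bool argument stands for whether Omega’s proof search in PA for “the agent one-boxes” succeeds.

-- Toy encoding of the proof-based Newcomb problem; 'Action', 'Outcome',
-- and 'newcomb' are names invented for this sketch.
data Action  = OneBox | TwoBox                       deriving (Eq, Show)
data Outcome = Both | Million | Thousand | Nothing'  deriving (Eq, Show)
  -- outcomes listed from best ($1,001,000) to worst ($0)

-- The outcome, given the agent's action and whether PA proves
-- that the agent one-boxes.
newcomb :: Action -> Bool -> Outcome
newcomb TwoBox True  = Both      -- both boxes, first box filled
newcomb OneBox True  = Million   -- first box only, filled
newcomb TwoBox False = Thousand  -- both boxes, first box empty
newcomb OneBox False = Nothing'  -- first box only, empty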

Earlier, we thought we had proved a different result on the whiteboard, but after the code disagreed with us, we went over it again and found a bug in our proof. After fixing that bug, we now have a manual proof that UDT will end up taking its default action in this scenario (which I’ll write about some other time). So it looks like this can be a useful tool for figuring out this sort of thing!


