by Jessica Taylor 714 days ago

I think the main problem with using a pre-specified distribution over actions is that, since it doesn't reflect the AI's actual behavior, you can't say much about $$P(B = b \mid e)$$ in relation to real life. For example, maybe the implicit policy is to take random actions, which results in humans not pressing the shutdown button; therefore, in real life the AI is confident that the button will not be pressed, so it spends no resources looking at the shutdown button to see whether it was pressed.
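To make the mismatch concrete, here is a toy numerical sketch (the environment, action names, and numbers are invented for illustration, not taken from the thread): the press probability computed under the fictitious random policy can be far from the press frequency produced by the AI's actual behavior.

```python
# Hypothetical toy environment: operators press the shutdown button only in
# response to one specific alarming action, never in response to random noise.
def button_pressed(action):
    return action == "disable_oversight"

actions = ["do_nothing", "send_report", "fetch_data", "run_experiment", "disable_oversight"]

# Belief under the pre-specified policy (uniformly random actions): only 1 of 5
# actions triggers the button, so the modeled press probability is low and the
# agent budgets little attention to checking the button.
p_pressed_under_random_policy = sum(button_pressed(a) for a in actions) / len(actions)  # 0.2

# Actual behavior: the optimizing policy repeatedly takes whichever action its
# utility favors, which here happens to be the alarming one.
actual_trajectory = ["disable_oversight"] * 10
p_pressed_in_real_life = sum(button_pressed(a) for a in actual_trajectory) / len(actual_trajectory)  # 1.0

print(p_pressed_under_random_policy, p_pressed_in_real_life)  # the two disagree badly
```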

by Patrick LaVictoire 711 days ago

Maybe if the pre-specified distribution is a reasonably well-calibrated predictor of the AI (given that distribution)? Like, maybe this is a way that an Oracle AI could help ensure the safety of a somewhat weaker Tool AI.
by Stuart Armstrong 711 days ago

How about an iterated process? If you have a series of actions and observations $$e_n = a_n o_n a_{n-1} o_{n-1} \ldots$$, and $$A$$ is the set of possible actions, have the AI pick $$a$$ such that:

$$a_{n} = \operatorname{argmax}_{a} \frac{1}{|A|}\sum_{a'\in A,\, b} U(a o_n e_{n-1} \mid B=b)\, P(B=b \mid a' o_n e_{n-1})$$

(the $$\frac{1}{|A|}$$ is not necessary; it is just a reminder that we are choosing a uniform distribution over the possible actions). Notice that the AI conditions correctly on all its past actions, but not on its current hypothetical. I wonder how this would work if we set $$A=\{0,1\}$$ and had the AI type out messages bit by bit, each time thinking the probability of $$B$$ was independent of its bit choice?

EDIT: What this means is that, as soon as it has done any action that might cause the shutdown button to be pressed, it will then look at the shutdown button (but maybe doesn't expect, ahead of time, that it will be looking at the button).
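A minimal sketch of the action rule above, under stated assumptions: `history` stands for the record $$o_n e_{n-1}$$, and the helpers `utility` and `posterior_b` are hypothetical placeholders for $$U(a o_n e_{n-1} \mid B=b)$$ and $$P(B=b \mid a' o_n e_{n-1})$$, which would really come from the AI's world model.

```python
def choose_action(actions, b_values, history, utility, posterior_b):
    """Return argmax_a (1/|A|) * sum_{a' in A, b} U(a, history | b) * P(b | a', history).

    The candidate action a enters only through the utility term; the belief over
    B is averaged over a uniform hypothetical action a', so the current choice
    cannot shift the agent's estimate of whether the button gets pressed.
    """
    def score(a):
        total = 0.0
        for a_prime in actions:                    # hypothetical action used only to condition on B
            post = posterior_b(a_prime, history)   # dict: b -> P(B=b | a', history)
            for b in b_values:
                total += utility(a, history, b) * post[b]
        return total / len(actions)                # the 1/|A| factor (uniform mixture over a')

    return max(actions, key=score)
```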
by Jessica Taylor 710 days ago

I think, to get $$P(B = b)$$, you have to have an implicit policy for the entire rest of the game (not just the next action $$a'$$). I like the idea of using the evidence you have so far to inform $$P(b)$$, so you spend more effort on looking at the shutdown button if you expect shutdown might be imminent based on your evidence. Of course, you can combine this with the fixed-point idea, so the distribution of $$a'$$ is the same as the distribution of $$a$$.

My main concern is that this isn't reflectively stable. If at an early time step the AI has a certain $$P(b)$$ distribution, it may want to self-modify into an agent that fixes this as the correct $$P(b)$$ rather than updating $$P(b)$$ in response to new evidence; this is because it is modelling $$B$$ as coming independently from $$P(b)$$.
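One way the fixed-point combination could be sketched is as an iteration over action distributions: condition the belief about $$B$$ on $$a'$$ drawn from the current distribution, best-respond to that belief, and repeat until the distribution reproduces itself. Everything below (the helper names, the damping trick, plain best-response iteration rather than a proper fixed-point search) is a hypothetical illustration, not something specified in the thread.

```python
def fixed_point_policy(actions, b_values, history, utility, posterior_b,
                       iters=100, damping=0.5):
    """Iterate toward a distribution pi over actions such that conditioning
    P(B=b | a' ~ pi, history) and then maximizing expected utility yields pi back."""
    pi = {a: 1.0 / len(actions) for a in actions}   # start from the uniform distribution

    for _ in range(iters):
        # Belief over B induced by the current policy distribution pi.
        belief = {b: sum(pi[a_prime] * posterior_b(a_prime, history)[b]
                         for a_prime in actions)
                  for b in b_values}
        # Best response to that (fixed) belief.
        scores = {a: sum(utility(a, history, b) * belief[b] for b in b_values)
                  for a in actions}
        best = max(scores, key=scores.get)
        # Softly move pi toward the best response; damping helps avoid oscillation.
        pi = {a: (1 - damping) * pi[a] + damping * (1.0 if a == best else 0.0)
              for a in actions}
    return pi
```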
