Stable Pointers to Value II: Environmental Goals discussion post by Abram Demski 12 days ago | 1 comment

Delegative Reinforcement Learning solves this problem by keeping humans in the loop while preserving consequentialist reasoning. Of course, the current theory rests on a lot of simplification and the ultimate learning protocol will probably look different, but I think that the basic mechanism (delegation combined with model-based reasoning) is sound.

Further Progress on a Bayesian Version of Logical Uncertainty
post by Alex Appel 20 days ago | Scott Garrabrant likes this | 1 comment

I’d like to credit Daniel Demski for helpful discussion.

Intermediate update:

The handwavy argument about how you’d get propositional inconsistency in the limit of imposing the constraint of “the string cannot contain $$a\wedge b\wedge c...\to\neg\phi$$ and $$a$$ and $$b$$ and … and $$\phi$$” is less clear than I thought.

The problem is that, while the prior may learn that that constraint applies as it updates on more sentences, that particular constraint can get you into situations where adding either $$\phi$$ or $$\neg\phi$$ leads to a violation of the constraint.

So, running the prior far enough forward leads to the probability distribution being nearly certain that, while that particular constraint applied in the past, it will stop applying at some point in the future by vetoing both possible extensions of a string of sentences, and then less-constrained conditions will apply from that point forward.
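To make the corner-painting concrete, here is a toy check (my own construction, not from the post, with a made-up sentence encoding) of a string where the constraint vetoes both possible extensions:

```python
def violates(string):
    """Check the modus-ponens-style constraint on a list of sentences.

    Sentences are atoms (strings), negations ('neg', s), or implications
    ('imp', frozenset_of_premises, ('neg', s)); the constraint forbids a
    string containing all the premises, the implication, and s itself."""
    sentences = set(string)
    for item in sentences:
        if isinstance(item, tuple) and item[0] == 'imp':
            _, premises, conclusion = item
            negated = conclusion[1]  # conclusion is ('neg', s); extract s
            if premises <= sentences and negated in sentences:
                return True
    return False

phi = 'phi'
not_phi = ('neg', 'phi')

# Stem: contains a and "a -> ~phi" (so appending phi is vetoed), and
# b and "b -> ~~phi" (so appending ~phi is vetoed).
stem = ['a', ('imp', frozenset({'a'}), ('neg', phi)),
        'b', ('imp', frozenset({'b'}), ('neg', not_phi))]

print(violates(stem))              # False: the stem itself is fine
print(violates(stem + [phi]))      # True
print(violates(stem + [not_phi]))  # True
```

Here the stem satisfies the constraint, yet neither $$\phi$$ nor $$\neg\phi$$ can be appended without violating it.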

On one hand, if you don’t have the computational resources to enforce full propositional consistency, it’s expected that most of the worlds you generate will be propositionally inconsistent, and midway through generating them you’ll realize that some of them are indeed propositionally inconsistent.

On the other hand, we want to be able to believe that constraints capable of painting themselves into a corner will apply to reality forevermore.

I’ll think about this a bit more. One possible line of attack: since the sentence-generating process may just stop cold before either $$\phi$$ or $$\neg\phi$$ shows up, $$\mathbb{P}(\phi)$$ and $$\mathbb{P}(\neg\phi)$$ need not add up to one, so we could renormalize them so that they do. But I’d have to check whether it’s still possible to $$\varepsilon$$-approximate the distribution if we introduce this renormalization, and, to be honest, I wouldn’t be surprised if there were a more elegant way around this.

EDIT: yes, it’s still possible to $$\varepsilon$$-approximate the distribution in known time if you have $$\mathbb{P}(\phi)$$ refer to $$\frac{\text{probability of encountering }\phi\text{ first}}{1-\text{probability of halting first}}$$, although the bounds are really loose. This is because, if most of the execution paths involve halting before the sentence is sampled, an $$\varepsilon$$-error in the probability of sampling $$\phi$$ first will get blown up by the small denominator.
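This renormalization can be sanity-checked by simulation. Below is a Monte Carlo sketch (my own illustration; the sentence generator and its per-step probabilities are made up) that estimates $$\mathbb{P}(\phi)$$ by throwing away the runs that halt first:

```python
import random

def run_generator(rng):
    """Toy process: each step emits phi w.p. 0.1, ~phi w.p. 0.3,
    halts w.p. 0.2, and otherwise emits an irrelevant sentence."""
    while True:
        u = rng.random()
        if u < 0.1:
            return 'phi'
        elif u < 0.4:
            return 'not_phi'
        elif u < 0.6:
            return 'halt'
        # else: irrelevant sentence, keep sampling

def renormalized_prob(n_samples=100_000, seed=0):
    rng = random.Random(seed)
    outcomes = [run_generator(rng) for _ in range(n_samples)]
    halts = outcomes.count('halt')
    # P(encounter phi first) / (1 - P(halt first))
    return outcomes.count('phi') / (n_samples - halts)

# The relevant per-round rates are phi : not_phi : halt = 0.1 : 0.3 : 0.2,
# so the renormalized estimate should land near 0.1 / 0.4 = 0.25, and the
# renormalized P(phi) and P(~phi) sum to one by construction.
print(renormalized_prob())
```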

Will type up the proof later, but it basically proceeds by looking at the probability mass associated with “sample the trivial constraint that accepts everything, and sample it again on each successive round”, because this slice of probability mass has a near-guarantee of hitting $$\phi$$, and then showing that even this tiny slice has substantially more probability mass than the cumulative probability of ever sampling a really rare sentence, or of never hitting $$\phi$$, $$\neg\phi$$, or string termination.

In memoryless Cartesian environments, every UDT policy is a CDT+SIA policy
post by Jessica Taylor 620 days ago | Vadim Kosoy and Abram Demski like this | 2 comments

Summary: I define memoryless Cartesian environments (which can model many familiar decision problems), note the similarity to memoryless POMDPs, and define a local optimality condition for policies, which can be roughly stated as “the policy is consistent with maximizing expected utility using CDT and subjective probabilities derived from SIA”. I show that this local optimality condition is necessary but not sufficient for global optimality (UDT).

Since Briggs [1] shows that EDT+SSA and CDT+SIA both recommend ex-ante-optimal policies in some class of cases, one might wonder whether the result of this post transfers to EDT+SSA. That is: in memoryless POMDPs, is every (ex ante) optimal policy also consistent with EDT+SSA in a similar sense? I think it is, as I will try to show below.

Given some existing policy $$\pi$$, EDT+SSA recommends that upon receiving observation $$o$$ we should choose an action from $\arg\max_a \sum_{s_1,...,s_n} \sum_{i=1}^n SSA(s_i\text{ in }s_1,...,s_n\mid o, \pi_{o\rightarrow a})U(s_n).$ (For notational simplicity, I’ll assume that policies are deterministic, but, of course, actions may encode probability distributions.) Here, $$\pi_{o\rightarrow a}(o')=a$$ if $$o=o'$$ and $$\pi_{o\rightarrow a}(o')=\pi(o')$$ otherwise. $$SSA(s_i\text{ in }s_1,...,s_n\mid o, \pi_{o\rightarrow a})$$ is the SSA probability of being in state $$s_i$$ of the environment trajectory $$s_1,...,s_n$$ given the observation $$o$$ and the fact that one uses the policy $$\pi_{o\rightarrow a}$$.

The SSA probability $$SSA(s_i\text{ in }s_1,...,s_n\mid o, \pi_{o\rightarrow a})$$ is zero if $$m(s_i)\neq o$$ and $SSA(s_i\text{ in }s_1,...,s_n\mid o, \pi_{o\rightarrow a}) = P(s_1,...,s_n\mid \pi_{o\rightarrow a}) \frac{1}{\#(o,s_1,...,s_n)}$ otherwise. Here, $$\#(o,s_1,...,s_n)=\sum_{i=1}^n \left[ m(s_i)=o \right]$$ is the number of times $$o$$ is observed in the history $$s_1,...,s_n$$. Note that this is the minimal reference class version of SSA, also known as the double-halfer rule (because it assigns 1/2 probability to tails in the Sleeping Beauty problem and sticks with 1/2 if it’s told that it’s Monday).
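The double-halfer behavior is easy to verify numerically. Here is a small sketch (my own toy computation, not from the post) of minimal-reference-class SSA in Sleeping Beauty:

```python
def ssa_credence(worlds, obs):
    """Minimal-reference-class SSA credences.

    worlds: {name: (prior, [observation of each awakening])}.
    Each awakening matching `obs` gets mass prior / n_matching, so a world
    with at least one matching awakening keeps its full prior mass."""
    mass = {}
    for name, (prior, observations) in worlds.items():
        n_match = sum(1 for o in observations if o == obs)
        mass[name] = prior if n_match > 0 else 0.0
    total = sum(mass.values())
    return {name: m / total for name, m in mass.items()}

# Sleeping Beauty: heads -> one awakening, tails -> two awakenings.
# Before learning the day, every awakening yields the same observation:
sb = {'heads': (0.5, ['awake']), 'tails': (0.5, ['awake', 'awake'])}
print(ssa_credence(sb, 'awake')['heads'])    # 0.5

# Told "it is Monday": the observation now distinguishes the awakenings,
# and the minimal reference class contains only Monday awakenings:
sb2 = {'heads': (0.5, ['monday']), 'tails': (0.5, ['monday', 'tuesday'])}
print(ssa_credence(sb2, 'monday')['heads'])  # 0.5
```

Both queries give 1/2, matching the double-halfer description above.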

Inserting this into the above, we get $\arg\max_a \sum_{s_1,...,s_n} \sum_{i=1}^n SSA(s_i\text{ in }s_1,...,s_n\mid o, \pi_{o\rightarrow a})U(s_n)=\arg\max_a \sum_{s_1,...,s_n\text{ with }o} \sum_{i=1...n, m(s_i)=o} \left( P(s_1,...,s_n\mid \pi_{o\rightarrow a}) \frac{1}{\#(o,s_1,...,s_n)} \right) U(s_n),$ where the first sum on the right-hand side is over all histories that give rise to observation $$o$$ at some point. Dividing by the number of agents with observation $$o$$ in a history and setting the policy for all agents at the same time cancel each other out, such that this equals $\arg\max_a \sum_{s_1,...,s_n\text{ with }o} P(s_1,...,s_n\mid \pi_{o\rightarrow a}) U(s_n)=\arg\max_a \sum_{s_1,...,s_n} P(s_1,...,s_n\mid \pi_{o\rightarrow a}) U(s_n).$ Obviously, any optimal policy chooses in agreement with this. But the same disclaimers apply; multiple policies satisfy the right-hand side of this equation and not all of these are optimal.
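The cancellation step can be checked on a toy example (my own; the environment and utilities are made up): the SSA objective and the plain expected-utility objective differ only by a $$\pi$$-independent term coming from histories without $$o$$, so their argmaxes agree:

```python
# Memoryless toy environment: nature picks one of three trajectory
# "shapes"; the agent sees observation 'o' at each marked state and, being
# memoryless, takes the same action at every one. Final utility depends on
# the shape and the action.
# shape: (prior, number_of_o_observations, utility_as_function_of_action)
shapes = [
    (1/3, 2, lambda a: 10 if a == 'dance' else 0),  # 'o' occurs twice
    (1/3, 1, lambda a: -4 if a == 'dance' else 3),  # 'o' occurs once
    (1/3, 0, lambda a: 7),                          # no 'o': a never used
]

def ssa_objective(a):
    # sum over histories containing 'o' and over centered copies, each
    # weighted by prior / #occurrences -- the copies sum back to `prior`
    total = 0.0
    for prior, n_obs, util in shapes:
        for _ in range(n_obs):
            total += prior * (1 / n_obs) * util(a)
    return total

def plain_objective(a):
    # plain expected utility over all histories, including those without 'o'
    return sum(prior * util(a) for prior, n_obs, util in shapes)

for a in ('dance', 'sit'):
    print(a, ssa_objective(a), plain_objective(a))

best_ssa = max(('dance', 'sit'), key=ssa_objective)
best_plain = max(('dance', 'sit'), key=plain_objective)
print(best_ssa, best_plain)  # same action both ways
```

The objective values differ by the constant contribution of the no-observation history, but the maximizing action is the same.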

[1] Rachael Briggs (2010): Putting a value on Beauty. In Tamar Szabo Gendler and John Hawthorne, editors, Oxford Studies in Epistemology: Volume 3, pages 3–34. Oxford University Press. http://joelvelasco.net/teaching/3865/briggs10-puttingavalueonbeauty.pdf

Logical counterfactuals and differential privacy
post by Nisan Stiennon 30 days ago | Abram Demski and Scott Garrabrant like this | 1 comment

This idea was informed by discussions with Abram Demski, Scott Garrabrant, and the MIRIchi discussion group.

This doesn’t quite work. The theorem and examples only work if you maximize the unconditional mutual information, $$H(X;Y)$$, not $$H(X;Y|A)$$. And the choice of $$X$$ is doing a lot of work — it’s not enough to make it “sufficiently rich”.

An Untrollable Mathematician
post by Abram Demski 29 days ago | Alex Appel, Sam Eisenstat, Vadim Kosoy, Jack Gallagher, Paul Christiano, Scott Garrabrant and Vladimir Slepnev like this | 1 comment

Follow-up to All Mathematicians are Trollable.

It is relatively easy to see that no computable Bayesian prior on logic can converge to a single coherent probability distribution as we update it on logical statements. Furthermore, the non-convergence behavior is about as bad as could be: someone selecting the ordering of provable statements to update on can drive the Bayesian’s beliefs arbitrarily up or down, arbitrarily many times, despite only saying true things. I called this wild non-convergence behavior “trollability”. Previously, I showed that if the Bayesian updates on the provability of a sentence rather than updating on the sentence itself, it is still trollable. I left open the question of whether some other side information could save us. Sam Eisenstat has closed this question, providing a simple logical prior and a way of doing a Bayesian update on it which (1) cannot be trolled, and (2) converges to a coherent distribution.

by Sam Eisenstat 28 days ago | Abram Demski likes this | link | on: An Untrollable Mathematician

I at first didn’t understand your argument for claim (2), so I wrote an alternate proof that’s a bit more obvious/careful. I now see why it works, but I’ll give my version below for anyone interested. In any case, what you really mean is the probability of deciding a sentence outside of $$\Phi$$ by having it announced by nature; there may be a high probability of sentences being decided indirectly via sentences in $$\Phi$$.

Instead of choosing $$\Phi$$ as you describe, pick $$\Phi$$ so that the probability $$\mu(\Phi)$$ of sampling something in $$\Phi$$ is greater than $$1 - \mu(\psi) \cdot \varepsilon / 2$$. Then, the probability of sampling something in $$\Phi - \{\psi\}$$ is at least $$1 - \mu(\psi) \cdot (1 + \varepsilon / 2)$$. Hence, no matter what sentences have been decided already, the probability that repeatedly sampling from $$\mu$$ selects $$\psi$$ before it selects any sentence outside of $$\Phi$$ is at least

\begin{align*} \sum_{k = 0}^\infty (1 - \mu(\psi) \cdot (1 + \varepsilon / 2))^k \cdot \mu(\psi) & = \frac{\mu(\psi)}{\mu(\psi) \cdot (1 + \varepsilon / 2)} = \frac{1}{1 + \varepsilon / 2} \\ & > 1 - \varepsilon / 2 \end{align*}

as desired.
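As a numeric sanity check (my own, with made-up measures: $$\mu(\psi)=0.2$$, $$\varepsilon=0.1$$, and mass outside $$\Phi$$ just under $$\mu(\psi)\varepsilon/2$$), repeated sampling does hit $$\psi$$ before leaving $$\Phi$$ with probability above $$1-\varepsilon/2$$:

```python
import random

def p_psi_before_outside(mu_psi, mu_outside, n=200_000, seed=0):
    """Estimate P(repeatedly sampling from mu selects psi before any
    sentence outside Phi). Each draw: psi w.p. mu_psi, outside Phi w.p.
    mu_outside, otherwise a sentence in Phi - {psi} (so we draw again)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        while True:
            u = rng.random()
            if u < mu_psi:
                hits += 1
                break
            if u < mu_psi + mu_outside:
                break
    return hits / n

eps = 0.1
mu_psi = 0.2
mu_outside = mu_psi * eps / 2 * 0.9  # strictly less than mu(psi) * eps / 2
estimate = p_psi_before_outside(mu_psi, mu_outside)
# Geometric-series value: mu_psi / (mu_psi + mu_outside) ~ 0.957 > 0.95.
print(estimate, '>', 1 - eps / 2)
```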

Furthermore, this argument makes it clear that the probability distribution we converge to depends only on the set of sentences which the environment will eventually assert, not on their ordering!

Oh, I didn’t notice that aspect of things. That’s pretty cool.

by Abram Demski 40 days ago | link | parent | on: The set of Logical Inductors is not Convex

This uses logical inductors of distinctly different strengths. I wonder if there’s some kind of convexity result for logical inductors which can see each other? Suppose traders in $$\mathbb{P}_n$$ have access to $$\mathbb{P}'_n$$ and vice versa. Or perhaps just assume that the markets cannot be arbitrarily exploited by such traders. Then, are linear combinations also logical inductors?

This is somewhat related to what I wrote about here. If you consider only what I call convex gamblers/traders and fix some weighting (“prior”) over the gamblers then there is a natural convex set of dominant forecasters (for each history, it is the set of minima of some convex function on $$\Delta\mathcal{O}^\omega$$.)

The set of Logical Inductors is not Convex
post by Scott Garrabrant 512 days ago | Sam Eisenstat, Abram Demski and Patrick LaVictoire like this | 3 comments

Sam Eisenstat asked the following interesting question: Given two logical inductors over the same deductive process, is every (rational) convex combination of them also a logical inductor? Surprisingly, the answer is no! Here is my counterexample.


Smoking Lesion Steelman II
post by Abram Demski 144 days ago | Tom Everitt and Scott Garrabrant like this | 1 comment

After Johannes Treutlein’s comment on Smoking Lesion Steelman, and a number of other considerations, I had almost entirely given up on CDT. However, there were still nagging questions about whether the kind of self-ignorance needed in Smoking Lesion Steelman could arise naturally, how it should be dealt with if so, and what role counterfactuals ought to play in decision theory if CDT-like behavior is incorrect. Today I sat down to collect all the arguments which have been rolling around in my head on this and related issues, and arrived at a place much closer to CDT than I expected.

Nice writeup. Is one-boxing in Newcomb an equilibrium?

by Alex Appel 56 days ago | link | parent | on: Delegative Inverse Reinforcement Learning

I don’t believe that $$x_{:n}^{!k}$$ was defined anywhere, but we “use the definition” in the proof of Lemma 1. As far as I can tell, it’s a set of (j,y) pairs, where j is the index of a hypothesis and y is an infinite history string, rather like the set $$h^{!k}$$. How do the definitions of $$h^{!k}$$ and $$x^{!k}_{:n}$$ differ?

Hi Alex!

The definition of $$h^{!k}$$ makes sense for any $$h$$, that is, the superscript $$!k$$ in this context is a mapping from finite histories to sets of pairs as you said. In the line in question we just apply this mapping to $$x_{:n}$$ where $$x$$ is a bound variable coming from the expected value.

I hope this helps?

Delegative Inverse Reinforcement Learning
post by Vadim Kosoy 235 days ago | Alex Appel likes this | 11 comments

We introduce a reinforcement-like learning setting we call Delegative Inverse Reinforcement Learning (DIRL). In DIRL, the agent can, at any point in time, delegate the choice of action to an “advisor”. The agent knows neither the environment nor the reward function, whereas the advisor knows both. Thus, DIRL can be regarded as a special case of CIRL. A similar setting was studied in Clouse 1997, but as far as we can tell, the relevant literature offers few theoretical results and virtually all researchers focus on the MDP case (please correct me if I’m wrong). On the other hand, we consider general environments (not necessarily MDP or even POMDP) and prove a natural performance guarantee.

A summary that might be informative to other people: Where does the $$\omega(t^{2/3})$$ requirement on the growth rate of the “rationality parameter” $$\beta$$ come from?

Well, the expected loss of the agent comes from two sources: making a suboptimal choice on its own, and incurring a loss from consulting a not-fully-rational advisor. The policy of the agent is basically “defer to the advisor when the expected loss over all time of acting (relative to the optimal move by an agent who knew the true environment) is too high”. “Too high” in this case cashes out as “higher than $$\beta(t)^{-1}t^{-1/x}$$”, where $$t$$ is the time discount parameter and $$\beta$$ is the level-of-rationality parameter. Note that as the operator gets more rational, the agent gets less reluctant about deferring. Also note that $$t$$ is reversed from what you might expect: high values of $$t$$ mean that the agent has a very distant planning horizon, while low values mean the agent is more present-oriented.

On most rounds, the agent acts on its own, so the expected all-time loss on a single round from taking suboptimal choices is on the order of $$\beta(t)^{-1}t^{-1/x}$$, and also we’re summing up over about t rounds (technically exponential discount, but they’re similar enough). So the loss from acting on its own ends up being about $$\beta(t)^{-1}t^{(x-1)/x}$$.

On the other hand, delegation will happen on at most ~$$t^{2/x}$$ rounds, with a loss of $$\beta(t)^{-1}$$ value, so the loss from delegation ends up being around $$\beta(t)^{-1}t^{2/x}$$.

Setting these two losses equal to each other/minimizing the exponent on the t when they are smooshed together gets you x=3. And then $$\beta(t)$$ must grow asymptotically faster than $$t^{2/3}$$ to have the loss shrink to 0. So that’s basically where the 2/3 comes from, it comes from setting the delegation threshold to equalize long-term losses from the AI acting on its own, and the human picking bad choices, as the time horizon t goes to infinity.
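The balancing step can be written out mechanically. A minimal sketch (my own arithmetic check, not code from the paper):

```python
from fractions import Fraction

def exponent_own(x):       # t-exponent of the loss from acting alone
    return Fraction(x - 1, x)

def exponent_delegate(x):  # t-exponent of the loss from delegating
    return Fraction(2, x)

# The losses grow at the same rate exactly when (x-1)/x = 2/x, i.e. x = 3,
# and x = 3 also minimizes the larger of the two exponents:
best = min(range(2, 10),
           key=lambda x: max(exponent_own(x), exponent_delegate(x)))
print(best, max(exponent_own(best), exponent_delegate(best)))  # 3 2/3
```

Both losses then scale like $$\beta(t)^{-1}t^{2/3}$$, so the total loss shrinks to 0 exactly when $$\beta(t)$$ grows faster than $$t^{2/3}$$.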

Being legible to other agents by committing to using weaker reasoning systems
post by Alex Mennen 80 days ago | Stuart Armstrong and Vladimir Slepnev like this | 1 comment

Suppose that an agent $$A_{1}$$ reasons in a sound theory $$T_{1}$$, and an agent $$A_{2}$$ reasons in a theory $$T_{2}$$, such that $$T_{1}$$ proves that $$T_{2}$$ is sound. Now suppose $$A_{1}$$ is trying to reason in a way that is legible to $$A_{2}$$, in the sense that $$A_{2}$$ can rely on $$A_{1}$$ to reach correct conclusions. One way of doing this is for $$A_{1}$$ to restrict itself to some weaker theory $$T_{3}$$, which $$T_{2}$$ proves is sound, for the purposes of any reasoning that it wants to be legible to $$A_{2}$$. Of course, in order for this to work, not only would $$A_{1}$$ have to restrict itself to using $$T_{3}$$, but $$A_{2}$$ would have to trust that $$A_{1}$$ had done so. A plausible way for that to happen is for $$A_{1}$$ to reach the decision quickly enough that $$A_{2}$$ can simulate $$A_{1}$$ making the decision to restrict itself to using $$T_{3}$$.

This is exactly the sort of thing I’ve wanted for ASP (Agent Simulates Predictor).

One problem that’s always blocked me is how to know when to do this, rather than using it ad hoc. Is there an easy way to know that there’s an agent out in the universe using a more limited reasoning system?

 Where does ADT Go Wrong? discussion post by Abram Demski 95 days ago | Jack Gallagher and Jessica Taylor like this | 1 comment

When considering an embedder $$F$$, in universe $$U$$, in response to which SADT picks policy $$\pi$$, I would be tempted to apply the following coherence condition:

$E[F(\pi)] = E[F(DDT)] = E[U]$

(all approximately of course)

I’m not sure if this would work though. This is definitely a necessary condition for reasonable counterfactuals, but not obviously sufficient.

A potentially useful augmentation is to use absolute expected difference: $E[|F(\pi) - F(DDT)|] = E[|F(DDT) - U|] = 0$

by Paul Christiano 83 days ago | Vladimir Slepnev likes this | link | parent | on: Policy Selection Solves Most Problems

Without reading closely, this seems very close to UDT2. Is there a problem that this gets right which UDT2 gets wrong (or for which there is ambiguity about the specification of UDT2)?

Without thinking too carefully, I don’t believe the troll bridge argument. We have to be super careful about “sufficiently large,” and about Löb’s theorem. To see whether the proof goes through, it seems instructive to consider the case where a trader with 90% of the initial mass really wants to cross the bridge. What happens when they try?

The differences between this and UDT2:

1. This is something we can define precisely, whereas UDT2 isn’t.
2. Rather than being totally updateless, this is just mostly updateless, with the parameter $$f$$ determining how updateless it is.

I don’t think there’s a problem this gets right which we’d expect UDT2 to get wrong.

If we’re using the version of logical induction where the belief jumps to 100% as soon as something gets proved, then a weighty trader who believes crossing the bridge is good will just get knocked out immediately if the theorem prover starts proving that crossing is bad (which helps that step inside the Löbian proof go through). (I’d be surprised if the analysis turns out much different for the kind of LI which merely rapidly comes to believe things which get proved, but I can see how that distinction might block the proof.) But certainly it would be good to check this more thoroughly.

by Stuart Armstrong 85 days ago | link | parent | on: Policy Selection Solves Most Problems

policy selection converges to giving Omega the money so long as the difficulty of computing the coin exceeds the power of the market at $$f(n)$$ time.

Would it be sensible to just look for muggings (and ASPs) at the very beginning of the process, and then decide immediately what to do as soon as one is detected?

Come to think of it, precommitting to ignoring knowledge about the result of the coin seems to be the best strategy here; does this cash out into anything useful in this formalism?

Looking “at the very beginning” won’t work – the beliefs of the initial state of the logical inductor won’t be good enough to sensibly detect these things and decide what to do about them.

While ignoring the coin is OK as special-case reasoning, I don’t think everything falls nicely into the bucket of “information you want to ignore” vs “information you want to update on”. The more general concept which captures both is to ask “how do I want to react to this information, in terms of my action?” – which is of course the idea of policy selection.

Policy Selection Solves Most Problems
post by Abram Demski 85 days ago | Alex Appel and Vladimir Slepnev like this | 4 comments

It seems like logically updateless reasoning is what we would want in order to solve many decision-theory problems. I show that several of the problems which seem to require updateless reasoning can instead be solved by selecting a policy with a logical inductor that’s run a small amount of time. The policy specifies how to make use of knowledge from a logical inductor which is run longer. This addresses the difficulties which seem to block logically updateless decision theory in a fairly direct manner. On the other hand, it doesn’t seem to hold much promise for the kind of insights which we would want from a real solution.
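As a concrete illustration (my own toy model with made-up payoffs, not from the post): in counterfactual mugging, a policy chosen with early logical-inductor beliefs, which cannot yet compute the coin, pays up, while re-deciding with later beliefs does not:

```python
def expected_utility(policy_pays, p_heads):
    # Omega: on heads, pays $10000 iff the agent's policy pays on tails;
    # on tails, asks the agent for $100.
    heads_value = 10_000 if policy_pays else 0
    tails_value = -100 if policy_pays else 0
    return p_heads * heads_value + (1 - p_heads) * tails_value

# Policy selection: choose the policy by early beliefs (P(heads) ~ 1/2,
# since the coin is too hard to compute at f(n) time), then follow it.
early = {pays: expected_utility(pays, p_heads=0.5) for pays in (True, False)}
chosen = max(early, key=early.get)
print(chosen, early)  # True {True: 4950.0, False: 0.0}

# An updateful agent re-decides with late beliefs (it has computed that the
# coin landed tails, so p_heads = 0) and refuses -- the reflectively
# inconsistent behavior that policy selection avoids.
late = {pays: expected_utility(pays, p_heads=0.0) for pays in (True, False)}
print(max(late, key=late.get), late)  # False {True: -100.0, False: 0.0}
```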


by Gordon Worley III 96 days ago | Alex Appel and Abram Demski like this | link | parent | on: Catastrophe Mitigation Using DRL

Maybe it’s just my browser, but it looks like the post got cut off. Here’s the last of what it renders for me:

Averaging the previous inequality over $$k$$, we get

$\frac{1}{N}\sum_{k=0}^{N-1} R_{?k} \le (1-\gamma^T)\sum_{n=0}^{\infty}\gamma^{nT}\mathbb{E}\left[\mathbb{E}[U^!_n \mid J^!_n = K,\, Z^!_{nT}] - \mathbb{E}[U^!_n \mid Z^!_{nT}]\right] + O\left(\frac{1-\gamma^T}{\eta^2} + \frac{\bar{\tau}(1-\gamma)}{1-\gamma^T}\right)$

Unfortunately, it’s not just your browser. The website truncates the document for some reason. I emailed Matthew about it and ey are looking into it.

Indeed there is some kind of length limit in the website. I moved Appendices B and C to a separate post.

Hyperreal Brouwer
post by Scott Garrabrant 138 days ago | Vadim Kosoy and Stuart Armstrong like this | 2 comments

This post explains how to view Kakutani’s fixed point theorem as a special case of Brouwer’s fixed point theorem with hyperreal numbers. This post is just math intuitions, but I found them useful in thinking about Kakutani’s fixed point theorem and many things in agent foundations. This came out of conversations with Sam Eisenstat.

by Vadim Kosoy 92 days ago | link | on: Hyperreal Brouwer

Very nice. I wonder whether this fixed point theorem also implies the various generalizations of Kakutani’s fixed point theorem in the literature, such as Lassonde’s theorem about compositions of Kakutani functions. It sounds like it should, because the composition of hypercontinuous functions is hypercontinuous, but I don’t see the formal argument immediately: if we have $$x \in *X,\ y \in *Y$$ with standard parts $$x_\omega,\ y_\omega$$ s.t. $$f(x)=y$$, and $$y' \in *Y,\ z \in *Z$$ with standard parts $$y'_\omega=y_\omega,\ z_\omega$$ s.t. $$g(y')=z$$, then it’s not clear why there should be $$x'\in *X,\ z'\in *Z$$ with standard parts $$x'_\omega=x_\omega,\ z'_\omega=z_\omega$$ s.t. $$g(f(x'))=z'$$.

Resolving human inconsistency in a simple model
post by Stuart Armstrong 140 days ago | Abram Demski likes this | 1 comment

A putative new idea for AI control; index here. This post will present a simple model of an inconsistent human, and ponder how to resolve their inconsistency. Let $$\bf{H}$$ be our agent, in a turn-based world. Let $$R^l$$ and $$R^s$$ be two simple reward functions at each turn. The reward $$R^l$$ is thought of as being a ‘long-term’ reward, while $$R^s$$ is a short-term one.

Freezing the reward seems like the correct answer by definition, since if I am an agent following the utility function $$R$$ and I have to design a new agent now, then it is rational for me to design the new agent to follow the utility function I am following now (i.e. this action is usually rated as the best according to my current utility function).

The Happy Dance Problem
post by Abram Demski 96 days ago | Scott Garrabrant and Stuart Armstrong like this | 1 comment

Since the invention of logical induction, people have been trying to figure out what logically updateless reasoning could be. This is motivated by the idea that, in the realm of Bayesian uncertainty (IE, empirical uncertainty), updateless decision theory is the simple solution to the problem of reflective consistency. Naturally, we’d like to import this success to logically uncertain decision theory.

At a research retreat during the summer, we realized that updateless decision theory wasn’t so easy to define even in the seemingly simple Bayesian case. A possible solution was written up in Conditioning on Conditionals. However, that didn’t end up being especially satisfying.

Here, I introduce the happy dance problem, which more clearly illustrates the difficulty in defining updateless reasoning in the Bayesian case. I also outline Scott’s current thoughts about the correct way of reasoning about this problem.

by Wei Dai 95 days ago | Scott Garrabrant likes this | link | on: The Happy Dance Problem

We can solve the problem in what seems like the right way by introducing a basic notion of counterfactual, which I’ll write □→. This is supposed to represent “what the agent’s code will do on different inputs”. The idea is that if we have the policy of dancing when we see the money, M□→H is true even in the world where we don’t see any money.

(I’m confused about why this notation needs to be introduced. I haven’t been following all the DT discussions super closely, so I’d appreciate if someone could catch me up. Or, since I’m visiting MIRI soon, perhaps someone could catch me up in person.)

In the language of my original UDT post, I would have written this as S(‘M’)=‘H’, where S is the agent’s code (M and H in quotes here to denote that they’re input/output strings rather than events). This is a logical statement about the output of S given ‘M’ as input, which I had conjectured could be conditioned on the same way we’d condition on any other logical statement (once we have a solution to logical uncertainty). Of course, issues like Agent Simulates Predictor have since come up, so is this new idea/notation an attempt to solve some of those issues? Can you explain what advantages this notation has over the S(‘M’)=‘H’ type of notation?

It’s not clear where the beliefs about this correlation come from, so these counterfactuals are still almost as mysterious as explicitly giving conditional probabilities for everything given different policies.

Intuitively, it comes from the fact that there’s a chunk of computation in Omega that’s analyzing S, which should be logically correlated with S’s actual output. Again, this was a guess of what a correct solution to logical uncertainty would say when you run the math. (Now that we have logical induction, can we tell if it actually says this?)

Catastrophe Mitigation Using DRL

Previously we derived a regret bound for DRL which assumed the advisor is “locally sane.” Such an advisor can only take actions that don’t lose any value in the long term. In particular, if the environment contains a latent catastrophe that manifests at a certain rate (such as the possibility of a UFAI), a locally sane advisor has to take the optimal course of action to mitigate it, since every delay yields a positive probability of the catastrophe manifesting and leading to permanent loss of value. This state of affairs is unsatisfactory, since we would like to have performance guarantees for an AI that can mitigate catastrophes that the human operator cannot mitigate on their own. To address this problem, we introduce a new form of DRL where in every hypothetical environment the set of uncorrupted states is divided into “dangerous” (impending catastrophe) and “safe” (catastrophe was mitigated). The advisor is then only required to be locally sane in safe states, whereas in dangerous states certain “leaking” of long-term value is allowed. We derive a regret bound in this setting as a function of the time discount factor, the expected value of catastrophe mitigation time for the optimal policy, and the “value leak” rate (i.e. essentially the rate of catastrophe occurrence). The form of this regret bound implies that in certain asymptotic regimes, the agent attains near-optimal expected utility (and in particular mitigates the catastrophe with probability close to 1), whereas the advisor on its own fails to mitigate the catastrophe with probability close to 1.

Maybe it’s just my browser, but it looks like the post got cut off. Here’s the last of what it renders for me:

Averaging the previous inequality over $$k$$, we get

$$\frac{1}{N}\sum_{k=0}^{N-1} R^?_k \le (1-\gamma^T)\sum_{n=0}^{\infty}\gamma^{nT}\,\mathbb{E}\left[\mathbb{E}[U^!_n \mid J^!_n = K,\, Z^!_{nT}] - \mathbb{E}[U^!_n \mid Z^!_{nT}]\right] + O\left(\frac{1-\gamma^T}{\eta^2} + \frac{\bar{\tau}(1-\gamma)}{1-\gamma^T}\right)$$

 Looking for Recommendations RE UDT vs. bounded computation / meta-reasoning / opportunity cost? discussion post by David Krueger 104 days ago | 1 comment

At present, I think the main problem of logical updatelessness is something like: how can we make a principled trade-off between thinking longer to make a better decision, vs. thinking for a shorter time so that we exert more logical control over the environment?

For example, in Agent Simulates Predictor, an agent who thinks for a short amount of time, and then decides on a policy for how to respond to whatever conclusions it reaches after thinking longer, can decide: “If I think longer and see a proof that the predictor thinks I two-box, I can invalidate that proof by one-boxing. Adopting this policy makes the predictor less likely to find such a proof.” (I’m speculating; I haven’t actually written up a thing which does this yet, but I think it would work.) An agent who thinks longer before making a decision can’t see this possibility, because it has already proved that the predictor predicts two-boxing; from the perspective of having thought longer, there doesn’t appear to be a way to invalidate the prediction. Being predicted to two-box is just a fact, not a thing the agent has control over.
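Here is a hypothetical toy model of that asymmetry (my own sketch, not anything the author has written up; all names are illustrative). The predictor's bounded proof search is modeled crudely: it can certify “this agent two-boxes” only if the agent's policy two-boxes even on the branch where the agent has seen a proof of that very prediction.

```python
def payoff(action, prediction):
    """Newcomb payoffs: $1M in box B iff predicted to one-box,
    plus $1k from box A if the agent two-boxes."""
    box_b = 1_000_000 if prediction == "one-box" else 0
    return box_b + (1_000 if action == "two-box" else 0)

def predictor(policy):
    # Crude stand-in for bounded proof search: a proof that the agent
    # two-boxes exists only if the policy two-boxes even after seeing
    # a proof of the two-box prediction.
    if policy(saw_proof_of_two_box_prediction=True) == "two-box":
        return "two-box"
    return "one-box"  # proof search fails; predict one-boxing

def late_thinker(saw_proof_of_two_box_prediction):
    # Thinks long enough to prove the prediction first; at that point
    # two-boxing dominates, so the predictor's proof goes through.
    return "two-box"

def early_committer(saw_proof_of_two_box_prediction):
    # Commits to a policy before proving anything: one-box even if a
    # proof of a two-box prediction were to show up, which invalidates
    # any such proof, so the predictor never finds one.
    return "one-box"

for policy in (late_thinker, early_committer):
    p = predictor(policy)
    a = policy(saw_proof_of_two_box_prediction=(p == "two-box"))
    print(policy.__name__, payoff(a, p))  # late: 1000, early: 1000000
```

The early committer does better precisely because its policy was fixed before the proof could become “just a fact.”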

Similarly, in Prisoner’s Dilemma, an agent who hasn’t thought too long can adopt the strategy of first thinking longer and then doing whatever it predicts the other agent will do. This is a pretty good strategy, because it makes cooperation the other agent’s best response. However, you have to think long enough to find this particular strategy, but stop early enough that the hypotheticals which show the strategy is a good idea aren’t yet closed off.
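The Prisoner’s Dilemma strategy above can be sketched in a toy model (my own illustration; the prediction step is modeled as directly simulating the opponent, which is an assumption):

```python
# Standard PD payoffs from the row player's perspective:
# (my_move, their_move) -> my payoff.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def mirror(opponent):
    """Agent A's committed policy: predict B (here, by simulating
    B directly) and then copy the predicted move."""
    return opponent()

def b_payoff(b_move):
    """B's payoff for choosing b_move against the mirror."""
    a_move = mirror(lambda: b_move)
    return PAYOFF[(b_move, a_move)]

print(b_payoff("C"))  # cooperating against the mirror → 3
print(b_payoff("D"))  # defecting against the mirror → 1
```

Since defection gets mirrored, B's best response against this policy is to cooperate, which is exactly what makes committing to the mirror worthwhile for A.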

So, I think there is less conflict between UDT and bounded reasoning than you are implying. However, it’s far from clear how to negotiate the trade-offs sanely.

(However, in both cases, you still want to spend as long a time thinking as you can afford; it’s just that you want to make the policy decision, about how to use the conclusions of that thinking, as early as it can sensibly be made.)

 Funding opportunity for AI alignment research link by Paul Christiano 178 days ago | Vadim Kosoy likes this | 3 comments

In the first round I’m planning to pay:

• $10k to Ryan Carey
• $10k to Chris Pasek
• $20k to Peter Scheyer

I’m excited to see what comes of this! Within a few months I’ll do another round of advertising + making decisions.

I want to emphasize that given the evaluation process, this definitely shouldn’t be read as a strong negative judgment (or endorsement) of anyone’s application.
