Intelligent Agent Foundations Forum
Existence of distributions that are expectation-reflective and know it
post by Tsvi Benson-Tilsen

We prove the existence of a probability distribution over a theory \({T}\) with the property that for certain definable quantities \({\varphi}\), the expected value of \({E}[{\ulcorner {\varphi}\urcorner}]\) is accurate, i.e. it equals the actual expectation of \({\varphi}\); and with the property that it assigns probability 1 to \({E}\) behaving this way. This may be useful for self-verification, by allowing an agent to satisfy a reflective consistency property and at the same time believe itself or similar agents to satisfy the same property. Thanks to Sam Eisenstat for listening to an earlier version of this proof, and for pointing out a significant gap in the argument. The proof presented here has not been vetted yet.

1. Problem statement

Given a distribution \({\mathbb{P}}\) coherent over a theory \(A\), and some real-valued function \(f\) on completions of \(A\), we can define the expectation \({\mathbb{E}}[f]\) of \(f\) according to \({\mathbb{P}}\). Then we can relax the probabilistic reflection principle by asking that for some class of functions \(f\), we have \({\mathbb{E}}[{E}[{\ulcorner f \urcorner}]] = {\mathbb{E}}[f]\), where \({E}\) is a symbol in the language of \(A\) meant to represent \({\mathbb{E}}\). Note that this notion of expectation-reflection is weaker than probabilistic reflection, since our distribution is now permitted to, for example, assign substantial probability mass to values of \({E}[{\ulcorner f \urcorner}]\) that over- and under-estimate \({\mathbb{E}}[f]\), as long as the errors balance out.
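As a concrete toy illustration of this weakening (all numbers hypothetical), here is a Python sketch of a distribution over two completions whose reports of \({E}[f]\) are individually wrong but balance out, so expectation-reflection still holds:

```python
# Toy illustration with hypothetical numbers: a distribution over two
# completions of A.  In each completion, f has an actual value and the
# symbol E reports a (possibly wrong) value for the expectation of f.
completions = [
    {"prob": 0.5, "f": 0.3, "reported_E_f": 0.2},  # under-estimate
    {"prob": 0.5, "f": 0.7, "reported_E_f": 0.8},  # over-estimate
]

true_expectation = sum(c["prob"] * c["f"] for c in completions)
expected_report = sum(c["prob"] * c["reported_E_f"] for c in completions)

# Expectation-reflection holds: the expected report equals the true
# expectation, even though no single completion reports 0.5 exactly.
assert abs(true_expectation - 0.5) < 1e-12
assert abs(expected_report - true_expectation) < 1e-12
```

Probabilistic reflection would demand that the reports themselves be correct with probability 1, which this distribution clearly violates.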

Christiano asked whether it is possible to have a distribution that satisfies this reflection principle, and also assigns probability 1 to the statement that \({E}\) satisfies this reflection principle. This was not possible for strong probabilistic reflection, but it turns out to be possible for expectation reflection, for some choice of the functions \(f\).

2. Sketch of the approach

(This is a high level description of what we are doing, so many concepts will be left vague until later.)

Christiano et al. applied Kakutani’s theorem to the space of coherent \({\mathbb{P}}\). Instead we will work in the space of expectations over some theory \({T}\), where an expectation over a theory is, roughly speaking, a function from the set of variables provably defined by that theory, into the intervals proved to bound each variable. These are essentially interchangeable with coherent probability distributions over \({T}\). The point of doing this is to make the language simpler, for example reflection statements will mention a single symbol representing an expectation, rather than a complicated formula defining the expectation in terms of probability.

We will again apply Kakutani’s theorem, now requiring that some expectation \({\mathbb{G}}\) reflects \({\mathbb{F}}\) only when \({\mathbb{G}}\) expects \({E}\) to behave like \({\mathbb{F}}\), and when \({\mathbb{G}}\) assigns some significant probability to the statement that \({E}\) is reflective. This confidence in reflection must increase the closer that \({\mathbb{F}}\) is to being reflective. Then a fixed point of this correspondence will be expectation-reflective, and will assign probability 1 to \({E}\) being expectation-reflective.

The form of our correspondence will make most of the conditions of Kakutani’s theorem straightforward. The main challenge will be to show non-emptiness, i.e. that there is some expectation that reflects a given \({\mathbb{F}}\) and believes in reflection to some extent. In the case of probabilistic reflection, this does not go through at all, since if we reflect a non-reflective probability distribution exactly, we must assign probability 0 to reflection.

However, in the case of expectations, we can mix different expectations together while maintaining expectation-reflection, by carefully balancing the mixture. The main idea will be to take a distribution \({\mathbb{G}}_{{\mathbb{H}}}\) that believes in some reflective expectation \({\mathbb{H}}\), take another distribution \({\mathbb{G}}_{{\mathbb{J}}}\) that believes in some pseudo-expectation \({\mathbb{J}}\), and mix them together. The resulting mixture \({\mathbb{G}}\) will somewhat expect \({E}\) to be reflective, since \({\mathbb{G}}_{{\mathbb{H}}}\) expects this, and by a good choice of \({\mathbb{J}}\) counterbalancing \({\mathbb{H}}\), \({\mathbb{G}}\) will expect \({E}\) to behave like \({\mathbb{F}}\).

Before carrying out this approach, we need some formal notions and facts about expectations, given in Sections 3 and 4. Also, in order to be careful about what we mean by an expectation, a pseudo-expectation, and a variable, we will in Section 5 develop a base theory \({T}\) over which our distributions will be defined. Then Section 6 will give the main theorem, following the above sketch. Section 7 discusses the meaning of these results and extensions to definable reflection.

3. Basic definitions and facts about expectations

We will work with probability distributions (or, in a moment, expectations) that are coherent over some base theory \({T}\) in a language that can talk about rationals, functions, and has a symbol \({E}\).

Random variables for theories and their bounds

These notions are due to Fallenstein.

We are interested in taking expectations of quantities expressed in the language of \({T}\). This amounts to viewing a probability distribution \({\mathbb{P}}\) coherent over \({T}\) as a measure on the Stone space \(S_{{T}}\), and then asking for the expectation

\[{\mathbb{E}}[f] := \int_{S_{{T}}} f d{\mathbb{P}}\ .\]

A natural choice of random variables \(f\) is the quantities definable over \({T}\), i.e. formulas \({\varphi}(x)\) such that \({T}{\vdash}\exists ! x \in {\mathbb{R}}: {\varphi}(x)\). Then any completion of \({T}\) will make statements of the form \(\forall r \in {\mathbb{R}}: {\varphi}(r) \to r > a\) for various \(a \in {\mathbb{Q}}\) in a way consistent with \({\varphi}\) holding on a unique real, and perhaps we can extract a value for the random variable \({\varphi}\).

However, we have to be a little careful. If this is all that \({T}\) proves about \({\varphi}\), then there will be completions of \({T}\) which, for every \(a \in {\mathbb{Q}}\), contain the statement \(\forall r \in {\mathbb{R}}: {\varphi}(r) \to r > a\). Then there is no real number reasonably corresponding to \({\varphi}\). Even if this is not an issue, there are distributions that assign non-negligible probability to a sequence of completions of \({T}\) putting quickly growing values on \({\varphi}\), such that the integral \({\mathbb{E}}[{\varphi}]\) does not exist.

Therefore we also require that \({T}\) proves some concrete bounds on the real numbers that can satisfy \({\varphi}(x)\). Then we will be able to extract values for \({\varphi}\) from completions of \({T}\) and define the expectation of \({\varphi}\) according to \({\mathbb{P}}\).

Definition

[Definition of bounded variables \({\texttt{Var}}(A)\) for \(A\).]

For any consistent theory \(A\), the set \({\texttt{Var}}(A)\) is the set of formulas \({\varphi}(x)\) such that \(A\) proves \({\varphi}(x)\) is well-defined and in some particular bounds, i.e.:

\[{\varphi}\in {\texttt{Var}}(A) \Leftrightarrow \exists a,b \in {\mathbb{Q}}: A {\vdash}[\exists !x \in {\mathbb{R}}: {\varphi}(x)]\wedge [\forall x \in {\mathbb{R}}: {\varphi}(x) \to x \in [a,b]]\ .\]

Elements of \({\texttt{Var}}(A)\) are called \(A\)-variables.\({=:}\)

Definition

[Definition of \(A\)-bounds on variables.]

For \({\varphi}\in {\texttt{Var}}(A)\), let \({[a,b]_{A,{\varphi}}}\) be the complete bound put on \({\varphi}\) by \(A\), taking into account all bounds on \({\varphi}\) proved by \(A\), i.e.

\[{[a,b]_{A,{\varphi}}} := \bigcap \{ [s,t] \mid s,t \in {\mathbb{Q}}, A {\vdash}{\varphi}\in [s,t]\}\ .\]\({=:}\)

Note that the \(A\)-bound \({[a,b]_{A,{\varphi}}}\) on a variable \({\varphi}\in {\texttt{Var}}(A)\) is a well-defined non-empty closed interval; it is the intersection of non-disjoint closed intervals all contained in some rational interval, by the definition of \({\texttt{Var}}(A)\) and the fact that \(A\) is consistent.
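For a finite family of proved bounds, this intersection is easy to compute; the following sketch (function name and sample intervals are illustrative, not from the post) mirrors the argument that the result is a non-empty closed interval:

```python
from fractions import Fraction

def complete_bound(proved_bounds):
    """Intersect a finite family of closed rational intervals [s, t].

    Models [a,b]_{A,phi}: since A is consistent and proves phi lies in
    each interval, the intervals pairwise intersect, so the result is
    the non-empty closed interval [max of lower ends, min of upper ends].
    """
    lo = max(s for s, _ in proved_bounds)
    hi = min(t for _, t in proved_bounds)
    assert lo <= hi, "disjoint bounds cannot come from a consistent theory"
    return (lo, hi)

# Hypothetical bounds A might prove for a single variable phi:
bounds = [(Fraction(0), Fraction(2)),
          (Fraction(1, 2), Fraction(3)),
          (Fraction(0), Fraction(1))]
assert complete_bound(bounds) == (Fraction(1, 2), Fraction(1))
```

The actual definition intersects all (possibly infinitely many) proved bounds, but the finite case shows the mechanism.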

Expectations and pseudo-expectations over a theory

The definition of \({\texttt{Exp}}(A)\) and the theorem in Section 4 are based on a comment of Eisenstat.

Now we define expectations over a theory, analogously to probability distributions. Here linearity will play the role of coherence.

Definition

[Sum of two \(A\)-variables.] For \({\varphi},\psi \in {\texttt{Var}}(A)\), we write \({\varphi}+ \psi\) for the sum of the two variables, i.e.:

\[({\varphi}+\psi)(x) \Leftrightarrow \exists q,r \in {\mathbb{R}}: x = q+r \wedge {\varphi}(q) \wedge \psi(r)\ .\]

Then \({\varphi}+ \psi \in {\texttt{Var}}(A)\) for reasonable \(A\).\({=:}\)

Definition

[Expectations \({\texttt{Exp}}(A)\) over a theory \(A\).] An expectation over a theory \(A\) is a function \({\mathbb{E}}: {\texttt{Var}}(A) \to {\mathbb{R}}\) such that for all \({\varphi}, \psi \in {\texttt{Var}}(A)\):

  • (In \(A\)-bounds) \({\mathbb{E}}[{\varphi}] \in {[a,b]_{A,{\varphi}}}\), i.e. \({\mathbb{E}}\) takes values in the bounds proved by \(A\), and

  • (Linear) \({\mathbb{E}}[{\varphi}+ \psi] = {\mathbb{E}}[{\varphi}] + {\mathbb{E}}[\psi]\).

\({=:}\)

In order to carry out the counterbalancing argument described above, we need some rather extreme affine combinations of expectations; so extreme, in fact, that they will not even be proper expectations. Hence we define pseudo-expectations analogously to expectations, but with much looser bounds on their values.

Definition

[Pseudo-expectations \({\texttt{PseudoExp}}(A)\) over a theory \(A\).] A pseudo-expectation over a theory \(A\) is a function \({\mathbb{E}}: {\texttt{Var}}(A) \to {\mathbb{R}}\) such that for all \({\varphi}, \psi \in {\texttt{Var}}(A)\):

  • (Loosely in \(A\)-bounds) If \({\varphi}\) has \(A\)-bound \({[a,b]_{A,{\varphi}}}\), we have that \({\mathbb{E}}[{\varphi}] \in [a-{(b-a)(2^{{\ulcorner {\varphi}\urcorner}})}, b+{(b-a)(2^{{\ulcorner {\varphi}\urcorner}})}]\), and

  • (Linear) \({\mathbb{E}}[{\varphi}+ \psi] = {\mathbb{E}}[{\varphi}] + {\mathbb{E}}[\psi]\).

\({=:}\)
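The loose bound is simple to compute; this helper (name hypothetical) shows how quickly the allowed interval grows with the Gödel number of \({\varphi}\), and that a point bound stays a point bound:

```python
def loose_bound(a, b, phi_goedel):
    """Loose bound [a - (b-a)*2^#phi, b + (b-a)*2^#phi] from the
    pseudo-expectation definition; #phi is the Goedel number of phi."""
    slack = (b - a) * 2 ** phi_goedel
    return (a - slack, b + slack)

# A variable with tight A-bound [0, 1] and Goedel number 3 may be assigned
# any value in [-8, 9] by a pseudo-expectation:
assert loose_bound(0, 1, 3) == (-8, 9)
# A point bound [a, a] has zero width, so the slack vanishes:
assert loose_bound(5, 5, 10) == (5, 5)
```

The second assertion is the observation used later when comparing \({T}\) to \(({T_{0}})_\omega\): when the true bound is \([a,a]\), the loose bound is the same.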

For any theories \(A \subset B\), we have that \({\texttt{Exp}}(B) \subset {\texttt{Exp}}(A) \subset {\texttt{PseudoExp}}(A)\) and \({\texttt{Exp}}(B) \subset {\texttt{PseudoExp}}(B) \subset {\texttt{PseudoExp}}(A)\). We are implicitly restricting elements of \({\texttt{Exp}}(B)\) and \({\texttt{PseudoExp}}(B)\) to \({\texttt{Var}}(A)\) in these comparisons, and will do so freely in what follows. We take the product topology on both \({\texttt{Exp}}(A)\) and \({\texttt{PseudoExp}}(A)\).

4. Isomorphism of expectations and probability distributions

To actually construct elements of \({\texttt{Exp}}(A)\), we will use a natural relationship between probability distributions \({\mathbb{P}}\) and expectations \({\mathbb{E}}\) over a theory, proved formally below to be an isomorphism. On the one hand we can get a probability distribution from \({\mathbb{E}}\) by taking the expectation of indicator variables for the truth of sentences; on the other hand we can get an expectation from a probability distribution by integrating a variable over the Stone space of our theory.
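On a finite toy stand-in for the Stone space, both directions of this correspondence are easy to spell out; the following sketch (all values hypothetical) integrates a variable against a distribution over completions, and recovers the probability of a sentence as the expectation of its indicator:

```python
# A finite stand-in for the Stone space: each completion assigns a value
# to every variable, including indicator variables of sentences.
completions = [
    (0.25, {"phi": 1.0, "Ind(theta)": 1}),  # a completion where theta holds
    (0.75, {"phi": 0.2, "Ind(theta)": 0}),  # a completion where theta fails
]

def expectation(var):
    """iota^{-1}(P)[var]: integrate the variable's value over completions."""
    return sum(p * vals[var] for p, vals in completions)

# iota(iota^{-1}(P))(theta) = expectation of Ind(theta) = P(theta):
prob_theta = sum(p for p, vals in completions if vals["Ind(theta)"] == 1)
assert abs(expectation("Ind(theta)") - prob_theta) < 1e-12
assert abs(expectation("phi") - (0.25 * 1.0 + 0.75 * 0.2)) < 1e-12
```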

Definition

[The value of an \(A\)-variable.] For a complete theory \(A\), the value \(A({\varphi})\) of some \(A\)-variable \({\varphi}\) is \(\sup \{ q \in {\mathbb{Q}}\mid A {\vdash}\forall x: {\varphi}(x) \to x>q \}\). Since \({\varphi}\in {\texttt{Var}}(A)\), this value is well-defined, and \(A({\varphi}) \in {[a,b]_{A,{\varphi}}}\).\({=:}\)

Theorem

For any theory \(A\), there is a canonical isomorphism \({\iota}\) between \({\texttt{Exp}}(A)\) and the space of coherent probability distributions over \(A\), given by:

\[{\iota}: {\texttt{Exp}}(A) \to \Delta(A)\]

\[{\iota}({\mathbb{E}})({\theta}) := {\mathbb{E}}[{\texttt{Ind}}({\theta})]\ ,\]

where \({\texttt{Ind}}({\theta})\) is the 0-1 valued indicator variable for the sentence \({\theta}\), i.e. \({\texttt{Ind}}({\theta}) := (x=0 \wedge \neg {\theta}) \vee (x=1 \wedge {\theta})\). The alleged inverse \({\iota}{^{-1}}\) is given by:

\[{\iota}{^{-1}}({\mathbb{P}})[{\varphi}(x)] := \int_{A' \in S_A} A'({\varphi}(x)) d{\mathbb{P}}\ .\]

Proof. By the previous discussion, \({\iota}\) and \({\iota}{^{-1}}\) are well-defined in the sense that they return functions of the correct type.

\({\iota}{^{-1}}({\mathbb{P}}) \in {\texttt{Exp}}(A)\)

By definition of \({\texttt{Var}}(A)\), the integrals in the definition of \({\iota}{^{-1}}({\mathbb{P}})\) are defined and within \(A\)-bounds. For any \({\varphi},\psi \in {\texttt{Var}}(A)\) and any \(a,b \in {\mathbb{Q}}\), we have that \[A {\vdash}(\forall x: {\varphi}(x) \to x>a) \wedge (\forall y: \psi(y) \to y>b) \to (\forall z: ({\varphi}+ \psi)(z) \to z>a+b)\] and

\[A {\vdash}(\forall x: ({\varphi}+ \psi)(x) \to x>a) \to \exists b,c \in {\mathbb{Q}}: a = b+c \wedge (\forall y: {\varphi}(y) \to y>b) \wedge (\forall z: \psi(z) \to z>c)\ .\]

Thus \({\iota}{^{-1}}({\mathbb{P}})[{\varphi}+\psi] = {\iota}{^{-1}}({\mathbb{P}})[{\varphi}] + {\iota}{^{-1}}({\mathbb{P}})[\psi]\), so \({\iota}{^{-1}}({\mathbb{P}})\) is linear and hence is an expectation.

\({\iota}({\mathbb{E}}) \in \Delta(A)\)

For any \({\theta}\in A\), we have that \(A {\vdash}{\texttt{Ind}}({\theta}) = 1\), so since \({\mathbb{E}}\) is in \(A\)-bounds, \({\iota}({\mathbb{E}})({\theta}) = {\mathbb{E}}[{\texttt{Ind}}({\theta})] = 1\). Similarly, for any partition of truth into three sentences, \(A\) proves the indicators of those sentences have values summing to 1; so \({\mathbb{E}}\) assigns values to their indicators summing to 1, using linearity a few times and the fact that \({\mathbb{E}}\) assigns the same value to variables with \(A {\vdash}\forall x: {\varphi}(x) \leftrightarrow \psi(x)\).

This last fact follows by considering the variable \({\varphi}(x)+(-\psi(x))\): when \(A {\vdash}\forall x: {\varphi}(x) \leftrightarrow \psi(x)\), this variable has \(A\)-bound \([0,0]\), so since \({\mathbb{E}}\) is in \(A\)-bounds and linear, \(0 = {\mathbb{E}}[{\varphi}(x)+(-\psi(x))] = {\mathbb{E}}[{\varphi}(x)] + {\mathbb{E}}[-\psi(x)]\), i.e. \({\mathbb{E}}[{\varphi}(x)] = -{\mathbb{E}}[-\psi(x)]\). Applying this with \(\psi\) in place of \({\varphi}\) gives \({\mathbb{E}}[\psi(x)] = -{\mathbb{E}}[-\psi(x)]\), so \({\mathbb{E}}[{\varphi}(x)] = {\mathbb{E}}[\psi(x)]\), as desired.

\({\iota}\circ {\iota}{^{-1}}\) is identity

For any \({\mathbb{P}}\in \Delta(A)\) and any sentence \({\theta}\), we have

\[{\iota}\circ {\iota}{^{-1}}({\mathbb{P}})({\theta}) = {\iota}{^{-1}}({\mathbb{P}})[{\texttt{Ind}}({\theta})] = \int_{A' \in S_A} A'({\texttt{Ind}}({\theta})) d{\mathbb{P}}= {\mathbb{P}}({\theta})\ ,\]

since any completion \(A'\) of \(A\) with \(A'{\vdash}{\theta}\) also has \(A' {\vdash}{\texttt{Ind}}({\theta}) = 1\), and any completion \(A'\) with \(A'{\vdash}\neg{\theta}\) also has \(A' {\vdash}{\texttt{Ind}}({\theta}) = 0\).

\({\iota}\) is continuous

Take a sub-basis open subset of \(\Delta(A)\): for some sentence \({\theta}\), the set of distributions assigning probability in \((a,b)\) to \({\theta}\). The preimage of this set is the set of expectations with \({\mathbb{E}}[{\texttt{Ind}}({\theta})] \in (a,b)\), which is an open subset of \({\texttt{Exp}}(A)\).

\({\iota}{^{-1}}\circ {\iota}\) is identity

Take any \({\mathbb{E}}\in {\texttt{Exp}}(A)\). We want to show that

\[{\mathbb{E}}[{\varphi}(x)] = \int_{A' \in S_A} A'({\varphi}(x)) d({\iota}{\mathbb{E}})\]

for all \({\varphi}(x) \in {\texttt{Var}}(A)\). In the following we will repeatedly apply linearity and the fact shown above that \({\mathbb{E}}\) respects provable equivalence of variables. Take such a \({\varphi}(x)\) and assume for clarity that the \(A\)-bound of \({\varphi}(x)\) is \({[0,1]}\). Then for any \(n \in {\mathbb{N}}\), we have that

\[{\mathbb{E}}[{\varphi}] = \sum_{k \in [n]} {\mathbb{E}}{\left[ {\varphi}{\texttt{Ind}}{\left( {\varphi}\in \left[ \tfrac{k}{n}, \tfrac{k+1}{n} \right) \right)} \right]}\] \[{\mathbb{E}}[{\varphi}] = \sum_{k \in [n]} {\left( {\left( \tfrac{k}{n} \right)}{\mathbb{E}}{\left[ {\texttt{Ind}}{\left( {\varphi}\in \left[ \tfrac{k}{n}, \tfrac{k+1}{n} \right) \right)} \right]} + {\mathbb{E}}{\left[ {\left( {\varphi}-\tfrac{k}{n} \right)} {\texttt{Ind}}{\left( {\varphi}\in \left[ \tfrac{k}{n}, \tfrac{k+1}{n} \right) \right)} \right]} \right)}\ .\]

Note that the last interval in these sums is closed instead of half-open. Since \(A\) proves that \({\left( {\varphi}-\tfrac{k}{n} \right)} {\texttt{Ind}}{\left( {\varphi}\in \left[ \tfrac{k}{n}, \tfrac{k+1}{n} \right) \right)}\) is non-negative,

\[{\mathbb{E}}[{\varphi}] \ge \sum_{k \in [n]} {\left( \tfrac{k}{n} \right)}{\mathbb{E}}{\left[ {\texttt{Ind}}{\left( {\varphi}\in \left[ \tfrac{k}{n}, \tfrac{k+1}{n} \right) \right)} \right]}\ .\]

By the arguments given earlier, \[{\mathbb{E}}{\left[ {\texttt{Ind}}{\left( {\varphi}\in \left[ \tfrac{k}{n}, \tfrac{k+1}{n} \right) \right)} \right]} = \int_{A' \in S_A} A'( {\texttt{Ind}}{\left( {\varphi}\in \left[ \tfrac{k}{n}, \tfrac{k+1}{n} \right) \right)}) d({\iota}{\mathbb{E}})\ .\]

Hence \[{\mathbb{E}}[{\varphi}] \ge \sum_{k \in [n]} {\left( \tfrac{k}{n} \right)} \int_{A' \in S_A} A'{\left( {\texttt{Ind}}{\left( {\varphi}\in \left[ \tfrac{k}{n}, \tfrac{k+1}{n} \right) \right)} \right)} d({\iota}{\mathbb{E}})\] \[{\mathbb{E}}[{\varphi}] \ge \sum_{k \in [n]} {\left( \tfrac{k}{n} \right)} {\iota}{\mathbb{E}}{\left( {\varphi}\in \left[ \tfrac{k}{n}, \tfrac{k+1}{n} \right) \right)}\ .\] As \(n \to \infty\), the right-hand side converges to the Lebesgue integral of \({\varphi}\) with respect to \({\iota}{\mathbb{E}}\). Combining this with a similar argument giving an upper bound on \({\mathbb{E}}[{\varphi}]\), we have that

\[{\mathbb{E}}[{\varphi}(x)] = \int_{A' \in S_A} A'({\varphi}(x)) d({\iota}{\mathbb{E}})\] as desired.

\({\iota}{^{-1}}\) is continuous

Take a \({\varphi}(x)\) sub-basis open set in \({\texttt{Exp}}(A)\), the set of expectations assigning a value in \((a,b)\) to \({\varphi}\). Let \({\mathbb{P}}\) be a probability distribution with \({\iota}{^{-1}}({\mathbb{P}})[{\varphi}] \in (a,b)\). As in the previous section of the proof, we can cut up the bound \({[c,d]_{A,{\varphi}}}\) into finitely many very small intervals. Then any probability distribution that assigns probabilities sufficiently close to those assigned by \({\mathbb{P}}\) to the indicators for \({\varphi}\) being in those small intervals, will have an expectation for \({\varphi}\) that is also inside \((a,b)\). This works out to an open set around \({\mathbb{P}}\), so that the preimage of the \({\varphi}(x)\) sub-basis open set is a union of open sets. \({\dashv}\)
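The discretization used twice in the proof above can be checked numerically on a finite toy distribution (values hypothetical): the lower sums \(\sum_k (k/n)\,{\iota}{\mathbb{E}}({\varphi}\in [k/n,(k+1)/n))\) stay below the expectation and close the gap at rate \(1/n\).

```python
# Finite toy: phi takes these values on four completions, with bound [0, 1].
values = [0.1, 0.35, 0.5, 0.9]
probs = [0.2, 0.3, 0.4, 0.1]
true_E = sum(p * v for p, v in zip(probs, values))

def lower_sum(n):
    """sum over k of (k/n) * P(phi in [k/n, (k+1)/n)), last bin closed."""
    total = 0.0
    for k in range(n):
        in_bin = lambda v: (k / n <= v < (k + 1) / n) or (k == n - 1 and v == 1)
        total += (k / n) * sum(p for p, v in zip(probs, values) if in_bin(v))
    return total

assert lower_sum(10) <= true_E + 1e-12
# The gap between the expectation and the lower sum is at most 1/n:
assert true_E - lower_sum(1000) < 1 / 1000 + 1e-12
```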

5. A base theory that accommodates reflection variables

So, we have a dictionary between distributions and expectations. This will let us build expectations by completing theories and taking expectations according to the resulting 0-1 valued distribution.

Some preparatory work remains, because in order to have the reflection principle \({\mathbb{E}}[{E}[{\ulcorner {\varphi}\urcorner}]] = {\mathbb{E}}[{\varphi}]\), we at least want \({E}[{\ulcorner {\varphi}\urcorner}]\) to be a variable whenever \({\varphi}\) is. Thus we will need a theory \({T}\) that bounds \({E}[{\ulcorner {\varphi}\urcorner}]\) whenever it bounds \({\varphi}\). However, in order to make extreme mixes of elements of \({\texttt{Exp}}({T})\) possible to reflect into an expectation over \({T}\), we will need that all elements of \({\texttt{PseudoExp}}({T})\) are valid interpretations of \({E}\) for \({T}\).

Stratified definition of the base theory \({T}\)

We start with a theory such as \({\mathsf{ZFC}}\) that is strong enough to talk about rational numbers and so on. We add to the language a symbol \({E}\) that will represent an expectation. We also add the sentence stating that \({E}\) is a partial function from \({\mathbb{N}}\) to \({\mathbb{R}}\), and that \({E}\) is linear at \({\varphi}+\psi\) if it happens to be defined on \({\varphi},\psi,\) and \({\varphi}+\psi\). This gives the theory \({T_{0}}\).

Now define inductively the theories \({T_{n+1}}\supset {T_{n}}\): \[\begin{aligned} {T_{n+1}}:=&\ {T_{n}}+ \forall {\ulcorner {\varphi}\urcorner}, k \in {\mathbb{N}}: \forall a,b \in {\mathbb{Q}}:\\ &{\left[ k \text{ witnesses } {T_{n}}{\vdash}{\left( \exists ! x\in {\mathbb{R}}: {\varphi}(x) \right)} \wedge {\left( \forall x: {\varphi}(x) \to x \in [a,b] \right)} \right]}\\ &\to {\left( \exists ! x\in {\mathbb{R}}: {E}[{\ulcorner {\varphi}\urcorner}] = x \right)}\\ &\;\;\wedge {\left( \forall x: {E}[{\ulcorner {\varphi}\urcorner}] = x \to x \in {[a - {(b-a)(2^{{\ulcorner {\varphi}\urcorner}})},b + {(b-a)(2^{{\ulcorner {\varphi}\urcorner}})}]} \right)} \end{aligned}\]

In English, this says that \({T_{n+1}}\) is \({T_{n}}\) along with the statement that whenever \({T_{n}}\) proves that some \({\varphi}\) is well-defined and bounded in some interval \([a,b]\), then it is the case that \({E}\) is defined on \({\varphi}\) and \({E}[{\ulcorner {\varphi}\urcorner}]\) is inside the much looser bound \({[a - {(b-a)(2^{{\ulcorner {\varphi}\urcorner}})},b + {(b-a)(2^{{\ulcorner {\varphi}\urcorner}})}]}\). Intuitively we are adding into \({\texttt{Var}}({T_{n+1}})\) the variable \({E}[{\ulcorner {\varphi}\urcorner}]\) whenever \({\varphi}\in {\texttt{Var}}({T_{n}})\), but we are not restricting its value very much at all. The form of the loose bound on \({E}[{\ulcorner {\varphi}\urcorner}]\) is an artifact of the metric we will later put on \({{\texttt{Exp}}}({T})\).

Finally, we define the base theory we will use in the main argument as the limit of the \({T_{n}}\), that is: \({T}:= \bigcup_{n \in {\mathbb{N}}} {T_{n}}\). Note that \({T}\) is at least (exactly?) as strong as \(({T_{0}})_\omega\), the theory \({T_{0}}\) with \(\omega\)-iterated consistency statements, since the loose bounds are the same as the true bounds when the true bound is \([a,a]\). Also note that it is important that \({T_{0}}\) is arithmetically sound, or else \({T}\) may believe in nonstandard proofs and hence put inconsistent bounds on \({E}\). I think this restriction could be avoided by making the statement in \({T_{n+1}}- {T_{n}}\) into a schema over specific standard naturals that might be proofs.

Soundness of \({T}\) over \({\texttt{PseudoExp}}({T})\)

We will be applying Kakutani’s theorem to the space \({\texttt{Exp}}({T})\), and making forays into \({\texttt{PseudoExp}}({T})\). So we want \({T}\) to at least be consistent, so that \({\texttt{Exp}}({T})\) is nonempty, and furthermore we want \({T}\) to allow for \({E}\) to be interpreted by anything in \({\texttt{PseudoExp}}({T})\).

Recall that a (pseudo)expectation over a theory \(A\) is a function \({\mathbb{E}}: {\texttt{Var}}(A) \to {\mathbb{R}}\) that is linear, and such that given \({\varphi}\) with \(A\)-bound \({[a,b]_{A,{\varphi}}}\), we have that \({\mathbb{E}}[{\varphi}] \in [a, b]\) (or \({\mathbb{E}}[{\varphi}] \in [a-{(b-a)(2^{{\ulcorner {\varphi}\urcorner}})}, b+{(b-a)(2^{{\ulcorner {\varphi}\urcorner}})}]\)). As noted before, for any theories \(A \subset B\), we have that \({\texttt{Exp}}(B) \subset {\texttt{Exp}}(A) \subset {\texttt{PseudoExp}}(A)\) and \({\texttt{Exp}}(B) \subset {\texttt{PseudoExp}}(B) \subset {\texttt{PseudoExp}}(A)\), where we are restricting elements of \({\texttt{Exp}}(B)\) and \({\texttt{PseudoExp}}(B)\) to \({\texttt{Var}}(A)\).

Lemma

For any consistent theory \(A\), \({{\texttt{Exp}}}(A)\) is nonempty.

Proof. This follows from the isomorphism \({\iota}{^{-1}}\): we take a completion of \(A\), which is a coherent probability distribution \({\mathbb{P}}\) over \(A\), and then take expectations according to \({\mathbb{P}}\). That is, \({\iota}{^{-1}}({\mathbb{P}}) \in {{\texttt{Exp}}}(A)\). \({\dashv}\)

We assume that we have some standard model for the theory over which \({T}\) was constructed. For concreteness we take that theory to be \({\mathsf{ZFC}}\), and we take the standard model to be the cumulative hierarchy \(V\).

Theorem

\({{\texttt{Exp}}}({T})\) is nonempty, and for all \({\mathbb{J}}\in {{\texttt{PseudoExp}}}({T})\), we have that \((V,{\mathbb{J}}) \models {T}\).

(To follow the proof, keep in mind the distinction between \({\mathbb{E}}\) being a (pseudo)expectation over a theory, versus \({\mathbb{E}}\) providing a model for a theory.)

Proof. The claim is true for \({T_{0}}\) in place of \({T}\), since \({T_{0}}\) is consistent and places no restrictions other than linearity on \({E}\).

Say the claim holds for \({T_{n}}\), so \({{\texttt{PseudoExp}}}({T_{n}})\) is non-empty. For any \({\mathbb{J}}\in {{\texttt{PseudoExp}}}({T_{n}})\), by hypothesis \((V,{\mathbb{J}}) \models {T_{n}}\). Also, by definition of \({{\texttt{PseudoExp}}}({T_{n}})\), \({\mathbb{J}}\) satisfies that whenever \({T_{n}}\) bounds \({\varphi}\) in \([a,b]\), also \({\mathbb{J}}[{\varphi}] \in {[a - {(b-a)(2^{{\ulcorner {\varphi}\urcorner}})},b + {(b-a)(2^{{\ulcorner {\varphi}\urcorner}})}]}\). Hence \((V,{\mathbb{J}}) \models {T_{n+1}}\). Thus \({T_{n+1}}\) is consistent. Since \({{\texttt{PseudoExp}}}({T_{n+1}}) {\subset}{{\texttt{PseudoExp}}}({T_{n}})\), this also shows that for all \({\mathbb{J}}\in {{\texttt{PseudoExp}}}({T_{n+1}})\), we have \((V,{\mathbb{J}}) \models {T_{n+1}}\).

By induction the claim holds for all \(n\), and hence \({T}\) is consistent and \({{\texttt{Exp}}}({T})\) is nonempty. Since \({{\texttt{PseudoExp}}}({T}) {\subset}{{\texttt{PseudoExp}}}({T_{n}})\) for all \(n\), for any \({\mathbb{J}}\in {{\texttt{PseudoExp}}}({T})\) we have \((V,{\mathbb{J}}) \models {T_{n}}\), and hence \((V,{\mathbb{J}}) \models {T}\). \({\dashv}\)

6. Main theorem: reflection and assigning probability 1 to reflection

We have a theory \({T}\) that is consistent, so that \({{\texttt{Exp}}}({T})\) is nonempty, and sound over all pseudo-expectations. We want an expectation that is reflective, and also believes that it is reflective. First we formalize this notion and show that there are reflective expectations.

Existence of reflective expectations

Define the sentence \[{\textsf{refl}}:= \forall n \in {\mathbb{N}}: {\left( {E}[n] \text{ defined} \right)} \to {\left( {E}[{\ulcorner {E}[n] \urcorner}] \text{ defined, and } {E}[{\ulcorner {E}[n] \urcorner}] = {E}[n] \right)}\ .\]

This says that whenever \({E}\) is defined on some variable, it expects \({E}\) to take some value on that variable, and it expects the correct value. In short, its expectations about its expectations are correct. Define \({\texttt{Refl}}({T}) {\subset}{{\texttt{Exp}}}({T})\) to be the reflective expectations over \({T}\), i.e. those that satisfy \({\textsf{refl}}\).
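On a finite table of values, the \({\textsf{refl}}\) property is just a pointwise check; this sketch (variable names and values hypothetical) tests whether an expectation's beliefs about \({E}\)'s values match its beliefs about the underlying variables:

```python
# An expectation assigns values both to base variables and to the
# variables "E[phi]" (the value the symbol E takes on phi's Goedel number).
reflective = {"phi": 0.4, "E[phi]": 0.4, "psi": 0.7, "E[psi]": 0.7}
broken = {"phi": 0.4, "E[phi]": 0.6, "psi": 0.7, "E[psi]": 0.7}

def is_reflective(expect, base_vars):
    """refl: wherever defined, the expected value of E on phi must equal
    the expected value of phi itself."""
    return all(expect[f"E[{v}]"] == expect[v] for v in base_vars)

assert is_reflective(reflective, ["phi", "psi"])
assert not is_reflective(broken, ["phi", "psi"])
```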

Some observations: the spaces \[{\texttt{Refl}}({T}) {\subset}{{\texttt{Exp}}}({T}) {\subset}{{\texttt{PseudoExp}}}({T}) {\subset}\prod_{{\varphi}\in {\texttt{Var}}({T})} {[a - {(b-a)(2^{{\ulcorner {\varphi}\urcorner}})},b + {(b-a)(2^{{\ulcorner {\varphi}\urcorner}})}]}\] are all compact, as they are closed subsets of the product of the loose bounds over \({\texttt{Var}}({T})\), and that product is compact by Tychonoff's theorem. Both \({{\texttt{Exp}}}({T})\) and \({{\texttt{PseudoExp}}}({T})\) are convex, as linearity and being in bounds are preserved by convex combinations. (For the same reason, \({\texttt{Refl}}({T})\) is convex, and is in fact an affine subspace of \({{\texttt{Exp}}}({T})\).)

Lemma

\({\texttt{Refl}}({T})\) is nonempty.

Proof. We apply Kakutani’s theorem to \({{\texttt{Exp}}}({T})\) where \({\mathbb{G}}\) corresponds to \({\mathbb{F}}\) when \(\forall {\varphi}\in {\texttt{Var}}({T}): {\mathbb{G}}[{E}[{\ulcorner {\varphi}\urcorner}]] = {\mathbb{F}}[{\varphi}]\). The set of \({\mathbb{G}}\) corresponding to \({\mathbb{F}}\) is compact and convex, and the graph is closed. For any \({\mathbb{F}}\) there is a corresponding \({\mathbb{G}}\): we take an expectation over the theory

\[{T}_{\mathbb{F}}:= {T}+ \{ {E}[{\ulcorner {\varphi}\urcorner}] \in (a,b) \mid a,b \in {\mathbb{Q}}, {\mathbb{F}}[{\varphi}] \in (a,b) \}\]

stating that \({E}\) behaves according to \({\mathbb{F}}\). This theory \({T}_{\mathbb{F}}\) is consistent because \((V,{\mathbb{F}})\) provides a model. Any completion \({T}_{\mathbb{F}}'\) has \({T}_{\mathbb{F}}'({E}[{\ulcorner {\varphi}\urcorner}]) = {\mathbb{F}}[{\varphi}]\), so the resulting expectation corresponds to \({\mathbb{F}}\). Kakutani’s theorem gives a fixed point of this correspondence, which is in \({\texttt{Refl}}({T})\). \({\dashv}\)

The correspondence \({{\lhd_{{E}}}}\): exact reflection and assigning high probability to reflection for distributions close to reflective

We can’t simply take a correspondence \({{\lhd_{{E}}}}\) that also requires \({\mathbb{G}}\) to assign probability 1 to \({\textsf{refl}}\); in general there would not be any expectation corresponding to any \({\mathbb{F}}\in {{\texttt{Exp}}}({T}) - {\texttt{Refl}}({T})\). Instead we will soften this requirement, and only require that \({\mathbb{G}}[{\textsf{refl}}]\) approach 1 as \({\mathbb{F}}\) approaches being reflective, in order for \({\mathbb{F}}{{\lhd_{{E}}}}{\mathbb{G}}\).

Definition

Define a metric on \({{\texttt{Exp}}}({T})\) by

\[d({\mathbb{F}},{\mathbb{G}}) := \sum_{{\varphi}\in {\texttt{Var}}({T})} \frac{|{\mathbb{F}}[{\varphi}] - {\mathbb{G}}[{\varphi}]|}{2^{{\ulcorner {\varphi}\urcorner}}|{[a,b]_{{T},{\varphi}}}|}\ .\]

(If \(|{[a,b]_{{T},{\varphi}}}|=0\) then the \({\varphi}\) coordinate plays no role in the metric by fiat.)\({=:}\)

The factor of \(1/2^{{\ulcorner {\varphi}\urcorner}}\) ensures that the sum defining the metric converges, since the factor of \(1/|{[a,b]_{{T},{\varphi}}}|\) normalizes each coordinate's range to \({[0,1]}\).
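As a sketch (Gödel numbers and bounds hypothetical), the metric weighs each coordinate by \(1/2^{{\ulcorner {\varphi}\urcorner}}\) and normalizes by the width of the \({T}\)-bound:

```python
def distance(F, G, bounds):
    """d(F, G) = sum over phi of |F[phi] - G[phi]| / (2^#phi * width),
    where width is the length of [a,b]_{T,phi}; width-0 coordinates are
    skipped by fiat."""
    total = 0.0
    for phi, (goedel, a, b) in bounds.items():
        if b == a:
            continue
        total += abs(F[phi] - G[phi]) / (2 ** goedel * (b - a))
    return total

# Two variables with Goedel numbers 1 and 2 and bounds [0,1], [0,2]:
bounds = {"phi": (1, 0.0, 1.0), "psi": (2, 0.0, 2.0)}
F = {"phi": 0.5, "psi": 1.0}
G = {"phi": 1.0, "psi": 0.0}
# |0.5-1.0|/(2*1) + |1.0-0.0|/(4*2) = 0.25 + 0.125
assert abs(distance(F, G, bounds) - 0.375) < 1e-12
```

Only finitely many coordinates appear here, but the \(1/2^{{\ulcorner {\varphi}\urcorner}}\) weights are what make the full infinite sum finite.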

We abbreviate \({d\langle{\mathbb{F}}\rangle} := d({\mathbb{F}}, {\texttt{Refl}}) = \min_{{\mathbb{H}}\in {\texttt{Refl}}} d({\mathbb{F}},{\mathbb{H}})\) to mean the distance from \({\mathbb{F}}\) to the nearest element of the set \({\texttt{Refl}}\). Since \({\texttt{Refl}}\) is compact, this is well-defined and continuous on \({{\texttt{Exp}}}({T})\).

Definition

For \({\mathbb{F}},{\mathbb{G}}\in {{\texttt{Exp}}}({T})\), we say that \({\mathbb{G}}\) reflects \({\mathbb{F}}\) and we write \({\mathbb{F}}{{\lhd_{{E}}}}{\mathbb{G}}\) precisely when:

  • \({\mathbb{G}}\) expects \({E}\) to behave just like \({\mathbb{F}}\), i.e. \(\forall {\varphi}\in {\texttt{Var}}({T}): {\mathbb{G}}[{E}[{\ulcorner {\varphi}\urcorner}]] = {\mathbb{F}}[{\varphi}]\), and

  • \({\mathbb{G}}\) is somewhat confident that \({E}\) is reflective, specifically \({\mathbb{G}}[{\textsf{refl}}] \ge 1-{d\langle{\mathbb{F}}\rangle}\).

\({=:}\)

Fixed points of the correspondence are reflective and believe they are reflective

Say \({\mathbb{G}}{{\lhd_{{E}}}}{\mathbb{G}}\). Then \({\mathbb{G}}\in {\texttt{Refl}}({T})\), by definition of \({{\lhd_{{E}}}}\). In particular, \({d\langle{\mathbb{G}}\rangle} = 0\), so that \({\mathbb{G}}[{\textsf{refl}}] =1\), and \({\mathbb{G}}\) is the desired distribution.

Compact and convex images; closed graph

For a fixed \({\mathbb{F}}\), the conditions for \({\mathbb{F}}{{\lhd_{{E}}}}{\mathbb{G}}\) constrain \({\mathbb{G}}\) to closed subintervals in individual coordinates, so \(\{ {\mathbb{G}}\mid  {\mathbb{F}}{{\lhd_{{E}}}}{\mathbb{G}}\}\) is compact and convex.

Consider a sequence \({\mathbb{F}}_0 {{\lhd_{{E}}}}{\mathbb{G}}_0, {\mathbb{F}}_1 {{\lhd_{{E}}}}{\mathbb{G}}_1, \dots\) converging to \({\mathbb{F}}\) and \({\mathbb{G}}\). For \({\varphi}\in {\texttt{Var}}({T})\), since \({\mathbb{G}}_n[{E}[{\ulcorner {\varphi}\urcorner}]] = {\mathbb{F}}_n[{\varphi}] \to {\mathbb{F}}[{\varphi}]\) and also \({\mathbb{G}}_n[{E}[{\ulcorner {\varphi}\urcorner}]] \to {\mathbb{G}}[{E}[{\ulcorner {\varphi}\urcorner}]]\), we have \({\mathbb{G}}[{E}[{\ulcorner {\varphi}\urcorner}]] = {\mathbb{F}}[{\varphi}]\). Also, since \({d\langle{\mathbb{F}}_n\rangle} \to {d\langle{\mathbb{F}}\rangle}\), the values \({\mathbb{G}}_n[{\textsf{refl}}] \ge 1-{d\langle{\mathbb{F}}_n\rangle}\) converge to \({\mathbb{G}}[{\textsf{refl}}]\), which is therefore at least \(1-{d\langle{\mathbb{F}}\rangle}\). Thus \({{\lhd_{{E}}}}{\subset}{{\texttt{Exp}}}({T}) {\times}{{\texttt{Exp}}}({T})\) is closed.

Images of the correspondence are nonempty: interpolating reflective and pseudo-expectations

Finally, we need to show that for any \({\mathbb{F}}\in {{\texttt{Exp}}}({T})\), there is some \({\mathbb{G}}\in {{\texttt{Exp}}}({T})\) such that \({\mathbb{F}}{{\lhd_{{E}}}}{\mathbb{G}}\). (The case distinction below is just for explanatory purposes.)

Case 1. \({\mathbb{F}}\in {\texttt{Refl}}({T})\).

Recall the theory \[{T}_{\mathbb{F}}:= {T}+ \{ {E}[{\ulcorner {\varphi}\urcorner}] \in (a,b) \mid a,b \in {\mathbb{Q}}, {\mathbb{F}}[{\varphi}] \in (a,b) \}\] stating that \({E}\) behaves according to \({\mathbb{F}}\). By the theorem about \(T\), \((V,{\mathbb{F}}) \models {T}\), so along with \({\mathbb{F}}\in {\texttt{Refl}}({T})\) we also have \((V,{\mathbb{F}}) \models {T}_{\mathbb{F}}+ {\textsf{refl}}\). Thus that theory is consistent, so we can take some \({\mathbb{G}}\in {{\texttt{Exp}}}({T}_{\mathbb{F}}+ {\textsf{refl}})\). This \({\mathbb{G}}\) expects \({E}\) to behave like \({\mathbb{F}}\), and since \({d\langle{\mathbb{F}}\rangle} = 0\), we have \({\mathbb{G}}[{\textsf{refl}}]=1 \ge 1-{d\langle{\mathbb{F}}\rangle}\).

Case 2. \({\mathbb{F}}\notin {\texttt{Refl}}({T})\).

Pick some \({\mathbb{H}}\in {\texttt{Refl}}({T})\) with \(d({\mathbb{F}},{\mathbb{H}}) = {d\langle{\mathbb{F}}\rangle}> 0\). As in the previous case, find some \({\mathbb{G}}_{\mathbb{H}}\in {{\texttt{Exp}}}({T}_{\mathbb{H}}+ {\textsf{refl}})\), so \({\mathbb{G}}_{\mathbb{H}}\) expects \({E}\) to behave like \({\mathbb{H}}\), and \({\mathbb{G}}_{\mathbb{H}}[{\textsf{refl}}] = 1\). We will define \({\mathbb{G}}\) with \({\mathbb{F}}{{\lhd_{{E}}}}{\mathbb{G}}\) by taking a convex combination of \({\mathbb{G}}_{\mathbb{H}}\) with another \({\mathbb{G}}_{\mathbb{J}}\in {{\texttt{Exp}}}({T})\):

\[{\mathbb{G}}:= (1-{d\langle{\mathbb{F}}\rangle}) {\mathbb{G}}_{\mathbb{H}}+ {d\langle{\mathbb{F}}\rangle} {\mathbb{G}}_{\mathbb{J}}\ .\]

By convexity, \({\mathbb{G}}\in {{\texttt{Exp}}}({T})\), and since \({\mathbb{G}}_{\mathbb{J}}[{\textsf{refl}}] \in {[0,1]}\), we will have \({\mathbb{G}}[{\textsf{refl}}] \ge (1-{d\langle{\mathbb{F}}\rangle})\) as desired.

However, we also need \({\mathbb{G}}[{E}[{\ulcorner {\varphi}\urcorner}]] = {\mathbb{F}}[{\varphi}]\). That is, we need \[\begin{aligned} {\left( (1-{d\langle{\mathbb{F}}\rangle}) {\mathbb{G}}_{\mathbb{H}}+ {d\langle{\mathbb{F}}\rangle} {\mathbb{G}}_{\mathbb{J}}\right)}[{E}[{\ulcorner {\varphi}\urcorner}]] &= {\mathbb{F}}[{\varphi}]\\ {\mathbb{G}}_{\mathbb{J}}[{E}[{\ulcorner {\varphi}\urcorner}]] &= \frac{{\mathbb{F}}[{\varphi}] - (1-{d\langle{\mathbb{F}}\rangle}) {\mathbb{G}}_{\mathbb{H}}[{E}[{\ulcorner {\varphi}\urcorner}]]}{{d\langle{\mathbb{F}}\rangle}}\\ {\mathbb{J}}[{\varphi}] &:= \frac{1}{{d\langle{\mathbb{F}}\rangle}} {\mathbb{F}}[{\varphi}] + {\left( 1-\frac{1}{{d\langle{\mathbb{F}}\rangle}} \right)} {\mathbb{H}}[{\varphi}]\ ,\end{aligned}\]

where \({\mathbb{G}}_{\mathbb{J}}\) believes that \({E}\) behaves like \({\mathbb{J}}\). We take the last line to be the definition of \({\mathbb{J}}\).
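A quick numeric sanity check of this algebra, with hypothetical values standing in for \({\mathbb{F}}[{\varphi}]\), \({\mathbb{H}}[{\varphi}]\), and \({d\langle{\mathbb{F}}\rangle}\): mixing \({\mathbb{H}}\) and \({\mathbb{J}}\) with weights \(1-{d\langle{\mathbb{F}}\rangle}\) and \({d\langle{\mathbb{F}}\rangle}\) recovers \({\mathbb{F}}\) exactly.

```python
# Toy numeric check (hypothetical values) of the interpolation identity:
# with J[phi] := (1/d) F[phi] + (1 - 1/d) H[phi], the mixture
# (1 - d) H[phi] + d J[phi] equals F[phi].
def interpolate(F_phi, H_phi, d):
    """J[phi] as defined in the last line of the derivation above."""
    return F_phi / d + (1.0 - 1.0 / d) * H_phi

F_phi, H_phi, d = 0.7, 0.6, 0.25   # hypothetical values, d = d<F> > 0
J_phi = interpolate(F_phi, H_phi, d)
# G[E[⌜phi⌝]] = (1 - d) H[phi] + d J[phi] should equal F[phi]:
assert abs((1.0 - d) * H_phi + d * J_phi - F_phi) < 1e-12
# Here J[phi] = 1.0, outside [0.6, 0.7]: J need not itself be an expectation.
```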

In general, this function \({\mathbb{J}}\) is not in \({{\texttt{Exp}}}({T})\). It may be that \(d({\mathbb{F}},{\mathbb{H}})\) is very small, but for some \({\varphi}\) with a large Gödel number, \({\mathbb{F}}[{\varphi}]\) is large and \({\mathbb{H}}[{\varphi}]\) is small, so that \({\mathbb{J}}[{\varphi}]\) is very large and actually outside of \({[a,b]_{{T},{\varphi}}}\), and hence not an expectation. However, \({\mathbb{J}}\) is, in fact, a pseudo-expectation over \({T}\):

\[{\mathbb{J}}[{\varphi}] = {\mathbb{H}}[{\varphi}] + \frac{1}{{d\langle{\mathbb{F}}\rangle}} ({\mathbb{F}}[{\varphi}] - {\mathbb{H}}[{\varphi}])\ ,\] so \[{\mathbb{J}}[{\varphi}] \in [a-K, b+K]\ ,\] where \({\mathbb{H}}[{\varphi}] \in {[a,b]_{{T},{\varphi}}}\) and \(K := \frac{1}{{d\langle{\mathbb{F}}\rangle}} {\left| {\mathbb{F}}[{\varphi}] - {\mathbb{H}}[{\varphi}] \right|}\). The claim is that \(K \le {(b-a)(2^{{\ulcorner {\varphi}\urcorner}})}\). Indeed:

\[\begin{aligned} K &= \frac{1}{{d\langle{\mathbb{F}}\rangle}} ({\left| {\mathbb{F}}[{\varphi}] - {\mathbb{H}}[{\varphi}] \right|})\\ &= \frac{{\left| {\mathbb{F}}[{\varphi}] - {\mathbb{H}}[{\varphi}] \right|}}{d({\mathbb{F}}, {\mathbb{H}})}\\ &= \frac{{\left| {\mathbb{F}}[{\varphi}] - {\mathbb{H}}[{\varphi}] \right|}}{ \sum_{\psi \in {\texttt{Var}}({T})} \frac{|{\mathbb{F}}[\psi] - {\mathbb{H}}[\psi]|}{2^{{\ulcorner \psi \urcorner}}|{[a,b]_{{T},\psi}}|}}\\ &\le \frac{{\left| {\mathbb{F}}[{\varphi}] - {\mathbb{H}}[{\varphi}] \right|}}{ \frac{|{\mathbb{F}}[{\varphi}] - {\mathbb{H}}[{\varphi}]|}{2^{{\ulcorner {\varphi}\urcorner}}|{[a,b]_{{T},{\varphi}}}|}}\\ &= 2^{{\ulcorner {\varphi}\urcorner}}|{[a,b]_{{T},{\varphi}}}|\\ &= {(b-a)(2^{{\ulcorner {\varphi}\urcorner}})}\ .\end{aligned}\]

Therefore \({\mathbb{J}}\in {{\texttt{PseudoExp}}}({T})\). By the theorem on \(T\), \((V,{\mathbb{J}}) \models {T}\), so that \({T}_{\mathbb{J}}\) is consistent and we obtain \({\mathbb{G}}_{\mathbb{J}}\in {{\texttt{Exp}}}({T})\) that expects \({E}\) to behave like \({\mathbb{J}}\). Then \({\mathbb{G}}= (1-{d\langle{\mathbb{F}}\rangle}) {\mathbb{G}}_{\mathbb{H}}+ {d\langle{\mathbb{F}}\rangle} {\mathbb{G}}_{\mathbb{J}}\) is in \({{\texttt{Exp}}}({T})\), expects \({E}\) to behave like \({\mathbb{F}}\), and has \({\mathbb{G}}[{\textsf{refl}}] \ge (1-{d\langle{\mathbb{F}}\rangle})\). That is, \({\mathbb{F}}{{\lhd_{{E}}}}{\mathbb{G}}\).
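The bound on \(K\) can likewise be checked numerically on a toy instance. The variable codes, bounds, and values below are made up for illustration; `dist` implements the weighted distance \(d({\mathbb{F}},{\mathbb{H}})\) appearing in the denominator of the derivation:

```python
# Toy check of K <= (b - a) * 2^{⌜phi⌝}. Variable "codes" are small
# integers standing in for Gödel numbers; bounds[psi] is [a,b]_{T,psi}.
def dist(F, H, bounds):
    # d(F, H) = sum over psi of |F[psi] - H[psi]| / (2^{⌜psi⌝} |[a,b]_{T,psi}|)
    return sum(abs(F[p] - H[p]) / (2 ** p * (b - a))
               for p, (a, b) in bounds.items())

bounds = {1: (0.0, 1.0), 2: (0.0, 2.0), 3: (-1.0, 1.0)}  # hypothetical
F = {1: 0.3, 2: 1.5, 3: 0.2}
H = {1: 0.4, 2: 1.1, 3: -0.3}
d = dist(F, H, bounds)
for phi, (a, b) in bounds.items():
    K = abs(F[phi] - H[phi]) / d
    assert K <= (b - a) * 2 ** phi + 1e-9   # the claimed bound holds
```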

The conditions of Kakutani’s theorem are satisfied, so there is a fixed point \({\mathbb{E}}{{\lhd_{{E}}}}{\mathbb{E}}\), and therefore we have an expectation that believes \({E}\) behaves like itself, and that assigns probability 1 to \({E}\) having this property. \({\dashv}\)

Extension to belief in any generic facts about \({\texttt{Refl}}\)

The above argument goes through in exactly the same way for any statement \({\theta}\) that is satisfied by all reflective expectations; we just have \({\mathbb{G}}_{\mathbb{H}}\) also assign probability 1 to \({\theta}\), and modify \({{\lhd_{{E}}}}\) by adding a condition for \({\theta}\) analogous to that for \({\textsf{refl}}\). For example, we can have our reflective \({\mathbb{E}}\) assign probability 1 to \({E}\in {{\texttt{Exp}}}({T})\), which is analogous to an inner coherence principle.

7. Discussion

I think that if the base theory is strong enough to prove \({{\texttt{Exp}}}({T}) \cong \Delta(T)\), then this whole argument can be carried out with \({E}\) defined in terms of \({P}\), a symbol for a probability distribution, and so we get a probability distribution over the original language with the desired beliefs about itself as a probability distribution.

I think it should be possible to have a distribution that is reflective in the sense of \({{\lhd_{{E}}}}\) be definable and reflective for its definition, using the methods from this post. But it doesn’t seem as straightforward here. One strategy might be to turn the sentence in the definition of \({T_{n+1}}\), stating that \({E}\) is in the loose \({T_{n}}\)-bounds on variables, into a schema, and diagonalize at once against all the \({T_{n}}\) refuting finite behaviors. But the proof of soundness of \({T}\) over pseudo-expectations, along with diagonalizing also against refuting finite behaviors in conjunction with \({\textsf{refl}}\), seems to require a little more work (and may be false).

It would be nice to have a good theory of logical probability. The existence proof of an expectation-reflective distribution given here shows that expectation-reflection is a desideratum that might be achievable in a broader context (i.e. in conjunction with other desiderata).

I don’t know what class of variables a \({{\lhd_{{E}}}}\)-reflective \({\mathbb{E}}\) is reflective for. Universes that use \({\mathbb{E}}\) in a way that only looks at \({\mathbb{E}}\)’s opinions on variables in \({\texttt{Var}}({T_{n}})\) for some \(n\), and are defined and uniformly bounded whenever \({\mathbb{E}}\) is in \({{\texttt{PseudoExp}}}({T_{n}})\), will be reflected accurately. If the universe looks at all of \({\mathbb{E}}\), and for instance does something crazy if \({\mathbb{E}}\) is not in \({{\texttt{Exp}}}({T})\), then \({T}\) may not be able to prove


