by Sam Eisenstat 188 days ago | Jessica Taylor likes this

Two minor comments.

First, the bitstrings that you use do not all correspond to worlds, since, for example, $$\rm{Con}(\rm{PA}+\rm{Con}(\rm{PA}))$$ implies $$\rm{Con}(\rm{PA})$$, as $$\rm{PA}$$ is a subtheory of $$\rm{PA} + \rm{Con}(\rm{PA})$$. This can be fixed by instead using a tree of sentences that each diagonalize against themselves. Tsvi and I used a construction in this spirit in A limit-computable, self-reflective distribution, for example.

Second, I believe that weakening #2 in this post also cannot be satisfied by any constant distribution. To sketch my reasoning: a trader can try to buy a sequence of sentences $$\phi_1, \phi_1 \wedge \phi_2, \dots$$, spending $$2^{-n}$$ on the $$n$$th sentence $$\phi_1 \wedge \dots \wedge \phi_n$$. It should choose the sequence of sentences so that $$\phi_1 \wedge \dots \wedge \phi_n$$ has probability at most $$2^{-n}$$; then it will make an infinite amount of money if the sentences are all simultaneously true. The way to do this is to choose each $$\phi_n$$ from an enumeration of all sentences. If at any point you notice that $$\phi_1 \wedge \dots \wedge \phi_n$$ has too high a probability, pick a new sentence for $$\phi_n$$: we can sell all the conjunctions $$\phi_1 \wedge \dots \wedge \phi_k$$ for $$k \ge n$$ and, by hypothesis, get back the amount originally paid. Then, if we can keep applying sharper continuous tests of the probabilities of the sentences $$\phi_1 \wedge \dots \wedge \phi_n$$ over time, we will settle on a sequence with the desired property. In order to turn this sketch into a proof, we need to be more careful about how these things are made continuous, but it seems routine.
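The greedy selection step in the sketch above can be illustrated in code. This is only a toy: `prob` is a hypothetical oracle for the constant distribution's probability of a conjunction (given as a tuple of sentence indices), the toy distribution below treats sentences as independent fair coins, and the re-picking/selling logic and all the continuity issues are omitted.

```python
from itertools import count

def pick_sequence(prob, n_steps):
    """Greedily choose sentences phi_1, ..., phi_n (as indices into an
    enumeration of all sentences) so that every prefix conjunction
    phi_1 & ... & phi_k has probability at most 2**-k under `prob`."""
    conj = ()  # indices of the sentences chosen so far
    for k in range(1, n_steps + 1):
        for s in count():  # walk the enumeration of all sentences
            if s in conj:
                continue
            candidate = conj + (s,)
            if prob(candidate) <= 2.0 ** -k:
                conj = candidate  # "buy" this conjunction for 2**-k
                break
    return conj

# Toy distribution: sentences are independent events of probability 1/2,
# so a conjunction of k distinct sentences has probability 2**-k.
toy_prob = lambda conj: 2.0 ** -len(set(conj))

seq = pick_sequence(toy_prob, 5)
# every prefix phi_1 & ... & phi_k has probability <= 2**-k
```

Under the toy distribution the very first candidates already satisfy the bound, so the inner loop terminates immediately; against a real distribution the search, and the continuous probability tests the comment mentions, do the actual work.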
