You can't beat a troll by predicting it.
discussion post by Abram Demski

My trollability post showed that any computable probability distribution on logic which updates on proofs in a Bayesian way can in principle be manipulated into nearly-arbitrary beliefs by an adversary. (At least, this is true in my \(GLS\)-based treatment, and in a version which updates on theorems directly rather than on their provability.) Here, I sketch a result which suggests that it’s fairly easy to do so, blocking some potential routes around the problem. I will use notation and concepts from the original trollability post.

First, note that Corollary 1 does not depend on the likelihood ratio of 1/2 which Theorem 1 implies we can obtain. A weaker troll might not have the computational power to search for proofs with so much leverage over our beliefs. All the troll needs is a sequence of theorems \(t_n\) such that the product of the likelihood ratios \(P(\Box t_n | \Diamond x) / P(\Box t_n | \neg \Diamond x)\) goes to zero. (Note: what Theorem 1 establishes is exactly that we can always find \(t\) so that this likelihood ratio is less than or equal to 1/2.) The likelihood ratio doesn't even need to be bounded away from one. It could be that each new \(t\) reduces the probability less and less (meaning the likelihood ratio is closer and closer to one), but the overall effect is still to modify the probability of \(x\) arbitrarily. This is guaranteed if the product of the ratios converges to zero.
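
As a purely numerical illustration of this point (the specific ratios below are made up for the sketch, not taken from the construction): each update multiplies the odds on \(\Diamond x\) by the likelihood ratio for \(\Box t_n\), and a sequence of ratios tending to one can still have a product tending to zero.

```python
# A minimal numerical sketch: likelihood ratios that tend to 1 can still have a
# product that tends to 0, driving the posterior probability down arbitrarily.
# The ratios r_n = n/(n+1) are illustrative only.

odds = 1.0                 # prior odds on Diamond(x), i.e. probability 0.5
for n in range(1, 10_001):
    r_n = n / (n + 1)      # likelihood ratio P_n(Box t_n | Diamond x) / P_n(Box t_n | not Diamond x)
    odds *= r_n            # Bayesian odds update on observing Box t_n

print(odds / (1 + odds))   # ~1e-4 after 10,000 updates; goes to 0 as n grows
```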

(Analogously, we would talk about this product converging to infinity for Corollary 2. Here I'll deal just with the case of driving probabilities down; as in the previous post, the idea is that if we can't keep probabilities from being driven arbitrarily down, we also cannot keep them from being driven arbitrarily up, so we get non-convergence.)

It seems natural to hope, then, that an agent could avoid manipulation by predictable trolls. If the agent is predicting the troll quite accurately, then it must be placing high probability on \(\Box t\) ahead of time; so, the likelihood ratio must be close to one. If the predictions converge to probability one very quickly, then perhaps the likelihood ratios can converge quickly enough to avoid the divergence result.
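
For contrast, here is the regime this hope is aiming at, again with made-up numbers: if the deviations of the likelihood ratios from one are summable, the product stays bounded away from zero and the probability is not driven down. The result sketched below says that a \(GLS\)-coherent distribution cannot actually be in this regime against a troll that eventually proves every theorem.

```python
# The hoped-for regime, sketched with illustrative numbers: ratios r_n = 1 - 2**-n
# approach 1 so quickly that the infinite product converges to a positive limit,
# so the posterior probability is bounded away from zero.

odds = 1.0                 # prior odds on Diamond(x)
for n in range(1, 60):
    r_n = 1 - 2.0 ** (-n)  # deviations from 1 are summable
    odds *= r_n

print(odds / (1 + odds))   # ~0.224, even in the limit
```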

This turns out to be impossible for fairly simple trolls.

Before sketching the proof, I should explain what I am not doing. I am not saying that an agent cannot block a troll by beating it to theorems. This totally works: if the agent already knows a theorem by the time the troll presents it, then there is nothing to update on. Of course there is still a problem if the agent is beating the troll to theorems by anticipating what the troll will try next; in that case, the troll is manipulating the agent just as much as before. However, if the agent is so fast that it can beat the troll to theorems without changing its search-order, the troll is toothless.

This is little comfort, however. My real concern in all of this is that the proof-order of the agent itself can cause non-convergence. We do not yet know that convergence is possible with any proof-ordering. The troll is only a personification of a bad proof-order in an agent’s internal theorem-prover. So, it’s sufficient to consider the case where all the theorems are being proved by the troll, and the agent acts as a passive receiver of information. If we can find any restrictions on the troll which allow convergence in that case, we are happy. The hope is that a prior which models the troll’s sequence well could converge. So, let’s generalize the theorem from last time to block that approach as much as we can.

Let \(P_n\) be the probability distribution at time \(n\) and \(t_n\) be the truth which the troll proves to the agent at that time to get it to update to \(P_{n+1}\). I’ll stick to discussing trolls which use \(t_n\) of the form \(x \rightarrow y_n\), with \(y_n\) a theorem of \(PA\), as in the proofs in the previous post. Furthermore, suppose that the troll eventually uses each theorem of \(PA\) as a \(y_n\). I assume that \(x\) is undecidable, and that probabilities are nonzero unless they are forced to be zero by coherence. What we want is:

\[\lim_{n \rightarrow \infty} \prod_{i=0}^n P_i(\Box t_i | \Diamond x) / P_i(\Box t_i | \neg \Diamond x) > 0\]
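
Concretely, the troll just described amounts to the following loop. This is only a schematic sketch: `enumerate_pa_theorems`, `implies`, and `agent.update_on_proof` are hypothetical stand-ins, and nothing here implements the \(GLS\)-coherent distribution itself.

```python
def simple_troll(x, agent, enumerate_pa_theorems):
    """Present the agent with x -> y_n for every theorem y_n of PA, in enumeration order.

    x is a fixed undecidable sentence. Each t_n = (x -> y_n) is itself a theorem of PA
    (since y_n is), so the troll never lies; it only chooses the order of the truths.
    """
    for y_n in enumerate_pa_theorems():
        t_n = ("implies", x, y_n)   # the sentence x -> y_n
        agent.update_on_proof(t_n)  # the agent conditions P_n on Box(t_n), yielding P_{n+1}
```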

Along the lines of the proof of Theorem 1, we can see that when \(t_n\) has the form \(x \rightarrow y_n\), \(P_n(\Box t_n | \Diamond x) / P_n(\Box t_n | \neg \Diamond x) = P_n(\Box t_n | \Diamond x)\). So, the desired result is equivalent to:

\[\lim_{n \rightarrow \infty} \prod_{i=0}^n P_i(\Box t_i | \Diamond x) > 0\]
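
To spell out the step used just above (restating it under the \(GLS\) conventions of the previous post): \(\neg \Diamond x\) abbreviates \(\Box \neg x\), and \(\Box \neg x \rightarrow \Box(x \rightarrow y_n)\) is a theorem of \(GL\), so coherence forces the denominator to be one:

\[P_n(\Box t_n | \neg \Diamond x) = P_n(\Box (x \rightarrow y_n) | \Box \neg x) = 1.\]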

Note, however, that this product is a chain of conditional probabilities giving the probability of the next \(\Box t_n\) conditioned on all the previous \(\Box t_n\) and on \(\Diamond x\). If the limit is greater than zero, then we can sample theories from the prior \(P_0\) and have a nonzero chance of sampling all the \(\Box t_n\) (and hence, by the reflection axioms of \(GLS\), all the \(t_n\)). But we also have a nonzero chance of sampling \(x\). Combined with all the \(t_n\), this implies all the theorems of \(PA\). Due to \(GLS\) coherence, this implies that we have a nonzero chance of sampling a complete and consistent theory. This is impossible.
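
The identity being used here is just the chain rule: each \(P_{i+1}\) is \(P_i\) conditioned on \(\Box t_i\), so the product telescopes into a single joint probability under the prior \(P_0\):

\[\prod_{i=0}^{n} P_i(\Box t_i | \Diamond x) = P_0(\Box t_0 \wedge \Box t_1 \wedge \cdots \wedge \Box t_n | \Diamond x),\]

and a positive limit makes the infinite conjunction of the \(\Box t_i\) an event of positive probability conditional on \(\Diamond x\).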

The conclusion is that the construction in my previous post does not need to cleverly look for \(y\) which satisfy the 1/2 cutoff. All that's necessary is to go through all the theorems. Rather than a proof that any \(GLS\)-coherent prior is vulnerable to some troll, this gives a single troll to which they are all vulnerable.

Furthermore, we cannot immunize against any troll which eventually uses all theorems as \(y_n\). This is counter-intuitive, because it means the agent may have much more processing power than the troll, and still be incapable of predicting the troll as it conditions on the troll's outputs. We can imagine a \(GLS\)-coherent prior which also has powerful sequence-prediction distributions baked in. The troll might be enumerating theorems in quite a predictable way, following a low-order polynomial algorithm, while we use some bounded version of Solomonoff induction or something. The fact is, we will have to either (a) make our prior uncomputable, (b) weaken the sequence prediction to the point where it does not converge to predicting probability one for consistent patterns, or (c) make the probability distribution incoherent.


