Predicting HCH using expert advice
post by Jessica Taylor 545 days ago | Ryan Carey, Patrick LaVictoire and Paul Christiano like this | 1 comment

Summary: in approximating a scheme like HCH, we would like some notion of “the best prediction we can get given available AI capabilities”. There’s a natural notion of “the best prediction of a human we should expect to get”. In general this doesn’t yield good predictions of HCH, but it does yield an HCH-like computation model that seems useful.


(thanks to Ryan Carey, Paul Christiano, and some people at the November veteran’s workshop for helping me work through these ideas)

Suppose we would like an AI system to predict what HCH would do. The AI system is limited; it doesn’t have a perfect prediction of a human. What’s the best we should expect it to do?

As a simpler sub-question, we can ask what the best prediction for a single query to a human is. Let \(H : \text{String} \rightarrow \Delta \text{String}\) be the “true human”: a stochastic function mapping a question to a distribution over answers (say, over quantum uncertainty). How “good” of a prediction function \(\hat{H} : \text{String} \rightarrow \Delta \text{String}\) should we expect to get?

The short answer is that we should expect that, for any question \(x\), \(\hat{H}(x)\) should be within \(\epsilon\) of some pretty good prediction of \(H(x)\).

Why within \(\epsilon\)?

(feel free to skip this section if you’re willing to buy the previous paragraph)

We will create an online prediction system that on each iteration \(i\) takes in a question \(X_i : \text{String}\) and outputs either a distribution over answers \(Q_i : \Delta \text{String}\), or \(\bot\) to indicate ambiguity. If it outputs \(\bot\), the prediction system observes \(Y_i \sim H(X_i)\). We will construct this online prediction system from a bunch of untrusted experts \(P_1, ..., P_K : \Delta(\text{String} \rightarrow \Delta \text{String})\), each of which is a probability distribution over what the human \(H\) might be.

Suppose one expert is “correct” in that in fact \(H \sim P_k\) for some \(k\). Then KWIK learning will succeed in creating an online prediction system such that, with high probability, for each \(i\) on which \(Q_i\) (and not \(\bot\)) is output, \[ \| Q_i - P_k(Y_i = \cdot \mid \text{the data known at time } i) \|_1 < \epsilon.\] That is, the predictions \(Q_i\) will be close in total variation distance to the “correct predictions” that \(P_k\) makes. Furthermore, \(\bot\) is output at most \(\widetilde{O}(K/\epsilon^2)\) times; this measures the amount of training data required.
For the rest of this post, assume that after setting up the KWIK learner we do active learning (searching for inputs \(x\) on which the learner outputs \(\bot\)) until the learner no longer outputs \(\bot\), and then define \(\hat{H}\) using the learner’s current state. If we didn’t do this, there would be no fixed stochastic function \(\hat{H}\), because the state of the learner would keep changing over time.
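To make the protocol concrete, here is a minimal Python sketch of the kind of combiner described above. It is not the KWIK algorithm from the literature: the elimination rule (dropping experts whose cumulative log-likelihood falls too far behind the best credible expert) and all names and thresholds are illustrative assumptions. It just shows the shape of the interface: predict when the credible experts agree to within \(\epsilon\) in \(L_1\) distance, otherwise output \(\bot\) and request a label.

```python
# A simplified sketch of the online prediction protocol, not the actual
# KWIK analysis. Each "expert" is a Bayesian predictor over the human H:
# given a question and the data observed so far, it returns a distribution
# over answers. None plays the role of "bot".

import math
from typing import Callable, Dict, List, Optional, Tuple

Dist = Dict[str, float]                              # answer -> probability
Expert = Callable[[str, List[Tuple[str, str]]], Dist]  # (question, history) -> Dist


def l1_distance(p: Dist, q: Dist) -> float:
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)


class OnlinePredictor:
    def __init__(self, experts: List[Expert], epsilon: float, ratio_bound: float = 20.0):
        self.experts = list(experts)
        self.epsilon = epsilon
        self.ratio_bound = ratio_bound               # elimination threshold (illustrative)
        self.log_liks = [0.0] * len(experts)
        self.credible = set(range(len(experts)))
        self.history: List[Tuple[str, str]] = []     # observed (question, answer) pairs

    def predict(self, question: str) -> Optional[Dist]:
        preds = [self.experts[k](question, self.history) for k in self.credible]
        if all(l1_distance(p, q) <= self.epsilon for p in preds for q in preds):
            # Credible experts agree to within epsilon: return (say) their average.
            answers = set().union(*preds)
            return {a: sum(p.get(a, 0.0) for p in preds) / len(preds) for a in answers}
        return None                                  # "bot": ambiguous, request a label

    def observe(self, question: str, answer: str) -> None:
        # Called after predict() returned None and we sampled answer ~ H(question).
        for k in self.credible:
            p = self.experts[k](question, self.history).get(answer, 1e-12)
            self.log_liks[k] += math.log(p)
        self.history.append((question, answer))
        best = max(self.log_liks[k] for k in self.credible)
        self.credible = {k for k in self.credible
                         if best - self.log_liks[k] <= self.ratio_bound}
```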

The assumptions in this section (especially that one expert is correct) are pretty sketchy, but I expect the basic picture of “predictions should be good to within \(\epsilon\)” to work out.

Predicting collections of humans is hard

Now that we have an approximate prediction of a human, we can use it to approximate a collection of humans. For example, we might want to predict \(H(\text{“a''}) + H(\text{“b''})\), i.e. the result of asking \(H\) the questions “a’’ and “b’’ and summing the answers. In general, we can consider any function \(g : (\text{String} \rightarrow \Delta \text{String}) \rightarrow \Delta \text{String}\) which computes something by querying a stochastic function a bunch of times, and consider the problem of predicting \(g(H)\).

The obvious way to predict \(g(H)\) is \(g(\hat{H})\); in this case, \(\hat{H}(\text{“a''}) + \hat{H}(\text{“b''})\). But this can be highly inaccurate even if \(\hat{H}\) is accurate!

Let us say that either \(H(\text{“a''}) = 0\) and \(H(\text{“b''}) = 1\), or \(H(\text{“a''}) = 1\) and \(H(\text{“b''}) = 0\). The AI does not have enough information to distinguish these possibilities; under this uncertainty, it is reasonable to think they are equally likely, so we have \(\hat{H}(\text{“a''}) = \hat{H}(\text{“b''}) = \text{Bernoulli}(0.5)\).

The AI has enough information to conclude that \(H(\text{“a''}) + H(\text{“b''}) = 1\). But the distribution \(\hat{H}(\text{“a''}) + \hat{H}(\text{“b''})\) will put 0.25 probability mass on 0, 0.5 on 1, and 0.25 on 2.
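A quick simulation makes the gap concrete. The functions `true_H_pair` and `H_hat` below are illustrative stand-ins for \(H\) and \(\hat{H}\) in this toy example:

```python
import random
from collections import Counter

def true_H_pair():
    # One of the two perfectly anticorrelated possibilities, chosen by
    # randomness the AI cannot see; either way the two answers sum to 1.
    a = random.randint(0, 1)
    return a, 1 - a

def H_hat(question: str) -> int:
    # The AI's marginal prediction for each question is Bernoulli(0.5).
    return random.randint(0, 1)

N = 100_000
true_sums = Counter(sum(true_H_pair()) for _ in range(N))
pred_sums = Counter(H_hat("a") + H_hat("b") for _ in range(N))

print(true_sums)   # always 1: the true sum is deterministic
print(pred_sums)   # roughly 25% zeros, 50% ones, 25% twos
```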

In general we shouldn’t expect replacing \(H\) with \(\hat{H}\) to work very well; it does not take into account any correlation between \(H(x_1)\) and \(H(x_2)\) for \(x_1 \neq x_2\).

Can we do better by taking into account the fact that the experts described in the previous section have a full joint distribution over all the \(H(x)\) values? Not really; the good expert \(P_k\) cannot be distinguished from \(\hat{H}\) by more than \(\epsilon\) with a single experiment (i.e. a single question-answering round). So if some other expert \(P_{k'}\) predicts that \(H\)’s answers are independent, with each \(H(x)\) distributed as \(\hat{H}(x)\), that expert will not be proven wrong in a single experiment.

Now, the expert might be proven wrong if we expand the notion of “experiment” to include asking \(H\) multiple questions. But this becomes more expensive, and actually doesn’t help much when we’re considering pretty big collections of \(H\) calls (where correlations over a greater number of \(H(x)\) values are important).

I’ve basically given up on the problem of predicting large computations made out of calls to \(H\) using untrusted expert advice. There’s just too much room for an expert to mess with the predictions by changing the correlation structure in ways that are hard to detect with short experiments.

\(g(\hat{H})\) is still useful

Despite the fact that predicting large computations made of calls to \(H\) seems intractable, large computations made of calls to \(\hat{H}\) are useful in their own right. We can think of \(g(\hat{H})\) as a collaboration among many “clones” of a single human, each of whom has a personality sampled from the AI’s distribution over that human’s personality traits. That is, each call to \(\hat{H}\) is considered to be asking a question to an independent sampling of the human’s psychological parameters (sampled from the AI’s information state).

For example, if the AI does not know Bob’s favorite color, then \(\widehat{\text{Bob}}(\text{“what is your favorite color?''})\) will be stochastic. If we consider the computation \(g(H) := [H(\text{“what is your favorite color?''}) = H(\text{“what is your favorite color?''})]\) which asks for \(H\)’s favorite color twice and checks if they are equal, then \(g(\widehat{\text{Bob}})\) will return false a non-negligible percentage of the time.
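Here is a toy sketch of that example; `Bob_hat`, the color list, and the uniform distribution over it are made-up assumptions standing in for the AI’s uncertainty about Bob:

```python
import random

COLORS = ["red", "green", "blue"]

def Bob_hat(question: str) -> str:
    # Each call samples Bob's psychological parameters afresh from the
    # AI's uncertainty, so repeated questions can get different answers.
    if question == "what is your favorite color?":
        return random.choice(COLORS)
    return "I don't know"

def g(H) -> bool:
    # Ask for the favorite color twice and check consistency.
    return H("what is your favorite color?") == H("what is your favorite color?")

trials = 100_000
agreement = sum(g(Bob_hat) for _ in range(trials)) / trials
print(agreement)  # ~ 1/3: the two independent "clones" agree only by coincidence
```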

If we define \(g\) such that \(g(H) = HCH\) (i.e. \(g\) asks its argument how to spawn more copies and so on), then \(g(\hat{H})\) is the equivalent of HCH for clones sampled from the AI’s information state. (See also the notation for HCH variants in this post). The issue with psychological parameters is pretty weird but doesn’t seem to present serious difficulties for most uses of HCH I can think of. I haven’t thought about it a ton, but in general it seems like it should be possible to collaborate with clones of yourself that have slightly different psychological parameters (they’ll only be slightly different if the AI knows a lot about you). I confirmed with Paul Christiano that he is optimistic about \(g(\hat{H})\) being useful and pessimistic about predictions of HCH proper that take correlation into account.
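For concreteness, here is one way such a \(g\) might look as code. The message format for spawning subquestions, the depth limit, and the function names are illustrative assumptions, not a specification of HCH:

```python
from typing import Callable

Human = Callable[[str], str]

def hch(H: Human, question: str, depth: int = 10) -> str:
    # Ask the (predicted) human; it may answer directly or request a
    # subquestion with "ASK: ...", which is answered by a fresh recursive
    # call. With H_hat in place of H, every call to H below is answered by
    # an independently sampled "clone" of the human.
    if depth == 0:
        return H(question)
    response = H(question)
    while response.startswith("ASK: "):
        subquestion = response[len("ASK: "):]
        subanswer = hch(H, subquestion, depth - 1)
        response = H(f"The subquestion {subquestion!r} was answered {subanswer!r}. "
                     f"Now answer or ask again: {question}")
    return response
```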

When considering very large computations \(g(\hat{H})\), we might be concerned that local errors could propagate throughout the computation. But it’s possible to mitigate this by doing something like taking multiple samples of \(\hat{H}(x)\) for some question \(x\) and taking a majority vote, as described in this post.
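A minimal sketch of that mitigation, with the sample count as an illustrative parameter:

```python
from collections import Counter
from typing import Callable

def majority_vote(H_hat: Callable[[str], str], question: str, samples: int = 5) -> str:
    # Take several independent samples of H_hat(question) and return the
    # most common answer, so a rare bad sample is unlikely to propagate.
    answers = [H_hat(question) for _ in range(samples)]
    return Counter(answers).most_common(1)[0][0]
```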

A note on not overestimating probabilities

(feel free to skip this section)

Paul Christiano told me about an idea to get our predictions \(\hat{H}(x)\) to not overestimate the probability of any action by more than a factor of \(1 + \epsilon\), i.e. \[\forall y: Q_i(y) \leq (1 + \epsilon) P_k(Y_i = y | \text{the data known at time $i$})\]

Roughly, this can be done by taking the minimum probability of \(y\) according to all the credible experts, then renormalizing. This seems useful if we’re concerned about \(Q\) predicting rare bad things that \(P_k\) wouldn’t predict. It doesn’t change the nature of the analysis much, though.
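A minimal sketch of that construction, assuming each credible expert reports a distribution over answers (the representation and function name are illustrative):

```python
from typing import Dict, List

Dist = Dict[str, float]  # answer -> probability

def cautious_prediction(expert_preds: List[Dist]) -> Dist:
    # Take, for each answer, the minimum probability assigned by any credible
    # expert, then renormalize; the result cannot put much more mass on any
    # answer than the correct expert does (up to the renormalization factor).
    answers = set().union(*expert_preds)
    mins = {a: min(p.get(a, 0.0) for p in expert_preds) for a in answers}
    total = sum(mins.values())
    if total == 0.0:
        raise ValueError("credible experts share no common support")
    return {a: m / total for a, m in mins.items()}
```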



by Jessica Taylor 11 days ago | link

Note: I currently think that the basic picture of getting within \(\epsilon\) of a good prediction is actually pretty sketchy. I wrote about the sample complexity here. In addition to the sample complexity issues, the requirement is for the predictors to be Bayes-optimal, but Bayes-optimality is not possible for bounded reasoners. This is important because e.g. some adversarial predictor might make very good predictions on some subset of questions (because it’s spending its compute on those specifically), causing other predictors to be filtered out (if those questions are used to determine who the best predictor is). I don’t know what kind of analysis could get the \(\epsilon\)-accuracy result at this point.
