by Sören Mindermann 238 days ago

(x-posted from the Arbital page "Goodhart's Curse")

On "Conditions for Goodhart's curse":

It seems like in AI alignment the curse happens mostly when V is defined in terms of high-level features of the state, which are normally not easily maximized. That is, V is something like a neural network $$V : s \mapsto V(s)$$ where $$s$$ is the state. Now suppose U' is a neural network which outputs the AI's estimate of these features. The AI can then manipulate the state/input to maximize these features; that's just the standard problem of adversarial examples. So the conditions we're looking for seem to be met precisely in the common setting where adversarial examples work to maximize some loss function. One requirement there is that the input space be high-dimensional.

So why doesn't the 2D Gaussian example go wrong? [This is about the example from the Arbital "Goodhart's Curse" page where there is no bound $$\sqrt{n}$$ on $$V$$ and $$U$$.] There are no high-level features to optimize by exploiting the flexibility of the input space.

On the other hand, you don't need a flexible input space to fall prey to the winner's curse. Instead of the high flexibility of the input space, you use the "high flexibility" of the noise when you have many data points: with enough data, the noise will take any possible value, causing the winner's curse. If you care about a feature that is bounded under the real-world distribution while the noise is unbounded, you will find that the most promising-looking data points are actually the ones that maximize the noise.

There is also a noise-free (i.e., no measurement errors) variant of the winner's curse, which suggests another connection to adversarial examples. If you simply have $$n$$ data points and pick the one that maximizes some outcome measure, you can conceptualize this as evolutionary optimization in the input space. Usually adversarial examples are generated by following the gradient in input space; the noise-free winner's curse instead uses evolutionary optimization.
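To make the adversarial-examples point concrete, here is a minimal sketch (the network, its shapes, and the step size are all illustrative assumptions, not from the post): a tiny tanh network stands in for the proxy $$U'$$, and plain gradient ascent on the input $$s$$ plays the role of the AI manipulating the state.

```python
import numpy as np

# Hypothetical proxy U'(s) = w . tanh(W s): a one-layer stand-in for the
# AI's learned estimate of the high-level features of the state s.
rng = np.random.default_rng(0)
d = 1000                                  # high-dimensional input space
W = rng.normal(size=(50, d)) / np.sqrt(d)
w = rng.normal(size=50)

def u_proxy(s):
    return w @ np.tanh(W @ s)

def grad_u_proxy(s):
    h = np.tanh(W @ s)
    return W.T @ (w * (1.0 - h ** 2))     # chain rule through tanh

s = rng.normal(size=d)                    # a typical state
print(f"proxy score before: {u_proxy(s):.1f}")
for _ in range(200):
    s += 0.1 * grad_u_proxy(s)            # climb the proxy, not true V
print(f"proxy score after:  {u_proxy(s):.1f}")  # far above typical inputs
```

The ascent drives the proxy score far above anything seen on typical inputs, while whatever true V the proxy was meant to track is left unconstrained; with small $$d$$ the same loop has far less room to exploit, which is the high-dimensionality requirement above.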
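The noisy winner's curse is easy to simulate. A minimal sketch, assuming bounded uniform true values and unbounded Gaussian measurement noise (both distributions are my choice of illustration):

```python
import numpy as np

# True values V are bounded in [0, 1]; measurement noise is Gaussian and
# unbounded. With many data points, the argmax of the noisy estimate
# U = V + noise is dominated by the noise term.
rng = np.random.default_rng(1)
n = 1_000_000
V = rng.uniform(0.0, 1.0, size=n)         # bounded true values
U = V + rng.normal(0.0, 1.0, size=n)      # unbounded measurement error

i = np.argmax(U)
print(f"estimated value of the winner: {U[i]:.2f}")  # well above the bound on V
print(f"true value of the winner:      {V[i]:.2f}")  # still at most 1
```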
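And the noise-free variant, sketched as pure selection (again, the proxy network is an illustrative assumption): picking the best of $$n$$ sampled inputs under a proxy score is zeroth-order, evolutionary-style optimization in input space, and the winning score climbs with $$n$$ even though no gradient is ever computed.

```python
import numpy as np

# Selection as optimization: sample n inputs, keep the one the proxy
# scores highest. The best score grows with n, with no gradients used.
rng = np.random.default_rng(2)
d = 200
W = rng.normal(size=(50, d)) / np.sqrt(d)
w = rng.normal(size=50)

def u_proxy(s):
    return w @ np.tanh(W @ s)

for n in (10, 1_000, 100_000):
    best = max(u_proxy(rng.normal(size=d)) for _ in range(n))
    print(f"n = {n:>7}: best proxy score {best:.1f}")  # increases with n
```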
