Intelligent Agent Foundations Forum
Thoughts on Quantilizers
post by Stuart Armstrong 317 days ago | Ryan Carey and Abram Demski like this

A putative new idea for AI control; index here.

This post will look at some of the properties of quantilizers: when they succeed, and how they might fail.

Roughly speaking, let \(f\) be some true objective function that we want to maximise. We haven’t been able to specify it fully, so instead we have a proxy function \(g\). There is a cost function \(c=g-f\) which measures how much the true objective \(f\) falls short of the proxy \(g\). A quantilizer then chooses actions (or policies) randomly from the top \(n\%\) of actions available, ranking those actions according to \(g\).
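The basic scheme can be sketched in a few lines of Python (an illustrative sketch with a toy integer action space and an identity proxy; the post is about actions or policies in general):

```python
import random

def quantilize(actions, g, n_percent):
    """Choose uniformly at random from the top n% of actions,
    ranked by the proxy objective g (illustrative sketch)."""
    ranked = sorted(actions, key=g, reverse=True)
    k = max(1, int(len(ranked) * n_percent / 100))
    return random.choice(ranked[:k])

# Toy example: actions are integers 0..99, and the proxy g just
# rewards larger numbers, so the top decile is 90..99.
actions = list(range(100))
pick = quantilize(actions, lambda a: a, 10)  # uniform over 90..99
```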

It is plausible that for standard actions or policies, \(g\) and \(f\) are pretty similar. But when we push towards maximising \(g\), the tiny details where \(g\) and \(f\) differ will balloon, and the cost can grow very large indeed.

This could be illustrated roughly by figure I, where \(g\) and \(c\) are plotted against each other; imagine that \(c\) is on a log scale.

The blue areas are possible actions that can be taken. Note a large bunch of actions that are not particularly good for \(g\) but have low cost, a thinner tail of more optimised actions that have higher \(g\) and still have low cost, and a much thinner tail that has even higher \(g\) but high cost. The \(g\)-maximising actions with maximal cost are represented by the red star.

Figure I thus shows a situation ripe for some form of quantilization. But consider figure II:

In figure II, the only way to get high \(g\) is to have high \(c\). The situation is completely unsuited to quantilization: any \(g\)-maximiser, even a quantilizer, will score terribly under \(f\). But that mainly means we have chosen a terrible \(g\).

Now, back to figure I, where quantilization might work, at least in principle. The ideal would be situation Ia; here blue represents actions below the top \(n\%\) cut-off, green those above (which include the edge-case red-star actions, as before):

Here the top \(n\%\) of actions all score a good value under \(g\), and yet most of them have low cost.

But even within the broad strokes of figure I, quantilization can fail. Figure Ib shows a first type of failure:

Here the problem is that the quantilizer lets in too many mediocre actions, so the expectation of \(g\) (and \(f\)) is mediocre; with a smaller \(n\%\), the quantilizer would do better.

Another failure mode is figure Ic:

Here the \(n\%\) is too low: all the quantilized solutions have high cost.
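Both failure modes can be reproduced in a toy simulation. The numbers below are hypothetical, chosen to mimic the shape of figure I: many cheap mediocre actions, a thinner cheap tail of optimised actions, and a very thin tail of high-\(g\), high-cost edge cases.

```python
import random

random.seed(0)

# Hypothetical (g, c) pairs shaped like figure I: many cheap mediocre
# actions, a thinner cheap tail of optimised actions, and a very thin
# tail of high-g, high-cost actions (the red-star edge cases).
actions = (
    [(random.uniform(0, 5), 0.1) for _ in range(900)]    # mediocre, cheap
    + [(random.uniform(5, 9), 0.2) for _ in range(95)]   # optimised, cheap
    + [(random.uniform(9, 10), 20.0) for _ in range(5)]  # edge cases, costly
)

def g(a):
    return a[0]         # proxy value

def f(a):
    return a[0] - a[1]  # true value: f = g - c

def expected_f(n_percent):
    """Expected true value of a quantilizer sampling the top n% by g."""
    ranked = sorted(actions, key=g, reverse=True)
    k = max(1, int(len(ranked) * n_percent / 100))
    top = ranked[:k]
    return sum(f(a) for a in top) / len(top)

# n too small (figure Ic): only the costly edge cases remain.
# n too large (figure Ib): mediocre actions drag the expectation down.
# An intermediate n captures the optimised, low-cost tail.
```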

Another quantilizer design

An idea I had some time ago was that, instead of taking the top \(n\%\) of actions, the quantilizer would instead choose among the actions whose \(g\)-value is within \(n\%\) of the maximal \(g\)-value. Such a design would be less likely to encounter situations like Ib, but more likely to face situations like Ic.
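That alternative can be sketched as follows (a hypothetical sketch; note that the percentage threshold as written assumes positive \(g\)-values):

```python
import random

def top_value_quantilize(actions, g, n_percent):
    """Choose uniformly among actions whose g-value is within n% of
    the maximal g-value (assumes positive g; illustrative sketch)."""
    best = max(g(a) for a in actions)
    threshold = best * (1 - n_percent / 100)
    eligible = [a for a in actions if g(a) >= threshold]
    return random.choice(eligible)

# With actions 0..100 and g the identity, a 10% margin below the
# maximum of 100 makes every action with g >= 90 eligible.
pick = top_value_quantilize(range(101), lambda a: a, 10)
```

Unlike the rank-based version, the number of eligible actions here depends on the shape of the \(g\) landscape: it can never be forced to include a fixed fraction of mediocre actions (helping with Ib), but it can end up concentrated entirely on a thin, costly tail (worsening Ic).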

What can be done?

So, what can be done to improve quantilizers? I’ll be posting some thoughts as they develop, but there are two ideas that spring to mind immediately. First of all, we can use CUA oracles to investigate the shape of the space of actions, at least from the perspective of \(g\) (\(c\), like \(f\), cannot be calculated explicitly).

Secondly, there’s an idea I had around low-impact AIs. Basically, it was to ensure that there was some action the AI could take that could easily reach some approximation of its goal. For instance, give the AI a utility function that encourages it to build one paperclip, and cap that utility at one. Then scatter around some basic machinery to melt steel, stretch it, give the AI some manipulator arms, and so on. The idea is to ensure there is at least one safe policy that gives the AI high expected utility. If there is one such policy, there are probably many similar safe, high-expectation policies in its vicinity. Then quantilization should work, probably best in its ‘within \(n\%\) of the maximal policy’ version (which works well because we know the cap of the utility function, and hence have a cap on the value of the maximal policy).
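The interaction between the cap and the ‘within \(n\%\)’ version can be sketched as follows (hypothetical policies and utility; the point is that the cap gives a known upper bound, so the threshold can be set without searching for the best policy):

```python
import random

def capped_quantilize(policies, utility, cap=1.0, n_percent=5):
    """Quantilize 'within n% of the maximum', where the maximum is
    known in advance because utility is capped (illustrative sketch)."""
    threshold = cap * (1 - n_percent / 100)
    eligible = [p for p in policies
                if min(utility(p), cap) >= threshold]
    if not eligible:
        return None  # nothing comes close enough to the cap
    return random.choice(eligible)

# Hypothetical: policies scored directly by expected paperclips built,
# with utility capped at one paperclip.
policies = [0.2, 0.96, 1.0]
chosen = capped_quantilize(policies, lambda p: p)
```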

Now, how do we know that a safe policy exists? We have to rely on human predictive abilities, which can be flawed. But the reason we’re reasonably confident in this scenario is that we believe we could figure out how to build a paperclip ourselves, given the stuff the AI has lying around. And the AI would presumably do better than us.
