Intelligent Agent Foundations Forum
A permutation argument for comparing utility functions
discussion post by Stuart Armstrong 147 days ago | 2 comments

When doing intertheoretic utility comparisons, there is one clear and easy case: when everything is exactly symmetric.

This happens when, for instance, there exist \(u\) and \(v\) such that \(p([u])=p([v])=0.5\), and there exists a map \(\sigma : \mathbb{S}\to\mathbb{S}\) such that \(\sigma^2\) is the identity (hence \(\sigma\) is an involution) and, for all \(s\in\mathbb{S}\), \(u(s)=v(\sigma(s))\).

Note that this implies that \((u(s),v(s))=(v(\sigma(s)),u(\sigma(s)))\), so \(\sigma\) is essentially a ‘reflection’.

Then, since everything is so symmetric, we can say there is no way of distinguishing \(u\) from \(v\), so the correct approach is to maximise \([u+v]\).
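
As a minimal sketch of this setup in Python; the strategy set, the values of \(u\), and the particular involution \(\sigma\) are illustrative assumptions, not taken from the post:

```python
# Sketch of the symmetric case (illustrative strategies and values).
S = [0, 1, 2, 3]
u = {0: 1.0, 1: 3.0, 2: 0.0, 3: 2.0}
sigma = {0: 1, 1: 0, 2: 3, 3: 2}     # swaps in pairs: sigma(sigma(s)) = s, an involution
v = {s: u[sigma[s]] for s in S}      # v(s) = u(sigma(s)), hence also u(s) = v(sigma(s))

# The symmetry (u(s), v(s)) = (v(sigma(s)), u(sigma(s))) holds:
assert all((u[s], v[s]) == (v[sigma[s]], u[sigma[s]]) for s in S)

# With p([u]) = p([v]) = 0.5, the prescription is to maximise u + v.
best = max(S, key=lambda s: u[s] + v[s])
print(best, u[best] + v[best])
```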

See the following graph, with the strategy to be followed marked in red:

[Figure: Permutation]

Symmetry is good as far as it goes, but it is very fragile: it says nothing about what happens when \(p([u])=49/100\) and \(p([v])=51/100\), for instance.

There is an argument, however, that resolves the unequal probability cases and extends the results to non-symmetric cases.

Consider for instance the case where \(u\) and \(v\) are as follows, for \(5\) strategies in \(\mathbb{S}\):

Nothing obvious springs to mind as to what the best normalisation process is – the setup is clearly unsymmetrical, and four of the five options are on the Pareto boundary. But let’s rescale and translate the utilities:

This is still unsymmetrical, but note that \(u\) and \(v\) now take the same set of values: \(-1\), \(0.5\), \(2\), \(3\), and \(3.5\).

Thus there is a permutation \(\rho:\mathbb{S}\to\mathbb{S}\) such that for all \(s\in\mathbb{S}\), \(u(s)=v(\rho(s))\).
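
As a sketch of this step: the shared value set \(\{-1, 0.5, 2, 3, 3.5\}\) comes from the rescaled example above, but which strategy receives which value is an assumption here (the original figure is not reproduced); \(\rho\) can then be recovered by matching values:

```python
# Recover a permutation rho with u(s) = v(rho(s)) by value-matching.
S = [0, 1, 2, 3, 4]
u = {0: -1.0, 1: 0.5, 2: 2.0, 3: 3.0, 4: 3.5}   # assumed assignment of values
v = {0: 3.5, 1: 2.0, 2: -1.0, 3: 0.5, 4: 3.0}   # same value set, assumed order

# Match each s to the t with v(t) = u(s); if values repeated,
# any consistent matching would do.
rho = {s: next(t for t in S if v[t] == u[s]) for s in S}
assert all(u[s] == v[rho[s]] for s in S)
print(rho)
```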

Another type of uncertainty

This permutation \(\rho\) allows us to transform uncertainty about one’s own values (which we don’t know how to handle) into another type of uncertainty (which we do know how to handle).

How so? Let \(\mathbb{S}'\) be another copy of \(\mathbb{S}\), but with different labels, and let \(i:\mathbb{S}\to\mathbb{S}'\) be the ‘identity’ map that sends each element of \(\mathbb{S}\) to its relabelled copy in \(\mathbb{S}'\).

Then instead of seeing \(u\) and \(v\) as different utility functions on \(\mathbb{S}\), we can see them as both being the same utility function \(w\) on \(\mathbb{S}'\), with uncertainty over a map \(m\) from \(\mathbb{S}\) to \(\mathbb{S}'\). This map \(m\) is \(i\) with probability \(p([u])\) and \(i\circ\rho\) with probability \(p([v])\).

Thus the agent should see itself as a \(w\)-maximiser, with standard uncertainty over the map \(m\) (this could be seen as uncertainty over the consequences of choosing a strategy). This means it will maximise \([p([u])u+p([v])v]\), as long as \(u\) and \(v\) are related by a permutation on \(\mathbb{S}\).
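
As a sketch of the resulting decision rule (the probabilities here are illustrative, and the utilities are the assumed ones from the previous sketch):

```python
# After the re-description, the agent is a w-maximiser with ordinary
# uncertainty over the map m, which collapses to maximising
# p([u]) * u + p([v]) * v over strategies.
p_u, p_v = 0.49, 0.51                            # illustrative probabilities
S = [0, 1, 2, 3, 4]
u = {0: -1.0, 1: 0.5, 2: 2.0, 3: 3.0, 4: 3.5}
v = {0: 3.5, 1: 2.0, 2: -1.0, 3: 0.5, 4: 3.0}

best = max(S, key=lambda s: p_u * u[s] + p_v * v[s])
print(best, p_u * u[best] + p_v * v[best])
```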

In this particular case, if \(p([u])\) and \(p([v])\) are close to \(0.5\), this means that the red strategy will be selected:

Note that permutations include the symmetric case, where \(\rho\) is an involution, so the symmetry argument now extends to cases where \(p([u])\neq p([v])\).

Consequences

For permutations, this argument implies that the correct approach is to use individual normalisations. Indeed, every normalisation presented here would reach the same result.
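
To see why they agree, here is a sketch under one assumed choice of individual normalisation (shift to mean zero, scale to unit variance): any normalisation that depends only on the multiset of attainable values treats \(u\) and \(v\) identically whenever one is a permutation of the other:

```python
# Any normalisation computed from the multiset of values gives u and v
# the same shift and scale when they are permutations of each other.
from statistics import mean, pstdev

def normalise(util, S):
    vals = [util[s] for s in S]
    mu, sd = mean(vals), pstdev(vals)           # depend only on the multiset of values
    return {s: (util[s] - mu) / sd for s in S}

S = [0, 1, 2, 3, 4]
u = {0: -1.0, 1: 0.5, 2: 2.0, 3: 3.0, 4: 3.5}
v = {0: 3.5, 1: 2.0, 2: -1.0, 3: 0.5, 4: 3.0}

# The normalised functions are again permutations of each other.
nu, nv = normalise(u, S), normalise(v, S)
assert sorted(nu.values()) == sorted(nv.values())
```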

This doesn’t prove that individual normalisation is necessarily correct – you can imagine a general system that only gives individual normalisations on permutation problems – but it is suggestive nonetheless.



by Owen Cotton-Barratt 147 days ago

I’m not sure I’ve fully followed, but I’m suspicious that you seem to be getting something for nothing in your shift from a type of uncertainty that we don’t know how to handle to a type we do.

It seems to me like you must be making an implicit assumption somewhere. My guess is that this is where you used \(i\) to pair \(\mathbb{S}\) with \(\mathbb{S}'\). If you’d instead chosen \(j = i \circ \rho\) as the matching, then you’d have uncertainty over whether \(m\) should be \(j\) or \(\rho^{-1}\circ j\). My guess is that generically this gives different recommendations from your approach.


by Stuart Armstrong 147 days ago | Owen Cotton-Barratt likes this

Nope! That gives the same recommendation (as does the same thing if you pre-compose with any other permutation of \(\mathbb{S}\)). I thought about putting that fact in, but it took up space.

The recommendation given in both cases is just to normalise each utility function individually, using any of the methods that we know (which will always produce equivalent utility classes in this situation).



