Intelligent Agent Foundations Forum
Change utility, reduce extortion
post by Stuart Armstrong 146 days ago | 3 comments

A putative new idea for AI control; index here.

EDIT: This method is not intended to solve extortion, just to reduce the likelihood of extremely terrible outcomes (and to slightly reduce the vulnerability to extortion).

A full solution to the extortion problem remains elusive. However, there are crude hacks we can use to mitigate the downside.

Suppose we figured out that a friendly AI should be maximising an unbounded utility function \(U\). The extortion risk is that another AI could threaten the FAI with unbounded disutility if the FAI didn’t go along with its plans. This gives the extorting AI – the EAI – a lot of leverage, and things could end very badly if the EAI acts on its threat.

To combat this, we first have to figure out a level \(z\) of utility that is a lower bound on what \(U\) could ever reach naturally and realistically.

By “naturally” we mean that pushing \(U\) below \(z\) would require not just incompetence or indifference, but some AI actively and deliberately arranging the lowering of \(U\). And “realistically” just means that we’re confident that the odds of \(U\) falling below \(z\) by chance, or of a \(U\)-minimising AI existing, are exceedingly low.

Then what we can do is cut off \(U\) at the \(z\) level, replacing \(U\) with \(U'=\max(U,z)\). On the graph of \(U'\) against \(U\), \(z\) is indicated by the red line.
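A minimal sketch of the truncation, in Python (the numerical floor below is purely illustrative):

```python
def truncate_utility(u: float, z: float) -> float:
    """U' = max(U, z): cut the utility off at the floor z.

    Every outcome below z is worth exactly z to the agent, so a
    threat to push U below z carries no force.
    """
    return max(u, z)

# Illustrative (hypothetical) floor:
z = -1000.0
print(truncate_utility(-50.0, z))   # -50.0   -- above z, unchanged
print(truncate_utility(-1e9, z))    # -1000.0 -- below z, clipped to z
```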

What’s the consequence of this? First of all, it ensures that no EAI would threaten to reduce \(U\) (the utility we really care about) below \(z\), because that is no threat to the FAI. This reduces the EAI’s leverage, and reduces the damage if it does act on its threat.

Since levels of \(U\) below \(z\) are exceedingly unlikely to happen by chance, the fact that the FAI has the wrong utility below \(z\) shouldn’t affect its performance much. And, even in that zone, the AI is still motivated to climb \(U\) back above \(z\).

But we may still feel unhappy about the flatness of that curve, and want the FAI to still prefer higher \(U\) even at exceedingly low values. If so, we can replace \(U\) with \(U''\), as in the second graph (the blue line is at \(z-1\)).

In this case, the EAI will not seek to reduce \(U\) below \(z-1\) (in fact, it will specifically target that value), while the FAI has the correct ordering of lower values of \(U\). The utility is weird around \(z\), granted, but that is a region the FAI would not want to be in and would almost certainly not reach by accident.
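Since the graph of \(U''\) is not reproduced here, the sketch below shows just one possible shape consistent with the description, assuming a piecewise form: identity above \(z\), a steep slide down to a low point at \(z-1\), and below \(z-1\) a tail that still increases with \(U\) but concedes only a small bounded amount. The parameters `drop` and `tail` are hypothetical, not from the original post.

```python
import math

def modified_utility(u: float, z: float,
                     drop: float = 2.0, tail: float = 0.1) -> float:
    """A sketch of U'' (exact shape assumed; `drop` and `tail` are
    illustrative parameters, not from the original post).

    - u >= z:        U'' = U, unchanged.
    - z-1 <= u < z:  steep linear slide from (z, z) down to z - drop
                     at u = z - 1.
    - u < z-1:       still strictly increasing in u, so the FAI keeps
                     the correct ordering of low values, but the total
                     further loss is bounded by `tail` -- the threat
                     effectively bottoms out at u = z - 1.
    """
    if u >= z:
        return u
    if u >= z - 1:
        return z - drop * (z - u)
    # bounded tail: tends to (z - drop) - tail as u -> -infinity
    return (z - drop) - tail * (1.0 - math.exp(u - (z - 1)))
```

With `tail` small, essentially all of the threat value is already realised at \(z-1\), so the EAI gains almost nothing by pushing \(U\) any lower, while the FAI still strictly prefers higher \(U\) among the low values.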

Though this method does not eliminate the threat of extortion, it does seem to reduce its impact.




