# Change utility, reduce extortion

by Stuart Armstrong

EDIT: This method is not intended to solve extortion, just to reduce the likelihood of extremely terrible outcomes (and to slightly reduce vulnerability to extortion). A full solution to the extortion problem remains elusive. However, there are crude hacks we can use to mitigate the downside.

Suppose we figured out that a friendly AI (FAI) should be maximising an unbounded utility function $$U$$. The extortion risk is that another AI could threaten the FAI with unbounded disutility if it didn't go along with its plans. This gives the extorting AI (the EAI) a lot of leverage, and things could end very badly if the EAI acts on its threat.

To combat this, we first have to identify a level $$z$$ of utility that is a lower bound on what $$U$$ could ever reach naturally and realistically. By "naturally" we mean that pushing $$U$$ below $$z$$ would require not just incompetence or indifference, but some AI actively and deliberately arranging the lowering of $$U$$. And "realistically" just means we're confident that the odds of $$U$$ dropping below $$z$$ by chance, or of a $$U$$-minimising AI existing, are exceedingly low.

Then we cut off $$U$$ at the level $$z$$, replacing $$U$$ with $$U'=\max(U,z)$$. On a graph of $$U'$$ against $$U$$, the curve follows the diagonal above $$z$$ and is flat below it; the red line marks $$z$$.

What's the consequence of this? First of all, it ensures that no EAI would threaten to reduce $$U$$ (the utility we really care about) below $$z$$, because that is no threat to the FAI. This reduces the leverage of the EAI, and reduces the impact of its acting on its threat. Since levels of $$U$$ below $$z$$ are exceedingly unlikely to happen by chance, the fact that the FAI has the wrong utility below $$z$$ shouldn't affect its performance much. And even in that zone, the FAI is still motivated to climb $$U$$ back above $$z$$.

But we may still feel unhappy about the flatness of that curve, and want the FAI to still prefer higher values of $$U$$ to exceedingly low ones. If so, we can replace $$U$$ with a $$U''$$ that agrees with $$U$$ above $$z$$ but, below $$z$$, never falls beneath a floor at $$z-1$$ (the blue line on the graph of $$U''$$). In this case, the EAI will not seek to reduce $$U$$ below $$z-1$$ (in fact, it will specifically target that value), while the FAI retains the correct ordering of lower values of $$U$$; see the sketch below. The utility is weird around $$z$$, granted, but that is a region the FAI would not want to be in, and would almost certainly not reach by accident.

Though this method does not eliminate the threat of extortion, it does seem to reduce its impact.
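
To make the two constructions concrete, here is a minimal sketch in Python. $$U'=\max(U,z)$$ is taken directly from the post; the shape of $$U''$$ below $$z$$ is an assumption, since the original graphs are not reproduced here. The exponential form used is just one candidate with the stated properties: it agrees with $$U$$ above $$z$$, stays strictly increasing below $$z$$ (so lower values of $$U$$ are still ordered correctly), and never falls below the floor at $$z-1$$.

```python
import math

def u_prime(u: float, z: float) -> float:
    """U' = max(U, z): flat below the cutoff z, so a threat to push
    U below z carries no weight with the FAI."""
    return max(u, z)

def u_double_prime(u: float, z: float) -> float:
    """A candidate U'' (assumed form): agrees with U above z; below z it
    remains strictly increasing in U but approaches the floor z - 1, so
    an extortionist can destroy at most one unit of U'' below z."""
    if u >= z:
        return u
    return z - 1 + math.exp(u - z)  # lies in (z - 1, z); tends to z - 1 as u -> -infinity

if __name__ == "__main__":
    z = 0.0
    for u in [2.0, 0.5, 0.0, -1.0, -10.0]:
        print(f"U = {u:6.1f}   U' = {u_prime(u, z):6.1f}   U'' = {u_double_prime(u, z):8.4f}")
```

Whatever the exact shape, the point is the bound: under any such $$U''$$, the worst threat the EAI can carry out below $$z$$ is worth at most one unit of utility to the FAI, which is the sense in which the modification caps its leverage.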
