by Stuart Armstrong 11 days ago

> but the agent incorrectly observes the action

It's a bit annoying that this has to rely on an incorrect observation. Why not replace the human action, in state $$s_2$$, with a simple automated system that chooses $$a_1^H$$? That's an easy mistake to make while programming, and the agent has no fundamental understanding of the difference between the human and an imperfect automated system.

Basically, if the human acts in perfect accordance with their preferences, and if the agent correctly observes and learns this, the agent will converge on the right values. You get wireheading by removing "correctly observes", but I think removing "the human acts in perfect accordance with their preferences" is a better example of wireheading.
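As a toy illustration of the failure mode (this is my own sketch, not from the post): suppose the agent does a Bayesian update over two candidate human reward functions, assuming it is watching a noisily-rational human. If the "human" is silently replaced by a scripted system that always emits a fixed action (the $$a_1^H$$ above), the agent confidently converges on whatever values best explain that fixed action, with no way to tell the difference. All names (`R_A`, `R_B`, `a1`, `a2`, `BETA`) are hypothetical.

```python
import math

# Hypothetical toy setup: two candidate human reward functions over two actions.
REWARDS = {
    "R_A": {"a1": 1.0, "a2": 0.0},  # prefers a1
    "R_B": {"a1": 0.0, "a2": 1.0},  # prefers a2
}
BETA = 2.0  # rationality parameter of the assumed human model


def likelihood(action, hypothesis):
    """P(action | hypothesis) under a Boltzmann-rational human model."""
    r = REWARDS[hypothesis]
    z = sum(math.exp(BETA * v) for v in r.values())
    return math.exp(BETA * r[action]) / z


def posterior(observations):
    """Bayesian update from a uniform prior over the two hypotheses."""
    post = {h: 1.0 for h in REWARDS}
    for a in observations:
        for h in post:
            post[h] *= likelihood(a, h)
    total = sum(post.values())
    return {h: p / total for h, p in post.items()}


# Case 1: actions generated by the real human, whose true preferences are R_B.
# The posterior concentrates on R_B -- the agent learns the right values.
print(posterior(["a2"] * 20))

# Case 2: the human is replaced by an automated system that always emits a1,
# regardless of anyone's preferences. The agent's model cannot distinguish
# this from a human who genuinely prefers a1, so it confidently converges
# on the wrong values.
print(posterior(["a1"] * 20))
```

The point of the sketch is that nothing in the agent's observation model flags Case 2 as anomalous: "human acting on their preferences" and "imperfect automated system" generate data in exactly the same format.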
