Intelligent Agent Foundations Forum
by Stuart Armstrong 635 days ago

Ok, I think we need to distinguish several things:

  1. In general, comparing \(U\) with \(V\), or \(U - 1000\) with \(V\), is problematic: utility functions are only defined up to a constant shift (and positive scaling), so there should be some sort of normalisation process before any utility functions are compared.

  2. Within a compound utility function, the AI will deliberately choose the branch whose utility is easiest to satisfy (see the numerical sketch after this list).

  3. Is there a normalisation procedure that would also normalise between the branches of a compound utility function? A normalisation chosen for comparing distinct utilities might also give us a normalisation between branches of compound utilities.
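
To make points 1 and 2 concrete, here is a minimal numerical sketch (the payoff values are made up for illustration): shifting \(U\) by a constant never changes which policy is optimal for \(U\) alone, but inside a compound utility it can change which branch the AI chooses to steer towards.

```python
# Hypothetical payoff numbers: a constant shift never changes which policy
# maximises a single utility function, but it can change which branch of a
# compound utility the AI steers towards.

best_U_given_X = 3.0       # best achievable U if the AI forces event X
best_V_given_not_X = 5.0   # best achievable V if the AI prevents X

def chosen_branch(u_shift):
    """Branch the AI picks when U is replaced by U + u_shift."""
    if best_U_given_X + u_shift > best_V_given_not_X:
        return "X (satisfy U)"
    return "not-X (satisfy V)"

print(chosen_branch(0.0))     # not-X (satisfy V)
print(chosen_branch(1000.0))  # X (satisfy U) -- the shift flips the choice
```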



by Jessica Taylor 633 days ago

  1. Note that IRL (inverse reinforcement learning) is invariant to translating a possible utility function by a constant, so this kind of normalization doesn’t have to be baked into the algorithm (see the first sketch after this list).
  2. This is true.
  3. The most natural normalization procedure is to look at how the human is trying, or not trying, to affect the event X (as I said in the second part of my comment). If the human never tries to affect X either way, then the AI normalizes the utility functions so that it, too, has no incentive to affect X (see the second sketch below).
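
A minimal sketch of point 1, assuming a Boltzmann-rational model of the human (a standard choice in IRL, though not the only one): the softmax likelihood of the observed action is unchanged when a constant is added to every utility value, so IRL cannot distinguish \(U\) from \(U + c\).

```python
import numpy as np

def action_likelihoods(utilities, beta=1.0):
    """Boltzmann-rational choice probabilities over the available actions."""
    scores = beta * np.asarray(utilities, dtype=float)
    scores -= scores.max()            # for numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

u = [1.0, 2.5, 0.3]                                # hypothetical action utilities
print(action_likelihoods(u))
print(action_likelihoods([x - 1000.0 for x in u])) # identical: the shift cancels
```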
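
And a rough sketch of the normalization in point 3 (my reading of the proposal, not a worked-out algorithm): if the observed human behaviour shows no attempt to influence X, choose the constant offset between the branches so that the AI's best achievable value is the same on either branch, leaving it no incentive to affect X.

```python
def branch_offset(best_U_given_X, best_V_given_not_X):
    """Offset c to add to V so the best achievable value is equal on both
    branches -- one simple way to cash out 'no incentive to affect X'."""
    return best_U_given_X - best_V_given_not_X

# Hypothetical best achievable values on each branch.
best_U_given_X, best_V_given_not_X = 3.0, 5.0
c = branch_offset(best_U_given_X, best_V_given_not_X)
print(best_U_given_X, best_V_given_not_X + c)   # 3.0 3.0 -- the AI is indifferent
```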
