Intelligent Agent Foundations Forum
by Sam Eisenstat 75 days ago

In counterfactual mugging with a logical coin, AsDT always uses a logical inductor’s best estimate of the utility it would get right now, so it sees the coin as already determined, and sees no benefit from giving Omega money in the cases where Omega asks for it.
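As a toy illustration of that updateful comparison (the payoff numbers here are my own assumptions, not from the discussion), the point is that paying looks good under 50/50 credence in the coin but bad once the inductor has settled the coin:

```python
# Hypothetical counterfactual-mugging payoffs (assumed for illustration):
# if the coin is heads, Omega pays 100 to agents that would pay on tails;
# if tails, Omega asks the agent for 10.
PAYOUT_IF_HEADS = 100
COST_IF_TAILS = 10

def updateful_eu(pays: bool, p_heads: float) -> float:
    """Expected utility of the policy 'pays' given credence p_heads in heads."""
    if pays:
        return p_heads * PAYOUT_IF_HEADS + (1 - p_heads) * (-COST_IF_TAILS)
    return 0.0  # refusing gets nothing either way

# Before the coin is known (credence 0.5), paying looks good:
assert updateful_eu(True, 0.5) > updateful_eu(False, 0.5)   # 45 > 0
# Once the inductor treats the coin as settled tails, paying looks bad:
assert updateful_eu(True, 0.0) < updateful_eu(False, 0.0)   # -10 < 0
```

This is why an agent that evaluates utility at the current stage, after the coin is logically determined, refuses to pay.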

The way I would think about what’s going on is that if the coin is already known at the time that the expectations are evaluated, then the problem isn’t convergent in the sense of AsDT. The agent that pays up whenever asked has a constant action, but it doesn’t receive a constant expected utility. You can think of the averaging as introducing artificial logical uncertainty to make more things convergent, which is why it’s more updateless. (My understanding is that this is pretty close to how you’re thinking of it already.)



by Abram Demski 67 days ago

I think AsDT has a limited notion of convergent problem. It can only handle situations where the optimal strategy is to make the same move each time. Tail-dependence opens this up, partly by looking at the limit of average payoff rather than the limit of raw payoff. This allows us to deal with problems where the optimal strategy is complicated (and even somewhat dependent on what’s done in earlier instances in the sequence).
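A minimal numeric sketch of the raw-vs-average distinction (the payoff sequence here is assumed purely for illustration): per-instance payoffs can oscillate forever, yet the running average still converges, so the averaged criterion treats the problem as convergent.

```python
from itertools import accumulate

# Assumed toy sequence: the logical coin alternates, so the raw payoff
# oscillates between 100 (heads) and -10 (tails) and has no limit.
payoffs = [100 if i % 2 == 0 else -10 for i in range(10_000)]

# The running average, by contrast, settles to (100 - 10) / 2 = 45.
running_avg = [s / (n + 1) for n, s in enumerate(accumulate(payoffs))]

assert payoffs[-2:] == [100, -10]          # raw payoffs still oscillating
assert abs(running_avg[-1] - 45.0) < 1e-9  # the average has converged
```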

I wasn’t thinking of it as introducing artificial logical uncertainty, but I can see it that way.


by Sam Eisenstat 66 days ago

Yeah, I like tail dependence.

There’s this question of whether, for logical uncertainty, we should think of it more as trying to “un-update” from a more logically informed perspective, rather than trying to use some logical prior that exists at the beginning of time. Maybe you’ve heard such ideas from Scott? I’m not sure that’s the right perspective, but it’s what I’m alluding to when I say you’re introducing artificial logical uncertainty.


by Abram Demski 65 days ago

I don’t think it’s much like un-updating. Un-updating takes a specific fact we’d like to pretend we don’t know. Plus, the idea there is to back up the inductor. Here I’m looking at average performance as estimated by the latest stage of the inductor. The artificial uncertainty is more like pretending you don’t know which problem in the sequence you’re at.



