Intelligent Agent Foundations Forum

by Sam Eisenstat, 19 days ago

It’s hard to analyze the dynamics of logical inductors very precisely, so we often have to do things that feel like worst-case analysis, like considering an adversarial trader with sufficient wealth. I think that problems which show up in this sort of analysis can be expected to correspond to real problems in superintelligent agents, though that is a difficult question; the malignancy of the universal prior is part of the reason I expect this.

As to your proposed solution, I don’t see how it would work. Scott is proposing that the trader make conditional contracts, which are in effect voided if the event they are conditional on doesn’t happen, so the trader doesn’t actually lose anything in that case. (It isn’t discussed in this post, but conditional contracts can be built out of the usual sort of bets, with payoffs given by the definition of conditional probability.) So, in order to make the trader lose money, the events need to actually happen sometimes, not just be expected by the market to happen with some non-negligible probability.
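To make the parenthetical concrete, here is a minimal sketch of one standard way to assemble a conditional contract out of ordinary bets (my own illustration, not something from Scott’s post; the function name and example prices are made up): a bet on A conditional on B at price p is equivalent to buying one share of “A and B” and selling p shares of “B”.

    # A conditional bet on A given B at price p, built from two ordinary bets:
    # buy one share of "A and B", sell p shares of "B".
    def conditional_bet_payoff(a, b, price):
        share_a_and_b = 1.0 if (a and b) else 0.0
        share_b = 1.0 if b else 0.0
        return share_a_and_b - price * share_b

    # If B doesn't happen, the payoff is 0 whatever A is -- the contract is voided.
    print(conditional_bet_payoff(a=True, b=False, price=0.5))   # 0.0
    print(conditional_bet_payoff(a=False, b=False, price=0.5))  # 0.0
    # If B happens, it behaves like an ordinary bet on A at the given price,
    # which is fair exactly when price = P(A and B) / P(B) = P(A | B).
    print(conditional_bet_payoff(a=True, b=True, price=0.5))    # 0.5
    print(conditional_bet_payoff(a=False, b=True, price=0.5))   # -0.5

The point about losing money then falls out of the payoff table: the only branch with a negative payoff is the one where B actually happens and A doesn’t, so the market merely expecting B with some probability never costs the trader anything.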


