Intelligent Agent Foundations Forum
by Alex Mennen 25 days ago

When modeling the incentives to change the probabilities of events, it probably makes sense to model the payoff from changing those probabilities and the cost of changing them separately. You’d expect someone to alter the probabilities if they gain more in expectation from their bets than the alteration costs them. If someone bets on an event and then changes the probability that it occurs from \(p\) to \(q\), their expected payoff is \(\frac{q}{p}-1\) times their investment. So in a prediction market with \(n\) possible outcomes, the maximum expected payoff you can get from changing the probability distribution from \((p_1,\dots,p_n)\) to \((q_1,\dots,q_n)\) is proportional to \(\max_i\frac{q_i}{p_i}-1\), achieved by betting everything on the outcome whose probability you raise the most in relative terms.
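As an illustrative sketch (the function name and example numbers are mine, not from the comment), the payoff formula above can be computed directly: bet on the outcome whose probability rises the most in relative terms.

```python
def max_expected_payoff(p, q):
    """Maximum expected payoff per unit invested when market prices are p
    and the (post-manipulation) outcome distribution is q: max_i(q_i/p_i) - 1.
    """
    return max(qi / pi for pi, qi in zip(p, q)) - 1.0

# Shifting a uniform 3-outcome market toward outcome 0:
p = [1/3, 1/3, 1/3]
q = [1/2, 1/4, 1/4]
max_expected_payoff(p, q)  # (1/2)/(1/3) - 1 = 0.5
```

If the distribution is unchanged (\(q = p\)), every ratio is 1 and the expected payoff is 0, as expected.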

The cost of changing a probability distribution seems harder to model, but the Fisher information metric might be a good crude estimate of how difficult you should expect it to be to move the distribution over outcomes from one distribution to another.
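For discrete distributions, the geodesic distance under the Fisher information metric has the standard closed form \(2\arccos\sum_i\sqrt{p_i q_i}\) (the Fisher–Rao distance on the probability simplex). A minimal sketch of this cost proxy, with the function name my own:

```python
import math

def fisher_rao_distance(p, q):
    """Fisher-Rao geodesic distance between two discrete distributions:
    2 * arccos(sum_i sqrt(p_i * q_i)).
    """
    # Bhattacharyya coefficient; clip to guard against rounding past 1.
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    return 2.0 * math.acos(min(1.0, bc))

fisher_rao_distance([1/3, 1/3, 1/3], [1/2, 1/4, 1/4])
```

The distance is 0 when \(q = p\) and reaches its maximum of \(\pi\) for distributions with disjoint support, so under this proxy small perturbations are cheap and wholesale reversals are maximally expensive.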


