Intelligent Agent Foundations Forum
Futarchy, Xrisks, and near misses
discussion post by Stuart Armstrong 26 days ago | Abram Demski likes this

All the clever ways of getting betting markets to take xrisks into account suffer from one big flaw: the rational xrisk bettor only makes money if the xrisk actually happens.

Now, the problem isn’t that “when everyone is dead, no-one can collect bets”. Robin Hanson has suggested some interesting ideas involving tickets for refuges (shelters from the disaster), and many xrisks will be either survivable (they are called risks, after all) or will take some time to reach extinction (such as a nuclear winter leading to a cascade of failures). Even if markets are likely to collapse after the event, they are not certain to collapse, and in theory we can also price in efforts to increase the resilience of markets and see how changes in that resilience change the prices of refuge tickets.

The main problem, however, is just how irrational people are about xrisks, and how little discipline the market can bring to them. Anyone who strongly over-estimates the probability of an xrisk can expect to gradually lose all their money if they act on that belief. But someone who under-estimates xrisk probability will not suffer until an xrisk actually happens. And even then, they will only suffer in a few specific cases (if refuge tickets are actually honoured and those without them suffer worse fates). This is, in a way, the ultimate Talebian black swan: huge market crashes are far more common and understandable than xrisks.
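The asymmetry in feedback can be illustrated with a toy simulation (not from the post; the model, function name, and parameter values are all my own assumptions): a binary per-period contract pays 1 if the xrisk event fires that period, and the market price has been inflated above the true probability. The overestimator buys every round and bleeds money steadily; the underestimator sells every round and profits steadily, with their reckoning deferred to an event that essentially never arrives in the sample.

```python
import random

def bankroll_after(believed_p, market_p, true_p, rounds, seed=0):
    """Toy model: each round a binary contract pays 1 if the xrisk event
    fires, 0 otherwise, and trades at market_p. The bettor stakes one
    unit per round on whichever side their belief favours."""
    rng = random.Random(seed)
    bankroll = 0.0
    for _ in range(rounds):
        payoff = 1.0 if rng.random() < true_p else 0.0
        if believed_p > market_p:   # buy: pay market_p, receive payoff
            bankroll += payoff - market_p
        else:                       # sell: receive market_p, pay payoff
            bankroll += market_p - payoff
    return bankroll

true_p = 0.0001    # the event is genuinely rare
market_p = 0.10    # price inflated by over-estimators
rounds = 10_000

overestimator = bankroll_after(0.20, market_p, true_p, rounds)   # buys every round
underestimator = bankroll_after(0.00, market_p, true_p, rounds)  # sells every round
```

On almost any sample path the overestimator's losses accumulate round after round, while the underestimator's bankroll grows; the market only "corrects" the underestimator in the rare rounds where the event actually fires, which is exactly the asymmetry described above.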

Since that’s the case, it might be better to set up a market in near misses (an idea I’ve heard before, but can’t source right now). A large meteor that passes between the Earth and the Moon; conventional wars involving nuclear powers; rates of nuclear or biotech accidents. All of these are survivable, and repeated, so the market should be much better at converging, with the overoptimistic repeatedly chastised as well as the overpessimistic.
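The convergence claim can be sketched in the same toy style (again my own assumed model, not the post's): because near misses recur at a non-negligible rate, anyone quoting a mispriced probability gets traded against by better-informed counterparties and loses money within the sample, whichever direction their error goes.

```python
import random

def pnl_of_quoting(quoted_p, true_p, rounds, seed=1):
    """P&L of a trader quoting `quoted_p` for a contract paying 1 if a
    near-miss event occurs this round. Informed counterparties trade
    against any mispricing: they sell to the trader when the quote is
    too high, and buy from the trader when it is too low."""
    rng = random.Random(seed)
    pnl = 0.0
    for _ in range(rounds):
        payoff = 1.0 if rng.random() < true_p else 0.0
        if quoted_p > true_p:       # sellers hit the high quote
            pnl += payoff - quoted_p
        elif quoted_p < true_p:     # buyers lift the low quote
            pnl += quoted_p - payoff
    return pnl

true_p, rounds = 0.05, 20_000       # near misses are rare but recurrent

over = pnl_of_quoting(0.10, true_p, rounds)   # overpessimistic quote
under = pnl_of_quoting(0.02, true_p, rounds)  # overoptimistic quote
```

Unlike the xrisk market, both mispricers lose money within an ordinary trading horizon, which is the discipline that should push near-miss prices toward honest probabilities.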
