Intelligent Agent Foundations Forum
Futarchy, Xrisks, and near misses
discussion post by Stuart Armstrong

All the clever ways of getting betting markets to take xrisks into account suffer from one big flaw: the rational xrisk bettor only makes money if the xrisk actually happens.

Now, the problem isn’t that “when everyone is dead, no-one can collect bets”. Robin Hanson has suggested some interesting ideas involving tickets for refuges (shelters from the disaster), and many xrisks will be either survivable (they are risks, after all) or will take some time to reach extinction (such as a nuclear winter leading to a cascade of failures). Even if markets are likely to collapse after the event, they are not certain to collapse, and in theory we can also price in efforts to increase the resilience of markets and see how changes in that resilience change the prices of refuge tickets.

The main problem, however, is just how irrational people are about xrisks, and how little discipline the market can bring to them. Anyone who strongly over-estimates the probability of an xrisk can expect to gradually lose all their money if they act on that belief. But someone who under-estimates the probability will not suffer until an xrisk actually happens, and even then only in a few specific cases (refuge tickets are actually honoured, and those without them suffer worse fates). This is, in a way, the ultimate Talebian black swan: huge market crashes are far more common and understandable than xrisks.
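To make the asymmetry concrete, here is a minimal sketch of the payoff structure, with purely illustrative numbers (the probability, stake, and bankrolls are assumptions, not figures from the post). A contract pays 1 if the xrisk occurs in a given period and 0 otherwise; while the event does not happen, the over-estimator bleeds premium every period and the under-estimator quietly collects it:

```python
# Minimal sketch of the payoff asymmetry (illustrative numbers only).
p_true = 0.001          # assumed per-period probability of the xrisk
price = p_true          # contract trades at the true probability
stake = 1.0             # contracts traded per period by each bettor
periods = 1000          # horizon over which no xrisk actually occurs

over_bankroll = 100.0   # bettor who over-estimates the risk (keeps buying)
under_bankroll = 100.0  # bettor who under-estimates the risk (keeps selling)

for _ in range(periods):
    # No xrisk this period: the contract expires worthless.
    over_bankroll -= stake * price    # buyer loses the premium paid
    under_bankroll += stake * price   # seller pockets the premium

print(f"Over-estimator after {periods} periods:  {over_bankroll:.2f}")
print(f"Under-estimator after {periods} periods: {under_bankroll:.2f}")
# The over-estimator is punished every period; the under-estimator is only
# punished in the single period the xrisk occurs -- which may never settle.
```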

Since that’s the case, it might be better to set up a market in near misses (an idea I’ve heard before, but can’t source right now): a large meteor that shoots between the Earth and the Moon; conventional wars involving nuclear powers; rates of nuclear or biotech accidents. All these are survivable and repeatable, so the market should be much better at converging, with the over-optimistic repeatedly chastised as well as the over-pessimistic, as the sketch below illustrates.
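A rough illustration of why repeated resolution helps, under the assumption that near misses occur at some base rate (here an invented 5% per period) and that forecasts are judged by a proper scoring rule such as the Brier score, which penalises errors in both directions:

```python
# Sketch: repeated near-miss resolutions punish both over- and under-estimators.
import random

random.seed(0)
p_true = 0.05           # assumed per-period frequency of a near miss (illustrative)
periods = 10_000

forecasters = {"over-pessimistic": 0.20, "calibrated": 0.05, "over-optimistic": 0.005}
brier = {name: 0.0 for name in forecasters}

for _ in range(periods):
    outcome = 1 if random.random() < p_true else 0   # the near-miss contract resolves
    for name, q in forecasters.items():
        brier[name] += (q - outcome) ** 2            # miscalibration hurts both ways

for name, total in brier.items():
    print(f"{name:>16}: mean Brier score {total / periods:.4f}")
```

Because the contracts settle regularly, the calibrated forecaster ends up with the best average score, and the same logic applies to market prices: traders who push the price away from the base rate in either direction lose money to better-calibrated counterparties, unlike with a contract on extinction itself.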


