Intelligent Agent Foundations Forum
Futarchy, Xrisks, and near misses
discussion post by Stuart Armstrong

All the clever ways of getting betting markets to take xrisks into account suffer from one big flaw: the rational xrisk bettor only makes money if the xrisk actually happens.

Now, the problem isn’t that “when everyone is dead, no-one can collect bets”. Robin Hanson has suggested some interesting ideas involving tickets for refuges (shelters from the disaster), and many xrisks will either be survivable (they are called risks, after all) or take some time to reach extinction (such as a nuclear winter leading to a cascade of failures). Even if markets are likely to collapse after the event, they are not certain to collapse, and in theory we can also price in efforts to increase the resilience of markets and see how changes in that resilience change the prices of refuge tickets.
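To make that pricing idea concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not from the post; every number is invented). It treats the fair price of a refuge ticket as the probability of the catastrophe times the probability the ticket is honoured afterwards times the value of the refuge place, so greater resilience should show up as a higher ticket price.

```python
# Back-of-the-envelope sketch (invented numbers, purely illustrative):
# how the fair price of a refuge ticket depends on market resilience.

p_xrisk = 0.01            # assumed probability of the catastrophe over the bet's horizon
p_honoured = 0.3          # assumed probability the ticket is honoured after the event
value_if_honoured = 1000  # assumed value of a refuge place, conditional on the event

fair_price = p_xrisk * p_honoured * value_if_honoured
print(f"fair ticket price: {fair_price:.1f}")
# Raising resilience raises p_honoured, and the ticket price should rise with it --
# which is the signal the paragraph above is pointing at.
```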

The main problem, however, is just how irrational people are about xrisks, and how little discipline the market can bring to them. Anyone who strongly over-estimates the probability of an xrisk can expect to gradually lose all their money if they act on that belief. But someone who under-estimates xrisk probability will not suffer until an xrisk actually happens. And even then, they will only suffer in a few specific cases (if refuge tickets are actually honoured and those without them suffer worse fates). This is, in a way, the ultimate Talebian black swan: huge market crashes are far more common and understandable than xrisks.
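A minimal simulation (my own sketch, with made-up numbers) shows the asymmetry: in a market where a yearly contract pays out only if the xrisk occurs that year, the over-estimator keeps paying premiums and rarely collects, while the under-estimator keeps collecting premiums and only suffers if the event actually fires.

```python
import random

# Illustrative sketch, not from the post; all parameters are invented.
# A yearly contract pays 1 if the xrisk happens that year, 0 otherwise.

TRUE_P = 0.001   # assumed true annual xrisk probability
PRICE = 0.05     # price the over-estimator is willing to pay
YEARS = 50

random.seed(0)
buyer_pnl = seller_pnl = 0.0
for _ in range(YEARS):
    payout = 1.0 if random.random() < TRUE_P else 0.0
    buyer_pnl += payout - PRICE   # over-estimator: pays the premium, rarely collects
    seller_pnl += PRICE - payout  # under-estimator: collects the premium, rarely pays

print(f"over-estimator P&L after {YEARS} years:  {buyer_pnl:+.2f}")
print(f"under-estimator P&L after {YEARS} years: {seller_pnl:+.2f}")
```

In almost every run the under-estimator walks away richer; the market only corrects them in the one world where the disaster happens, and that is exactly the world where the correction may never be paid out.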

Since that’s the case, it might be better to set up a market in near misses (an idea I’ve heard before, but can’t source right now). A large meteor that shoots between the Earth and the Moon; conventional wars involving nuclear powers; rates of nuclear or biotech accidents. All of these are survivable and repeated events, so the market should be much better at converging, with the overoptimistic repeatedly chastised as well as the overpessimistic.
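A small sketch (again my own illustration, with invented numbers) of why repetition helps: because near misses resolve often, both an over-optimist and an over-pessimist get frequent feedback. Here each starts from a badly wrong Beta prior over the annual near-miss rate and updates on observed outcomes.

```python
import random

# Illustrative sketch, not from the post; all parameters are invented.

TRUE_RATE = 0.2   # assumed true annual probability of a near miss
YEARS = 200

random.seed(1)
optimist = [1, 20]    # Beta(1, 20): thinks near misses are very rare
pessimist = [20, 1]   # Beta(20, 1): thinks near misses are almost certain

for _ in range(YEARS):
    near_miss = random.random() < TRUE_RATE
    for belief in (optimist, pessimist):
        belief[0] += near_miss       # years with a near miss
        belief[1] += not near_miss   # years without one

for name, (a, b) in (("optimist", optimist), ("pessimist", pessimist)):
    print(f"{name} now estimates the rate at {a / (a + b):.2f} (true rate {TRUE_RATE})")
```

Both estimates drift toward the true rate as outcomes accumulate, which is the kind of repeated correction a one-shot xrisk market can never deliver.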


