Intelligent Agent Foundations Forum
by Abram Demski

In my current way of thinking about futarchy, it seems like the right way to do this is through good adjudication. This passes the buck, much like my assumption in a recent post that a logical inductor had a correct logical counterfactual in its underlying deductive system. But for a futarchy the situation isn’t quite as bad: we could rely on human judgement somehow.

But another alternative for the underlying adjudication system occurred to me today: maybe the market could be adjudicated via models. My intuition is that a claim of existential risk (if made in the underlying adjudication system rather than as a bet) must be accompanied by a plausible model: a relatively short computer program which fits the data so far. A counter-claim would have to give an alternative plausible model which shows no risk. These models would then determine the payouts.
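
To make this concrete, here is a minimal sketch of what model-based adjudication might look like, under the assumption that a “model” is a short program predicting the next observation from the history, scored by fit plus a complexity penalty (an MDL-style stand-in for “relatively short program which fits the data so far”). The function names and scoring rule are illustrative assumptions, not a worked-out proposal.

```python
# A minimal sketch of "adjudication via models", assuming a toy formalism:
# a model is a short program (callable) that predicts the next observation
# from the history so far, scored MDL-style by fit plus a complexity penalty.
# All names and the scoring rule are illustrative, not part of any real system.

from typing import Callable, Sequence

Model = Callable[[Sequence[float]], float]  # history -> predicted next value

def score(model: Model, data: Sequence[float], description_length: float) -> float:
    """Lower is better: squared prediction error plus a complexity penalty."""
    error = sum((model(data[:t]) - data[t]) ** 2 for t in range(1, len(data)))
    return error + description_length

def adjudicate(claim: Model, claim_len: float,
               counter: Model, counter_len: float,
               data: Sequence[float]) -> str:
    """Pay out to whichever side offered the more plausible model."""
    return ("claim" if score(claim, data, claim_len) <= score(counter, data, counter_len)
            else "counter-claim")

# Toy usage: a "the quantity is growing" model vs. a "nothing is changing" model.
data = [0.0, 1.0, 2.0, 3.0, 4.0]
growing = lambda hist: hist[-1] + 1.0   # fits the observed trend exactly
flat = lambda hist: hist[-1]            # slightly shorter program, fits worse
print(adjudicate(growing, 12.0, flat, 10.0, data))  # -> "claim"
```

The particular trade-off between fit and program length is arbitrary here; the point is only that both sides must put a concrete, checkable model on the table.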

This could address your problem as well, since a counterfactual claim of doom could be (partially) adjudicated as false by giving a causal model. (I don’t intend this proposal to help with logical counterfactuals; it only allows regular causal counterfactuals, described in some given formalism.) But I haven’t thought through how this would work yet.
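
As an illustration of the kind of “given formalism” such a claim might be checked against, here is a toy sketch using a Pearl-style structural causal model and an intervention. The variables, structural equations, and thresholds are invented for the example, not drawn from the proposal itself.

```python
# A minimal sketch, assuming a Pearl-style structural causal model as the
# "given formalism": a counterfactual claim ("if we enacted policy X, doom
# would follow") is checked against an offered causal model by performing
# the corresponding intervention. All variables here are invented examples.

def causal_model(policy: int, noise: float = 0.0) -> dict:
    """Offered causal model: structural equations mapping an intervened-on
    policy variable (and exogenous noise) to outcomes."""
    risk = 0.1 + 0.05 * policy + noise   # under this model, policy raises risk only slightly
    doom = risk > 0.9                    # doom requires very high risk
    return {"policy": policy, "risk": risk, "doom": doom}

def counterfactual_claim_holds(model, intervention: int) -> bool:
    """Evaluate the claim 'doom would occur under do(policy = intervention)'
    inside the offered model."""
    return model(policy=intervention)["doom"]

# The counterfactual claim of doom is adjudicated as false under this model,
# since doom does not occur under do(policy = 1).
print(counterfactual_claim_holds(causal_model, intervention=1))  # -> False
```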


