Intelligent Agent Foundations Forum
Index of some decision theory posts
discussion post by Tsvi Benson-Tilsen

What this is

Edit: this is now a general index of agent-foundations-related decision theory research, ordered most recent first. I'll add summaries for the other posts if anyone wants them.

Index

Forum post: https://agentfoundations.org/item?id=1304

Generalizing Foundations of Decision Theory

Forum post: https://agentfoundations.org/item?id=1302

Prediction Based Robust Cooperation

Forum post: https://agentfoundations.org/item?id=1295

Entangled Equilibria and the Twin Prisoners’ Dilemma

Forum post: https://agentfoundations.org/item?id=1279

On motivations for MIRI’s highly reliable agent design research

Forum post: https://agentfoundations.org/item?id=1220

Open problem: very thin logical priors

An open problem relevant to decision theory and to understanding bounded reasoning: is there a prior over logical facts that is very cheap to compute and that, when updated on the results of computations, performs well in some suitable sense?

Forum post: https://agentfoundations.org/item?id=1206

postCDT: Decision Theory using post-selected Bayes nets

Forum post: https://agentfoundations.org/item?id=1077

Updatelessness and Son of X

Forum post: https://agentfoundations.org/item?id=1073

A failed attempt at Updatelessness using Universal Inductors

Forum post: https://agentfoundations.org/item?id=1071

Training Garrabrant inductors to predict counterfactuals

A proposal for training UGIs (universal Garrabrant inductors) to predict action- and policy-counterfactuals by learning from the consequences of actions taken by similar ("logically previous") agents.

Forum post: https://agentfoundations.org/item?id=1054

Github pdf: https://github.com/tsvibt/public-pdfs/blob/master/decision-theory/training-counterfactuals/main.pdf

Desiderata for decision theory

A list of desiderata for a theory of optimal decision-making for bounded rational agents in general environments.

Forum post: https://agentfoundations.org/item?id=1053

Github pdf: https://github.com/tsvibt/public-pdfs/blob/master/decision-theory/desiderata/main.pdf

Transitive negotiations with counterfactual agents

Forum post: https://agentfoundations.org/item?id=1047

Attacking the grain of truth problem using Bayes-Savage agents

Forum post: https://agentfoundations.org/item?id=1046

Notation for induction and decision theory

A reference for notation that may be useful for using (universal) Garrabrant inductors as models of bounded reasoning, along with some notation for modelling agents. (Not posted on the forum because it contains tables.)

Github pdf: https://github.com/tsvibt/public-pdfs/blob/master/decision-theory/notation/main.pdf

Index of some decision theory posts

As advertised.

Forum post: https://agentfoundations.org/item?id=1026

Github pdf: https://github.com/tsvibt/public-pdfs/blob/master/decision-theory/index/main.pdf
