Index of some decision theory posts
discussion post by Tsvi Benson-Tilsen 199 days ago | Ryan Carey, Jack Gallagher, Jessica Taylor and Scott Garrabrant like this

## What this is

Edit: this is now a general index for agent-foundations-related decision theory research, listed most-recent-first. I'll add summaries for other posts if anyone wants them.

## Index

### Generalizing Foundations of Decision Theory II

Forum post: https://agentfoundations.org/item?id=1304

### Generalizing Foundations of Decision Theory

Forum post: https://agentfoundations.org/item?id=1302

### Prediction Based Robust Cooperation

Forum post: https://agentfoundations.org/item?id=1295

### Entangled Equilibria and the Twin Prisoners’ Dilemma

Forum post: https://agentfoundations.org/item?id=1279

### On motivations for MIRI’s highly reliable agent design research

Forum post: https://agentfoundations.org/item?id=1220

### Open problem: very thin logical priors

An open problem relevant to decision theory and to understanding bounded reasoning: is there a prior over logical facts that is very cheap to compute, yet performs well in some sense when updated on the results of computations?
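To make the shape of the problem concrete, here is a toy sketch (not from the post; all names are illustrative): treat a few logical sentences as boolean variables, take the trivially-computable uniform prior over truth assignments, and let "updating on a computation" mean conditioning on the truth value that computation reveals. The open problem asks, roughly, whether something in this spirit can be made both very cheap and well-behaved at scale.

```python
from itertools import product

# Stand-ins for logical sentences; a real logical prior would range over
# all sentences of some language, not three hypothetical placeholders.
sentences = ["phi0", "phi1", "phi2"]

# Uniform prior over all 2^3 truth assignments (trivially easy to compute).
worlds = [dict(zip(sentences, bits)) for bits in product([0, 1], repeat=3)]
weights = {i: 1.0 for i in range(len(worlds))}

def update(observed, value):
    """Condition on a computation revealing that `observed` has truth value `value`."""
    for i, w in enumerate(worlds):
        if w[observed] != value:
            weights[i] = 0.0

def prob(sentence):
    """Posterior marginal probability that `sentence` is true."""
    total = sum(weights.values())
    return sum(weights[i] for i, w in enumerate(worlds) if w[sentence]) / total

update("phi0", 1)    # a computation outputs: phi0 is true
print(prob("phi0"))  # -> 1.0
print(prob("phi1"))  # -> 0.5 (no information about phi1 yet)
```

The catch, of course, is that this enumeration is exponential in the number of sentences; the open problem is whether the updating can be made "very thin" computationally while still performing well.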

Forum post: https://agentfoundations.org/item?id=1206

### postCDT: Decision Theory using post-selected Bayes nets

Forum post: https://agentfoundations.org/item?id=1077

### Updatelessness and Son of X

Forum post: https://agentfoundations.org/item?id=1073

### A failed attempt at Updatelessness using Universal Inductors

Forum post: https://agentfoundations.org/item?id=1071

### Training Garrabrant inductors to predict counterfactuals

A proposal for training universal Garrabrant inductors (UGIs) to predict action- and policy-counterfactuals by learning from the consequences of actions taken by similar ("logically previous") agents.

Forum post: https://agentfoundations.org/item?id=1054

### Desiderata for decision theory

A list of desiderata for a theory of optimal decision-making for bounded rational agents in general environments.

Forum post: https://agentfoundations.org/item?id=1053

### Transitive negotiations with counterfactual agents

Forum post: https://agentfoundations.org/item?id=1047

### Attacking the grain of truth problem using Bayes-Savage agents

Forum post: https://agentfoundations.org/item?id=1046

### Notation for induction and decision theory

A reference for notation that may be useful when using (universal) Garrabrant inductors as models of bounded reasoning, along with some notation for modelling agents. (Not posted on the forum because it contains tables.)

### Index of some decision theory posts

Forum post: https://agentfoundations.org/item?id=1026
