This post is more about articulating motivations than about presenting anything new, but I think readers may learn something about the foundations of classical (evidential) decision theory as they stand.
The Project
Most people interested in decision theory know about the VNM theorem and the Dutch Book argument, and not much more. The VNM theorem shows that if we have to make decisions over gambles which follow the laws of probability, and our preferences obey four plausible postulates of rationality (the VNM axioms), then our preferences over gambles can be represented as an expected utility function. On the other hand, the Dutch Book argument assumes that we make decisions by expected utility, but perhaps with a nonprobabilistic belief function. It then proves that any violation of probability theory implies a willingness to take sure-loss gambles. (Reverse Dutch Book arguments show that, indeed, following the laws of probability eliminates these sure-loss bets.)
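To make the sure-loss claim concrete, here is a minimal sketch (my own illustration, not part of the classical argument's machinery): an agent whose credences in \(A\) and \(\neg A\) sum to more than 1, and who prices a $1 bet on an event at its credence, will buy both bets and lose money in every possible world.

```python
# Sketch: a bookie exploiting non-additive beliefs (hypothetical agent).
# The agent's credences are P(A) = 0.6 and P(not-A) = 0.6, which sum to 1.2,
# and it considers a $1 bet on an event fair at a price equal to its credence.

def sure_loss(p_A, p_notA, stake=1.0):
    """Agent buys a $stake bet on A and a $stake bet on not-A at 'fair' prices.
    Exactly one of A, not-A occurs, so the payout is always `stake`."""
    cost = stake * (p_A + p_notA)  # total price the agent willingly pays
    payout = stake                 # exactly one bet pays out, in every world
    return payout - cost           # net outcome, identical in every world

print(round(sure_loss(0.6, 0.6), 10))  # -0.2: a guaranteed loss
print(round(sure_loss(0.5, 0.5), 10))  # 0.0: additive credences close the gap
```

Any credences with \(P(A) + P(\neg A) > 1\) lose on this book; summing to less than 1 loses symmetrically when the agent sells the bets instead.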
So we can more or less argue for expected utility theory starting from probability theory, and argue for probability theory starting from expected utility theory; but clearly, this is not enough to provide good reason to endorse Bayesian decision theory overall. Subsequent investigations which I will summarize have attempted to address this gap.
But first, why care?
 Logical Induction can be seen as resulting from a small tweak to the Dutch Book setup, relaxing it enough that it could apply to mathematical uncertainty. Although we were initially optimistic that Logical Induction would allow significant progress in decision theory, it has proven difficult to get a satisfying logical-induction DT. Perhaps it would be useful to instead understand the argument for DT as a whole, and try to relax the foundations of DT in “the same way” we relaxed the foundations of probability theory.
 It seems likely to me that such a reexamination of the foundations would automatically provide justification for reflectively consistent decision theories like UDT. Hopefully I can make my intuitions clear as I describe things.
 Furthermore, the foundations of DT seem like they aren’t that solid. Perhaps we’ve put blinders on by not investigating these arguments for DT in full. Even without the kind of modification to the assumptions which I’m proposing, we may find that significant generalizations of DT are given just by dropping unjustified axioms in the existing foundations. We can already see one such generalization, the use of infinitesimal probability, by studying the history; I’ll explain this more.
Longer History
Justifying Probability Theory
Before going into the attempts to justify Bayesian decision theory in its entirety, it’s worth mentioning Cox’s theorem, which is another way of justifying probability alone. Unlike the Dutch Book argument, it doesn’t rely on a connection between beliefs and decisions; instead, Cox makes a series of plausible assumptions about the nature of subjective belief, and concludes that any approach must either violate those assumptions or be essentially equivalent to probability theory.
There has been some controversy about holes in Cox’s argument. Like other holes in the foundations which I will discuss later, it seems one conclusion we can draw by dropping unjustified assumptions is that there is no good reason to rule out infinitesimal probabilities. I haven’t understood the issues with Cox’s theorem yet, though, so I won’t remark on this further.
This is an opinionated summary of the foundations of decision theory, so I’ll remark on the relative quality of the justifications provided by the Dutch Book vs Cox. The Dutch Book argument provides what could be called consequentialist constraints on rationality: if you don’t follow them, something bad happens. I’ll treat this as the “highest tier” of argument. Cox’s argument relies on more deontological constraints: if you don’t follow them, it seems intuitively as if you’ve done something wrong. I’ll take this to be the second tier of justification.
Justifying Decision Theory
VNM
Before we move on to attempts to justify decision theory in full, let’s look at the VNM axioms in a little detail.
The setup is that we’ve got a set of outcomes \(\mathcal{O}\), and we consider lotteries over outcomes which associate a probability \(p_i\) with each outcome (such that \(0 \leq p_i \leq 1\) and \(\sum_i p_i = 1\)). We have a preference relation \(\preceq\) over lotteries, which must obey the following properties:
 (Completeness.) For any two lotteries \(A, B\), either \(A \prec B\), or \(B \prec A\), or neither, written \(A \sim B\). (“\(A \prec B\) or \(A \sim B\)” will be abbreviated as “\(A \preceq B\)” as usual.)
 (Transitivity.) If \(A \preceq B\) and \(B \preceq C\), then \(A \preceq C\).
 (Continuity.) If \(A \preceq B \preceq C\), then there exists \(p \in [0,1]\) such that the gamble \(D\) assigning probability \(p\) to \(A\) and \((1-p)\) to \(C\) satisfies \(B \sim D\).
 (Independence.) If \(A \prec B\), then for any \(C\) and \(p \in (0,1]\), we have \(p A + (1-p) C \prec p B + (1-p) C\).
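As an aside, once an expected-utility representation exists, continuity’s mixing probability can be written down explicitly: \(p \cdot u(A) + (1-p) \cdot u(C) = u(B)\) gives \(p = \frac{u(B) - u(C)}{u(A) - u(C)}\). A small sketch with assumed utility values (my own numbers, purely illustrative):

```python
# Sketch: the mixing probability demanded by the continuity axiom, computed
# from an assumed utility representation with u(A) <= u(B) <= u(C).

def continuity_mix(u_A, u_B, u_C):
    """Probability p on A (and 1-p on C) making the mixture indifferent to B."""
    assert u_A <= u_B <= u_C and u_A < u_C
    return (u_B - u_C) / (u_A - u_C)

p = continuity_mix(0.0, 4.0, 10.0)
print(p)                           # 0.6
print(p * 0.0 + (1 - p) * 10.0)    # mixture's expected utility equals u(B) = 4.0
```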
Transitivity is often considered to be justified by the money-pump argument. Suppose that you violate transitivity for some \(A, B, C\); that is, \(A \preceq B\) and \(B \preceq C\), but \(C \prec A\). Then you’ll be willing to trade away \(A\) for \(B\) and then \(B\) for \(C\) (perhaps in exchange for a trivial amount of money). But, then, you’ll have \(C\); and since \(C \prec A\), you’ll gladly pay (a nontrivial amount) to switch back to \(A\). I can keep sending you through this loop to get more money out of you until you’re broke.
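Here is a toy simulation of that loop (the agent and fee are hypothetical, just to make the extraction mechanical):

```python
# Sketch: running the money pump against an intransitive agent with
# A <= B, B <= C, yet C < A. Each lap around the cycle extracts a fee.

def money_pump(strictly_prefers, rounds=3, fee=1.0):
    """Walk the agent around the cycle A -> B -> C, then charge `fee`
    to swap C back to A whenever the agent strictly prefers A over C."""
    extracted = 0.0
    for _ in range(rounds):
        holding = "B"  # trades A for B freely, since A <= B
        holding = "C"  # trades B for C freely, since B <= C
        if strictly_prefers("A", over=holding):
            extracted += fee  # pays to get A back, completing the loop
    return extracted

# The intransitive agent: strictly prefers A when holding C.
intransitive = lambda x, over: (x, over) == ("A", "C")
print(money_pump(intransitive))  # 3.0: one fee per lap, forever if we like
```

A transitive agent never strictly prefers the start of the cycle to its end, so the same procedure extracts nothing from it.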
The money-pump argument seems similar in nature to the Dutch Book argument; both require a slightly unnatural setup (the assumption that utility is always exchangeable with money), but both yield strong consequentialist justifications for rationality axioms. So, I place the money-pump argument (and thus transitivity) in my “first tier” along with Dutch Book.
Completeness is less clear. According to the SEP, “most decision theorists suggest that rationality requires that preferences be coherently extendible. This means that even if your preferences are not complete, it should be possible to complete them without violating any of the conditions that are rationally required, in particular Transitivity.” So, I suggest we place this in a third tier, the so-called structural axioms: those which are not really justified at all, except that assuming them allows us to prove our results.
“Structural axioms” are a somewhat curious artefact found in almost all of the axiom sets which we will look at. These axioms usually have something to do with requiring that the domain is rich enough for the intended proof to go through. Completeness is not usually referred to as structural, but if we agree with the quotation above, I think we have to regard it as such.
I take the axiom of independence to be tier two: an intuitively strong rationality principle, but not one that’s enforced by nasty things that happen if we violate it. It surprises me that I’ve only seen this kind of justification for one of the four VNM axioms. Actually, I suspect that independence could be justified in a tier-one way; it’s just that I haven’t seen it. (Developing a framework in which an argument for independence can be made just as well as the money-pump and Dutch Book arguments is part of my goal.)
I think many people would put continuity at tier two, a strong intuitive principle. I don’t see why, personally. For me, it seems like an assumption which only makes sense if we already have the intuition that expected utility is going to be the right way of doing things. This puts it in tier three for me; another structural axiom. (The analogs of continuity in the rest of the decision theories I’ll mention come off as very structural.)
Savage
Leonard Savage took on the task of providing simultaneous justification of the entire Bayesian decision theory, grounding subjective probability and expected utility in one set of axioms. I won’t describe the entire framework, as it’s fairly complicated; see the SEP section. I will note several features of it, though:
 Savage makes the somewhat peculiar move of separating the objects of belief (“states”) and objects of desire (“outcomes”). How we go about separating parts of the world into one or the other seems quite unclear.
 He replaces the gambles from VNM with “acts”: an act is a function from states to outcomes (he’s practically begging us to make terrible puns about his “savage acts”). Just as the VNM theorem requires us to assume that the agent has preferences on all lotteries, Savage’s theorem requires the agent to have preferences over all acts; that is, all functions from states to outcomes. Some of these may be quite absurd.
 As the paper Actualist Rationality complains, Savage’s justification for his axioms is quite deontological; he is primarily saying that if you noticed any violation of the axioms in yourself, you would feel there’s something wrong with your thinking and you would want to correct it somehow. This doesn’t mean we can’t put some of his axioms in tier 1; after all, he’s got a transitivity axiom like everyone else. However, on Savage’s account, it’s all what I’d call tier-two justification.
 Savage certainly has what I’d call tier-three axioms, as well. The SEP article identifies P5 and P6 as such. His axiom P6 requires that there exist world-states which are sufficiently improbable so as to make even the worst possible consequences negligible. Surely it can’t be a “requirement of rationality” that the state-space be complex enough to contain negligible possibilities; this is just something he needs to prove his theorem. P6 is Savage’s analog of the continuity axiom.
 Savage chooses not to define probabilities on a sigma-algebra. I haven’t yet seen any decision theorist who prefers to use sigma-algebras. Similarly, he only derives finite additivity, not countable additivity; this also seems common among decision theorists.
 Savage’s representation theorem shows that if his axioms are followed, there exists a unique probability distribution and a utility function which is unique up to positive linear transformation, such that the preference relation on acts is also the ordering with respect to expected utility.
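The “unique up to positive linear transformation” clause is easy to check numerically: rescaling utilities by \(u \mapsto au + b\) with \(a > 0\) rescales every expected utility the same way, so no preference comparison flips. A sketch with made-up lotteries (my own numbers, not Savage’s framework in detail):

```python
# Sketch: expected-utility orderings are invariant under positive linear
# (affine) rescalings of the utility function, u -> a*u + b with a > 0.
import itertools

lotteries = {  # hypothetical lotteries: lists of (outcome utility, probability)
    "L1": [(0.0, 0.5), (10.0, 0.5)],
    "L2": [(4.0, 1.0)],
    "L3": [(2.0, 0.3), (6.0, 0.7)],
}

def eu(lottery, a=1.0, b=0.0):
    """Expected utility after rescaling each outcome's utility to a*u + b."""
    return sum(p * (a * u + b) for u, p in lottery)

for x, y in itertools.combinations(lotteries, 2):
    original = eu(lotteries[x]) < eu(lotteries[y])
    rescaled = eu(lotteries[x], a=3.0, b=-7.0) < eu(lotteries[y], a=3.0, b=-7.0)
    assert original == rescaled  # no comparison flips under u -> 3u - 7
```

A rescaling with \(a < 0\) would reverse every comparison, which is why only positive rescalings are allowed.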
Jeffrey-Bolker Axioms
In contrast to Savage, Jeffrey’s decision theory makes the objects of belief and the objects of desire the same. Both belief and desire are functions of logical propositions.
The most common axiomatization is Bolker’s. We assume that there is a boolean field, with a preference relation \(\prec\), following these axioms:
 \(\prec\) is transitive and complete. \(\prec\) is defined on all elements of the field except \(\bot\). (Jeffrey does not wish to require preferences over propositions which the agent believes to be impossible, in contrast to Savage.)
 The boolean field is complete and atomless. More specifically:
 An upper bound of a (possibly infinite) set of propositions is a proposition implied by every proposition in that set. The supremum of the set is an upper bound which implies every upper bound. Define lower bound and infimum analogously. A complete Boolean algebra is one in which every set of propositions has a supremum and an infimum.
 An atom is a proposition other than \(\bot\) which is implied by itself and \(\bot\), but by no other propositions. An atomless Boolean algebra has no atoms.
 (Law of Averaging.) If \(A \wedge B = \bot\),
 If \(A \prec B\), then \(A \prec A \vee B \prec B\)
 If \(A \sim B\), then \(A \sim A \vee B \sim B\)
 (Impartiality.) If \(A \wedge B = \bot\) and \(A \sim B\), then if \(A \vee C \sim B \vee C\) for some \(C\) where \(A \wedge C = B \wedge C = \bot\) and not \(C \sim A\), then \(A \vee C \sim B \vee C\) for every such \(C\).
 (Continuity.) Suppose that \(X\) is the supremum (infimum) of a set of propositions \(\mathcal{S}\), and \(A \prec X \prec B\). Then there exists \(C \in \mathcal{S}\) such that if \(D \in \mathcal{S}\) is implied by \(C\) (or where \(X\) is the infimum, implies \(C\)), then \(A \prec D \prec B\).
The central axiom of Jeffrey’s decision theory is the law of averaging. This can be seen as a kind of consequentialism. If I violate this axiom, I either value some gamble \(A \vee B\) less than both of its possible outcomes \(A\) and \(B\), or value it more than both. In the first case, we could charge the agent for switching from the gamble \(A \vee B\) to \(A\) (taking \(A \preceq B\)); this predictably worsens its situation, since one of \(A\) or \(B\) was going to be true anyway, each of them is at least as good as \(A\), and the agent has also lost money. In the other case, we can set up a proper money pump: charge the agent to switch back to the gamble \(A \vee B\), which it will happily keep doing whichever of \(A\) or \(B\) comes out true.
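The second case can be made mechanical. In this toy sketch (hypothetical agent and fee, my own illustration), the agent values the gamble above both outcomes, so the bookie collects a fee every round no matter how the gamble resolves:

```python
# Sketch: pumping an agent that values the gamble A-or-B strictly above
# both of its possible outcomes A and B, violating the law of averaging.
import random

def averaging_pump(rounds=4, fee=1.0, seed=0):
    """Sell the agent the gamble, let it resolve, sell it the gamble again."""
    rng = random.Random(seed)
    extracted, holding = 0.0, "A"
    for _ in range(rounds):
        # Whatever it holds (A or B) is valued below the gamble, so it pays:
        extracted += fee
        holding = "A or B"
        holding = rng.choice(["A", "B"])  # the gamble resolves either way
    return extracted

print(averaging_pump())  # 4.0: one fee per round, regardless of how gambles resolve
```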
So, I tentatively put axiom 3 in my first tier (pending better formalization of that argument).
I’ve already dealt with axiom 1, since it’s just the first two axioms of VNM rolled into one: I count transitivity as tier one, and completeness as tier three.
Axioms two and five are clearly structural, so I place them in my third tier. Bolker is essentially setting things up so that there will be an isomorphism to the real numbers when he derives the existence of a probability distribution and utility function from the axioms.
Axiom 4 has to be considered structural in the sense I’m using here, as well. Jeffrey admits that there is no intuitive motivation for it unless you already think of propositions as having some kind of measure which determines their relative contribution to expected utility. If you do have such an intuition, axiom 4 is just saying that propositions whose weight is equal in one context must have equal weight in all contexts. (Savage needs a similar axiom which says that probabilities do not change in different contexts.)
Unlike Savage’s, Bolker’s representation theorem does not give us a unique probability distribution. Instead, we can trade between utility and probability via a certain formula. Probability zero events are not distinguishable from events which cause the utilities of all subevents to be constant.
Jeffrey-Domotor Axioms
Zoltan Domotor provides an alternative set of axioms for Jeffrey’s decision theory. Domotor points out that Bolker’s axioms are sufficient, but not necessary, for his representation theorem. He sets out to construct a necessary and sufficient axiomatization. This necessitates dealing with finite and incomplete boolean fields. The result is a representation theorem which allows nonstandard reals; we can have infinitesimal probabilities, and infinitesimal or infinite utilities. So, we have a second point of evidence in favor of infinitesimal probability.
Although looking for necessary and sufficient conditions seems promising as a way of eliminating structural assumptions like completeness and atomlessness, it ends up making all axioms structural. In fact, Domotor gives essentially one significant axiom: his axiom J2. J2 is totally inscrutable without a careful reading of the notation introduced in his paper; it would be pointless to reproduce it here. The axiom is chosen to exactly state the conditions for the existence of a probability and utility function, and can’t be justified in any other way – at least not without providing a full justification for Jeffrey’s decision theory by other means!
Another consequence of Domotor’s axiomatization is that the representation becomes wildly nonunique. This has to be true for a representation theorem dealing with finite situations, since there is a lot of wiggle room in what probabilities and utilities represent preferences over finite domains. It gets even worse with the addition of infinitesimals, though; the choice of nonstandard-real field confronts us as well.
Conditional Probability as Primitive
Hajek
In What Conditional Probabilities Could Not Be, Alan Hajek argues that conditional probability cannot possibly be defined by Bayes’ famous formula, due primarily to its inadequacy when conditioning on events of probability zero. He also takes issue with other proposed definitions, arguing that conditional probability should instead be taken as primitive.
The most popular way of doing this is via Popper’s axioms of conditional probability. In Learning the Impossible (Vann McGee, 1994), it’s shown that conditional probability functions following Popper’s axioms and nonstandard-real probability functions with conditionals defined according to Bayes’ theorem are intertranslatable. Hajek doesn’t like the infinitesimal approach because of the resulting nonuniqueness of representation; but, for those who don’t see this as a problem but who put some stock in Hajek’s other arguments, this would be another point in favor of infinitesimal probability.
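To illustrate the flavor of the intertranslation (this is my own toy construction, not McGee’s), represent a probability as \(c \cdot \epsilon^k\): for \(k > 0\) its standard part is zero, yet the ratio of two such infinitesimals, via Bayes’ formula, can be a perfectly ordinary conditional probability:

```python
# Sketch: probabilities as c * eps**k, where eps is a fixed infinitesimal.
# When k > 0 the standard part is 0, but Bayes' formula still defines a
# sensible conditional probability on such "probability zero" events.
from fractions import Fraction

class Eps:
    """A monomial c * eps**k: c is a rational coefficient, k the order."""
    def __init__(self, c, k=0):
        self.c, self.k = Fraction(c), k
    def __truediv__(self, other):
        # (c1 * eps**k1) / (c2 * eps**k2) = (c1/c2) * eps**(k1 - k2)
        return Eps(self.c / other.c, self.k - other.k)
    def standard_part(self):
        return self.c if self.k == 0 else Fraction(0)

p_B = Eps(1, k=1)                      # P(B) = eps: standard part 0
p_A_and_B = Eps(Fraction(3, 10), k=1)  # P(A and B) = (3/10) * eps
p_A_given_B = p_A_and_B / p_B          # ratio of infinitesimals is standard
print(p_A_given_B.standard_part())     # 3/10
```

The classical ratio formula is undefined here (standard parts give 0/0), but the nonstandard representation remembers enough structure to recover the conditional probability.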
Richard Bradley
In A unified Bayesian decision theory, Richard Bradley shows that Savage’s and Jeffrey’s decision theories can be seen as special cases of a more general decision theory which takes conditional probabilities as a basic element. Bradley’s theory groups all the “structural” assumptions together, as axioms which postulate a rich set of “neutral” propositions (essentially, postulating a sufficiently rich set of coin-flips to measure the probabilities of other propositions against). He needs to specifically make an Archimedean assumption to rule out nonstandard numbers, which could easily be dropped. He manages to derive a unique probability distribution in his representation theorem, as well.
OK, So What?
In general, I have hope that most of the tier-two axioms could become tier-one; that is, it seems possible to create a generalization of Dutch-book/money-pump arguments which covers most of what decision theorists consider to be principles of rationality. I have an incomplete attempt which I’ll develop for a future post. I don’t expect tier-three axioms to be justifiable in this way.
With such a formalism in hand, the next step would be to try to derive a representation theorem: how can we understand the preferences of an agent which doesn’t fall into these generalized traps? I’m not sure what generalizations to expect beyond infinitesimal probability. It’s not even clear that such an agent’s preferences will always be representable as a probability function and utility function pair; some more complicated structure may be implicated (in which case it will likely be difficult to find!). This would tell us something new about what agents look like in general.
The generalized Dutch book would likely disallow preference functions which put agents in situations they’ll predictably regret. This sounds like a temporal consistency constraint; so, it might also justify updatelessness automatically, or with a little modification. That would certainly be interesting.
And, as I said before, if we have this kind of foundation we can attempt to “do the same thing we did with logical induction” to get a decision theory which is appropriate for situations of logical uncertainty as well.
