Recently, a new style of argument has emerged that seeks to establish probabilism, a putative epistemic norm that says that the set of one's degrees of belief (or partial beliefs) ought to satisfy the axioms of the probability calculus (Joyce 1998), (Joyce 2009), (Predd et al. 2009), (Schervish et al. 2009). These arguments are sometimes called accuracy domination arguments. They proceed by showing that, if the set of one's partial beliefs violates the axioms of probability, then there is another set of partial beliefs that satisfies those axioms and that is more 'accurate' than one's original set however the world turns out (more detail is given below). These new arguments identify a purely epistemic failing in an agent who violates probabilism. Thus, they are superior to Dutch book arguments and other justifications that appeal to the pragmatic failings of such an agent, which those justifications can claim are inevitable only on behaviourist grounds.
However, accuracy domination arguments have a feature that many will find undesirable. If they work, they seem to show not only that it is a necessary condition on rationality that an agent's set of partial beliefs is probabilistic, but also that this condition is sufficient. That is, they have the consequence that any probabilistic set of partial beliefs is rational. The extreme subjectivism of this conclusion has been resisted by many who are nonetheless attracted to probabilism, and who have proposed a number of stronger norms that place further restrictions on rational sets of partial beliefs.
Some of these stronger norms share a common structure: each says that an agent should defer to some sort of expert for the degree to which she believes a proposition. For some, the expert degrees of belief are given by the objective chances: this is what David Lewis and his followers intend to capture in the Principal Principle in its many versions (Lewis 1980), (Hall 1994), (Ismael 2009). For others, the expert is the agent's future self: this is the content of Bas van Fraassen's Reflection Principle (van Fraassen 1984). The overall aim of the current project is to understand how we can adapt the accuracy domination arguments for probabilism to give analogous arguments for these stronger norms, each of which prescribes a certain sort of epistemic deference.
Thus, the preliminary research questions of the project are these:
1. What is the structure of the accuracy domination arguments for probabilism?
2. Which features of these arguments might be altered to produce analogous arguments for different norms?
3. Can we alter these features to produce analogous arguments for norms of epistemic deference: in particular, the Principal Principle and the Reflection Principle?
In research already carried out, I have answered these questions. Accuracy domination arguments have three stages:
i. First, we articulate a notion of vindication for sets of partial beliefs. That is, we say what it is that stands to partial beliefs as truth stands to full beliefs.
ii. Next, we say that a set of partial beliefs is better, epistemically speaking, the closer it is to vindication, and we lay down conditions on measures of distance from vindication (sometimes called scoring rules or inaccuracy measures).
iii. Finally, we appeal to the norm of dominance from rational choice theory to show that, if we value sets of partial beliefs more the closer they are to vindication, then we ought to satisfy a certain epistemic norm, such as probabilism, since a set of partial beliefs that violates that norm is always dominated by a set that obeys it, while a set that obeys it is never dominated by any other set.
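The three stages can be illustrated with a small worked example using the Brier score, a standard inaccuracy measure from this literature. The particular numbers below are illustrative only, not drawn from the proposal: a set of credences in a proposition and its negation that sums to more than 1 is accuracy-dominated, however the world turns out, by a probabilistic alternative.

```python
# Stage (i): vindication is agreement with the truth-values (0 or 1).
# Stage (ii): distance from vindication is measured by the Brier score,
# i.e. the sum of squared differences between credences and truth-values.
# Stage (iii): the non-probabilistic credences are dominated.

def brier(credences, world):
    """Sum of squared distances of each credence from the truth-value."""
    return sum((credences[p] - world[p]) ** 2 for p in credences)

# Non-probabilistic credences: they sum to 1.2, violating additivity.
b = {"X": 0.6, "not-X": 0.6}
# A probabilistic alternative.
c = {"X": 0.5, "not-X": 0.5}

# The two possible worlds: X true, or X false.
worlds = [{"X": 1, "not-X": 0}, {"X": 0, "not-X": 1}]

for w in worlds:
    # b scores roughly 0.52 at each world; c scores 0.5 at each world,
    # so c is closer to vindication however the world turns out.
    assert brier(c, w) < brier(b, w)
```

This illustrates why the final stage needs both halves of the result: the probabilistic set c is not itself dominated, so dominance reasoning condemns b but not c.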
In this structure, there are a number of features that could be specified in a variety of different ways to give different arguments. For instance, while accuracy domination arguments for probabilism take partial beliefs to be vindicated just in case they agree with the truth values, we could take them to be vindicated when they agree with the objective chances, or our own future partial beliefs, or the opinions of some other sort of expert. Also, we could specify the class of legitimate measures of distance from vindication in many different ways. And we could appeal to various different norms of rational choice in the final stage.
In preliminary work on this project, I have shown that, if we say that partial beliefs are vindicated if they agree with the objective chances, and if we make certain assumptions about objective chances, then we obtain an accuracy domination argument for the Principal Principle. Similarly, under certain assumptions, if we say that partial beliefs are vindicated if they agree with the partial beliefs of one of our future selves, then we obtain an argument for the Reflection Principle. I have also shown that there are independent reasons to say each of these things.
I would spend the period of the Fellowship investigating the philosophical underpinnings of these new accuracy domination arguments for the Principal Principle and the Reflection Principle. Each raises many interesting questions. These fall under three headings:
4. What is the correct notion of vindication for partial beliefs?
4.1. Why is vindication not agreement with truth, as in the standard accuracy domination arguments and the case of full beliefs? Perhaps the truth-values of particular propositions are not defined at the time at which the vindication must take place: for instance, if the proposition concerns future contingent events. Do these arguments point to vindication as agreement with objective chance, or agreement with future selves, or some other source of vindication? (Hájek ms)
4.2. Is this appeal to the indeterminacy of future contingents available to those who give a reductionist analysis of objective chance? Their reduction typically requires that the truth-values of all propositions are determined at all times, in order that the truth-values of propositions about objective chances are so determined. Does this point to the possibility that, while non-reductionists about chance can use an accuracy domination argument to justify the Principal Principle (or one of its relatives), reductionists cannot? This would reverse the conventional wisdom, which holds that the non-reductionist cannot justify the Principal Principle, while the reductionist can.
4.3. Which version of the Principal Principle is justified by the accuracy domination argument? Lewis’ original Principal Principle (Lewis 1980), the New Principle (Hall 1994), (Joyce 2007), or Ismael’s Generalized Principal Principle (Ismael 2009)?
4.4. Under what circumstances would we take agreement with our future selves to constitute vindication? Do these coincide with the circumstances in which the Reflection Principle has been thought plausible? (Arntzenius 2003), (Briggs 2009b)
4.5. As mentioned above, there are certain assumptions about objective chances and the sets of partial beliefs of future selves that are required in order to run these new accuracy domination arguments. For instance, they must obey the probability axioms. Are these assumptions plausible? Can we provide further justification for them?
5. What are the legitimate measures of distance from vindication (or scoring rules or inaccuracy measures)?
5.1. A number of characterizations of the legitimate measures have been given (Joyce 1998), (Joyce 2009), (Predd et al. 2009), (Schervish et al. 2009), (D'Agostino and Dardanoni 2009), (Leitgeb and Pettigrew 2010). Which are compelling? Are there superior alternatives?
5.2. Can we assume that legitimate scoring rules must be proper? That is, must they be such that any probabilistic set of partial beliefs expects itself to be closer to vindication than it expects any other set to be? This is a standard assumption in the literature on probabilism, but it faces serious objections (Pettigrew forthcoming). Can these be overcome in the arguments for the Principal Principle or the Reflection Principle?
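Propriety, as defined in 5.2, can be made concrete with a standard single-proposition case (the numbers here are illustrative, not from the proposal): by the lights of a credence p in X, the expected Brier inaccuracy of reporting a credence q is p(1−q)² + (1−p)q², and this expectation is minimized at q = p, which is what makes the Brier score proper.

```python
# Propriety of the one-proposition Brier score: an agent with credence p
# expects the report q to be least inaccurate precisely when q = p.

def expected_brier(p, q):
    """Expected Brier inaccuracy of reporting q, by the lights of credence p."""
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

# Numerically verify, over a grid of candidate reports, that q = p
# minimizes the expected inaccuracy for several values of p.
for p in [0.1, 0.3, 0.5, 0.7, 0.9]:
    candidates = [i / 100 for i in range(101)]
    best_q = min(candidates, key=lambda q: expected_brier(p, q))
    assert best_q == p
```

An improper rule would, by contrast, lead some probabilistic credences to expect a different set of credences to be closer to vindication than themselves, which is the feature the objections in (Pettigrew forthcoming) turn on.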
5.3. Many of the results pertaining to proper scoring rules can only be mobilized when the scoring rule is assumed to be additive: that is, the distance of a whole set of partial beliefs from vindication is obtained by measuring the distance of each partial belief from vindication and then summing up these individual distances. This is equivalent to a further property called separability. Is separability a desirable feature of a scoring rule in this context (Fitelson and Buchak ms)?
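The additivity assumption in 5.3 can be sketched as follows, with an illustrative (hypothetical) non-additive alternative for contrast: an additive rule scores a whole set of partial beliefs by summing a one-proposition score over the propositions, whereas a rule such as taking the maximum of the componentwise scores does not decompose in this way.

```python
# Additive vs non-additive scoring of a whole credence function.
# The max-based rule is a hypothetical contrast case, not one from the
# literature discussed in the proposal.

def s(credence, truth_value):
    """One-proposition Brier component: squared distance from the truth-value."""
    return (credence - truth_value) ** 2

def additive_inaccuracy(b, w):
    """Score each proposition separately, then sum: the additive form."""
    return sum(s(b[p], w[p]) for p in b)

def nonadditive_inaccuracy(b, w):
    """A non-additive alternative: the worst componentwise score."""
    return max(s(b[p], w[p]) for p in b)

b = {"X": 0.8, "Y": 0.3}
w = {"X": 1, "Y": 0}

print(additive_inaccuracy(b, w))     # roughly 0.04 + 0.09 = 0.13
print(nonadditive_inaccuracy(b, w))  # roughly 0.09
```

The question raised in 5.3 is whether the additive form, which many of the propriety results presuppose, is itself an epistemically motivated constraint or merely a technical convenience.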
6. What is the normative force of accuracy domination arguments?
6.1. It has been objected that the accuracy domination argument lacks normative force (Bronfman ms), (Fitelson and Easwaran ms). Can these objections be answered when they are raised against the new accuracy domination arguments for the Principal Principle and the Reflection Principle?