One of the fundamental problems of epistemology is to say when the evidence in an agent's possession justifies the beliefs she holds. In this paper and its prequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: ACCURACY: An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In the prequel, we made this norm mathematically precise; in this paper, we derive its consequences. We show that the two core tenets of Bayesianism follow from the norm, while the characteristic claim of the Objectivist Bayesian follows from the norm together with an extra assumption. Finally, we consider Richard Jeffrey's proposed generalization of conditionalization. We show not only that his rule cannot be derived from the norm unless the requirement of Rigidity is imposed from the start, but further that the norm reveals it to be illegitimate. We end by deriving an alternative updating rule for those cases in which Jeffrey's rule is usually supposed to apply.
|Translated title of the contribution||An Objective Justification of Bayesianism II: The Consequences of Minimizing Inaccuracy|
|Pages (from-to)||236 - 272|
|Number of pages||37|
|Journal||Philosophy of Science|
|Publication status||Published - Apr 2010|
|Organisational units||Centre for Science and Philosophy|
Leitgeb, H., & Pettigrew, R. G. (2010). An Objective Justification of Bayesianism II: The Consequences of Minimizing Inaccuracy. Philosophy of Science, 77(2), 236 - 272. https://doi.org/10.1086/651318